\section{Introduction} Machine Learning (ML) applications have recently seen widespread adoption in many critical missions, as a way to deal with large-scale and noisy datasets efficiently, in which human expertise cannot be used for practical reasons. Although ML-based approaches have achieved impressive results in many data processing tasks, including classification and object recognition, they have been shown to be vulnerable to small adversarial perturbations, and thus tend to misclassify, or fail to recognize, minimally perturbed inputs. Figure~\ref{fig:adversarial-input} illustrates how an adversarial sample can be generated by adding a small perturbation, which, as a result, is misclassified by a trained Neural Network (NN). \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{{adversarial-input}.png} \caption{By adding an unnoticeable perturbation to an image of a "panda", an adversarial sample is created, which is then misclassified as a "gibbon" by the trained network. (Image credit:~\cite{goodfellow2015})\label{fig:adversarial-input}} \end{figure} Adversarial perturbation can be achieved through either \emph{white-box} or \emph{black-box} attacks. In the threat model of \emph{white-box} attacks, an attacker is assumed to have full knowledge of the target NN model, including the model architecture and all relevant hyperparameters. For \emph{black-box} attacks, an attacker has no access to the NN model and its associated parameters; thus, the attacker relies on generating adversarial samples using an NN model on hand (known as the \emph{attacker model}), and then uses these adversarial samples on the target NN model (known as the \emph{victim model}). White-box attacks are considered difficult to launch in real-world scenarios, as it is often not possible for an attacker to have access to the full information of the victim model. Thus, in this paper, we focus on \emph{black-box} attacks, which pose practical threats for many ML applications, and evaluate the strategies for generating adversarial samples (which can be used for launching black-box attacks) and their transferability to victim models. {\bf\textit{Transferability}} is the ability of an adversarial sample that is generated by a machine learning attack on a particular machine learning model (i.e., on an attacker model) to be effective against a different, and potentially unknown, machine learning model (i.e., on a victim model). The attacker model refers to the model used in generating the adversarial samples (i.e., malicious inputs that are modified to yield erroneous output while appearing unmodified to humans or agents), whereas the victim model refers to the NN model to which the adversarial samples will be transferred. There is a large body of literature on the transferability of adversarial samples and the machine learning attacks that generate them; however, these works often analyze transferability from the perspective of a specific network model~\citep{szegedy2014, goodfellow2015, papernot2016, demontis2019}. That is, they have tried to explain why transferability occurs based on the NN model properties (of a given specific target model). Hence, we say that most research has taken a \emph{model-centric} approach. In contrast, in this paper we present an {\bf \textit{attack-centric}} approach.
In the \textit{attack-centric} approach, we provide insights on why adversarial samples can actually transfer by analyzing the adversarial samples generated using different machine learning attacks. In particular, we want to determine whether machine learning attacks and the input set have any inherent features that cause, or increase the likelihood of, adversarial samples transferring effectively to the victim models. In the following, we motivate the study of transferability of adversarial samples and exemplify ML-based applications in which they may pose significant security and reliability threats. \subsection{Motivation for Research on Transferability of Adversarial Samples} Machine learning has become a driving force for many data-intensive innovative technologies in different domains, including (but not limited to) health care, automotive, finance, security, and predictive analytics, thanks to the widespread availability of data sources and the computational power allowing them to be processed in a reasonable time. However, machine learning systems may have security weaknesses that can be detrimental (and even life threatening) for many application use cases. To motivate the importance of transferability of adversarial samples, and to demonstrate the feasibility and possible consequences of machine learning attacks, we highlight here some practical security threats that exploit the transferability of adversarial samples. \cite{thys2019} generated adversarial samples that were able to successfully hide a person from a person-detector camera that relies on a machine learning model. They showed that this kind of attack is feasible for maliciously circumventing surveillance systems: intruders can sneak around undetected by holding the adversarial sample/patch, printed on a piece of cardboard, in front of their body, aimed towards the surveillance camera. Another sector that heavily relies on ML approaches, due to the high volume of data being processed, is health care. A particular example of exploiting adversarial samples in this domain is as follows. Dermatologists usually operate under a "fee-for-service" revenue model in which physicians get paid for the procedures they perform for a patient. This has led some unethical dermatologists to apply unnecessary procedures to increase their revenue. To prevent fraud of this nature, insurance companies often rely on machine learning models that analyze patient data (e.g., dermatoscopy images) to confirm that suggested procedures are indeed necessary. According to the hypothetical scenario presented by~\cite{finlayson2018}, an attacker could generate adversarial samples composed of dermatoscopy images such that, when they are analyzed with the machine learning model used by the insurance company (the victim model), it would (incorrectly) report that a suggested procedure is appropriate and necessary for the patient. For security applications that rely on audio commands (which are processed by an ML-based speech recognition system), an attacker can construct adversarial audio samples to be used in breaking into the targeted system. Such an attack, if successful, may lead to information leakage, denial of service, or the execution of unauthorized commands. The feasibility of an attack on a speech recognition system was demonstrated by~\cite{carlini2016}, who generated adversarial audio samples (called obfuscated commands) that were used in attacking Google Now's speech recognition system.
\cite{jia2017} used the Stanford Question Answering Dataset (SQuAD) to test whether text recognition systems can answer questions about paragraphs that contain adversarial sentences inserted by a malicious user. These adversarial samples were automatically generated to mislead the system without changing the correct answers or misleading humans. Their results showed that the accuracy of sixteen published models drops from an average of 75\% F1 score to 36\%, and when the attacker was allowed to add ungrammatical sequences of words, the average accuracy on four of the tested models dropped further, down to 7\%. As machine learning approaches find their way into many application domains, the concerns associated with the reliability and security of systems become more profound. While covering all application areas is out of scope for this paper, our goal is to motivate the study of transferability of adversarial samples to better understand the mechanisms and factors that influence their effectiveness. Without loss of generality, we focus primarily on image classification as a use case to demonstrate the impact of machine learning attacks and their role in the transferability of adversarial samples (though the findings and insights obtained can be generalized to other use cases). \section{Related Work} The study of machine learning attacks and the transferability of adversarial samples has gained momentum, following the widespread use of Deep Neural Networks (DNNs) in many application domains. In the following, we detail the recent studies in this area and discuss their relevance to our work. \cite{szegedy2014} studied the transferability of adversarial samples on different models that were trained using the MNIST dataset. They focused on examining why DNNs were so vulnerable to images with little perturbation. In particular, they examined non-linearity and overfitting in neural networks as the cause of DNNs' vulnerability to adversarial samples. Their experiments and methodology, however, were limited to the NN model characteristics to gain intuition on transferability. \cite{goodfellow2015} carried out a new study on the transferability of adversarial samples, built on the previous study of~\cite{szegedy2014}. In contrast, they argued that the non-linearity of NN models actually helps to reduce the vulnerability to adversarial samples, and that the linearity of a model is what makes adversarial samples work. They further suggested that transferability is more likely when the adversarial perturbation or noise is highly aligned with the weight vector of the model. The entire analysis was based on an attack called the Fast Gradient Sign Method (FGSM), which computes the gradient of the loss function once, and then finds the minimum step size that generates the adversarial samples. Another study on transferability was conducted by~\cite{papernot2016}, in which they examined how transferability works across traditional machine learning classifiers, such as Support Vector Machines (SVMs), Decision Trees (DT), K-nearest neighbors (KNN), Logistic Regression (LR), and DNNs. Their motivation was to determine whether adversarial samples constitute a threat for a specific type or implementation of machine learning model. In other words, they analyzed whether adversarial samples would transfer to any of these models; and if so, which of the classifiers (or models) are more prone to such black-box attacks.
They also examined intra-technique and cross-technique transferability across the models, and provided an in-depth explanation of why DNNs and LR were more prone to intra-technique transferability when compared to SVM, DT, and KNN. However, similar to previous studies, their analysis did not consider the possible impacts of intrinsic properties of attacks on the transferability of adversarial samples. \cite{papernot2017} extended their earlier findings by demonstrating how a black-box attack can be launched on a hosted DNN without prior knowledge of the model structure or its training dataset. The attack strategy employed consists of training a local model (i.e., the substitute/attacker model) using synthetic data generated by the adversary and labeled by the targeted DNN. They demonstrated the feasibility of this strategy by launching black-box attacks on machine learning services hosted by Amazon, Google, and MetaMind. A similar study was conducted by~\cite{liu2017}, in which they assumed that the model and the training process, including both training and test datasets, were unknown to them before launching the attack. \cite{demontis2019} presented a comprehensive analysis of transferability for both test-time evasion and training-time poisoning attacks. They showed that there are two main factors contributing to the success of an attack: the intrinsic adversarial vulnerability of the target model, and the complexity of the substitute model used to optimize the attack. They further defined three metrics/factors that impact transferability: i) the size of the input gradient, ii) the alignment of the input gradients of the loss function computed using the target and the substitute (attacker) models, and iii) the variability of the loss landscape. All these findings and factors, while essential, are restricted to explaining transferability from the model-centric perspective. Our investigation, in contrast, is not limited to the assessment of models, but extends the analysis to various attack implementations and the adversarial samples they generate, to see if there are underlying characteristics that increase or decrease the chances of transferability among NN models. \section{Machine Learning Attacks} The adversarial perturbations crafted to generate adversarial samples for fooling a trained network are referred to as machine learning attacks. The full list of machine learning attacks presented in the literature is extensive; here, we present the subset of attacks analyzed in this work, with a brief description of their characteristics, in Table~\ref{tab:attacks}. Following the categorization presented by~\cite{rauber2018}, we categorize the attacks used in this paper into two main families: i) gradient-based, and ii) decision-based attacks. Gradient-based attacks try to generate adversarial samples by finding the minimum perturbation through a gradient-descent mechanism; a minimal sketch of this family is given below. Decision-based attacks involve the use of image processing techniques to generate adversarial samples. They are called decision-based because the algorithms rely on comparing the generated adversarial samples with the original output until misclassification occurs.
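For concreteness, the following minimal PyTorch sketch illustrates the gradient-based family with the one-step Fast Gradient Sign Method of~\cite{goodfellow2015}; the function name and the assumption that inputs are normalized to the $[0, 1]$ range are illustrative, and the reference implementations we actually use are those provided by Foolbox~\citep{rauber2018}.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Perturb the input in the direction of the sign of the
    # loss gradient with respect to the input pixels.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    # Keep pixel values in the valid input range.
    return x_adv.clamp(0, 1).detach()
\end{verbatim}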
\begin{longtable}{| p{.25\textwidth} | p{.18\textwidth} | p{.46\textwidth}|} \hline Name of Attack & Attack Family & Short Description\\ \hline\hline Deep Fool Attack & gradient-based & Obtains the minimum perturbation by approximating the model classifier with a linear classifier~\citep{moosavi2016}.\vspace{0.1cm} \\ \hline Additive Noise Attack & decision-based & Adds Gaussian or uniform noise and gradually increases the standard deviation until misprediction occurs~\citep{rauber2018}.\vspace{0.1cm}  \\ \hline Basic Iterative Attack & gradient-based & Applies a gradient with small step size and clips pixel values of intermediate results to ensure that they are in the neighborhood of the original image~\citep{kurakin2017}. \vspace{0.1cm} \\ \hline Blended Noise Attack & decision-based & Blends the input image with uniform noise until the image is misclassified.\vspace{0.1cm}\\ \hline Blur Attack & decision-based & Finds the minimum blur needed to turn an input image into an adversarial sample by linearly increasing the standard deviation of a Gaussian filter. \vspace{0.1cm}\\ \hline Carlini Wagner Attack & gradient-based & Generates an adversarial sample by finding the smallest noise which, when added to an image, will change the classification of the image~\citep{carlini2017}.\vspace{0.1cm}\\ \hline Contrast Reduction Attack & decision-based & Reduces the contrast of an input image by performing a line-search internally to find the minimal adversarial perturbation. \vspace{0.1cm}\\ \hline Search Contrast Reduction Attack& decision-based & Reduces the contrast of an input image by performing a binary search internally to find the minimal adversarial perturbation. \vspace{0.1cm}\\ \hline Decoupled Direction and Norm (DDN) Attack & gradient-based & Induces misclassifications with a low L2-norm, by decoupling the direction and norm of the adversarial perturbation that is added to the image~\citep{rony2019}. The attack compensates for the slowness of the Carlini Wagner attack.\vspace{0.1cm}\\ \hline Fast Gradient Sign Attack & gradient-based & Uses a one-step method that computes the gradient of the loss function with respect to the image once, and then tries to find the minimum step size that will generate an adversarial sample~\citep{goodfellow2015}.\\ \hline Inversion Attack & decision-based & Creates a negative image (i.e., the complement of the original image, in which light pixels appear dark, and vice versa) by inverting the pixel values~\citep{hosseini2017}.\vspace{0.1cm}\\ \hline Newton Fool Attack & gradient-based & Finds a small adversarial perturbation of an input image by significantly reducing the confidence probability~\citep{jang2017}.\vspace{0.1cm}\\ \hline Projected Gradient Descent Attack & gradient-based & Attempts to find the perturbation that maximizes the loss of a model (using gradient descent) on an input.
The size of the perturbation is kept smaller than the specified error by clipping the samples generated~\citep{madry2017}.\vspace{0.1cm}\\ \hline Salt and Pepper Noise Attack & decision-based & Adds salt-and-pepper noise to an image in each iteration until the image is misclassified, while keeping the perturbation size within the specified $\epsilon$.\vspace{0.1cm}\\ \hline Virtual Adversarial Attack & gradient-based & Calculates an untargeted adversarial perturbation by performing an approximated second-order optimization step on the Kullback–Leibler divergence between the unperturbed predictions and the predictions for the adversarial perturbation~\citep{miyato2015}. \vspace{0.1cm}\\ \hline Sparse Descent Attack & gradient-based & A version of the basic iterative method that minimizes the L1 distance. \vspace{0.1cm}\\ \hline Spatial Attack & decision-based & Relies on spatially chosen rotations, translations, and scaling~\citep{engstrom2019}.\vspace{0.1cm}\\ \hline \hline \caption{The machine learning attacks used in this work.} \label{tab:attacks} \end{longtable} \section{Methodology} In the following, we detail the Convolutional Neural Network (CNN) models, the infrastructure, and the tools used in the evaluation, as well as the procedure employed in carrying out the experiments. \subsection{Infrastructure and Tools} To build, train, and test the CNNs that we use in our evaluation, we rely on PyTorch and TorchVision. We also use Foolbox~\citep{rauber2018}, a Python library for generating adversarial samples. It provides reference implementations for many of the published adversarial attacks, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation. We use Python version 3.7.3 on Jupyter Notebook. We run our experiments on Google Colab, which provides an interactive environment for writing and executing Python code. It is similar to a Jupyter notebook, but rather than being installed locally, it is hosted on the cloud. It is heavily customized for data science workloads, as it contains most of the core libraries used in data science/machine learning research. We used this environment for training the neural networks as it provides large memory capacity and access to GPUs, thereby reducing the training time.  \subsection{CNNs Used in This Study} Here, we provide brief descriptions and details of the CNNs used in this work. Note that a particular CNN may take one of two roles: it can be either an attacker model (on which the adversarial samples are generated), or a victim model (against which the adversarial samples are used). {\bf LeNet:} A simple, yet popular, CNN architecture that was first introduced in 1995 but came to the limelight in 1998 after it demonstrated success in the handwritten digit recognition task~\citep{lecun1998}. The LeNet architecture used for this work is slightly modified to train on the CIFAR-10 dataset (instead of MNIST). {\bf AlexNet:} An advanced form of the LeNet architecture, with a depth of 8 layers. It showed groundbreaking results in the 2012 ILSVRC competition by reducing the error rate on the ImageNet dataset from 25.8\% to 16.4\%, with about 60 million trainable parameters~\citep{krizhevsky2017}. It also features techniques such as dropout, ReLU activation, and Local Response (LR) normalization. Since LR normalization has shown minimal (if any) contribution in practice, it was not included in the AlexNet model trained for this work.
Aside from the increase in the depth of the network, another difference between the LeNet and AlexNet models trained in this work is that AlexNet has dropout layers added to it. {\bf Vgg-11:} Introduced by~\cite{simonyan2015} to improve the image classification accuracy on the ImageNet dataset. Compared to LeNet and AlexNet, Vgg-11 has an increased network depth, and it makes use of small ($3 \times 3$) convolutional filters. The architecture secured second place at the ILSVRC 2014 competition after reducing the error rate on the ImageNet dataset down to 7.3\%. Hence, the architecture is an improvement over AlexNet. There are different variants of Vgg: Vgg-11, -13, -16, and -19; only Vgg-11 is used in this paper. In addition to being deeper than the AlexNet architecture, batch normalization is also introduced in the Vgg-11 used in this work. Table~\ref{tab:cnn-models} summarizes the major features of these three CNN models. We chose these models to evaluate how machine learning attacks and the corresponding adversarial samples respond to different architectures. \begin{longtable}{| p{.08\textwidth} | p{.072\textwidth} | p{.12\textwidth}| p{.109\textwidth} | p{.125\textwidth} | p{.065\textwidth} | p{.12\textwidth} | p{.1\textwidth}|} \hline CNN& \# Conv. Layers&\# Inner activation func., type&Output activation func.& \# Pooling Layers, type& \# FC Layers&\# Dropout Layers (rate)&\# BatchNorm Layers \vspace{0.1cm}\\ \hline LeNet&2&4, RELU &Softmax& 2, maxpool& 3 &None & None \vspace{0.1cm}\\ \hline AlexNet&5&7, RELU&Softmax &3, maxpool& 3 & 2 (0.5)& None \vspace{0.1cm}\\ \hline Vgg-11& 8&8, RELU&Softmax&4, maxpool& 3 & 2 (0.5) & 8 \vspace{0.1cm}\\ \hline \hline \caption{Features of the CNN models used in this paper.} \label{tab:cnn-models} \end{longtable} \subsection{Data Processing and Training} {\bf Dataset:} We used the CIFAR-10 dataset~\citep{Krizhevsky2009} for our analysis, since it is arguably one of the most widely used datasets in image processing and computer vision research. It contains 60,000 images, each belonging to one of ten classes. The training dataset contains 45,000 images, the validation dataset 5,000 images, and the testing dataset 10,000 images. To generate adversarial samples, 500 images are selected from the testing dataset (50 images picked from each class to form a balanced dataset). \noindent {\bf Preprocessing:} At the very beginning, we performed training transformations, including random rotation, random horizontal flip, random cropping, conversion of the dataset to tensors, and normalization. Likewise, we performed test transformations, including conversion of the dataset to tensors and normalization. Random rotation and horizontal flips introduce complexity to the input data, which helps the model learn in a more robust way. It is necessary to convert inputs to tensors because PyTorch works with tensor objects. The three channels are normalized (dividing by 255) to increase learning accuracy. The final step of data pre-processing was forming batches of size 256 and creating data loaders for the training and validation data (the loader loads 256 images in each iteration during training and validation). We chose a batch size of 256 as it is large enough to make the training faster.
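This preprocessing pipeline can be sketched with TorchVision as follows; the exact rotation angle, crop padding, and normalization constants are illustrative assumptions rather than the precise values used in our experiments.
\begin{verbatim}
import torch
from torchvision import datasets, transforms

# Training-time transformations: augmentation, tensor conversion,
# and per-channel normalization.
train_tf = transforms.Compose([
    transforms.RandomRotation(10),         # rotation angle is an assumption
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),  # padding is an assumption
    transforms.ToTensor(),                 # scales pixel values into [0, 1]
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # assumed stats
])

train_set = datasets.CIFAR10(root='./data', train=True, download=True,
                             transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=256,
                                           shuffle=True)
\end{verbatim}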
\noindent {\bf Training:} For the training, we first created the network model, which comprises feature extraction, classification, and forward propagation. In each epoch, we calculated the training loss, training accuracy, validation loss, and validation accuracy. To perform training, we specified the following parameters for the train function: model, training iterator, optimizer (Adam optimizer), and criterion (cross-entropy loss criterion). To perform validation, we specified the following parameters for the evaluation function: model, validation iterator, and criterion (cross-entropy loss criterion). After completing the training phase, we saved the parameter values for the given model. \begin{longtable}{| p{.3\textwidth} | p{.2\textwidth}| p{.2\textwidth} | p{.2\textwidth} |} \hline Characteristics & LeNet & AlexNet & Vgg-11 \vspace{0.1cm}\\ \hline \hline Epoch number & 25 & 25 & 10 \vspace{0.1cm}\\ \hline Training loss & 0.953 & 0.631 & 0.244 \vspace{0.1cm}\\ \hline Validation loss & 0.956 & 0.695 & 0.468 \vspace{0.1cm}\\ \hline Training accuracy & 66.34\% & 78.34\%& 91.94\% \vspace{0.1cm}\\ \hline Validation accuracy & 66.70\% & 76.74\%&87.11\% \vspace{0.1cm}\\ \hline Testing accuracy & 66.64\% &76.03\%& 85.87\% \vspace{0.1cm}\\ \hline \hline \caption{Training characteristics for the NN models.} \label{tab:training-characteristics} \end{longtable} The final step is the testing stage. To test the trained models, we loaded the saved model parameters, including the trained weights, and then checked the testing accuracy of the networks. Table~\ref{tab:training-characteristics} summarizes the training characteristics and reports the training, validation, and testing accuracy obtained.
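A minimal sketch of the train function described above is given below; the helper's exact signature is illustrative, while the optimizer (torch.optim.Adam) and criterion (torch.nn.CrossEntropyLoss) match the ones stated above and are assumed to be constructed by the caller.
\begin{verbatim}
def train_epoch(model, iterator, optimizer, criterion, device):
    model.train()
    epoch_loss, correct, total = 0.0, 0, 0
    for images, labels in iterator:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)  # cross-entropy loss
        loss.backward()
        optimizer.step()
        # Accumulate loss and accuracy statistics for this epoch.
        epoch_loss += loss.item()
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return epoch_loss / len(iterator), correct / total
\end{verbatim}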
\subsection{Adversarial Samples Generation} {\bf Machine learning attacks:} Table~\ref{tab:attacks} detailed the 17 unique machine learning attacks employed in the evaluation. However, for some of the attacks, more than one norm (L1, L2, L-infinity) is used for estimating the error ($\epsilon$), which increases the number of unique attacks evaluated to 40. For the sake of brevity, we enumerate the attacks from 1 to 40 (as listed in Table~\ref{tab:attack-enumeration}), and use this enumeration as labels, instead of providing the full name and the norm used, when showing the results in the figures that follow. \begin{longtable}{| p{.05\textwidth} | p{.3\textwidth}| p{.055\textwidth} || p{.05\textwidth} | p{.3\textwidth}| p{.055\textwidth} | } \hline Label & Attack Name & Norm & Label & Attack Name & Norm \\ \hline \hline 1& Deep Fool Attack& L-inf & 21& BSCR Attack& L2\\ \hline 2& Deep Fool Attack& L2 & 22& BSCR Attack& L-inf\\ \hline 3& Additive Gaussian Noise (AGN) Attack& L2 & 23& Linear Search Contrast Reduction (LSCR) Attack& L1\\ \hline 4& Additive Uniform Noise (AUN) Attack& L2 & 24& LSCR Attack& L2\\ \hline 5& AUN Attack& L-inf & 25& LSCR Attack& L-inf\\ \hline 6& Repeated AGN Attack& L2 & 26& Decoupled Direction and Norm Attack& L2\\ \hline 7& Repeated AUN Attack& L2 & 27& Fast Gradient Sign Attack& L1\\ \hline 8& Repeated AUN Attack& L-inf & 28& Fast Gradient Sign Attack& L2\\ \hline 9& Basic Iterative Attack& L1 & 29& Fast Gradient Sign Attack& L-inf\\ \hline 10& Basic Iterative Attack& L2& 30& Inversion Attack& L1\\ \hline 11& Basic Iterative Attack& L-inf& 31& Inversion Attack& L2\\ \hline 12& Blended Uniform Noise Attack& L1 & 32& Inversion Attack& L-inf\\ \hline 13& Blended Uniform Noise Attack& L2 & 33& Newton Fool Attack& L2\\ \hline 14& Blended Uniform Noise Attack& L-inf & 34& Projected Gradient Descent Attack& L1\\ \hline 15& Blur Attack& L1 & 35& Projected Gradient Descent Attack& L2\\ \hline 16& Blur Attack& L2 & 36& Projected Gradient Descent Attack& L-inf\\ \hline 17& Blur Attack& L-inf & 37& Salt and Pepper Attack& L2\\ \hline 18& Carlini Wagner Attack& L2 & 38& Sparse Descent Attack& L1\\ \hline 19& Contrast Reduction Attack& L2 & 39& Virtual Adversarial Attack& L2\\ \hline 20& Binary Search Contrast Reduction (BSCR) Attack& L1 & 40& Spatial Attack& N/A\\ \hline \caption{Labels of attacks and norms used to generate adversarial samples.} \label{tab:attack-enumeration} \end{longtable} {\bf Adversarial Sample Formulation:} Given a classification function $f(x)$, a class $C_x$, the classification $f(x')$ of the adversarial input, a distance $D(x, x')$, and epsilon $\epsilon$ (the largest allowable perturbation or error), an adversarial sample $x'$ can be mathematically expressed as satisfying: \[ f(x)\; = \;C_x \;\land\; f(x')\;\neq\;C_x \;\land\; D(x,x') \leq \epsilon. \] To craft adversarial samples via Foolbox~\citep{rauber2018}, we need to specify a criterion that defines the impact of the adversarial action (misclassification in our case), and a distance measure that defines the size of a perturbation (i.e., L1-norm, L2-norm, and/or L-inf, which must be less than the specified $\epsilon$). These are then taken into consideration by the attacker model to generate an adversarial sample. The following equation shows the general distance formula; depending on the value of $p$, the L1, L2, or (in the limit $p \to \infty$) L-inf norm is obtained: \[ \|x - \hat{x}\|_p \; = \; \Big(\; \sum_{i=1}^{d} | x_i - \hat{x}_i|^p \;\Big)^{1/p} \] We picked the value of epsilon as 1.0, since it allows us to generate a significant number of adversarial samples for all the attack methods used. Because it takes a long time to generate adversarial samples using the attack algorithms, we used 500 balanced inputs (i.e., 50 images from each of the 10 classes) from the test data.
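This generation step can be sketched as follows, assuming the Foolbox v3 interface; the particular attack, the model bounds, and the variable names (model, images, labels) are illustrative.
\begin{verbatim}
import foolbox as fb

# Wrap the trained attacker model for Foolbox; inputs lie in [0, 1].
# model, images, labels are assumed to be defined already.
fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))

# Example: Deep Fool under the L-inf norm (label 1 in our enumeration).
attack = fb.attacks.LinfDeepFoolAttack()

# Run the attack on the 500 selected test inputs with epsilon = 1.0.
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=1.0)
print(f"misclassified: {is_adv.sum().item()} / {len(labels)}")
\end{verbatim}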
To demonstrate how well adversarial samples transfer, we use a confusion matrix as a visual guide. In a given confusion matrix, each row represents the instances in a predicted class, whereas each column represents the instances in the true/actual class to which a given input belongs. The diagonal of the confusion matrix shows the number of inputs of each class that were correctly predicted after an attack is launched. For example, Figure~\ref{fig:confusion-linf} shows the confusion matrix of adversarial samples generated by using the Deep Fool attack (with the L-inf norm) on LeNet. It has all-zero entries on the diagonal, which means that the inputs (i.e., adversarial samples) were misclassified for all classes. This implies that the attack that generated the adversarial samples is very powerful, since all samples were misclassified. On the other hand, Figure~\ref{fig:confusion-l2} shows the confusion matrix of adversarial samples generated by using the Additive Gaussian Noise attack (with the L2 norm) on LeNet. In this confusion matrix, however, the diagonal has non-zero, larger positive entries, which illustrates that the attack used in generating the adversarial samples is less powerful, leaving many of the samples correctly classified. \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{{confusion-linf}.png} \caption{Confusion matrix of adversarial samples generated using the Deep Fool attack with the L-inf norm on LeNet. \label{fig:confusion-linf}} \vspace{-0.2cm} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{{confusion-l2}.png} \caption{Confusion matrix of adversarial samples generated using the Additive Gaussian Noise attack with the L2 norm on LeNet.\label{fig:confusion-l2}} \vspace{-0.2cm} \end{figure} \subsection{Experimental Procedure} Here, we describe the procedure for performing the analysis and generating the results shown in the Evaluation. First, the adversarial samples are generated by running an attack with the original dataset on an attacker model (which can be LeNet, AlexNet, or Vgg-11 in any given scenario). Once the adversarial samples are generated on the attacker model, they are used on the victim models (which, likewise, can be LeNet, AlexNet, or Vgg-11). Then, statistics regarding the number of mispredictions, as well as their prediction classes, are collected. We also calculate the Structural Similarity Index Measure (SSIM) between each adversarial sample and the original sample to compare how visually similar they are (the SSIM value ranges from 0 to 1; a higher value indicates more similarity). This measure has been used in the literature as it correlates better with human perception than the Mean Absolute Distance (MAD). Hence, it serves as a metric for estimating how much the perturbed (adversarial) and the original image differ visually. \section{Evaluation} We obtained three kinds of results using adversarial samples generated on attacker models: i) the number of mispredictions when adversarial samples are used on victim models; ii) the classes to which the (mis)predictions belong when adversarial samples are used on victim models; and iii) the SSIM value between original and adversarial samples. We used these results to assess the effectiveness of the attacks used in generating adversarial samples. This assessment led us to identify four main factors that contribute immensely to the transferability of adversarial samples. In the following, we discuss these factors and provide the results obtained to back up our findings for each factor's implication. \subsection{Factor 1: The attack itself} We observed that some of the attacks used in generating adversarial samples are simply more powerful than others (regardless of the victim model).
That is, the adversarial samples generated by these attacks are easily transferable, leading to a high number of mispredictions on the target model. \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{attacks}.png} \caption{Average number of mispredictions for adversarial samples transferred to LeNet, AlexNet, and Vgg-11. \label{fig:attacks}} \end{figure} Figure~\ref{fig:attacks} shows that the attacks with labels 1, 5, 8, 11, 14, 17, 25, 29, 32, 36, and 40 yield a higher number of mispredictions when their adversarial samples are used on victim models. Hence, those attacks are more powerful. Further, the attacks with labels 11, 29, and 36 appear to have the highest number of mispredictions (on any victim model). This result shows that the transferability of an adversarial sample depends strongly on the attack that generated it. \subsection{Factor 2: Norm Used in the Attack} We observed that a particular attack using different norms to generate adversarial samples yielded varying degrees of transferability. In general, the attacks that use L-inf tend to produce adversarial samples that exhibit a higher number of mispredictions compared to attacks using L2 and L1. Figures~\ref{fig:lenet-attacker-distances},~\ref{fig:alexnet-attacker-distances}, and~\ref{fig:vgg11-attacker-distances} show results for attacks that use different norms when generating adversarial samples. In particular, Figure~\ref{fig:lenet-attacker-distances} shows the average number of mispredictions per attack for adversarial samples that are generated on LeNet. Among the attacks, Deep Fool, AUN, and RAUN are implemented using just L-inf and L2, whereas the rest have implementations for the L1, L2, and L-inf norms. Clearly, the adversarial samples generated with the L-inf norm have a stronger ability to transfer, compared to the ones generated with the L1 and L2 norms. Likewise, Figures~\ref{fig:alexnet-attacker-distances} and~\ref{fig:vgg11-attacker-distances} show the average number of mispredictions per attack for adversarial samples that are generated on AlexNet and Vgg-11, respectively. The findings are consistent among the victim models, indicating that the norm used for a given attack has a significant impact on the transferability of adversarial samples. \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{lenet-attacker-distances}.png} \caption{Average number of mispredictions per attack for adversarial samples generated on LeNet. \label{fig:lenet-attacker-distances}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{alexnet-attacker-distances}.png} \caption{Average number of mispredictions per attack for adversarial samples generated on AlexNet. \label{fig:alexnet-attacker-distances}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{vgg11-attacker-distances}.png} \caption{Average number of mispredictions per attack for adversarial samples generated on Vgg-11. \label{fig:vgg11-attacker-distances}} \end{figure} While the L-inf norm yields adversarial samples that transfer better than the other norms, it should be noted that the disturbance made to an input sample may become more pronounced. Comparing the SSIM values of adversarial samples generated using different norms shows that L-inf always produces significantly perturbed samples.
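For instance, the SSIM between an original image and its adversarial counterpart can be computed with scikit-image (version 0.19 or later); the channel layout and value range assumed below are illustrative of our image representation.
\begin{verbatim}
from skimage.metrics import structural_similarity

def ssim_score(original, adversarial):
    # Both inputs: HxWxC arrays with values in [0, 1].
    return structural_similarity(original, adversarial,
                                 channel_axis=-1, data_range=1.0)
\end{verbatim}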
In Figure~\ref{fig:ssim}, the ranges for the SSIM values are labeled as: Excellent = (0.75 $\leq$ SSIM $\leq$ 1.0), Good = (0.55 $\leq$ SSIM $\leq$ 0.74), Poor = (0.35 $\leq$ SSIM $\leq$ 0.54), and Bad = (0.00 $\leq$ SSIM $\leq$ 0.34). We observed that many of the adversarial samples generated with the L-inf norm have a lower SSIM, indicating that the perturbations made may be perceived by humans. Therefore, checking the SSIM values can help gauge the effectiveness of a given attack. Although an attack aims to maximize the number of mispredictions, it should be considered stronger if it can keep the SSIM high while, at the same time, yielding a high number of mispredictions.  \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{ssim}.png} \caption{SSIM values for adversarial samples generated on AlexNet. \label{fig:ssim}} \end{figure} \subsection{Factor 3: Closeness of the Target Model to the Attacker Model} Not surprisingly, we observed that adversarial samples yielded a higher number of mispredictions for the models on which they were generated (i.e., the case in which the attacker and victim models are the same). For example, adversarial samples generated on AlexNet lead to a higher number of mispredictions when these samples are used on AlexNet, or on a close model (e.g., a variation of AlexNet). However, when these adversarial samples are used on other (dissimilar) victim models, they lead to a comparably lower number of mispredictions. These findings are shown in Figures~\ref{fig:lenet-attacker-model},~\ref{fig:alexnet-attacker-model}, and~\ref{fig:vgg11-attacker-model}. \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{lenet-attacker-model}.png} \caption{Number of mispredictions for adversarial samples that are generated on LeNet.\label{fig:lenet-attacker-model}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{alexnet-attacker-model}.png} \caption{Number of mispredictions for adversarial samples that are generated on AlexNet. \label{fig:alexnet-attacker-model}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\columnwidth]{{vgg11-attacker-model}.png} \caption{Number of mispredictions for adversarial samples that are generated on Vgg-11.\label{fig:vgg11-attacker-model}} \end{figure} The implication of this factor is that if an attacker can generate adversarial samples on a model that is similar to the victim model, then the probability that the generated adversarial samples will transfer effectively is higher. This methodology can be used by industry experts to test how well adversarial samples can transfer to their ML models. One way to exploit this observation for security-critical applications is to build multiple ML models that are dissimilar in structure but provide similar prediction accuracy, and then use a majority vote (or a similar scheme) to decide on the proper prediction; a sketch of this scheme is given at the end of this subsection. If a particular attack transfers and is effective on one of the ML models, it is very likely (as evidenced by our analysis) that the other, dissimilar ML models would be less sensitive to the same attack, providing a way to detect the anomaly and avoid the undesired consequences of adversarial samples. Building ML models that are different in structure but yield similar accuracy would be an active research direction, not just for security-related concerns, but also for reliability, power management, performance, and scalability.
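As an illustrative sketch (not an implementation evaluated in this work), such a majority-vote scheme over structurally dissimilar models could look as follows:
\begin{verbatim}
from collections import Counter

def majority_vote(models, x):
    # Each model votes with its predicted class for the input batch x.
    votes = [m(x).argmax(dim=1) for m in models]
    preds = []
    for i in range(x.size(0)):
        counts = Counter(int(v[i]) for v in votes)
        label, n = counts.most_common(1)[0]
        # Without a clear majority, flag the input as suspicious (-1),
        # since a transferred attack rarely fools all dissimilar models.
        preds.append(label if n > len(models) // 2 else -1)
    return preds
\end{verbatim}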
\subsection{Factor 4: Sensitivity of an Input} The inherent sensitivity of an input to a particular attack can determine the strength of the resulting adversarial sample and how well it transfers to a victim model. We can summarize our observations about the sensitivity of the inputs used in the attacks as follows. \begin{enumerate} \item Some inputs are very sensitive to almost any attack; thus, the adversarial samples generated from them transfer effectively to victim models (e.g., the input images with index 477, 479, 480, and 481 in Figure~\ref{fig:vgg11-misprediction}). \item Some inputs are insensitive to attacks; thus, the adversarial samples generated are ineffective and do not get mispredicted, regardless of the victim model (e.g., the input images with index 481, 484, and 494 in Figure~\ref{fig:vgg11-misprediction}). \item Some inputs are sensitive to specific attacks on a particular victim model, meaning that the adversarial samples become effective when they are generated by a particular subset of attacks targeting a particular model (but are not effective when used on other models). For example, the input images with index 465 and 467 in Figure~\ref{fig:vgg11-misprediction} become more sensitive (and thus the corresponding adversarial samples more effective) when they are transferred to the LeNet and AlexNet models, respectively (but not on other models). \end{enumerate} \begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{{vgg11_models_last40_df}.png} \caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on Vgg-11 as an attacker model (zoomed in to show the last 40 input images). \label{fig:vgg11-misprediction}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{{lenet_models_total_df}.png} \caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on LeNet as an attacker model. \label{fig:lenet-misprediction-all}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{{alexnet_models_total_df}.png} \caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on AlexNet as an attacker model. \label{fig:alexnet-misprediction-all}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{{vgg11_models_total_df}.png} \caption{The number of effective attacks (yielding an adversarial sample that would be mispredicted) for a particular input used on Vgg-11 as an attacker model. \label{fig:vgg11-misprediction-all}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth]{{collective-histogram}.png} \caption{Histogram summarizing the sensitivity of inputs to attacks. The x-axis indicates the number of effective attacks for a given input (i.e., the generated adversarial sample would transfer successfully to a victim model regardless of the attacker model), and the y-axis indicates the number of inputs whose adversarial samples (generated by a set of attacks) would transfer effectively to the victim models. \label{fig:collective-histogram} } \end{figure} Figure~\ref{fig:vgg11-misprediction} shows, for each input, the number of effective attacks when adversarial samples are generated on Vgg-11.
For better visibility, only the last 40 input images (out of 500) are zoomed in on in Figure~\ref{fig:vgg11-misprediction}, where the x-axis shows the index of the input image and the y-axis shows the number of attacks whose generated adversarial samples lead to mispredictions on the victim models (please see Figure~\ref{fig:vgg11-misprediction-all} for all 500 inputs used on Vgg-11). Since 40 attacks are used to generate adversarial samples, the y-axis value can be at most 40 (in which case all of the attacks yielded adversarial samples that result in mispredictions). The results obtained for the complete set of 500 input images are shown in Figures~\ref{fig:alexnet-misprediction-all} and~\ref{fig:lenet-misprediction-all} for AlexNet and LeNet (as attacker models), respectively. The implication of this factor is that the inherent characteristics of the input may play a role in how effectively the generated adversarial samples transfer to victim models. When combined with the strength of an attack, some inputs that are sensitive to a given set of attacks (irrespective of the attacker model) may yield more effective adversarial samples than other inputs. Figure~\ref{fig:collective-histogram} illustrates this phenomenon. It can be seen that most of the input images are sensitive to roughly 10 attacks out of the 40 (regardless of the attacker model being used), but relatively few inputs are sensitive to all the attacks (23 input images yield adversarial samples that were mispredicted on all the victim models, regardless of the attacker model and attack used).  \section{Conclusion} In its simplest form, \textit{transferability} can be defined as the ability of adversarial samples generated using the attacker model to be mispredicted when transferred to the victim model. We observed that most of the literature on transferability focuses on interpreting and evaluating transferability from the machine learning model perspective alone, which we refer to as the model-centric approach. In this work, we took an alternative path, which we call the attack-centric approach, that focuses on investigating machine learning attacks to interpret and evaluate how adversarial samples transfer to victim models. For each attacker model, we generated adversarial samples that were transferred to the three victim models (i.e., LeNet, AlexNet, and Vgg-11). We identified four factors that influence how well an adversarial sample transfers. Our hope is that these factors will serve as useful guidelines for researchers and practitioners in the field to mitigate the adverse impact of black-box attacks and to build more attack-resistant/secure machine learning systems.  \vskip 0.2in
\section{Preface} \label{s_preface} This paper primarily serves as a reference for my Ph.D. dissertation, which I am currently writing. As a consequence, the framework is not under active development. The presented concepts, problems, and solutions may be interesting regardless, even for problems other than Neural Architecture Search (NAS). The framework's name, UniNAS, is a wordplay on University and Unified NAS, since the framework was intended to incorporate almost any architecture search approach. \section{Introduction and Related Work} \label{s_introduction} An increasing supply of and demand for automated machine learning causes the amount of published code to grow by the day. Although advantageous, the benefit of such code is often impaired by many technical nitpicks. This section lists common code bases and some of their disadvantages. \subsection{Available NAS frameworks} \label{u_introduction_available} The landscape of NAS codebases is severely fragmented, owing to the vast differences between various NAS methods and the deep-learning libraries used to implement them. Some of the best supported or most widely known ones are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item {NASLib~\citep{naslib2020}} \item { Microsoft NNI \citep{ms_nni} and Archai \citep{ms_archai} } \item { Huawei Noah Vega \citep{vega} } \item { Google TuNAS \citep{google_tunas} and PyGlove \citep{pyglove} (closed source) } \end{itemize} Counterintuitively, the overwhelming majority of publicly available NAS code is not based on any such framework or service, but on simple and typical network training code. Such code is generally quick to implement but lacks exact comparability, scalability, and configuration power, which may be a secondary concern for many researchers. In addition, since the official code is often released late or never, and generally only in either TensorFlow~\citep{tensorflow2015-whitepaper} or PyTorch~\citep{pytorch}, popular methods are sometimes re-implemented by third-party repositories. Further projects include the newly available and closed-source cloud services by, e.g., Google\footnote{\url{https://cloud.google.com/automl/}} and Microsoft\footnote{\url{https://www.microsoft.com/en-us/research/project/automl/}}. Since they require very little user knowledge in addition to the training data, they are excellent for deep learning in industrial environments. \subsection{Common disadvantages of code bases} \label{u_introduction_disadvantages} With so many frameworks available, why start another one? The development of UniNAS started in early 2020, before most of these frameworks arrived at their current feature availability or were even made public. In addition, the frameworks rarely provide current state-of-the-art methods even now, and sometimes lack the flexibility to include them easily. Further problems that UniNAS aims to solve are detailed below: \paragraph{Research code is rigid} The majority of published NAS code is very simplistic. While that makes it easy to extract important method-related details, the ability to reuse the available code in another context is severely impaired.
Almost all details are hard-coded, such as: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item { the used gradient optimizer and learning rate schedule } \item { the architecture search space, including candidate operations and network topology } \item { the data set and its augmentations } \item { weight initialization and regularization techniques } \item { the used hardware device(s) for training } \item { most hyper-parameters } \end{itemize} This inflexibility is sometimes accompanied by the redundancy of several code pieces that differ slightly for different experiments or phases in NAS methods. Redundancy is a fine way to introduce subtle bugs or inconsistencies, and it also makes the code confusing to follow. Hard-coded details are also easy to forget, which is especially critical in research, where reproducibility depends strongly on seemingly unimportant details. Finally, if any of the hard-coded components is ever changed, such as the optimizer, the configurations of previous experiments become very misleading: their details are generally not part of the documented configuration (since they are hard-coded), so earlier results no longer make sense. \paragraph{A configuration clutter} In contrast to such simplistic single-purpose code, frameworks usually offer a variety of optimizers, schedules, search spaces, and more to choose from. By configuring the related hyper-parameters, an optimizer can be trivially and safely exchanged for another. Since doing so is a conscious and intended choice, it is also documented in the configuration. In contrast, the replacement of hard-coded classes was not intended when the code was initially written. The disadvantage of this approach comes with the wealth of configurable hyper-parameters, in different ways: Firstly, the parametrization is often cluttered. While implementing more classes (such as optimizers or schedules) adds flexibility, the list of available hyper-parameters becomes increasingly bloated and opaque. The wealth of parametrization is intimidating and impractical, since it is often nontrivial to understand exactly which hyper-parameters are used and which are ineffective. As an example, the widely used PyTorch Image Models framework~\citep{rw2019timm} (the example was chosen due to the popularity of the framework; it is no worse than others in this respect) implements an intimidating mix of regularization and data augmentation settings that are partially exclusive.\footnote{\url{https://github.com/rwightman/pytorch-image-models/blob/ba65dfe2c6681404f35a9409f802aba2a226b761/train.py}, checked Dec. 1st 2021; see lines 177 and below.} Secondly, to reduce the clutter, parameters can be shared by multiple mutually exclusive choices. In the case of the aforementioned PyTorch Image Models framework, one example would be the selection of gradient-descent optimizers. Sharing common parameters such as the learning rate and the momentum generally works well, but can be confusing since, once again, finding out which parameters affect which modules necessitates reading the code or documentation. Thirdly, even with an intimidating wealth of configuration choices, not every option is covered. To simplify and reduce the clutter, many settings of lesser importance always use a sensible default value.
If changing such a parameter becomes necessary, either the framework configurations become more cluttered, or changing the hard-coded default value again results in misleading configurations of previous experiments. To summarize, the hyper-parametrization design of a framework is a delicate decision, striving to be complete but not cluttered. While both goals appear to be mutually exclusive, they can be successfully united with the underlying configuration approach of UniNAS: argument trees. \paragraph{} Nonetheless, it is great if code is available at all. Many methods are published without any code that enables verifying their training or search results, impairing their reproducibility. Additionally, even if code is overly simplistic or accompanied by cluttered configurations, reading it is often the best way to clarify a method's exact workings and obtain detailed information about omitted hyper-parameter choices. \section{Argument trees} \label{u_argtrees} The core design philosophy of UniNAS is built on so-called \textit{argument trees}. This concept solves the problems of Section~\ref{u_introduction_disadvantages} while also providing immense configuration flexibility. As its basis, we observe that any algorithm or code piece can be represented hierarchically. For example, the task of training a network requires the network itself and a training loop, which may use callbacks and logging functions. Sections~\ref{u_argtrees_modularity} and~\ref{u_argtrees_register} briefly explain two requirements: strict modularity and a global register. As described in Section~\ref{u_argtrees_tree}, this allows each module to define which other types of modules it needs. In the previous example, a training loop may use callbacks and logging functions. Sections~\ref{u_argtrees_config} and~\ref{u_argtrees_build} explain how a configuration file can fully detail these relationships, and how the desired code class structure can be generated. Finally, Section~\ref{u_argtrees_gui} shows how a configuration file can be easily manipulated with a graphical user interface, allowing the user to create and change complex experiments without writing a single line of code. \subsection{Modularity} \label{u_argtrees_modularity} As practiced in most non-simplistic codebases, the core of the argument tree structure is strong modularity. The framework code is fragmented into different components with clearly defined purposes, such as training loops and datasets. Exchanging modules of the same type for one another, for example gradient-descent optimizers, is then a simple matter. If all implemented code classes of the same type inherit from one base class (e.g., AbstractOptimizer) that guarantees specific class methods for stable interaction, they can be treated equally. In object-oriented programming, this design is termed polymorphism. UniNAS extends typical PyTorch~\citep{pytorch} classes with additional functionality. An example is image classification data sets, which ordinarily do not contain information about image sizes. Adding this specification makes it possible to use fake data easily and to precompute the tensor shapes in every layer throughout the neural network.
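As an illustrative sketch of this polymorphic design (not the actual UniNAS class hierarchy, whose names and interfaces differ):
\begin{python}
from abc import ABC, abstractmethod

class AbstractOptimizer(ABC):
    """Base class guaranteeing a stable interface for all optimizers."""

    @abstractmethod
    def step(self):
        """Perform one parameter update."""

class SGDOptimizer(AbstractOptimizer):
    def step(self):
        ...  # plain gradient-descent update

class AdamOptimizer(AbstractOptimizer):
    def step(self):
        ...  # adaptive update

# Code written against AbstractOptimizer works with either subclass.
\end{python}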
\begin{figure*}[ht] \hfill \begin{minipage}[c]{0.97\textwidth} \begin{python}
@Register.task(search=True)
class SingleSearchTask(SingleTask):

    @classmethod
    def args_to_add(cls, index=None) -> [Argument]:
        return [
            Argument('is_test_run', default='False', type=str, is_bool=True),
            Argument('seed', default=0, type=int),
            Argument('save_dir', default='{path_tmp}', type=str),
        ]

    @classmethod
    def meta_args_to_add(cls) -> [MetaArgument]:
        methods = Register.methods.filter_match_all(search=True)
        return [
            MetaArgument('cls_device', Register.devices_managers, num=1),
            MetaArgument('cls_trainer', Register.trainers, num=1),
            MetaArgument('cls_method', methods, num=1),
        ]
\end{python} \end{minipage} \vskip-0.3cm \caption{ UniNAS code excerpt for a SingleSearchTask. The decorator function in Line~1 registers the class with type ``task'' and additional information. The method in Line~5 returns all arguments for the task to be set in a config file. The method in Line~13 defines the local tree structure by stating how many modules of which types are needed. It is also possible to specify additional requirements, as done in Line~14. } \label{u_fig_register} \end{figure*} \subsection{A global register} \label{u_argtrees_register} A second requirement for argument trees is a global register for all modules. Its functions are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item { Allow any module to register itself with additional information about its purpose. The example code in Figure~\ref{u_fig_register} shows this in Line~1. } \item { List all registered classes, including their type (task, model, optimizer, data set, and more) and their additional information (search, regression, and more). } \item { Filter registered classes by types and matching information. } \item { Given only the name of a registered module, return the class code located anywhere in the framework's files. } \end{itemize} As seen in the following sections, this functionality is indispensable to UniNAS' design. The only difficulties in building such a register are that the code should remain readable and that every module has to register itself when the framework is used. Both can be achieved by scanning through all code files whenever a new job starts, which takes less than five seconds. In doing so, Python executes the decorators (see Figure~\ref{u_fig_register}, Line~1), which handle registration in an easily readable fashion.
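A reduced sketch of how such a decorator-based register can be implemented is given below; the actual UniNAS register stores more metadata and supports many more module types.
\begin{python}
class Register:
    """Global register mapping class names to classes plus metadata."""
    _classes = {}

    @classmethod
    def task(cls, **info):
        # Decorator that registers a class with type "task" and extra info.
        def decorator(klass):
            cls._classes[klass.__name__] = (klass, dict(type='task', **info))
            return klass
        return decorator

    @classmethod
    def filter_match_all(cls, **info):
        # Return all registered classes whose metadata matches the query.
        return {name: klass for name, (klass, meta) in cls._classes.items()
                if all(meta.get(k) == v for k, v in info.items())}

@Register.task(search=True)
class SingleSearchTask:
    pass
\end{python}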
\\ \hfill
}
\label{u_argstree_trimmed_img}
\end{minipage}
\hfill
\begin{minipage}[r]{0.5\textwidth}
\begin{small}
\begin{lstlisting}[backgroundcolor = \color{white}]
"cls_task": <@\textcolor{red}{"SingleSearchTask"}@>,
"{cls_task}.save_dir": "{path_tmp}/",
"{cls_task}.seed": 0,
"{cls_task}.is_test_run": true,

"cls_device": <@\textcolor{orange}{"CudaDevicesManager"}@>,
"{cls_device}.num_devices": 1,

"cls_trainer": <@\textcolor{cyan}{"SimpleTrainer"}@>,
"{cls_trainer}.max_epochs": 3,
"{cls_trainer}.ema_decay": 0.5,
"{cls_trainer}.ema_device": "cpu",

"cls_exp_loggers": <@\textcolor{black}{"TensorBoardExpLogger"}@>,
"{cls_exp_loggers#0}.log_graph": false,

"cls_callbacks": <@\textcolor{green}{"CheckpointCallback"}@>,
"{cls_callbacks#0}.top_n": 1,
"{cls_callbacks#0}.key": "train/loss",
"{cls_callbacks#0}.minimize_key": true,
\end{lstlisting}
\end{small}
\vskip-0.2cm
\caption{
Example content of the configuration text-file (JSON format) for the tree in Figure~\ref{u_argstree_trimmed_img}.
The first line in each text block specifies the used class(es), the other lines their detailed settings.
For example, the \textcolor{cyan}{SimpleTrainer} is set to train for three epochs and to track an exponential moving average of the network weights on the CPU.
}
\label{u_argstree_trimmed_text}
\end{minipage}
\end{figure*}

A SingleSearchTask requires exactly one hardware device and exactly one training loop (named trainer, as it trains an over-complete super-network), which in turn may use any number of callbacks and logging mechanisms. Their relationship is visualized in Figure~\ref{u_argstree_trimmed_img}.

Argument trees are extremely flexible since they allow every hierarchical one-to-any relationship imaginable. Multiple optional callbacks can be rearranged in their order and configured in detail. Moreover, module definitions can be reused in other constellations, including their requirements. The ProfilingTask does not need a training loop to measure the runtime of different network topologies on a hardware device, which reduces the size of its argument tree. While not implemented, a MultiSearchTask could use several trainers in parallel on several devices.

The hierarchical requirements are made available using so-called MetaArguments, as seen in Line~16 of Figure~\ref{u_fig_register}. They specify the local structure of argument trees by stating which other modules are required. To do so, writing the required module type and amount is sufficient. As seen in Line~14, filtering the modules is also possible in order to allow only a specific subset. This particular example defines the upper part of the tree visualized in Figure~\ref{u_argstree_trimmed_img}. The names of all MetaArguments start with ``cls\_'', which improves readability and is reflected in the visualized argument tree (Figure~\ref{u_argstree_trimmed_img}, white-colored boxes).

\subsection{Tree-based argument configurations}
\label{u_argtrees_config}

While it is possible to define such a dynamic structure, how can it be represented in a configuration file? Figure~\ref{u_argstree_trimmed_text} presents an excerpt of the configuration that matches the tree in Figure~\ref{u_argstree_trimmed_img}. As stated in Lines~6 and~9 of the configuration, CudaDevicesManager and SimpleTrainer fill the roles for the requested modules of types ``device'' and ``trainer''. Lines~14 and~17 list one class of the types ``logger'' and ``callback'' each, but could provide any number of comma-separated names.
Also including the stated ``task'' type in Line~1, the mentioned lines state strictly which code classes are used and, given the knowledge about their hierarchy, define the tree structure.

Additionally, every class has some arguments (hyper-parameters) that can be modified. SingleSearchTask defines three such arguments (Lines~7 to~9 in Figure~\ref{u_fig_register}) in the visualized example, which are represented in the configuration (Lines~2 to~4 in Figure~\ref{u_argstree_trimmed_text}). If the configuration is missing an argument, perhaps to keep it short, its default value is used instead.

Another noteworthy mechanism in Line~2 is that ``\{cls\_task\}.save\_dir'' references whichever class is currently set as ``cls\_task'' (Line~1), without naming it explicitly. Such wildcard references simplify automated changes to configuration files since, independently of the used task class, overwriting ``\{cls\_task\}.save\_dir'' is always an acceptable way to change the save directory. A less general but perhaps more readable notation is ``SingleSearchTask.save\_dir'', which is also accepted here.

A very interesting property of such dynamic configuration files is that they contain only the hyper-parameters (arguments) of the used code classes. Adding any additional arguments will result in an error since the configuration-parsing mechanism, described in Section~\ref{u_argtrees_build}, is then unable to piece the information together. Even though UniNAS implements several different optimizer classes, any such configuration contains only the hyper-parameters of those actually used. Generated configuration files are therefore always complete (contain all available arguments), sparse (contain only the available arguments), and never ambiguous.

A debatable design decision of the current configuration files, as seen in Figure~\ref{u_argstree_trimmed_text}, is that they do not explicitly encode any hierarchy levels. Since that information is already known from the class implementations, the flat representation was chosen primarily for readability. It is also beneficial when arguments are manipulated, either automatically or from the terminal when starting a task. The disadvantage is that each argument name for a class type can only be used once (``cls\_device'', ``cls\_trainer'', and more); an unambiguous assignment is otherwise not possible. For example, since the SingleSearchTask already owns ``cls\_device'', no other class that could be used in the same argument tree can use that particular name. While this limitation is not too significant, it can be mildly confusing at times.

Finally, how is it possible to create configuration files? Since the dynamic tree-based approach offers a wide variety of possibilities, only a tiny subset of all conceivable configurations is valid. For example, providing two hardware devices violates the defined tree structure of a SingleSearchTask and results in a parsing failure. If that happens, the user is provided with details of which particular arguments are missing or unexpected. While the best way to create correct configurations is surely experience and familiarity with the code base, the same could be said about any framework. Since UniNAS knows about all registered classes, which other (possibly specified) classes they use, and all of their arguments (including defaults, types, help strings, and more), an exhaustive list can be generated automatically. However, resulting in almost 1600 lines of text, this solution is not optimal either.
The most convenient approach is presented in Section~\ref{u_argtrees_gui}: creating and manipulating argument trees with a graphical user interface.

\begin{algorithm}
\caption{
Pseudo-code for building the argument tree, best understood together with Figures~\ref{u_argstree_trimmed_img} and~\ref{u_argstree_trimmed_text}.
For a consistent terminology of code classes and tree nodes: if the $Task$ class uses a $Trainer$, then, in that context, $Trainer$ is the child.
Lines starting with \# are comments.
}
\label{alg_u_argtree}
\small
\begin{algorithmic}
\Require $Configuration$ \Comment{Content of the configuration file}
\Require $Register$ \Comment{All modules in the code are registered}
\State{}
\State{$\#$ recursive parsing function to build a tree}
\Function{parse}{$class,~index$} \Comment{E.g. $(SingleSearchTask,~0)$}
\State $node = ArgumentTreeNode(class,~index)$
\State{}
\State{$\#$ first parse all arguments (hyper-parameters) of this tree node}
\ForEach{($idx, argument\_name$) \textbf{in} $class.get\_arguments()$} \Comment{E.g. (0, $''save\_dir''$)}
\State $value = get\_used\_value(Configuration,~class,~index,~argument\_name)$
\State $node.add\_argument(argument\_name,~value)$
\EndFor
\State{}
\State{$\#$ then recursively parse all child classes, for each module type...}
\ForEach{$child\_class\_type$ \textbf{in} $class.get\_child\_types()$} \Comment{E.g. $cls\_trainer$}
\State $class\_names = get\_used\_classes(Configuration,~child\_class\_type)$
\Assert{The number of $class\_names$ is within the specified limits}
\State{}
\State{$\#$ for each module type, check all configured classes}
\ForEach{($idx,~class\_name$) \textbf{in} $class\_names$} \Comment{E.g. (0, $''SimpleTrainer''$)}
\State $child\_class = Register.get(class\_name)$
\State $child\_node = $\Call{parse}{$child\_class,~idx$}
\State $node.add\_child(child\_class\_type,~idx,~child\_node)$
\EndFor
\EndFor
\Returnx{ $node$}
\EndFunction
\State{}
\State $tree = $\Call{parse}{$Main, 0$} \Comment{Recursively parse the tree, $Main$ is the entry point}
\Ensure every argument in the configuration has been parsed
\end{algorithmic}
\end{algorithm}

\subsection{Building the argument tree and code structure}
\label{u_argtrees_build}

The arguably most important function of a research code base is to run experiments. In order to do so, valid configuration files must be translated into their respective code structure. This comes with three major requirements:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{Classes in the code that implement the desired functionality. As seen in Section~\ref{u_argtrees_tree} and Figure~\ref{u_argstree_trimmed_img}, each class also states the types, argument names, and numbers of additionally requested classes for the local tree structure.}
\item{A configuration that describes which code classes are used and which values their parameters take. This is described in Section~\ref{u_argtrees_config} and visualized in Figure~\ref{u_argstree_trimmed_text}.}
\item{To connect the configuration content to classes in the code, it is required to reference code modules by their names. As described in Section~\ref{u_argtrees_register}, this can be achieved with a global register.}
\end{itemize}
Algorithm~\ref{alg_u_argtree} realizes the first step of this process: parsing the hierarchical code structure and its arguments from the flat configuration file.
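To make the produced data structure concrete, the following is a minimal sketch of such a tree node (a simplified assumption rather than the actual UniNAS implementation, reduced to what Algorithm~\ref{alg_u_argtree} uses):

\begin{python}
class ArgumentTreeNode:
    # one node per used code class, as created by Algorithm 1
    def __init__(self, class_ref, index):
        self.class_ref = class_ref  # registered class, e.g. SingleSearchTask
        self.index = index          # position among same-type siblings
        self.arguments = {}         # hyper-parameters, e.g. {'seed': 0}
        self.children = {}          # e.g. {'cls_trainer': [trainer_node]}

    def add_argument(self, argument_name, value):
        self.arguments[argument_name] = value

    def add_child(self, child_class_type, index, child_node):
        self.children.setdefault(child_class_type, []).insert(index, child_node)

    def instantiate(self):
        # the "final step": recursively create actual class instances
        child_instances = {t: [n.instantiate() for n in nodes]
                           for t, nodes in self.children.items()}
        return self.class_ref(**self.arguments, **child_instances)
\end{python}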
The result is a tree of \textit{ArgumentTreeNodes}, of which each refers to exactly one class in the code, is connected to all related tree nodes, and knows all relevant hyper-parameter values. While the nodes do not yet hold actual class instances, this final step is no longer difficult.

\begin{figure*}[h]
\vskip -0.0in
\begin{center}
\includegraphics[trim=30 180 180 165, clip, width=\linewidth]{images/uninas/gui/gui1desc.png}
\hspace{-0.5cm}
\caption{
The graphical user interface (left) that can manipulate the configurations of argument trees (visualized right). Since many nodes are missing classes of some type (``cls\_device'', ...), their parts in the GUI are highlighted in red. The eight child nodes of DartsSearchMethod are omitted for visual clarity.
}
\label{fig_u_gui}
\end{center}
\end{figure*}

\subsection{Creating and manipulating argument trees with a GUI}
\label{u_argtrees_gui}

Manually writing a configuration file can be perplexing since one must keep track of tree specifications, argument names, available classes, and more. The graphical user interface (GUI) visualized in Figures~\ref{fig_u_gui} and~\ref{app_u_gui} solves these problems to a large extent by providing the following functionality:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{Interactively add and remove nodes in the argument tree, and thus also in the configuration and class structure. Violations of the tree specification are highlighted.}
\item{Set the hyper-parameters of each node, using checkboxes (booleans), dropdown menus (choices from a selection), and text fields (other cases like strings or numbers) where appropriate.}
\item{Save and load argument trees. Since it makes sense to separate the configurations of the training procedure and the network design to swap between different constellations easily, loading partial trees is also supported. Additional functions enable visualizing, resetting, and running the current argument tree.}
\item{Search for arguments, highlighting all matches, since the size of some argument trees can make finding specific arguments tedious.}
\end{itemize}
In order to do so, the GUI manipulates \textit{ArgumentTreeNodes} (Section~\ref{u_argtrees_build}), which can be easily converted into configuration files and code. As long as the required classes (for example, the data set) are already implemented, the GUI enables creating and changing experiments without ever touching any code or configuration files. While not among the original intentions, this property may be especially interesting for non-programmers who want to solve their problems quickly.

Still, the current version of the GUI is a proof of concept. It favors functionality over design and was written with the plain Python Tkinter GUI framework, based on little previous GUI programming experience. Nonetheless, since the GUI (frontend) and the functions manipulating the argument tree (backend) are separated, continued development with different frontend frameworks is entirely possible. Perhaps the most interesting option would be a web service that runs experiments on a server, remotely configurable from any web browser.

\subsection{Using external code}
\label{u_external}

There is a variety of reasons why it makes sense to include external code in a framework. Most importantly, the code either solves a standing problem or provides the users with additional options.
Unlike newly written code, many popular libraries are also thoroughly optimized, reviewed, and empirically validated.

External code is also a perfect match for a framework based on argument trees. As shown in Figure~\ref{u_fig_external_import}, external classes of interest can be thinly wrapped to ensure compatibility, register the module, and specify all hyper-parameters for the argument tree. The integration is so seamless that finding out whether a module is locally written or external requires an inspection of its code. On the other hand, if importing the AdaBelief~\citep{zhuang2020adabelief} code fails, the module will not be registered and therefore not be available in the graphical user interface. UniNAS fails to parse configurations that require unregistered modules but informs the user which external sources can be installed to extend its functionality. Due to this logistical simplicity, several external frameworks extend the core of UniNAS. Some of the most important ones are:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item{pymoo~\citep{pymoo}, a library for multi-objective optimization methods.}
\item{Scikit-learn~\citep{sklearn}, which implements many classical machine learning algorithms such as Support Vector Machines and Random Forests.}
\item{PyTorch Image Models~\citep{rw2019timm}, which provides the code for several optimizers, network models, and data augmentation methods.}
\item{albumentations~\citep{2018arXiv180906839B}, a library for image augmentations.}
\end{itemize}

\begin{figure*}
\hfill
\begin{minipage}[c]{0.95\textwidth}
\begin{python}
from uninas.register import Register
from uninas.training.optimizers.abstract import WrappedOptimizer

try:
    from adabelief_pytorch import AdaBelief

    # if the import was successful,
    # register the wrapped optimizer
    @Register.optimizer()
    class AdaBeliefOptimizer(WrappedOptimizer):
        # wrap the original
        ...

except ImportError as e:
    # if the import failed,
    # inform the user that optional libraries are not installed
    Register.missing_import(e)
\end{python}
\end{minipage}
\vskip-0.3cm
\caption{
Excerpt of UniNAS wrapping the official AdaBelief optimizer code. The complete text has just 45 lines, half of which specify the optimizer parameters for the argument trees.
}
\label{u_fig_external_import}
\end{figure*}

\section{Dynamic network designs}
\label{u_networks}

As seen in the previous Sections, the unique design of UniNAS enables powerful customization of all components. In most cases, a significant portion of an architecture search configuration belongs to the network design. The FairNAS search example in Figure~\ref{app_u_argstree_img} contains 25 configured classes, of which 11 belong to the search network. While it would be easy to create a single configurable class for each network architecture of interest, doing so would ignore the advantages of argument trees. On the other hand, there are many technical difficulties with highly dynamic network topologies. Some of them are detailed below.

\subsection{Decoupling components}

In many published research codebases, network and architecture weights jointly exist in the network class. This design decision is disadvantageous for multiple reasons. Most importantly, changing the network or NAS method requires a lot of manual work, because different NAS methods need different numbers of architecture parameters, use them differently, and optimize them in different ways.
For example:
\begin{itemize}[noitemsep,parsep=0pt,partopsep=0pt]
\item{DARTS~\citep{liu2018darts} requires one weight vector per architecture choice. The weights combine all paths (candidate operations) in a weighted sum and are updated via gradient descent with an additional optimizer (Adam).}
\item{MDENAS~\citep{mdenas} uses a similar vector for a weighted sampling of the single candidate operation that is used in a particular forward pass. Global network performance feedback is used to increase or decrease the local weightings.}
\item{Single-Path One-Shot~\citep{guo2020single} does not use architecture weights at all. Paths are always sampled uniformly at random. The trained network then serves as an accuracy prediction model for a hyper-parameter optimization method.}
\item{FairNAS~\citep{FairNAS} extends Single-Path One-Shot to make sure that all candidate operations are used frequently and equally often. It thus needs to track which paths are currently available.}
\end{itemize}

\begin{figure}[t]
\vskip -0.0in
\begin{center}
\includegraphics[trim=0 0 0 0, clip, width=\linewidth]{images/draw/search_net.pdf}
\hspace{-0.5cm}
\caption{
The network and architecture weights are decoupled.
\textbf{Top}: The structure of a fully sequential super-network. Every layer (cell) uses the same set of candidate operations and weight strategy.
\textbf{Bottom left}: One set of candidate operations that is used multiple times in the network. This particular experiment uses the NAS-Bench-201 candidate operations.
\textbf{Bottom right}: A weight strategy that manages everything related to the used NAS method, such as creating the architecture weights or deciding which candidates are used in each forward pass.
}
\label{fig_u_decouple}
\end{center}
\end{figure}

The same is true for the set of candidate operations, which affects the sizes of the architecture weights. Once the definitions of the search space, the candidate operations, and the NAS method (including the architecture weights) are mixed, changing any part is tedious. Therefore, strictly separating them is the best long-term approach. Similar to other frameworks presented in Section~\ref{u_introduction_available}, architectures defined in UniNAS do not use an explicit set of candidate operations but allow a dynamic configuration. This is supported by a \textit{WeightStrategy} interface, which handles all NAS-related operations such as creating and updating the architecture weights. The interaction between the architecture definition, the candidate operations, and the weight strategy is visualized in Figure~\ref{fig_u_decouple}.

The easy exchange of any component is not the only advantage of this design. Some NAS methods, such as DARTS, update network and architecture weights using different gradient descent optimizers. Correctly disentangling the weights is trivial if they are already organized in decoupled structures, but hard otherwise. Another advantage is that standardizing the functions to create and manage architecture weights makes it easy to present relevant information to the user, such as how many architecture weights exist, their sizes, and which are shared across different network cells. An example is presented in Figure~\ref{app_text}.
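To illustrate the role of this interface, the following is a minimal sketch of a DARTS-like weight strategy (the method names and signatures are simplified assumptions, not the actual UniNAS interface):

\begin{python}
import torch

class DartsLikeWeightStrategy:
    # manages everything related to the architecture weights
    def __init__(self, num_candidates_per_cell):
        self.weights = [torch.zeros(n, requires_grad=True)
                        for n in num_candidates_per_cell]

    def combine(self, cell_index, candidate_outputs):
        # DARTS-like: weigh all candidate paths in a sum;
        # a sampling-based strategy would pick one output instead
        w = torch.softmax(self.weights[cell_index], dim=0)
        return sum(wi * out for wi, out in zip(w, candidate_outputs))

    def architecture_parameters(self):
        # handed to a separate optimizer,
        # disentangled from the regular network weights
        return self.weights
\end{python}

A sampling-based strategy such as Single-Path One-Shot would instead return a single, uniformly drawn candidate output and create no weights at all.

\begin{figure}[hb!]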
\begin{minipage}[c]{0.24\textwidth}
\centering
\includegraphics[height=11.5cm]{./images/draw/mobilenetv2.pdf}
\end{minipage}
\hfill
\begin{minipage}[c]{0.5\textwidth}
\small
\begin{python}
"cell_3": {
  "name": "SingleLayerCell",
  "kwargs": {
    "name": "cell_3",
    "features_mult": 1,
    "features_fixed": -1
  },
  "submodules": {
    "op": {
      "name": "MobileInvConvLayer",
      "kwargs": {
        "kernel_size": 3,
        "kernel_size_in": 1,
        "kernel_size_out": 1,
        "stride": 1,
        "expansion": 6.0,
        "padding": "same",
        "dilation": 1,
        "bn_affine": true,
        "act_fun": "relu6",
        "act_inplace": true,
        "att_dict": null,
        "fused": false
      }
    }
  }
},
\end{python}
\end{minipage}
\caption{
A high-level view on the MobileNet~V2 architecture~\citep{sandler2018mobilenetv2} in the top left, and a schematic of the inverted bottleneck block in the bottom left. This design uses two 1$\times$1 convolutions to change the channel count \textit{n} by an expansion factor of~6, and a spatial 3$\times$3 convolution in their middle. The text on the right-hand side represents the cell structure by referencing the modules by their names ("name") and their keyworded arguments ("kwargs").
}
\label{u_fig_conf}
\end{figure}

\subsection{Saving, loading, and finalizing networks}
\label{u_networks_save}

As mentioned before, argument trees enable a detailed configuration of every aspect of an experiment, including the network topology itself. As visualized in Figure~\ref{app_u_argstree_img}, such network definitions can become almost arbitrarily complex. This becomes disadvantageous once models have to be saved or loaded, or when super-networks are finalized into discrete architectures. Unlike TensorFlow~\citep{tensorflow2015-whitepaper}, the used PyTorch~\citep{pytorch} library saves only the network weights without execution graphs. External projects like ONNX~\citep{onnx} can be used to export limited graph information, but not to rebuild networks using the same code classes and context.

The implemented solution is inspired by the official code\footnote{\url{https://github.com/mit-han-lab/proxylessnas/tree/master/proxyless_nas}} of ProxylessNAS~\citep{proxylessnas}, where every code module defines two functions that enable exporting and importing the entire module state and context. As typical for hierarchical structures, the state of an outer module contains the states of all modules within. An example is visualized in Figure~\ref{u_fig_conf}, where one cell in the famous MobileNet~V2 architecture is represented as readable text. The global register can provide any class definition by name (see Section~\ref{u_argtrees_register}), so that an identical class structure can be created and parameterized accordingly.

The same approach that enables saving and loading arbitrary class compositions can also be used to change their structure. More specifically, an over-complete super-network containing all possible candidate operations can export only a specific configuration subset. The network recreated from this reduced configuration is the result of the architecture search. This is made possible since the weight strategy controls the use of all candidate operations, as visualized in Figure~\ref{fig_u_decouple}. Similarly, when the configuration is exported, the weight strategy controls which candidates should be part of the finalized network architecture.
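A minimal sketch of this export/import mechanism (with simplified names and a stand-in register; not the actual UniNAS functions) may look as follows:

\begin{python}
_REGISTER = {}  # minimal stand-in for the global register

def register(cls):
    _REGISTER[cls.__name__] = cls
    return cls

class ConfigurableModule:
    def __init__(self, **kwargs):
        self.kwargs = kwargs
        self.submodules = {}

    def config_dict(self) -> dict:
        # hierarchical: the state of an outer module
        # contains the states of all modules within
        return {'name': type(self).__name__,
                'kwargs': self.kwargs,
                'submodules': {k: m.config_dict()
                               for k, m in self.submodules.items()}}

    @classmethod
    def from_config_dict(cls, cfg: dict):
        # the global register provides any class definition by name
        module = _REGISTER[cfg['name']](**cfg['kwargs'])
        for key, sub in cfg['submodules'].items():
            module.submodules[key] = ConfigurableModule.from_config_dict(sub)
        return module

@register
class SingleLayerCell(ConfigurableModule):
    pass

@register
class MobileInvConvLayer(ConfigurableModule):
    pass

# export a cell (compare the text representation above), then rebuild it
cell = SingleLayerCell(name='cell_3', features_mult=1)
cell.submodules['op'] = MobileInvConvLayer(kernel_size=3, expansion=6.0)
rebuilt = ConfigurableModule.from_config_dict(cell.config_dict())
\end{python}

In another use case, some modules behave differently in super-networks and finalized architectures.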
For example, Linear Transformers~\citep{ScarletNAS} supplement skip connections with linear 1$\times$1 convolutions in super-networks to stabilize the training with variable network depths. When the network topology is finalized, it suffices to simply export the configuration of a skip connection instead of their own.

Another practical way of rebuilding code structures is available through the argument tree configuration, which defines every detail of an experiment (see Section~\ref{u_argtrees_config}). Parsing the network design and loading the trained weights of a previous experiment requires no further user interaction than specifying its save directory. This specific way of recreating experiment environments is used extensively in \textit{Single-Path One-Shot} tasks. In the first step, a super-network is trained to completion. Afterward, when the super-network is used to make predictions for a hyper-parameter optimization method (such as Bayesian optimization or evolutionary algorithms), the entire environment of its training can be recreated. This includes the network design and the dataset, data augmentations, which parts were reserved for validation, regularization techniques, and more.

\section{Discussion and Conclusions}
\label{u_conclusions}

We presented the underlying concepts of UniNAS, a PyTorch-based framework with the ambitious goal of unifying a variety of NAS algorithms in one codebase. Even though the use cases for this framework changed over time, mostly from DARTS-based to SPOS-based experiments, its underlying design approach made reusing old code possible at every step. However, several technical details could be changed or improved in hindsight. Most importantly, configuration files should reflect the hierarchy levels (see Section~\ref{u_argtrees_config}) for code simplicity and to avoid concerns about using module types multiple times. The current design favors readability, which is now a minor concern thanks to the graphical user interface. Other considered changes would improve the code readability but were not implemented due to a lack of necessity and time.

In summary, the design of UniNAS fulfills all original requirements. Modules can be arranged and combined in almost arbitrary constellations, giving the user an extremely flexible tool to design experiments. Furthermore, using the graphical user interface does not require writing even a single line of code. The resulting configuration files contain only the relevant information and are not cluttered by the many options the framework offers. These features also enable an almost arbitrary network design, combined with any NAS optimization method and any set of candidate operations. Despite that, networks can still be saved, loaded, and changed in various ways. Although not covered here, several unit tests ensure that the essential framework components keep working as intended.

Finally, what is the advantage of using argument trees over writing code with the same results? Compared to configuration files, code is more powerful and versatile but will likely suffer from the problems described in Section~\ref{u_introduction_available}. Argument trees make any considerations about which parameters to expose unnecessary and can enforce the use of specific module types and subsets thereof. However, their strongest advantage is the visualization and manipulation of the entire experiment design with a graphical user interface.
This aligns well with Automated Machine Learning (AutoML), which is also intended to make machine learning available to a broader audience.
\section{Introduction}
\label{sec:introduction}

\IEEEPARstart{D}{ental} cone-beam computerized tomography (CBCT) and intraoral scan (IOS) are used for virtual implant positioning, maxillofacial surgery simulation, and orthodontic treatment planning. Dental CBCT has been widely used for the three-dimensional (3D) imaging of the teeth and jaws \cite{sukovic2003cone,miracle2009conebeam}. Recently, IOS has been increasingly used to capture digital impressions that are replicas of teeth, gingiva, palate, and soft tissue in the oral cavity \cite{mangano2017intraoral,zimmermann2015intraoral}, as digital scanning technologies have rapidly advanced \cite{robles2020digital}. The use of IOS addresses many of the shortcomings of conventional impression manufacturing techniques \cite{siqueira2021intraoral, manicone2021patient}. This paper aims to provide a fully automated method of integrating dental CBCT and IOS data into one image such that the integrated image utilizes the strengths and supplements the weaknesses of each modality.

In dental CBCT, the spatial resolution is insufficient for elaborately depicting tooth geometry and interocclusal relationships. Moreover, image degradation associated with metal-induced artifacts is becoming an increasingly frequent problem, as the number of older people with artificial dental prostheses and metallic implants grows with the aging population. Metallic objects in the CBCT field of view produce streaking artifacts that severely degrade the reconstructed CBCT images, resulting in a loss of information on the teeth and other anatomical structures \cite{schulze2011artefacts}.

IOS can compensate for the aforementioned weaknesses of dental CBCT. IOS provides 3D tooth crown and gingiva surfaces at a high resolution, but tooth roots are not observed in intraoral digital impressions. Therefore, CBCT and IOS are complementary to each other, and a suitable fusion of the two can provide detailed 3D tooth geometry along with the gingival surface.

Numerous attempts have been made to register dental impression data to maxillofacial models obtained from 3D CBCT images. The registration task is to find a rigid transformation, exploiting the facts that the upper and lower jaw bones are rigid and that the tooth surfaces partially overlap between the two data (\textit{e.g.}, at the crowns of the exposed teeth). Several methods \cite{gateno2003new, uechi2006novel, swennen2007use, xia2009new, swennen2009cone} utilized fiducial markers for registration, which require a complicated process that involves the fabrication of devices with the markers, double CT scanning, and post-processing for marker removal. To simplify these processes, virtual reference point-based methods \cite{kim2010integration, lin2013artifact, hernandez2013new, nilsson2016virtual} were proposed that roughly align two models using reference points and achieve a precise fit by employing an iterative closest point (ICP) method \cite{besl1992method}. ICP is a widely used iterative registration method consisting of closest-point matching between two point sets and minimization of the distances between the paired points. However, ICP relies heavily on its initialization because it can easily be trapped in a local optimum. Therefore, these ICP-based methods typically require user-involved initial alignment, which is a cumbersome and time-consuming procedure owing to manual clicking.
Furthermore, ICP-based registration may fail to achieve acceptable results for patients with metallic objects \cite{flugge2017registration}. Teeth in CBCT images that are contaminated by metal artifacts prevent accurate point matching with the teeth in impressions. Therefore, there is a high demand for a fully automated and robust registration method. Recently, a deep learning-based method \cite{chung2020automatic} was used to automate the initial alignment by extracting pose cues from the two data. This approach has limitations in achieving registration accuracy sufficient for clinical application. Without a very good initial guess, the point matching for multimodal image registration is hampered by the non-overlapping areas of the two different modalities (\textit{e.g.}, the soft tissues in IOS, and the jaw bones and the tooth surfaces contaminated by metal artifacts in CBCT). For accurate registration, it is necessary to exclude the non-overlapping areas as much as possible to prevent incorrect point matching. Therefore, individual tooth segmentation and identification in CBCT and IOS are required as important preprocessing tasks. In recent years, owing to advances in deep learning methods, numerous fully automated 3D tooth segmentation methods have been developed for CBCT images \cite{lee2020automated,rao2020symmetric,chen2020automatic,cui2019toothNet,jang2021fully} and impression models \cite{lian2020deep,zanjani2021mask,cui2021tsegnet}.

Although the performance of intraoral scanners is improving, full-arch scans have not yet surpassed the accuracy of conventional impressions \cite{zhang2021accuracy,giachetti2020accuracy}. Short-range IOS can produce partial digital impressions that replace traditional dental models, but it may not yet be suitable for clinical use on long complete arches because of the global cumulative error introduced during the local image stitching process \cite{ender2019accuracy}. To achieve a sophisticated image fusion, it is therefore necessary to correct the stitching errors of IOS.

We propose a fully automated method for the registration of CBCT and IOS data as well as for the correction of IOS stitching errors. The proposed method consists of four parts: (i) an individual tooth segmentation and identification module for IOS data (TSIM-IOS); (ii) an individual tooth segmentation and identification module for CBCT data (TSIM-CBCT); (iii) global-to-local tooth registration between IOS and CBCT; and (iv) stitching error correction of the full-arch IOS. We developed TSIM-IOS using 2D tooth feature-highlighted images, which are generated by orthographic projection of the IOS data. This approach allows individual teeth to be segmented efficiently from the high-dimensional 3D surface models by operating on low-dimensional 2D images. In TSIM-CBCT, we utilize the panoramic image-based deep learning method \cite{jang2021fully}. This method is robust against metal artifacts because it utilizes panoramic images (generated from the CBCT volume) that are not significantly affected by such artifacts. TSIM-IOS and -CBCT are used to focus only on the teeth while removing as many non-overlapping areas as possible. In (iii), we then align the two highly overlapping data (\textit{i.e.}, the segmented teeth in the CBCT and IOS data) in a global-to-local fashion, which consists of a global initialization by fast point feature histograms (FPFH) \cite{rusu2009fast} and a local refinement by an ICP based on individual teeth (T-ICP).
T-ICP allows closest-point matching only between corresponding individual teeth in the CBCT and IOS data. The last part (iv) corrects the stitching errors of IOS using the CBCT-derived tooth surfaces. Owing to the reliability of CBCT \cite{baumgaertel2009reliability}, the locations of the 3D teeth in the CBCT data can be used as references for correcting the IOS teeth. After registration, each IOS tooth is adjusted through a small rigid transformation determined by its reference CBCT tooth.

The main contributions of this paper are summarized as follows.
\begin{itemize}
\item To the best of our knowledge, this study is the first to provide a sophisticated fusion of IOS and CBCT data at the level of accuracy required for clinical use.
\item The proposed method can provide accurate intraoral digital impressions in which cumulative stitching errors are corrected.
\item This framework is robust against metal-induced artifacts in low-dose dental CBCT.
\item The combined tooth-gingiva models with individually segmented teeth can be used for occlusal analysis and implant surgical guide production in digital dentistry.
\end{itemize}

The remainder of this paper is organized as follows. Section 2 describes the proposed method in detail. Section 3 explains the experimental results. Section 4 presents the discussion and conclusions.

\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{framework.pdf}
\caption{Overall flow diagram of the proposed method, consisting of four parts: tooth segmentation and identification from IOS and CBCT data, global-to-local tooth registration of IOS and CBCT, and stitching error correction in IOS. The proposed method integrates IOS and CBCT images into one coordinate system while improving the accuracy of the full-arch IOS.}
\label{fig:framework}
\end{figure*}

\begin{figure}
\centering
\subfloat[]{\includegraphics[width=.2\textwidth]{segid_ios.pdf}}~~~
\subfloat[]{\includegraphics[width=.225\textwidth]{segid_cbct.pdf}}
\caption{Results of TSIM-IOS and -CBCT, respectively. The indicated numbers represent mandibular teeth in the universal notation system. (a) Individual IOS teeth and their split gingiva parts, and (b) CBCT teeth containing unexposed wisdom teeth.}
\label{fig:segid}
\end{figure}

\section{Method}

The overall framework of the proposed method is illustrated in Fig.~\ref{fig:framework}. It is designed to automatically align a patient's IOS model with the same patient's CBCT image. IOS models consist of 3D surfaces (triangular meshes) of the upper and lower teeth, acquired in Standard Triangle Language (STL) file format containing the 3D coordinates of the triangle vertices. The vertices of the IOS data can thus be expressed as a set of 3D points, with coordinates given in millimeters. Dental CBCT images are isotropic voxel structures consisting of sequences of 2D cross-sectional images, saved in Digital Imaging and Communications in Medicine (DICOM) format. The registration between the two imaging protocols must be obtained separately for the maxilla and the mandible. For convenience, only the method for the mandible is described in this section; the method for the maxilla is the same.

\subsection{Individual Tooth Segmentation and Identification in IOS}
\label{subsec:IOS}

As shown in Fig.
\ref{fig:segid}a, TSIM-IOS decomposes the 3D point set $X$ of the IOS model into
\begin{equation}
X = \underbrace{X_{t_1} \cup \cdots \cup X_{t_J}}_{X_{\mbox{\scriptsize teeth}}}\cup X_{\mbox{\scriptsize gingiva}},
\end{equation}
where each $X_{t_j}$ represents the tooth with code $t_j$ in $X$, $J$ is the number of teeth in $X$, and $X_{\mbox{\scriptsize gingiva}}$ is the remainder of $X$, including the gingiva. According to the universal notation system \cite{nelson2014wheeler}, $t_j$ is a number between 1 and 32 that is assigned to uniquely identify an individual tooth. A detailed explanation is provided in Appendix \ref{app:sec1}. Additionally, we divide the gingiva $X_{\mbox{\scriptsize gingiva}}$ into
\begin{equation} \label{eq:gingiva}
X_{\mbox{\scriptsize gingiva}} = X_{g_1} \cup \cdots \cup X_{g_J},
\end{equation}
where
\begin{equation}
X_{g_j} = \left\{\mathbf{x} \in X_{\mbox{\scriptsize gingiva}} : \underset{\mathbf{x}' \in X_{\mbox{\tiny teeth}}}{\mbox{argmin}}\| \mathbf{x}-\mathbf{x}'\| \in X_{t_j}\right\}.
\end{equation}
Hence, a point in $X_{\mbox{\scriptsize gingiva}}$ belongs to the gingiva segment $X_{g_j}$ associated with its nearest tooth $X_{t_j}$.

\subsection{Individual Tooth Segmentation and Identification in CBCT}
\label{subsec:CBCT}

TSIM-CBCT is based on the deep learning-based individual tooth segmentation and identification method developed by Jang \textit{et al.} \cite{jang2021fully}. As shown in Fig.~\ref{fig:segid}b, we obtain the teeth point cloud $Y$, which consists of individual tooth point clouds, denoted by
\begin{equation}
Y = \underbrace{Y_{t_1} \cup \cdots \cup Y_{t_J}}_{Y_{\mbox{\scriptsize teeth}}} \cup Y_{\mbox{\scriptsize rest}},
\end{equation}
where each $Y_{t_j}$ represents the $t_j$-tooth for $j=1,\cdots,J$ and $Y_{\mbox{\scriptsize rest}}$ refers to a point cloud of unexposed teeth (\textit{e.g.}, impacted wisdom teeth), if present. Because impacted teeth do not appear in IOS images, they are separated out into $Y_{\mbox{\scriptsize rest}}$.

Each tooth point cloud $Y_{t_j}$ is obtained from a 3D binary image of the $t_j$-tooth determined by the individual tooth segmentation and identification method \cite{jang2021fully}. The points in $Y_{t_j}$ lie on isosurfaces (approximating the boundary of the segmented tooth image) that are generated by the marching cubes algorithm \cite{lewiner2003efficient}. Because the point coordinates of $Y_{t_j}$ are given in voxel units, they are converted to millimeters using the pixel spacing and slice thickness.

\subsection{Global-to-Local Tooth Registration of IOS and CBCT}

This subsection describes the registration method used to find the optimal transformation $\mathcal{T}^*$ such that the transformed point cloud $\mathcal{T}^*(X) = \{\mathcal{T}^*(\mathbf{x}) : \mathbf{x} \in X \}$ best aligns with the target $Y$ in terms of the partially overlapping tooth surfaces. The registration problem consists of the following two steps:
\begin{enumerate}
\item Construct a set of correspondences $Corr= \{(\mathbf{x},\mathbf{y})\in X \times Y\}$ between the source $X$ and the target $Y$.
\item Find the optimal rigid transformation through the following mean square error minimization that best matches the pairs in the correspondences:
\begin{equation}
\mathcal{T}^* = \underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2,
\end{equation}
where $SE(3)$ is the set of rigid transformations, each modeled as a $4\times4$ matrix determined by three angles and a translation vector.
\end{enumerate}
Here, we adopt two registration methods: FPFH \cite{rusu2009fast} for the global initial alignment, and an improved ICP using the individual tooth segmentations for the local refinement.

\subsubsection{Global initial alignment of the IOS and CBCT teeth}

We compute the two sets of FPFH vectors \cite{rusu2009fast}: $\mbox{FPFH}(X_{\mbox{\scriptsize teeth}})=\{\mbox{FPFH}(\mathbf{x}):\mathbf{x} \in X_{\mbox{\scriptsize teeth}}\}$ and $\mbox{FPFH}(Y_{\mbox{\scriptsize teeth}})=\{\mbox{FPFH}(\mathbf{y}):\mathbf{y} \in Y_{\mbox{\scriptsize teeth}}\}$. $\mbox{FPFH}(\mathbf{x})$ represents not only the geometric features of the normal vector and the curvature at $\mathbf{x} \in X_{\mbox{\scriptsize teeth}}$, but also relevant information about its neighboring points over $X_{\mbox{\scriptsize teeth}}$. The details of FPFH are provided in Appendix \ref{app:sec2}.

$\mbox{FPFH}(X_{\mbox{\scriptsize teeth}})$ and $\mbox{FPFH}(Y_{\mbox{\scriptsize teeth}})$ are used to find correspondences between $X_{\mbox{\scriptsize teeth}}$ and $Y_{\mbox{\scriptsize teeth}}$. For each $\mathbf{x} \in X_{\mbox{\scriptsize teeth}}$, we select the point $\mathbf{y} \in Y_{\mbox{\scriptsize teeth}}$, denoted by $\mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})$, whose FPFH vector is most similar to $\mbox{FPFH}(\mathbf{x})$:
\begin{equation}
\mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})=\underset{\mathbf{y} \in Y_{\mbox{\tiny teeth}}}{\text{argmin}}~{\|\mbox{FPFH}(\mathbf{x})-\mbox{FPFH}(\mathbf{y})\|}.
\end{equation}
Similarly, we compute $\mbox{match}^{\mbox{\tiny FPFH}}_{X_{\mbox{\tiny teeth}}}(\mathbf{y})$ for all $\mathbf{y} \in Y_{\mbox{\scriptsize teeth}}$. Then, we obtain the correspondence set
\begin{equation}
Corr = Corr_{X_{\mbox{\tiny teeth}}} \cap Corr_{Y_{\mbox{\tiny teeth}}},
\end{equation}
where
\begin{align}
&Corr_{X_{\mbox{\tiny teeth}}} = \left\{\left(\mathbf{x},\mbox{match}^{\mbox{\tiny FPFH}}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})\right):\mathbf{x} \in X_{\mbox{\scriptsize teeth}} \right\},\\
&Corr_{Y_{\mbox{\tiny teeth}}} = \left\{\left(\mbox{match}^{\mbox{\tiny FPFH}}_{X_{\mbox{\tiny teeth}}}(\mathbf{y}),\mathbf{y} \right):\mathbf{y} \in Y_{\mbox{\scriptsize teeth}} \right\}.
\end{align}
The set $Corr$ contains the pairs $(\mathbf{x},\mathbf{y}) \in X_{\mbox{\scriptsize teeth}} \times Y_{\mbox{\scriptsize teeth}}$ whose FPFH vectors are mutually most similar. However, such simple feature information alone cannot provide a proper point matching between $X_{\mbox{\scriptsize teeth}}$ and $Y_{\mbox{\scriptsize teeth}}$, because there are too many points with similar geometric features in the point clouds.
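The mutual matching above and the least-squares minimization of the registration step can be illustrated with a short sketch (for illustration only; it assumes the FPFH vectors of both point clouds are precomputed and stacked row-wise into NumPy arrays, and it is not the implementation used in our experiments):

\begin{python}
import numpy as np
from scipy.spatial import cKDTree

def mutual_fpfh_matches(feat_x, feat_y):
    # keep only pairs whose FPFH vectors are mutually most similar
    to_y = cKDTree(feat_y).query(feat_x, k=1)[1]  # best y for each x
    to_x = cKDTree(feat_x).query(feat_y, k=1)[1]  # best x for each y
    idx_x = np.arange(len(feat_x))
    mutual = to_x[to_y] == idx_x                  # mutual nearest neighbors
    return idx_x[mutual], to_y[mutual]

def fit_rigid(src, dst):
    # closed-form least-squares rigid transform (R, t)
    # minimizing sum ||dst_i - (R @ src_i + t)||^2
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T  # reflection-safe rotation
    return R, c_dst - R @ c_src
\end{python}

As described next, these raw matches are further filtered before the initial transformation $\mathcal{T}^{(0)}$ is estimated in this closed form.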
To filter out inaccurate pairs from the set ${Corr}$, we randomly sample three pairs $(\mathbf{x}_1,\mathbf{y}_1)$, $(\mathbf{x}_2,\mathbf{y}_2)$, $(\mathbf{x}_3,\mathbf{y}_3)\in {Corr}$ and keep them if the following conditions \cite{zhou2016fast} are met, dropping them otherwise:
\begin{equation} \label{eq:filter}
\tau < \frac{\|\mathbf{x}_i - \mathbf{x}_j\|}{\|\mathbf{y}_i - \mathbf{y}_j\|} < \frac{1}{\tau},~~\text{for}~1\leq i<j \leq 3,
\end{equation}
where $\tau$ is a number close to 1. We denote this filtered subset as ${Corr}^{(0)}$. Then, the initial transformation is determined by
\begin{equation}
\mathcal{T}^{(0)}=\underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(0)}} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2.
\end{equation}

\subsubsection{Local refinement of the roughly aligned teeth}

We denote $X_{\mbox{\scriptsize teeth}}$ transformed by the previously obtained $\mathcal{T}^{(0)}$ as $X_{\mbox{\scriptsize teeth}}^{(0)} = X_{t_1}^{(0)} \cup \cdots \cup X_{t_J}^{(0)}$, where $X_{t_j}^{(0)} = \mathcal{T}^{(0)}(X_{t_j})$ for $j=1,\cdots,J$. $X_{\mbox{\scriptsize teeth}}^{(0)}$ and $Y_{\mbox{\scriptsize teeth}}$ are then roughly aligned, but fine-tuning is needed to achieve an accurate registration. A fine rigid transformation is obtained through an iterative process that gradually improves the correspondence finding. We propose an improved ICP method (T-ICP) with point matching based on the individual teeth.

For $k \geq 1$, we denote $X_{\mbox{\scriptsize teeth}}^{(k)} = \mathcal{T}^{(k)}(X_{\mbox{\scriptsize teeth}}^{(k-1)})$. Here, the $k$-th rigid transformation $\mathcal{T}^{(k)}$ is determined by
\begin{equation}
\mathcal{T}^{(k)} = \underset{\mathcal{T} \in SE(3)}{\mbox{argmin}} \sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(k)}} \|\mathbf{y}-\mathcal{T}(\mathbf{x})\|^2.
\end{equation}
The correspondence set $Corr^{(k)}$ at iteration $k$ is given by
\begin{equation}
Corr^{(k)} = \left\{ \left(\mathbf{x}, \mbox{match}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x}) \right) : \mathbf{x} \in X_{\mbox{\scriptsize teeth}}^{(k-1)} \right\} \cap P^{(k)},
\end{equation}
where
\begin{align}
& \mbox{match}_{Y_{\mbox{\tiny teeth}}}(\mathbf{x})=\underset{\mathbf{y} \in Y_{\mbox{\tiny teeth}}}{\mbox{argmin}} \|\mathbf{x}-\mathbf{y}\|, \\
& P^{(k)} = \bigcup_{j=1}^J \left\{(\mathbf{x},\mathbf{y}) \in X_{t_j}^{(k-1)} \times Y_{t_j} \right\}.
\end{align}
Using the set $P^{(k)}$ prevents undesired correspondences between two teeth with different codes; note that this reduces to the vanilla ICP when $P^{(k)}$ is not used. The final rigid transformation $\mathcal{T}^*$ is obtained by the composition $\mathcal{T}^*=\mathcal{T}^{(K)} \circ \cdots \circ \mathcal{T}^{(0)}$, where $K$ is the number of iterations until the stopping criterion is satisfied for a given $\varepsilon>0$:
\begin{equation}
\sum_{(\mathbf{x},\mathbf{y}) \in Corr^{(K)}} \| \mathcal{T}^{(K)} \circ \cdots \circ \mathcal{T}^{(0)}(\mathbf{x}) - \mathbf{y} \|<\varepsilon.
\end{equation}

\subsection{Stitching Error Correction in IOS}

Next, we edit the IOS models with stitching errors by referring to the CBCT images. We denote $X_{t_j}^*=\mathcal{T}^*(X_{t_j})$ and $X_{g_j}^*=\mathcal{T}^*(X_{g_j})$ for $j=1,\cdots,J$. Each tooth $X^*_{t_j}$ is transformed by a corrective rigid transformation $\mathcal{T}_j^{**}$, which is obtained by applying the vanilla ICP to the sets $X^*_{t_j-1} \cup X^*_{t_j} \cup X^*_{t_j+1}$ and $Y_{t_j-1} \cup Y_{t_j} \cup Y_{t_j+1}$ as the source and target, respectively.
Here, $X^*_{t_j-1}$ (or $X^*_{t_j+1}$) is an empty set if $t_j-1$ (or $t_j+1$) is not equal to $t_{j'}$ for every $j'=1,\cdots,J$. Using the individual corrective transformations, the IOS stitching errors are corrected separately by $X_{t_j}^{**}=\mathcal{T}_{j}^{**}(X_{t_j}^{*})$ for $j=1,\cdots,J$. In this procedure, we use one tooth together with its adjacent teeth on both sides for a reliable correction, taking advantage of the fact that short-range digital scanning is accurate.

It remains to fix the gingiva area, whose boundary is shared with the teeth. To fit the boundaries between the gingiva and the individually transformed teeth, the gingival surface is divided according to the areas in contact with the individual teeth by Eq.~\eqref{eq:gingiva}. The rectified gingiva is therefore obtained by $X_{g_j}^{**} = \mathcal{T}_{j}^{**}(X_{g_j}^{*})$ for $j=1,\cdots,J$.

\section{Experiments and Results}

Experiments were carried out using CBCT images in DICOM format and IOS models in STL format. Each CBCT image was produced by a dental CBCT machine, DENTRI-X (HDXWILL), which uses a tube voltage of 90~kVp and a tube current of 10~mA. The size of the images obtained by this machine is $800\times800\times400$, and the pixel spacing and slice thickness are both $0.2$~mm. Each IOS model was scanned by one of two intraoral scanners: i500 (Medit) and TRIOS 3 (3shape). Each IOS model covers either the maxilla or the mandible, with approximately 200,000 vertices and 120,000 triangular faces. This dataset was provided by HDXWILL. Additionally, we used maxillary and mandibular digital dental models to train TSIM-IOS; these data were collected by the Yonsei University College of Dentistry. Personal information in all datasets was de-identified for patient privacy and confidentiality.

The deep convolutional network models of Sections \ref{subsec:IOS} and \ref{subsec:CBCT} were trained on labeled datasets for individual tooth segmentation and identification. For TSIM-IOS, 71 maxillary and mandibular dental models were used for training and 35 models for testing. Similarly, for TSIM-CBCT, 49 3D CBCT images were used for training and 23 images for testing.

\begin{figure*}[h]
\centering
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap1.pdf}}
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap2.pdf}}
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap3.pdf}}
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap4.pdf}}
\subfloat[]{\includegraphics[width=.1975\textwidth]{distmap5.pdf}}
\caption{Qualitative comparison results of the registration methods. (a) MR, (b) CPD, (c) FPFH, (d) FPFH followed by ICP, and (e) the proposed method. The colors on the teeth represent the distances between the IOS and CBCT tooth surfaces.}
\label{fig:quantitative_reg_result}
\end{figure*}

\begin{figure}[h]
\centering
\includegraphics[width=.4\textwidth]{comparison_fpfh.pdf}
\caption{Correspondence pairs of the FPFH-based methods. The left figure shows the poor matching produced by the FPFH method without TSIM. In contrast, the right figure shows reasonable correspondences between the teeth obtained with TSIM.}
\label{fig:comparison_fpfh}
\end{figure}

\subsection{Evaluation and Result of the Proposed Registration Method}

We used 22 pairs of IOS models and CBCT images to evaluate the performance of the proposed registration method. Each pair was obtained from the same patient.
To measure the registration accuracy, we used a landmark distance between tooth landmarks pre-marked on the IOS and CBCT data:
\begin{equation}\label{eq:land}
E_{land}(\hat{X},\hat{Y};\mathcal{T}) = \frac{1}{N}\sum_{i=1}^N \| \mathcal{T}(\hat{\mathbf{x}}_i)-\hat{\mathbf{y}}_i\|,
\end{equation}
where $\mathcal{T}$ is a rigid transformation, and $\hat{X}=\{\hat{\mathbf{x}}_1,\cdots,\hat{\mathbf{x}}_N\}$ and $\hat{Y}=\{\hat{\mathbf{y}}_1,\cdots,\hat{\mathbf{y}}_N\}$ are the landmark sets of the pair of IOS and CBCT data, respectively. The landmarks were selected at points with discernible features, such as cusps. In addition, we computed a surface distance from the IOS tooth surfaces to the CBCT tooth surfaces:
\begin{equation}\label{eq:surf}
E_{surf}(\bar{X},\bar{Y};\mathcal{T}) = \sup_{\bar{\mathbf{x}} \in \bar{X}} \inf_{\bar{\mathbf{y}} \in \bar{Y}} \| \mathcal{T}(\bar{\mathbf{x}})-\bar{\mathbf{y}} \|,
\end{equation}
where $\bar{X}$ and $\bar{Y}$ are the ground-truth tooth segmentations of the IOS and CBCT data, respectively. The metric $E_{surf}$ evaluates how far the IOS crown surface $\bar{X}$ is from the CBCT-derived tooth surface $\bar{Y}$.

To verify the effectiveness of the proposed method, we compared it with manual clicking registration followed by ICP (MR), coherent point drift (CPD) \cite{myronenko2010point}, FPFH, and FPFH followed by ICP. These methods were implemented using the raw IOS model and a skull model, which was obtained by applying threshold-based segmentation and the marching cubes algorithm to the CBCT images. Table \ref{tbl:eval_reg} provides the quantitative evaluations of the methods, and Fig.~\ref{fig:quantitative_reg_result} displays the qualitative results by visualizing the distance maps between the ground-truth tooth surfaces of CBCT and IOS, aligned by the rigid transformations obtained from the employed methods. In addition, we performed an ablation study to demonstrate the advantage of TSIM-IOS and -CBCT, as reported in Table \ref{tbl:eval_reg}.

\begin{table}[]
\footnotesize
\centering
\caption{Quantitative Comparison Results of Registration Methods.
\label{tbl:eval_reg}}\vskip 0.0in
\begin{tabular}{cccc}
\hline
 & {\bf Method} & {\bf Landmark (mm)} & {\bf Surface (mm)} \\
\cline{1-4}
\multirow{4}{*}{\parbox{.95cm}{w/o TSIM}}
 & {MR} & {$1.47 \pm 2.40$} & {$3.11 \pm 3.68$}\\
 & {CPD} & {$12.77 \pm 6.12$} & {$17.57 \pm 6.26$}\\
 & {FPFH} & {$0.46 \pm 0.34$} & {$0.91 \pm 0.54$}\\
 & {FPFH + ICP} & {$0.28 \pm 0.11$} & {$0.55 \pm 0.10$}\\
\cline{1-4}
\multirow{5}{*}{\parbox{.95cm}{w/ TSIM}}
 & {MR} & {$0.67 \pm 1.66$} & {$1.70 \pm 3.41$}\\
 & {CPD} & {$3.68 \pm 2.62$} & {$5.01 \pm 3.55$}\\
 & {FPFH} & {$0.40 \pm 0.19$} & {$0.71 \pm 0.16$}\\
 & {FPFH + ICP} & {$0.22 \pm 0.10$} & {$0.48 \pm 0.09$}\\
 & {\bf Proposed method} & {$\bf 0.22 \pm 0.09$} & {$\bf 0.47 \pm 0.08$}\\
\hline
\end{tabular}
\end{table}

\begin{table*}
\footnotesize
\centering
\caption{Results of Stitching Error Correction according to Registration Methods.
\label{tbl:eval_cor}}\vskip 0.0in
\begin{tabular}{cccccc}
\hline
 & {\bf Method} & {\bf Landmark (mm)} & {\bf Difference (mm)} & {\bf Surface (mm)} & {\bf Difference (mm)}\\
\cline{1-6}
\multirow{5}{*}{\parbox{.8cm}{w/ TSIM}}
 & {MR} & {$0.53 \pm 1.69$} & {$-0.14 \pm 0.03$} & {$1.49 \pm 3.50$} & {$-0.21 \pm 0.09$}\\
 & {CPD} & {$3.62 \pm 2.71$} & {$-0.06 \pm 0.09$} & {$4.94 \pm 3.69$} & {$-0.07 \pm 0.14$}\\
 & {FPFH} & {$0.14 \pm 0.09$} & {$-0.26 \pm -0.10$} & {$0.40 \pm 0.15$} & {$-0.31 \pm -0.01$}\\
 & {FPFH + ICP} & {$0.12 \pm 0.07$} & {$-0.10 \pm -0.03$} & {$0.32 \pm 0.12$} & {$-0.16 \pm 0.03$}\\
 & {\bf Proposed method} & {$\bf 0.11\pm 0.07$} & {$\bf -0.10 \pm -0.02$} & {$\bf 0.30 \pm 0.11$} & {$\bf -0.17 \pm 0.03$}\\
\hline
\end{tabular}
\end{table*}

\begin{figure*}
\centering
\subfloat{\includegraphics[width=.21\textwidth]{result1.pdf}}~~~
\subfloat{\includegraphics[width=.21\textwidth]{result3.pdf}}~~~
\subfloat{\includegraphics[width=.21\textwidth]{result4.pdf}}~~~
\subfloat{\includegraphics[width=.21\textwidth]{result2.pdf}}
\caption{Qualitative results before and after correction for four selected evaluation data. The yellow and red lines represent the contours of the IOS models after the proposed registration and correction methods, respectively. The contours are obtained by cutting the IOS models along the corresponding CT slices. The two contours almost overlap, but differences appear at the ends of the arches.}
\label{fig:correction_results}
\end{figure*}

When the source and target point clouds only partially overlap, MR and CPD were less accurate than FPFH, suggesting that the feature-based method is more suitable than user-interaction-based and probabilistic methods. Above all, these methods suffer from extraneous points, because the non-overlapping areas between the IOS and skull models (\textit{i.e.}, the alveolar bone in CBCT and the soft tissue in IOS) occupy most of the surfaces. In such conditions, FPFH may produce inaccurate correspondence pairs due to non-overlapping points that are not properly filtered out by Eq.~\eqref{eq:filter}, as shown in Fig.~\ref{fig:comparison_fpfh}. Therefore, the use of TSIM-IOS and -CBCT is beneficial because it eliminates the areas that adversely affect registration accuracy. In the ablation study, the methods with TSIM showed improved performance compared to those without TSIM. Still, MR and CPD have limitations: MR cannot be fully automated, and CPD struggles to improve its accuracy because of the roots of the CBCT teeth.

To precisely match the models roughly aligned by FPFH, we developed T-ICP, an improved ICP method that uses the individual tooth segmentations. Adopting T-ICP instead of ICP led to increased accuracy. The advantage of T-ICP is that it avoids point correspondences between adjacent teeth with different codes; this constraint restricts point matching to pairs of identically coded CBCT and IOS teeth.

\subsection{Correction of the IOS Stitching Errors}

This subsection presents the results before and after the correction of the distortions in IOS that occur during the stitching of locally scanned images. Table \ref{tbl:eval_cor} reports the correction results for the registration methods with TSIM used in the previous subsection. All post-correction accuracies improved compared with the pre-correction accuracies. However, these correction results depend on the performance of the registration methods.
Each tooth of the IOS aligned with the CBCT in the previous registration step is used as an initial guess to determine a corrective transformation. The locations of the IOS teeth should be as close as possible to the CBCT teeth, as the ICP may otherwise become stuck in local minima. Fig. \ref{fig:correction_results} presents the results of the proposed registration and correction methods. Due to accumulated stitching errors, the scanned arches tend to be narrower or wider than the actual arches. Thus, the registration results show that the full-arch IOS models slightly deviated at the ends of the arches. In contrast, the corrected IOS models fit the edges of the teeth in the CBCT images. \section{Discussion and Conclusion} In this paper, we developed a fully automatic registration and correction technique that integrates two different imaging modalities (\textit{i.e.}, IOS and CBCT images) in one scene. The proposed method is intended not only to complement CBCT-derived tooth surfaces with the high-resolution surfaces of IOS, but also to correct cumulative IOS stitching errors across the entire dental arch by referring to CBCT. The most important contribution of the proposed method is its registration accuracy at the level of clinical application, even with severe metal artifacts in CBCT. The accuracy is achieved by the use of TSIM-IOS and -CBCT, which allow the minimization of the non-congruent points in the CBCT and IOS data. The tooth-focused approach addresses the drawbacks of existing methods by achieving improved accuracy and full automation. Moreover, this approach helps to correct full-arch digital impressions with distortion caused by stitching errors. The fusion of the CBCT images and IOS models provides high-resolution crown surfaces even in the presence of severe metal-related artifacts in the CBCT images. Metal artifact reduction (MAR) in dental CBCT is known to be one of the most difficult and important issues. By avoiding the challenging problem of MAR with the help of IOS, the merged image may be used for occlusal analysis. The proposed multimodal data integration system can provide a jaw-tooth-gingiva composite model, which is a basic tool in the digital dentistry workflow. Thus, it may be used to produce a surgical wafer for orthognathic surgical planning and an orthodontic mini-screw guide to reduce failure by minimizing root contact. Furthermore, because the jaw-tooth-gingiva model is componentized into jaw bones, individual teeth, and soft tissues (gingiva and palate), it is useful in terms of versatility and practicality in various dental treatment tasks (\textit{e.g.}, dental implant placement, orthodontic simulation and evaluation). The proposed method can eliminate the hassle of traditional dental prosthetic treatments, which are labor-intensive, costly, require at least two individual visits, and require a temporary prosthesis to be worn until the final crown is in place. Moreover, if the final crown made in the dental laboratory does not fit properly at the second visit, the patient and dentist will have to repeat the previous operation, and the laboratory may have to redesign the restoration prosthesis. Note that the proposed integration of dental CBCT and IOS data can provide an alternative to traditional impressions, thereby reducing the time-consuming laboratory procedure of manually editing individual teeth using a computer-aided interface.
\section*{Acknowledgment} This research was supported by a grant of the Korea Health Technology R\&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health \& Welfare, Republic of Korea (grant number: HI20C0127). We would like to express our deepest gratitude to HDXWILL, which shared the dataset.
\section{Introduction} In noble liquid detectors for dark matter searches \cite{Chepel13} and low-energy neutrino experiments \cite{Majumdar21}, the scattered particle produces two types of signals: that of primary scintillation, produced in the liquid and recorded promptly ("S1"), and that of primary ionization, produced in the liquid and recorded with a delay ("S2"). In two-phase (liquid-gas) detectors \cite{Akimov21}, the S2 signal is recorded using proportional electroluminescence (EL) produced by drifting electrons in the gas phase under sufficiently high electric fields. According to modern concepts~\cite{Buzulutskov20}, there are three mechanisms responsible for proportional EL in noble gases: that of excimer (e.g. Ar$^*_2$) emission in the vacuum ultraviolet (VUV) \cite{Oliveira11}, that of emission due to atomic transitions in the near infrared (NIR) \cite{Oliveira13,Buzulutskov17}, and that of neutral bremsstrahlung (NBrS) emission in the UV, visible and NIR range \cite{Buzulutskov18}. These three mechanisms are referred to as excimer (ordinary) EL, atomic EL and NBrS EL, respectively. NBrS EL is due to bremsstrahlung of drifting electrons scattered on neutral atoms: \begin{eqnarray} \label{Rea-NBrS-el} e^- + \mathrm{A} \rightarrow e^- + \mathrm{A} + h\nu \; . \end{eqnarray} The presence of NBrS EL in two-phase Ar detectors was demonstrated for the first time in our previous work~\cite{Buzulutskov18}, both theoretically and experimentally. Recently, a similar theoretical approach was applied to all noble gases, i.e. overall to He, Ne, Ar, Kr and Xe, to calculate the photon yields and spectra for NBrS EL \cite{Borisova21}. NBrS EL in noble gases was further studied experimentally in \cite{Bondar20,Tanaka20,Kimura20,Takeda20,Takeda20a,Aoyama21,Aalseth21,Monteiro21} and theoretically in \cite{Amedo21}. On the other hand, much less is known about proportional EL in noble liquids \cite{Buzulutskov20,Masuda79,Schussler00,Aprile14,Ye14,Lightfoot09,Stewart10}. In a sense, the experimental data are even confusing. Indeed, in liquid Ar the observed threshold in the electric field for proportional EL, of about 60 kV/cm \cite{Buzulutskov20,Lightfoot09}, was two orders of magnitude lower than expected for excimer EL \cite{Stewart10}. In liquid Xe, the EL threshold was more reasonable, around 400 kV/cm, but some puzzling EL events were observed below this threshold \cite{Aprile14}. In our previous works \cite{Buzulutskov18,Buzulutskov20} it was suggested that these puzzling events at unexpectedly low fields might be induced by proportional EL produced by drifting electrons in a noble liquid due to the NBrS effect, the latter having no threshold in the electric field. In this work we verify this hypothesis, namely we extend the theoretical approach developed for noble gases to noble liquids in order to develop a quantitative theory that can predict the photon yields and spectra for NBrS EL in all noble liquids. What is new in this work is that the electron energy and transport parameters in noble liquids are calculated in the framework of the rigorous Cohen-Lekner \cite{Cohen67} and Atrazhev \cite{Atrazhev85} theory. In this theory, the electron transport through the liquid is considered as a sequence of single scatterings on the effective potential. Therefore, a parameter such as the electron scattering cross section can be used in the liquid in a way similar to that in the gas \cite{Akimov21}.
An important concept of the theory is the distinction between energy transfer scattering, which changes the electron energy, and momentum transfer scattering, which only changes the direction of the electron velocity. Both processes have been assigned separate cross sections \cite{Cohen67,Atrazhev85,Stewart10}: that of energy transfer (also called effective) and that of momentum transfer. These are obvious analogs of those in the gas, namely of the total elastic and momentum transfer (transport) cross sections, respectively. The latest modifications of the theory can be found elsewhere \cite{Boyle15,Boyle16}. Accordingly, in this work the photon yields and spectra are calculated for NBrS EL in all noble liquids: in liquid He, Ne, Ar, Kr and Xe. The relevance of the results obtained to the development of noble liquid detectors for dark matter searches and neutrino detection is also discussed. \section{Theoretical formulas} To calculate the photon yields and spectra for NBrS EL in noble liquids we used the approach developed for noble gases in~\cite{Buzulutskov18}. Let us briefly recall the main points of this approach. The differential cross section for NBrS photon emission is expressed via the electron-atom total elastic cross section ($\sigma _{el}(E)$)~\cite{Buzulutskov18,Park00,Firsov61,Kasyanov65,Dalgarno66,Biberman67}: \begin{eqnarray} \label{Eq-sigma-el} \frac{d\sigma}{d\nu} = \frac{8}{3} \frac{r_e}{c} \frac{1}{h\nu} \left(\frac{E - h\nu}{E} \right)^{1/2} \times \hspace{40pt} \nonumber \\ \times \ [(E-h\nu) \ \sigma _{el}(E) \ + \ E \ \sigma _{el}(E - h\nu) ] \; , \end{eqnarray} where $r_e=e^2/m c^2$ is the classical electron radius, $c=\nu \lambda$ is the speed of light, $E$ is the initial electron energy and $h\nu$ is the photon energy. To be able to compare results at different medium densities and temperatures, we need to calculate the reduced EL yield ($Y_{EL}/N$) as a function of the reduced electric field ($\mathcal{E}/N$), where $\mathcal{E}$ is the electric field and $N$ is the atomic density. The reduced EL yield is defined as the number of photons produced per unit drift path and per drifting electron, normalized to the atomic density; for NBrS EL it can be described by the following equation \cite{Buzulutskov18}: \begin{eqnarray} \label{Eq-NBrS-el-yield} \left( \frac{Y_{EL}}{N}\right)_{NBrS} = \int\limits_{\lambda_1}^{\lambda_2} \int\limits_{h\nu}^{\infty}\frac{\upsilon_e}{\upsilon_d} \frac{d\sigma}{d\nu} \frac{d\nu}{d\lambda} f(E) \ dE \ d\lambda \; , \end{eqnarray} where $\upsilon_e=\sqrt{2E/m_e}$ is the electron velocity of chaotic motion, $\upsilon_d$ is the electron drift velocity, $\lambda_1-\lambda_2$ is the sensitivity region of the photon detector, $d\nu/d\lambda=-c/\lambda^2$, and $f(E)$ is the electron energy distribution function normalized as \begin{eqnarray} \label{Eq-norm-f} \int\limits_{0}^{\infty} f(E) \ dE = 1 \; . \end{eqnarray} The distribution function with a prime, $f^\prime=f/E^{1/2}$, is often used instead of $f$; it is normalized as \begin{eqnarray} \label{Eq-norm-fprime} \int\limits_{0}^{\infty} E^{1/2} f^\prime(E) \ dE = 1 \; . \end{eqnarray} $f^\prime$ is considered to be more enlightening than $f$, since in the limit of zero electric field it tends to a Maxwellian distribution. Consequently, the spectrum of the reduced EL yield is \begin{eqnarray} \label{Eq-NBrS-el-yield-spectrum} \frac{d (Y_{EL}/N)_{NBrS}}{d\lambda} = \int\limits_{h\nu}^{\infty}\frac{\upsilon_e}{\upsilon_d} \frac{d\sigma}{d\nu} \frac{d\nu}{d\lambda} f(E) \ dE \ \; .
\end{eqnarray} In our previous works \cite{Buzulutskov18,Borisova21}, the electron energy distribution function and drift velocity in noble gases, at a given reduced electric field, were calculated using a Boltzmann equation solver \cite{Hagelaar05}. In this work, we follow exactly the Atrazhev paper \cite{Atrazhev85} to calculate the electron energy distribution function and drift velocity in noble liquids. Another modification is that the total elastic cross section in Eq.~\ref{Eq-sigma-el} is replaced with the energy transfer cross section for electron transport through the liquid. With these two modifications, Eqs.~\ref{Eq-sigma-el},\ref{Eq-NBrS-el-yield},\ref{Eq-norm-f},\ref{Eq-NBrS-el-yield-spectrum} can be applied directly to noble liquids. \section{Cross sections, electron energy distribution functions and drift velocities in noble liquids} According to the Cohen-Lekner and Atrazhev theory, the drift and heating of excess electrons by an external electric field in the liquid are determined by two parameters, the collision frequency of energy transfer ($\nu_{e}$) and that of momentum transfer ($\nu_{m}$)~\cite{Atrazhev85}: \begin{eqnarray} \label{Eq01} \nu_{e} = \delta N \sigma_{e}(E)(2E/m)^{1/2} \: , \\ \nu_{m} = N \sigma_{m}(E)(2E/m)^{1/2} \: , \\ \sigma_{m}(E) = \sigma_{e}(E)\widetilde{S}(E) \,. \end{eqnarray} \noindent Here $N$ is the atomic density of the medium; $E$ is the electron energy; $\delta = 2m/M$ is twice the electron-atom mass ratio; $\sigma_{e}(E)$ and $\sigma_{m}(E)$ are the energy transfer (effective) and momentum transfer electron scattering cross sections in the liquid, respectively; $\widetilde{S}(E)$ is a function that takes into account the liquid structure. To calculate the collision frequencies one needs to know $\sigma_{e}(E)$ and $\sigma_{m}(E)$; for liquid Ar, Kr and Xe these were given in \cite{Atrazhev85}: see Fig.~\ref{fig01} (top). For comparison, Fig.~\ref{fig01} (bottom) presents the total elastic cross sections for gaseous Ne, Ar, Kr, and Xe taken from the BSR database~\cite{DBBSR}; since it is not available for He, the momentum transfer cross section is shown instead, taken from the Biagi database~\cite{DBBiagi}. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig01a} \includegraphics[width=0.99\columnwidth]{fig01b} \caption{Top: Electron scattering cross sections in liquid Ar, Kr and Xe as a function of electron energy, namely that of energy transfer (or effective), $\sigma_{e}$, and that of momentum transfer, $\sigma_{m}$, both taken from~\cite{Atrazhev85}. Bottom: Electron scattering cross section in noble gases as a function of electron energy: that of total elastic for Ne, Ar, Kr, and Xe, taken from the BSR database~\cite{DBBSR}, and that of momentum transfer for He, taken from the Biagi database~\cite{DBBiagi}.} \label{fig01} \end{figure} The electron distribution function $f^\prime(E)$ in a strong electric field is expressed via both collision frequencies \cite{Atrazhev85}: \begin{eqnarray} \label{Eq02} f^\prime(E) = f(0) \exp\left(-\int\limits_{0}^{E} \frac{3m\nu_{e}(E)\nu_{m}(E)}{2e^{2}\mathcal{E}^{2}}dE\right). \end{eqnarray} The constant $f(0)$ is determined from the normalization condition of Eq.~\ref{Eq-norm-fprime}. Using the electron energy distribution functions, one can calculate the electron drift velocity in the liquid \cite{Atrazhev85}: \begin{eqnarray} \label{Eq03} \upsilon_d = -\frac{2}{3}\frac{e\mathcal{E}}{m} \int\limits_{0}^{\infty} \frac{E^{3/2}}{\nu_{m}(E)} \frac{df^\prime}{dE} dE.
\end{eqnarray} The drift velocity is shown in Fig.~\ref{fig02} as a function of the reduced electric field, the latter being expressed in Td units: 1~Td~=~$10^{-17}$~V~cm$^2$. It is possible to check the correctness of the distribution functions by comparing the calculated and measured electron drift velocities: this is done in Fig.~\ref{fig02} using the experimental data compiled in~\cite{Miller68}. It can be seen that the theoretical and experimental drift velocities are in reasonable agreement, within a factor of 2, thus confirming the correctness of the calculated distribution functions for liquid Ar, Kr and Xe. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig02} \caption{Comparison of the electron drift velocity ($\upsilon_d$) in liquid Ar, Kr and Xe theoretically calculated in this work (curves) with that measured in experiment \cite{Miller68} (data points). The color of the curve and the data points is the same for a given noble liquid.} \label{fig02} \end{figure} It should be remarked that in the light noble liquids, He and Ne, the Cohen-Lekner and Atrazhev theory cannot be applied to calculate the electron energy distribution functions, since the appropriate cross sections for electron transport in the liquid, $\sigma_{e}(E)$ and $\sigma_{m}(E)$, are not available in the literature. Therefore, in the following, a "compressed gas" approximation will be used for these liquids, similar to that developed in \cite{Borisova21}. In this approximation, Eqs.~\ref{Eq-sigma-el},\ref{Eq-NBrS-el-yield},\ref{Eq-norm-f},\ref{Eq-NBrS-el-yield-spectrum} apply directly as for the gas, i.e. with the electron energy distribution function and drift velocity obtained using a Boltzmann equation solver, with the input elastic cross sections taken for the gas from Fig.~\ref{fig01} (bottom), and with the atomic density $N$ equal to that of the liquid. \onecolumn \begin{table*} [h!] \caption{Properties of noble gases and liquids, and parameters of neutral bremsstrahlung (NBrS) electroluminescence (EL) theoretically calculated in this work.} \label{table} \begin{center} \begin{tabular}{p{0.5cm}p{6cm}p{1.5cm}p{1.5cm}p{1.54cm}p{1.5cm}p{1.5cm}} No & Parameter & He & Ne & Ar & Kr & Xe \\ \\ (1) & Boiling temperature at 1.0~atm, $T_b$~\cite{Fastovsky71} (K) & $4.215$ & $27.07$ & $87.29$ & $119.80$ & $165.05$ \\ (2) & Gas atomic density at $T_b$ and 1.0 atm, derived from~\cite{Fastovsky71} (cm$^{-3}$) & $2.37\cdot10^{21}$ & $3.41\cdot10^{20}$ & $8.62\cdot10^{19}$ & $6.18\cdot10^{19}$ & $5.75\cdot10^{19}$ \\ (3) & Liquid atomic density at $T_b$ and 1.0 atm, derived from~\cite{Fastovsky71} and from ~\cite{Theeuwes70} for Xe (cm$^{-3}$) & $1.89\cdot10^{22}$ & $3.59\cdot10^{22}$ & $2.10\cdot10^{22}$ & $1.73\cdot10^{22}$ & $1.35\cdot10^{22}$ \\ (4) & Threshold in electric field for excimer EL in noble liquid deduced from the corresponding threshold in noble gas by reduction to the atomic density of the liquid, obtained using data of \cite{Borisova21} (kV/cm) & $1134$ & $538$& $840$ & $519$ & $472$\\ (5) & Number of photons for NBrS EL in noble liquid produced by a drifting electron in a 1~mm thick EL gap at $T_b$ and 1.0~atm, at an electric field of 100 kV/cm & $0.13$ & $2.5$& $0.93$ & $1.6$ & $1.1$\\ (6) & The same at 500 kV/cm & $4.3$ & $40$& $12$ & $24$ & $30$\\ \end{tabular} \end{center} \end{table*} \begin{multicols}{2} \twocolumn The values of the atomic densities for the gas and liquid phases at boiling temperatures at 1 atm are presented in Table~\ref{table}.
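For illustration, Eqs.~\ref{Eq02} and \ref{Eq03} are easy to evaluate numerically once the collision frequencies are tabulated. The following is a minimal sketch, assuming NumPy, SI units, and that $\nu_e(E)$ and $\nu_m(E)$ have been pre-tabulated on the energy grid (e.g. from the cross sections of Fig.~\ref{fig01}); the function and variable names are illustrative:
\begin{verbatim}
import numpy as np

E_CHARGE, M_E = 1.602e-19, 9.109e-31  # SI units

def eedf_and_drift(E, nu_e, nu_m, field):
    # E: energy grid (J); nu_e, nu_m: collision frequencies (1/s)
    # tabulated on E; field: electric field (V/m).
    # Exponent of Eq. (10), accumulated with the trapezoidal rule.
    g = 3.0 * M_E * nu_e * nu_m / (2.0 * (E_CHARGE * field) ** 2)
    I = np.concatenate(([0.0],
        np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(E))))
    f_prime = np.exp(-I)
    # Normalization: integral of sqrt(E) * f'(E) dE = 1.
    f_prime /= np.trapz(np.sqrt(E) * f_prime, E)
    # Drift velocity, Eq. (11).
    dfdE = np.gradient(f_prime, E)
    v_d = -(2.0 / 3.0) * (E_CHARGE * field / M_E) \
          * np.trapz(E ** 1.5 / nu_m * dfdE, E)
    return f_prime, v_d
\end{verbatim}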
We will see in the following, using the example of the heavy noble liquids, that the "compressed-gas" approximation works well: the difference in photon yields for NBrS EL between the "liquid" and "compressed-gas" approximations is not that large, remaining within a factor of 1.5. It should also be remarked that all the calculations in this work were performed for atomic densities of the medium, liquid or gas, corresponding to the boiling temperature of a given noble element at 1 atm. \section{Operational range of reduced electric fields in noble liquids for NBrS EL} It is obvious that NBrS EL in noble liquids is much weaker than excimer EL and thus becomes insignificant above the electric field threshold for excimer EL. Table~\ref{table} gives an idea of these thresholds in noble liquids, deduced from the corresponding thresholds in noble gases by reduction to the atomic density of the liquid, obtained using the data of \cite{Borisova21}. To compare with the results for noble gases, one also needs to determine the operational range of reduced electric fields for NBrS EL in noble liquids from the experimental works where it was presumably observed and where the operational electric field can be reliably estimated. Basically, three works fit these conditions: that of \cite{Buzulutskov12}, operating the gas electron multiplier (GEM,\cite{Sauli16}) in liquid Ar, that of \cite{Lightfoot09}, operating the thick GEM (THGEM, \cite{Breskin09}) in liquid Ar, and that of \cite{Aprile14}, operating the thin anode wire in liquid Xe. Deduced from the absolute electric field values given in \cite{Buzulutskov12} and \cite{Aprile14}, the required range of reduced electric fields within which NBrS EL was presumably observed amounts to 0.1-5 Td. In particular, for liquid Ar this range corresponds to electric fields ranging from 21 to 1040 kV/cm. We will restrict our calculations to this range of fields. \section{NBrS EL spectra and yields in noble liquids} Fig.~\ref{fig03} shows the NBrS spectra of the reduced EL yield for liquid Ar, Kr and Xe at different reduced electric fields. The spectra were calculated by numerical integration of Eq.~\ref{Eq-NBrS-el-yield-spectrum}. One can see that the NBrS EL spectra are similar in all noble liquids; moreover, they look almost identical to those obtained in noble gases at the same reduced electric field: compare Fig.~\ref{fig03} to Fig.~10 of \cite{Borisova21} at 5 Td. The spectra are rather flat, extending from the UV to the visible and NIR range at higher reduced electric fields, e.g. at 5 Td. In each noble liquid, the NBrS EL spectrum has a broad maximum that gradually moves to longer wavelengths with decreasing electric field. At lower reduced electric fields, in particular at 0.3 Td, corresponding to 60 kV/cm in liquid Ar, the spectra have moved completely to the visible and NIR ranges. In all noble liquids, the spectra are mostly above 200 nm (in the UV, visible and NIR range), i.e. just in the sensitivity region of commonly used photomultiplier tubes (PMTs) and silicon photomultipliers (SiPMs).
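The conversion between the reduced and absolute electric fields quoted above follows directly from the definition of the townsend unit (1~Td~=~$10^{-17}$~V~cm$^2$): $\mathcal{E} = (\mathcal{E}/N)\cdot N$. A one-line check (an illustrative Python snippet using the liquid-Ar atomic density of Table~\ref{table}) reproduces these values:
\begin{verbatim}
N_LAR = 2.10e22  # liquid Ar atomic density (cm^-3), Table 1
TD = 1.0e-17     # 1 Td in V cm^2
for red_field in (0.1, 0.3, 5.0):       # reduced field in Td
    abs_field = red_field * TD * N_LAR / 1e3   # kV/cm
    print(red_field, "Td ->", abs_field, "kV/cm")
# prints 21, 63 and 1050 kV/cm, i.e. the ~21-1040 kV/cm range
# and the ~60 kV/cm point quoted for liquid Ar.
\end{verbatim}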
\end{multicols} \twocolumn \begin{figure} \includegraphics[width=0.99\columnwidth]{fig03} \caption{Spectra of the reduced EL yield for NBrS EL in liquid Ar, Kr and Xe at different reduced electric fields (0.3, 1 and 5 Td), calculated using Eq.~\ref{Eq-NBrS-el-yield-spectrum}.} \label{fig03} \end{figure} \begin{figure} \includegraphics[width=0.99\columnwidth]{fig04} \caption{Reduced EL yield for NBrS EL at 0-1000 nm in liquid Ar, Kr and Xe as a function of the reduced electric field, calculated in this work in the framework of the Cohen-Lekner and Atrazhev theory using Eq.~\ref{Eq-NBrS-el-yield} (solid lines). For comparison, the reduced yield for NBrS EL at 0-1000 nm in noble gases is shown, calculated in \cite{Borisova21} using a Boltzmann equation solver (dashed lines). The color of the curves is the same for a given noble element. The top scale shows the corresponding absolute electric field in liquid Ar.} \label{fig04} \end{figure} \begin{figure} \includegraphics[width=0.99\columnwidth]{fig05} \caption{Absolute EL yield (number of photons per drifting electron per 1 cm) for NBrS EL at 0-1000 nm in noble liquids as a function of the absolute electric field, calculated in this work. For the heavy noble liquids (Ar, Kr and Xe) the rigorous Cohen-Lekner and Atrazhev theory was used to calculate the electron energy and transport parameters in the liquid, while for the light noble liquids (He and Ne) the "compressed gas" approximation was used.} \label{fig05} \end{figure} The EL yield for NBrS EL in noble liquids is presented in Fig.~\ref{fig04}, obtained by numerical integration of Eq.~\ref{Eq-NBrS-el-yield}: the reduced EL yield is shown as a function of the reduced electric field. For comparison, the reduced yield for NBrS EL in noble gases is shown, calculated in \cite{Borisova21} using a Boltzmann equation solver. Surprisingly, this "compressed-gas" approximation, successfully applied before to describe NBrS EL in noble gases, has led to almost the same results as the rigorous "liquid" theory in terms of the reduced EL yields and spectra when formally extrapolated to the atomic density of the noble liquid: for a given noble element and a given reduced electric field the difference between them remains within a factor of 1.5 up to a reduced electric field of 5 Td. This fact indicates that the scaling law, stating that the reduced EL yield ($Y/N$) is a function of the reduced electric field ($\mathcal{E}/N$), is valid not only for noble gases, but also for noble liquids to some extent, at least as far as the NBrS EL effect is concerned. It also indicates the applicability of the "compressed gas" approximation to noble liquids at moderate reduced electric fields, below 5 Td, thus justifying its use for the light noble liquids, He and Ne, where the Cohen-Lekner and Atrazhev theory cannot be used due to the lack of data. Furthermore, Fig.~\ref{fig05} shows the practical photon yield, suitable for verification under experimental conditions, namely the number of photons produced by a drifting electron per 1 cm in all noble liquids, as a function of the absolute electric field. In this figure, for the heavy noble liquids (Ar, Kr and Xe) the rigorous Cohen-Lekner and Atrazhev theory was used to calculate the electron energy and transport parameters in the liquid, while for the light noble liquids (He and Ne) the "compressed gas" approximation was used, with the calculations identical to those of \cite{Borisova21}. The appropriate NBrS EL spectra and yields for He and Ne can be found in \cite{Borisova21}.
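To illustrate how the double integral of Eq.~\ref{Eq-NBrS-el-yield} and the spectrum of Eq.~\ref{Eq-NBrS-el-yield-spectrum} can be evaluated numerically, a minimal sketch is given below, assuming NumPy and SI units. Here \texttt{sigma\_e} is an interpolant of the energy transfer cross section of Fig.~\ref{fig01}, \texttt{f} is the distribution function $f(E)$ of Eq.~\ref{Eq-norm-f} tabulated on the grid \texttt{E}, and \texttt{v\_d} is the drift velocity; all names are illustrative:
\begin{verbatim}
import numpy as np

H, C, M_E = 6.626e-34, 2.998e8, 9.109e-31
R_E = 2.818e-15  # classical electron radius (m)

def nbrs_spectrum(lam, E, f, v_d, sigma_e):
    # d(Y/N)/dlambda on the wavelength grid lam (m); E in J.
    spec = np.zeros_like(lam)
    for i, l in enumerate(lam):
        hv = H * C / l
        m = E > hv
        Ee = E[m]
        if Ee.size < 2:
            continue
        # Differential cross section of Eq. (2).
        ds_dnu = (8.0 / 3.0) * (R_E / C) / hv \
            * np.sqrt((Ee - hv) / Ee) \
            * ((Ee - hv) * sigma_e(Ee) + Ee * sigma_e(Ee - hv))
        v_e = np.sqrt(2.0 * Ee / M_E)
        # |dnu/dlambda| = c / lambda**2
        spec[i] = np.trapz(v_e / v_d * ds_dnu * (C / l**2) * f[m], Ee)
    return spec
# Integrating spec over lam gives the reduced yield (Y_EL/N)_NBrS.
\end{verbatim}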
Table~\ref{table} (items 5 and 6) gives an idea of the magnitude of the NBrS EL effect in a practical parallel-plate EL gap, of a thickness of 1 mm: at a field of 500 kV/cm the photon yield amounts to about 4, 40, 12, 24 and 30 photons per drifting electron for He, Ne, Ar, Kr and Xe, respectively. On the other hand, at 100 kV/cm the photon yield is reduced by about an order of magnitude, down to about 1 photon per drifting electron in almost all noble liquids. It is remarkable that up to 600 kV/cm liquid Ne has the highest EL yield for NBrS EL, obviously due to its much lower elastic cross section at electron energies between 1 and 10 eV compared to the other noble elements (see Fig.~\ref{fig01} (bottom)), resulting in stronger electron heating by the electric field and thus in more intense NBrS photon emission. \section{Possible applications and discussion} In order to produce noticeable NBrS EL in noble liquids, one should provide high enough electric fields, ranging from 50 to 500 kV/cm, in practical devices. Based on previous experience, such devices might be GEMs \cite{Buzulutskov12}, THGEMs \cite{Lightfoot09} and thin anode wires \cite{Aprile14}. A parallel-plate EL gap of a thickness of 1 mm can also be considered, although it has not been tested in a real experiment in noble liquids at such high fields. It should be remarked that a larger EL gap thickness, e.g. 1 cm, can hardly be used in practice due to the existing limit on high-voltage breakdowns in noble liquids: the absolute voltage before breakdown cannot exceed values of about 100 kV in liquid He~\cite{Gerhold94} and several hundred kV in other noble liquids \cite{Buzulutskov20,Auger16,Tvrznikova19}. It looks natural to use GEMs or THGEMs as EL plates instead of parallel-plate EL gaps in noble liquids, since the former are more resistant to breakdowns than the latter. Note that the NBrS EL spectrum is mostly in the visible and NIR range: see Fig.~\ref{fig03}. This implies a possible practical application of NBrS EL in noble liquid detectors, namely the method of direct optical readout of the S2 signal in the visible range, i.e. without using a wavelength shifter (WLS). A similar technique has recently been demonstrated in a two-phase Ar detector with direct SiPM-matrix readout using NBrS EL in the gas phase \cite{Aalseth21}. These results have led us to the idea of using THGEM plates in combination with SiPM matrices that have high sensitivity in the visible and NIR range, to optically record the S2 signal in single-phase noble liquid detectors for dark matter search and neutrino experiments. In addition, the recently proposed transparent very-thick GEM \cite{Kuzniak21} can be used as an EL plate, with enhanced light collection efficiency. We can verify the theory of NBrS EL in noble liquids by experiments where it was presumably observed, where the electric field is explicitly known and where it is known how to convert the emitted photons into recorded photoelectrons. At first glance, only two works meet these criteria: \cite{Lightfoot09} and \cite{Aprile14}. In particular, in \cite{Lightfoot09} the operational electric field in the center of the THGEM hole (1.5 mm height), of 60 kV/cm \cite{Buzulutskov12}, corresponds to $\mathcal{E}/N$=0.3~Td in liquid Ar, resulting in about 0.6 photons per drifting electron predicted by the NBrS EL theory according to Figs.~\ref{fig04} and \ref{fig05}. However, this is more than 2 orders of magnitude smaller than the light gain reported in \cite{Lightfoot09}.
We therefore suggest interpreting the results of \cite{Lightfoot09} as caused by the presence of gas bubbles associated with the THGEM holes, inside which proportional EL in the gas phase took place, similar to what happens in Liquid Hole Multipliers \cite{Erdal20}. \begin{figure} \includegraphics[width=0.99\columnwidth]{fig06} \caption{Number of photoelectrons recorded in liquid Xe by a PMT as a function of the voltage on a $10~\mu m$ thick anode wire \cite{Aprile14}: the experimental data (data points) and a linear fit of proportional EL to the data (solid line) are shown, the latter defining the threshold of excimer EL. The top scale shows the corresponding reduced electric field on the anode wire surface. For comparison, the theoretical assessment of the number of photoelectrons due to NBrS EL obtained in this work is shown (area between dashed lines).} \label{fig06} \end{figure} In \cite{Aprile14}, where puzzling EL events were observed in liquid Xe below the threshold of excimer EL, the operational fields near the anode wire were much higher, around 400 kV/cm. Fig.~\ref{fig06} shows the experimental data and a linear fit of proportional EL to the data, the latter defining the threshold of excimer EL. In addition, the experimental conditions were explicitly described. This allowed us to predict the number of photoelectrons recorded by the PMT due to NBrS EL, although with some difficulties associated with the highly inhomogeneous field near the wire. Due to the latter, Eq.~\ref{Eq-NBrS-el-yield}, if applied directly, gives only a lower limit on the event amplitude, since it does not take into account the electron diffusion, which significantly increases the travel time of the electron to the wire and thus the overall photon yield. We tried to take the diffusion effect into account: as a result, the theoretical prediction in Fig.~\ref{fig06} is shown in the form of an area between two dashed curves, thus setting the theoretical uncertainty. Within this uncertainty, the NBrS EL theory describes the puzzling under-threshold events well, namely their absolute amplitudes and their dependence on the anode voltage, which might be treated as the first experimental evidence for NBrS EL in noble liquids. \section{Conclusion} In this work we systematically studied the effect of neutral bremsstrahlung (NBrS) electroluminescence (EL) in all noble liquids: the photon yields and spectra for NBrS EL have been theoretically calculated for the first time in liquid He, Ne, Ar, Kr and Xe. For the heavy noble liquids, the calculations were done in the framework of the Cohen-Lekner and Atrazhev theory describing the electron energy and transport parameters in the liquid medium. Surprisingly, the "compressed-gas" approximation, successfully applied before to describe NBrS EL in noble gases, has led to almost the same results as the rigorous "liquid" theory in terms of the reduced EL yields and spectra when formally extrapolated to the atomic density of the noble liquid. The predicted magnitude of the NBrS EL effect in a practical parallel-plate EL gap, of a thickness of 1 mm, is noticeable: at a field of 500 kV/cm the photon yield amounts to 12, 30 and 40 photons per drifting electron in liquid Ar, Xe and Ne, respectively. The NBrS EL spectra in noble liquids are in the visible and NIR range.
A practical application of the results obtained might be the use of THGEMs as EL plates in combination with SiPM matrices, to optically record the S2 signal in single-phase noble liquid detectors for dark matter search and neutrino experiments. \acknowledgments This work was supported by the Russian Science Foundation (project no. 19-12-00008). It was done within the R\&D program of the DarkSide-20k experiment. \bibliographystyle{eplbib}
\section{Data Specifications Table} \begin{table}[htb] \centering \footnotesize \label{DataSpecificationTable} \begin{tabular}{|l|p{10cm}|} \hline \textbf{ Subject }& Management of Technology and Innovation. \\\hline \textbf{ Specific subject area }& A focus area maturity model for API management. \\\hline \textbf{ Type of data }& Text, literature references, and tables. \\\hline \textbf{ How data were acquired }& Systematic literature review and expert interviews. \\\hline \textbf{ Data format }& Raw, analyzed, and evaluated. \\\hline \textbf{ Parameters for data collection }& The collected practices had to fit strict requirements in terms of having to be executable, implementable, and easily understandable by practitioners that are involved with API management within their organization. \\\hline \textbf{ Description of data collection }& The initial data was collected through an SLR \cite{mathijssen2020identification}. Initially, the data was grouped according to topical similarity. Practices were categorized, analyzed and verified through discussion sessions with all involved researchers, inter-rater agreement and information gathered from grey literature. Capabilities and practices were then evaluated through 11 expert interviews. For information on the selection of the practitioners, we refer to the related research article \textit{(to be published)}. If at least two practitioners found a practice relevant and useful, it became part of the collection. Additionally, six discussion sessions among the researchers were conducted, during which all suggested changes (i.e. removal, addition, and relocation of practices and capabilities) were discussed, interpreted, and processed. The resulting practices and capabilities were then evaluated with three experts who were previously interviewed. Finally, five case studies were conducted to evaluate different software products. \\\hline \textbf{ Data source location }& All included source literature can be reviewed in the associated research article~\cite{mathijssen2020identification}. \\\hline \textbf{ Related research article }& Mathijssen, M., Overeem, M., \& Jansen, S. (2020). Identification of Practices and Capabilities in API Management: A Systematic Literature Review. arXiv preprint arXiv:2006.10481.\\\hline \end{tabular} \end{table} \onecolumn \section{Introduction} \label{sec:introduction} This data set describes the API Management Focus Area Maturity Model (API-m-FAMM). The model supports organizations that expose their API(s) to third-party developers in performing their API management activities in a structured manner. Using the API-m-FAMM, organizations may evaluate and improve upon the degree of maturity of their business processes regarding API management. We define API Management as an activity that enables organizations to design, publish and deploy their APIs for (external) developers to consume. API Management encompasses capabilities such as controlling API lifecycles, access and authentication to APIs, monitoring, throttling and analyzing API usage, as well as providing security and documentation. \begin{itemize} \item The data may be used by API management researchers for evaluation, validation and extension of the model. \item The data can be used by focus area maturity researchers to establish the vocabulary used in the field. \item The data can be used by researchers as a basis for future research work in the domains of API management, versioning and evolution.
\item The data is reusable by consultants and practitioners to assess whether they have implemented a practice fully. \end{itemize} The research approach is explained in Section~\ref{sec:design}. Section~\ref{sec:apimfamm} describes the final API-m-FAMM in full detail. The different intermediate versions are described in Sections~\ref{sec:version01}, \ref{sec:version02}, \ref{sec:version03}, \ref{sec:version04}, \ref{sec:version05}, and \ref{sec:version10}. \section{Experimental Design, Materials, and Methods} \label{sec:design} The Focus Area Maturity Model is constructed using the design methodology of \cite{van2010design} and \cite{de2005understanding}. The development of the FAMM is done in five phases: \emph{Scope}, \emph{Design}, \emph{Populate}, \emph{Test}, and \emph{Deploy}. These phases are executed through an SLR, expert interviews, case studies, and numerous discussions among the authors. Between the execution of every method, the authors discussed the state of the model until consensus was reached on its contents and structure. This was done using online \textit{Card Sorting}~\citep{nielsen1995}, with \textit{Google Drawings} as a tool. Figure~\ref{fig:research-steps} shows which methods were used in each phase, by linking them to the different intermediate versions of the API-m-FAMM. The intermediate versions, including a changelog, are described in Sections~\ref{sec:version01}, \ref{sec:version02}, \ref{sec:version03}, \ref{sec:version04}, \ref{sec:version05}, and \ref{sec:version10}. \begin{figure*}[!h] \centering \includegraphics[page=1, clip, trim=1.0cm 12.5cm 2.1cm 0.8cm, width=\textwidth]{Figures/ResearchApproach.pdf} \caption{The steps that were executed in constructing the API-m-FAMM and its various intermediate versions.} \label{fig:research-steps} \end{figure*} \subsection{Scope, Design, Populate Phases} The initial data was acquired through the SLR as described in \cite{mathijssen2020identification}. Based on this SLR, a primary source was chosen~\cite{de2017api}. Using this source as a starting point, the scope of the API-m-FAMM was determined and the initial model was constructed (\textbf{version 0.1}, Section~\ref{sec:version01}). Subsequently, the SLR was used to populate the model, which resulted in a FAMM consisting of 114 practices and 39 capabilities that are categorized into six focus areas (\textbf{version 0.2}, Section~\ref{sec:version02}). These practices and capabilities were then analyzed and verified through four validation sessions with all involved researchers, inter-rater agreement and information gathered from grey literature, such as online blog posts, websites, commercial API management platform documentation and third-party tooling (\textbf{version 0.3}, Section~\ref{sec:version03}). \subsection{Test Phase} The API-m-FAMM underwent two evaluation cycles. First, 11 semi-structured interviews with experts were conducted. During these interviews, experts were asked whether they agreed with the inclusion of practices, capabilities, and focus areas as part of the API-m-FAMM, as well as whether they could suggest the addition of any new practices or capabilities. Additionally, practices were ranked by these experts in terms of their perceived maturity in order to determine their respective maturity levels. As a result of these interviews, many suggestions were made to either move practices to a different capability, remove them entirely, rename them, or add new practices.
These suggestions were then analyzed, processed, and discussed through six discussion sessions with all involved researchers. As a result, the model was quite substantially modified, with the existing body of practices and capabilities being narrowed down to 87 practices and capabilities, and with numerous focus areas, capabilities, and practices being renamed. Additionally, all practices were assigned to individual maturity levels within their respective capabilities (\textbf{version 0.4}, Section~\ref{sec:version04}). The second evaluation cycle consisted of three unstructured interviews with experts originating from the sample of experts that were interviewed during the first evaluation cycle. During these interviews, the changes made as a result of the previous evaluation cycle, as well as the newly introduced maturity assignments, were presented and discussed. Additionally, experts were asked to evaluate the model again with regard to the same criteria used in the first cycle. The API-m-FAMM was not significantly changed after this second cycle (\textbf{version 0.5}, Section~\ref{sec:version05}). \subsection{Deploy Phase} Finally, the API-m-FAMM was used to evaluate five different software products. The evaluation was done by using a \emph{do-it-yourself} kit, which is available on \url{https://www.movereem.nl/api-m-famm.html}. These evaluations led to some minor changes (\textbf{version 1.0}, Section~\ref{sec:version10}). \section{API-m-FAMM} \label{sec:apimfamm} The API-m-FAMM and the practices and capabilities it consists of are divided into six focus areas. The focus areas are not equal in size, with the smallest focus area consisting of 2 capabilities and 11 practices, while the largest is composed of 5 capabilities and 18 practices. This is caused by the fact that the topic of API management is broad and not evenly distributed across its domains. For example, the \textit{Community} and \textit{Lifecycle Management} focus areas that are described below contain many practices, while \textit{Observability} is a domain consisting of a small but relevant number of practices and capabilities. We have defined capabilities as the ability to achieve a goal related to API Management through the execution of two or more interrelated practices. Combined, these practices and capabilities form the focus areas which describe the functional domains the topic of API management is composed of. A practice is defined as an action that has the express goal of improving, encouraging, and managing the usage of APIs. Furthermore, the practice has to be executable, implementable and verifiable by an employee of the organization. Each individual practice is assigned to a maturity level within its respective capability. As mentioned earlier, these maturity levels were determined by having experts rank the practices according to their perceived maturity within their respective capabilities. Additionally, they were asked whether they could identify any dependencies with regard to the implementation of other practices. Practices cannot depend on practices belonging to another capability that have a higher maturity level. For example, practice 1.1.6 is dependent on the implementation of practices 1.3.3 and 4.2.3, resulting in a higher overall maturity level being assigned to this practice.
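To make this structure concrete, the decomposition into maturity-coded practices and their dependencies can be represented as simple data records. The sketch below (in Python) is purely illustrative and not part of the API-m-FAMM itself; the record layout and the \texttt{check\_dependencies} helper are assumptions, while the practice codes and names are taken from the model:
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Practice:
    code: str  # e.g. "1.1.6": focus area, capability, maturity
    name: str
    depends_on: list = field(default_factory=list)

    @property
    def maturity(self) -> int:
        # The third number of the practice code is its maturity level.
        return int(self.code.split(".")[2])

def check_dependencies(practices: dict):
    # A practice may only depend on practices whose maturity level
    # is no higher than its own (the constraint described above).
    for p in practices.values():
        for dep in p.depends_on:
            assert practices[dep].maturity <= p.maturity, (p.code, dep)

practices = {
    "1.3.3": Practice("1.3.3",
        "Distribute Versioning Notification Through Channel(s)"),
    "4.2.3": Practice("4.2.3", "Log Activity"),
    "1.1.6": Practice("1.1.6", "Implement API Deprecation Protocol",
                      depends_on=["1.3.3", "4.2.3"]),
}
check_dependencies(practices)
\end{verbatim}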
The API-m-FAMM in its entirety, including the maturity level that each practice has been assigned to, is depicted visually in Figure~\ref{fig:api-m-famm}.\\ Section~\ref{subsec:areas} describes and defines the focus areas and capabilities. Section~\ref{subsec:practices} details the practices. Practices are described by using the following elements: \begin{itemize} \item \textbf{Practice code -} The practice code is made up of three numbers. The first number concerns the focus area, the second number the capability, and the third number the maturity level it has been assigned to. \item \textbf{Practice -} The name of the practice, as it is mentioned in the API-m-FAMM. \item \textbf{Focus area -} The focus area is mentioned to indicate the domain in which this practice is relevant. \item \textbf{Description -} A paragraph of text is provided to describe the practice in detail. The main reason for providing a lengthy description is internal validity: in future evaluations by third parties, they should be able to perform the evaluations independently. \item \textbf{When implemented -} Provides a series of necessary conditions before this practice can be marked as implemented. Again, this strengthens the internal validity of the API-m-FAMM. \item \textbf{Literature -} Several references are included to articles that mention the practice. The literature can be found in the SLR~\cite{mathijssen2020identification}. References may also consist of online blog posts, websites, commercial API management platform documentation and third-party tooling. \end{itemize} \begin{figure*} \centering \includegraphics[page=1, clip, trim=0.5cm 0.5cm 0.5cm 0.5cm, width=\textwidth]{Figures/API-m-FAMMv1.0.pdf} \caption{The API-m-FAMM model, showing all six focus areas, the capabilities, and the practices regarding API management. The columns correspond to the maturity levels of the practices. } \label{fig:api-m-famm} \end{figure*} \newpage \subsection{Focus Areas \& Capabilities} \label{subsec:areas} \begin{enumerate} \item \textbf{Lifecycle Management}: Generally speaking, an API undergoes several stages over the course of its lifetime: creation, publication, realization, maintenance and retirement \citedata{medjaoui2018continuous}. In order to control and guide the API through these stages, the organization must be able to perform a variety of activities. To maintain the API, the organization must decide on a versioning strategy and on notification channels and methods in case of updates, as well as decouple their API from their application. In doing so, the organization is able to manage and maintain the versions the API goes through as it evolves over time.\\ \begin{enumerate} \item [1.1] \textit{Version Management}: APIs evolve over time with newer business requirements. In order to cope with this, the organization should have a versioning strategy in place, such as managing multiple versions of an API to support existing consumers, or avoiding breaking changes as part of an evolutionary strategy. Additionally, the organization should be able to deprecate and retire older versions of their API smoothly. With proper notice and a proper grace period, deprecated APIs should be retired and removed so as to avoid any maintenance overhead \citedata{de2017api}. In order to guide this process, the organization may also have a deprecation protocol in place.
\item [1.2] \textit{Decoupling API \& Application}: When an organization creates an API to expose its data and services, it needs to ensure that the API interface is intuitive enough for developers to easily use \citedata{de2017api}. However, the interface for the API will most likely be different from that of the back-end services that it exposes. Therefore, the organization should be able to transform the API interface to a form that the back end can understand. \item [1.3] \textit{Update Notification}: Changes made to an API may adversely affect its consumers. Hence, consumers must be notified of any planned updates of the API \citedata{de2017api}. The organization should have the ability to inform developers using the API of any changes by distributing change logs, using a communication channel such as email or the developer portal, or preemptively through the use of warning headers or a versioning roadmap.\\ \end{enumerate} \item \textbf{Security}: APIs provide access to valuable and protected data and assets \citedata{de2017api}. Therefore, security for APIs is necessary to protect the underlying assets from unauthenticated and unauthorized access. Due to the programmatic nature of APIs and their accessibility over the public cloud, they are also prone to various kinds of attacks. Hence, the organization should undertake various measures to prevent this from happening. For example, one of many available authentication and authorization protocols should be implemented, protection against attacks such as DoS or SQL script injection attacks should be in place, and sensitive data should be encrypted or masked.\\ \begin{enumerate} \item [2.1] \textit{Authentication}: Authentication is the process of uniquely determining and validating the identity of a client \citedata{de2017api}. In order to achieve this, the organization may implement an authentication mechanism such as API keys or protocols such as WSS or OpenID Connect, or the Single Sign-on method. \item [2.2] \textit{Authorization}: Authorization controls the level of access that is provided to an app making an API call and controls which API resources and methods it may invoke \citedata{de2017api}. The organization may implement authorization through access control or an industry-standardized authorization protocol such as OAuth 2.0. \item [2.3] \textit{Threat Detection \& Protection}: The likelihood of bad actors making attacks using malicious content is high, in addition to common threats such as DoS attacks. Content-based attacks can be in the form of malformed XML or JSON, malicious scripts, or SQL within the payload \citedata{de2017api}. Therefore, the organization should be able to detect malformed request formats or malicious content within the payload and then protect against such attacks. \item [2.4] \textit{Encryption}: Oftentimes, message payloads sent in API calls contain sensitive information that can be the target of man-in-the-middle attacks \citedata{de2017api}. Therefore, the organization should secure all communication between the client app and the API service by using techniques such as TLS encryption by default. Furthermore, it is desirable for the organization to prevent exposure of sensitive data by utilizing methods such as masking or hashing.\\ \end{enumerate} \item \textbf{Performance}: APIs are no longer exclusively seen as mechanisms for integration but have become mainstream for the delivery of data and services to end users through various digital channels \citedata{de2017api}.
This increases the demand on APIs to perform well under load. The overall performance of a client app is dependent on the performance of the underlying APIs powering the app. Hence, the importance of performance for APIs increases greatly. In order to ensure the performance and stability of their APIs, organizations must be able to perform various activities. For example, enabling consumers to implement caching improves an API's performance through reduced latency and network traffic. Additionally, using rate limiting and throttling mechanisms to manage traffic and using load balancing to route traffic more effectively also improve the API's performance.\\ \begin{enumerate} \item [3.1] \textit{Resource Management}: In order to improve the performance of their API(s), it is important for an organization to effectively manage the available resources. This may be accomplished through the use of mechanisms such as load balancing or scaling, or by having failover policies in place. \item [3.2] \textit{Traffic Management}: Another aspect of improving API performance is effectively managing incoming traffic. In order to do so, the organization may choose to implement mechanisms such as caching, rate limiting or throttling, or to prioritize traffic based on customer characteristics.\\ \end{enumerate} \item \textbf{Observability}: As an organization, it is necessary to have insight into the API program to make the right investments and decisions during its maintenance. Through various monitoring techniques, the organization is able to collect metrics which can shed light on the API's health, performance and resource usage. In turn, these metrics may be aggregated and analyzed to improve the decision-making process on how to enhance the business value by either changing the API or by enriching it \citedata{de2017api}. Additionally, by being able to log API access, consumption and performance, input may be gathered for analysis, business value or monetization reports. These may be used to strengthen communication with consumers and stakeholders or to check for any potential service-level agreement violations.\\ \begin{enumerate} \item [4.1] \textit{Monitoring}: As an organization, it is important to be able to collect and monitor metrics and variables concerning the exposed API. For example, information regarding the health and performance of the API, as well as the resources used by the API, should be monitored so that it may be used as input for activities such as generating analysis reports and broadcasting the API's operational status. \item [4.2] \textit{Logging}: In monitoring their API(s), it is helpful for the organization to be able to log consumer behavior and activities. This may include logging of API access and usage, and reviewing historical information. \item [4.3] \textit{Analytics}: As an organization, it is important to be able to analyze the metrics and variables that are collected through monitoring. For example, information regarding the health and performance of the API may be utilized to decide which features should be added to the API. Additionally, it is desirable for the organization to be able to extract custom variables from within the message payload for advanced analytics reporting.\\ \end{enumerate} \item \textbf{Community}: As an organization exposing APIs for external consumers and developers to consume, it is often desirable to foster, engage and support the community that exists around the API.
For example, this entails offering developers the ability to register for the API and offering them access to test environments, code samples and documentation. Additionally, the organization may support developers in their usage of the API by offering them support through a variety of communication channels and by allowing them to communicate with the organization or among one another through a community forum or developer portal. Furthermore, it is desirable for developers to be able to freely browse through the API offering, review operational status updates regarding the API, create support tickets in the event of an error, and share knowledge, views and opinions with other developers.\\ \begin{enumerate} \item [5.1] \textit{Developer Onboarding}: To start consuming APIs, developers must first register with the organization that is providing them. The sign-up process should be simple and easy, possibly by supporting developers with resources such as (automatically generated) SDKs and testing tools such as an API console or sandbox environment. \item [5.2] \textit{Support}: In order to strengthen the community around the API, the organization should support the developers who are consuming it. This may be accomplished by establishing an appropriate communication channel, adequately managing issues and handling errors, should they present themselves. \item [5.3] \textit{Documentation}: API documentation can help speed up the adoption, understanding and effectiveness of APIs \citedata{de2017api}. Hence, the organization must provide consumers of their API(s) with reference documentation. Additionally, they may be supplied with start-up documentation, code samples and FAQs to further accelerate understanding of the API. \item [5.4] \textit{Community Management}: Oftentimes, app developers wish to know the views of other developers in the community. They may want to collaborate and share their API usage learnings and experiences with one another \citedata{de2017api}. In order to facilitate these wishes, the organization may choose to provide developers with a community forum or developer portal. \item [5.5] \textit{Portfolio Management}: An API-providing organization needs a platform to publicize and document its APIs. Hence, a discoverable catalog of APIs through which potential consumers are able to browse may be provided.\\ \end{enumerate} \item \textbf{Commercial}: Organizations have been consuming third-party APIs to simplify and expand business partnerships. APIs provide faster integration and an improved partner/customer experience, enabling organizations to grow rapidly \citedata{de2017api}. Oftentimes, exposing and consuming APIs has a commercial aspect tied to it. For API consumers and providers, this is often embodied by legal business contracts for the use of the APIs, which they are bound to. These business contracts, called service-level agreements, govern the service levels and other aspects of API delivery and consumption. Another commercial aspect of API management is that of monetization. Considering APIs provide value to the consuming party, organizations often opt to monetize the services and APIs and build a business model for them \citedata{de2017api}.
Utilizing the right monetization model for APIs enables organizations to reap the benefits of their investment in their APIs.\\ \begin{enumerate} \item [6.1] \textit{Service-Level Agreements}: A service-level agreement (SLA) defines the API’s non-functional requirements, serving as a contract between the organization and consumers of their API. As such, the organization should ensure that the consumer of their API agrees with the SLA's contents. These may include matters such as terms and conditions for API usage, consumption quotas, uptime guarantees and maintenance or downtime information. \item [6.2] \textit{Monetization Strategy}: APIs securely expose digital assets and services that are of value to consumers. Hence, the organization may wish to adopt a monetization strategy to enable monetization of the exposed services and APIs by constructing a business model around them. This may be accomplished through a monetization model which can be based on consumer characteristics such as their type of subscription, access tier or the amount of resources used. \item [6.3] \textit{Account Management}: It is desirable to effectively manage accounts in order to foster a qualitative relationship with customers, stakeholders and the organization's management. This may be achieved by reporting on the API's business value internally through the use of business value reports, as well as externally by providing consumers of the API with subscription reports and training them in using the API as efficiently as possible. \\ \end{enumerate} \end{enumerate} \subsection{Practices} \label{subsec:practices} \newarray\MyData \readarray{MyData} { 1.1.2 & Implement Evolutionary API Strategy & Version Management & Lifecycle Management & The organization utilizes an evolutionary strategy to continuously version their API over time. Using this strategy, the organization evolves a single API by avoiding the introduction of breaking changes. Optionally, this may be accomplished by adhering to the GraphQL specification \citedata{graphqlVersioning}. & $\bullet$ The organization maintains one version of their API. \newline $\bullet$ The organization utilizes an evolutionary API versioning strategy. & \citedata{ploesserVersioning, icappsVersioning} & & 6& 1.1.5 & Implement Multiple API Versioning Strategy & Version Management & Lifecycle Management & The organization has a versioning strategy in place which entails the process of versioning from one API to a newer version. In order to do so, the organization must be able to maintain multiple versions of (one of) their API(s) for a period of time. Possible strategies include URI/URL Versioning (possibly in combination with adherence to the Semantic Versioning specification), Query Parameter versioning, (Custom) Header versioning, Accept Header versioning or Content Negotiation. & $\bullet$ The organization utilizes one of the following versioning strategies: URI/URL Versioning, Query Parameter versioning, (Custom) Header versioning, Accept Header versioning or Content Negotiation. & \citedata{de2017api, redhatVersioning, anjiVersioning, rapidVersioning} & & 6& 1.1.6 & Implement API Deprecation Protocol & Version Management & Lifecycle Management & The organization has a protocol in place that details what steps should be taken when deprecating one of their APIs. 
This includes determining the number of developers currently consuming the API through the use of monitoring, and then setting a threshold that details the number of developers who should have migrated to the new version of the API before commencing with deprecation of the old version. Furthermore, developers, including their contact information, should be identified so that they may be notified of the deprecation through their preferred communication channel. This notification should be accompanied by a migration period and deprecation date, so that consumers have a clear target to migrate their apps over to the new API version. Additionally, referrals to documentation and the new endpoint should be included. Furthermore, the protocol should detail what course of action should be taken to roll back to a previously deployed version of an API in the event of an incorrect deployment of the API. & $\bullet$ The organization has implemented the 'Distribute Versioning Notification Through Channel(s)' (1.3.3) and 'Log Activity' (4.2.3) practices. \newline $\bullet$ The organization has a deprecation protocol in place. & \citedata{peterLifecycle} & & 6& 1.1.7 & Check Backwards Compatibility & Version Management & Lifecycle Management & The organization has an approach in place with which it is able to detect breaking changes when versioning their API(s). Approaches include using a unit test suite, plugging an automated contract test suite into the CI/CD pipeline or using the \emph{swagger-spec-compatibility} library to detect differences between two Swagger / OpenAPI specifications \citedata{swaggerComp}. & $\bullet$ The organization has implemented the 'Implement Evolutionary API Strategy' (1.1.2) practice. \newline $\bullet$ The organization has a backwards compatibility checking approach in place. & \citedata{bhojwaniCheck} & & 6& 1.2.1 & Decouple API \& Software Versioning & Decoupling API \& Application & Lifecycle Management & The organization has decoupled the version of their API(s) from its software implementation. The API version should never be tied to the software version of the back-end data/service. A new API version should be created only if there is a change in the contract of the API that impacts the consumer. & $\bullet$ The organization has decoupled the version of their API(s) from its software implementation. & \citedata{de2017api} & & 6& 1.2.4 & Decouple Internal \& External Data Model & Decoupling API \& Application & Lifecycle Management & The organization has decoupled the data models that are used internally and externally from one another. Doing so is considered to be beneficial, since an application might use a normalized relational data model internally. While this data model is less suitable to expose through a public API, this separation of concerns allows the organization to evolve the relational data model at a different speed than the API. & $\bullet$ The organization has decoupled the data models that are used internally and externally from one another. & None. & & 6& 1.2.5 & Decouple Internal \& External Data Format & Decoupling API \& Application & Lifecycle Management & The organization has decoupled the data formats that are used internally and externally from one another. Doing so is considered to be beneficial, since an application might use a data format such as XML internally, while using a data format such as JSON for the API(s). This separation of concerns grants the organization greater flexibility in designing and developing their APIs.
& $\bullet$ The organization has decoupled the data formats that are used internally and externally from one another. & None. & & 6& 1.2.6 & Decouple Internal \& External Transport Protocol & Decoupling API \& Application & Lifecycle Management & The organization has decoupled the transport protocols that are used internally and externally from one another. Considering that an application might internally use a protocol that is less commonly used in modern APIs, such as SOAP or JDBC, and that may be less suitable for public APIs, the organization may opt to use a different protocol for their API(s). This separation of concerns grants the organization greater flexibility in designing and developing their APIs. & $\bullet$ The organization has decoupled the transport protocols that are used internally and externally from one another. & None. & & 6& 1.3.2 & Distribute Changelogs & Update Notification & Lifecycle Management & The organization uses (automated) email services to distribute changelogs describing the versioning of their API(s) to consumers. Ideally, the organization offers consumers the ability to opt-in or opt-out of this service. & $\bullet$ The organization uses (automated) email services to distribute changelogs describing the versioning of their API(s) to consumers. & \citedata{sandovalChange} & & 6& 1.3.3 & Distribute Versioning Notification Through Channel(s) & Update Notification & Lifecycle Management & The organization has the ability to distribute versioning notifications among consumers of their API(s) through established communication channels. Possible channels include email, social media, and announcements within the developer portal or reference documentation. Ideally, the organization offers consumers of their API(s) the option to select the communication channel they prefer receiving versioning notifications through. & $\bullet$ The organization has implemented the 'Establish Communication Channel' (5.2.1) and 'Distribute Changelogs' (1.3.2) practices. \newline $\bullet$ The organization has the ability to distribute versioning notifications among consumers of their API(s) through established communication channels. & \citedata{de2017api, sandovalChange} & & 6& 1.3.5 & Extend API with Versioning Information & Update Notification & Lifecycle Management & The organization has the ability to extend their API specification to incorporate warning headers into responses at run-time. By doing so, consumers of the API are notified of its impending deprecation, and possibly requested to change their implementation. & $\bullet$ The organization has the ability to introduce warning headers. & \citedata{de2017api} & & 6& 1.3.9 & Announce Versioning Roadmap & Update Notification & Lifecycle Management & The organization has announced a roadmap that details the planned dates on which the current (old) version of their API will be versioned to a new version, in order to notify consumers ahead of time. This may be done through email, social media, announcements within the developer portal or reference documentation.& $\bullet$ The organization has implemented the 'Distribute Versioning Notification Through Channel(s)' (1.3.3) practice. \newline $\bullet$ The organization has announced a versioning roadmap.
& \citedata{de2017api} & & 6& 2.1.1 & Implement Basic Authentication & Authentication & Security & The organization has the ability to implement basic authentication in order to authenticate consumers of their API(s). This may be accomplished through the use of HTTP Basic Authentication, with which the consumer is required to provide a username and password to authenticate, or by issuing API keys to consumers of the API. An app is identified by its name and a unique UUID known as the API key, often serving as an identity for the app making a call to the API. & $\bullet$ The organization has implemented HTTP Basic Authentication, or is able to issue API keys. & \citedata{biehl2015api, de2017api, Zhao_2018, sandoval2018_2} & & 6& 2.1.4 & Implement Authentication Protocol & Authentication & Security & The organization has implemented an authentication protocol or method in order to authenticate consumers of their API(s). In order to apply security for SOAP APIs, the usage of a WS-Security (WSS) protocol \citedata{wikipediaWS} may be opted for. This protocol specifies how integrity and confidentiality can be enforced on messages and allows the communication of various security token formats, such as Security Assertion Markup Language (SAML), X.509 and User ID/Password credentials. Consumers of REST APIs may be authenticated by using methods and protocols such as Client Certificate authentication, SAML authentication, or OpenID Connect \citedata{openIDConnect}. OpenID Connect 1.0 is an authentication protocol that builds on top of the OAuth 2.0 specification to add an identity layer. It extends the authorization framework provided by OAuth 2.0 to implement authentication.& $\bullet$ The organization has implemented a WSS authentication protocol, or methods and protocols such as Client Certificate authentication, SAML authentication, or OpenID Connect. & \citedata{de2017api, oracleWS, wikipediaWS} & & 6& 2.1.7 & Implement Single Sign-On & Authentication & Security & The organization has implemented Single Sign-on (SSO), which is an authentication method that enables users to securely authenticate with multiple applications and websites by using one set of credentials. The user is then signed in to other applications automatically, regardless of the platform, technology, or domain the user is using. & $\bullet$ The organization has implemented the 'Implement Authentication Protocol' (2.1.4) practice. \newline $\bullet$ The organization has implemented the Single Sign-on (SSO) authentication method. & \citedata{de2017api, Onelogin, SSO} & & 6& 2.2.2 & Implement Access Control & Authorization & Security & The organization has implemented an access control method in order to identify and authorize potential consumers of their API(s). In order to accomplish this, the Role-based Access Control (RBAC) method may be used, with which permissions may be assigned to users based on their role within the organization. Alternatively, the Attribute-based Access Control (ABAC) method may be used, with which permissions are granted based on an identity's attributes. Optionally, RBAC and ABAC policies may be expressed by using the eXtensible Access Control Markup Language (XACML). & $\bullet$ The organization has implemented the Role-based Access Control (RBAC) or Attribute-based Access Control (ABAC) method.
& \citedata{de2017api, hofman2014technical, thielens2013apis, WikiXACML} & & 6& 2.2.4 & Implement Token Management & Authorization & Security & The organization provides consumers of their API(s) with the ability to perform (access) token and API key management. This is an activity that involves measures to manage (i.e. review, store, create and delete) the tokens and API keys that are required to invoke back-end APIs. & $\bullet$ The organization allows consumers to manage their tokens and API keys. & \citedata{de2017api, hofman2014technical} & & 6& 2.2.6 & Implement Standardized Authorization Protocol & Authorization & Security & The organization has implemented an industry-standardized authorization protocol, such as the OAuth 2.0 Authorization protocol. OAuth is used as a mechanism to provide authorization to a third-party application for access to an end user resource on behalf of them. OAuth helps with granting authorization without the need to share user credentials. & $\bullet$ The organization has implemented an industry-standardized authorization protocol. & \citedata{de2017api,gadge2018microservice,gamez2015towards,hohenstein2018architectural,matsumoto2017fujitsu,patni2017pro,thielens2013apis,hofman2014technical,Xu_2019,Zhao_2018} & & 6& 2.2.7 & Implement Authorization Scopes & Authorization & Security & The organization has implemented an authorization scopes mechanism, such as the OAuth 2.0 Scopes mechanism \citedata{OAuthScopes}, to limit an application's access to its users' accounts. An application can request one or more scopes, after which this information is presented to the user in a consent screen. The access token that is issued to the application is then limited to the scopes granted. & $\bullet$ The organization has an authorization scopes mechanism in place. & None. & & 6& 2.3.1 & Implement Allow \& Deny IP Address Lists & Threat Detection \& Protection & Security & The organization has the ability to impose allow and deny list policies. Through these policies, specific IPs can either be excluded from requests, or separate quotas can be given to internal users by throttling access depending on their IP address or address range. & $\bullet$ The organization has the ability to impose allow and deny list policies. & \citedata{gadge2018microservice, gamez2015towards, hohenstein2018architectural} & & 6& 2.3.2 & Implement Injection Threat Protection Policies & Threat Detection \& Protection & Security & The organization has implemented injection threat protection security policies. Injection threats are common forms of attacks, in which attackers try to inject malicious code that, if executed on the server, can divulge sensitive information. These attacks may take the form of XML and JSON bombs or SQL and script injection. & $\bullet$ The organization has injection threat policies in place against XML or JSON bombs or SQL or script injection. & \citedata{de2017api, preibisch2018api, OWASPInjection} & & 6& 2.3.5 & Implement DoS Protection & Threat Detection \& Protection & Security & The organization has protection against DoS attacks in place. Hackers may try to bring down back-end systems by pumping unexpectedly high traffic through the APIs. Denial-of-service (DoS) attacks are very common on APIs. Hence, the organization should be able to detect and stop such attacks. Identification of a DoS attack may be done through Spike Arrest policies. & $\bullet$ The organization has protection against DoS attacks in place.
& \citedata{de2017api, gadge2018microservice, gamez2015towards} & & 6& 2.3.7 & Implement Security Breach Protocol & Threat Detection \& Protection & Security & The organization has a security breach protocol in place, which details what steps should be taken in the event where a security breach occurs. This protocol may include activities such as notifying stakeholders and consumers of the API, identifying the source of the breach by scanning activity logs, containing the breach by stopping the data leakage, and consulting third-party IT security and legal advice providers. & $\bullet$ The organization has a security breach protocol in place. & \citedata{Reynold2020, Soliya2020} & & 6& 2.3.9 & Conduct Security Review & Threat Detection \& Protection & Security & The organization has the ability to conduct security reviews that potential consumers of their API(s) must pass before being allowed to integrate the organization's API(s) into their application. This typically involves testing the degree to which customer data is protected and encrypted, and identifying security vulnerabilities that may be exploited, such as threats related to script injections and non-secure authentication and access control protocols. & $\bullet$ The organization has the ability to conduct security reviews. & \citedata{Salesforce2020} & & 6& 2.3.10 & Implement Zero Trust Network Access (ZTNA) & Threat Detection \& Protection & Security & The organization has implemented a Zero Trust Network Access (ZTNA) security architecture, where only traffic from authenticated users, devices, and applications is granted access to other users, devices, and applications within an organization. ZTNA may be regarded as a fine-grained approach to network access control (NAC), identity access management (IAM) and privilege access management (PAM), offering a replacement for VPN architectures. Optionally, ZTNA may be implemented through third-party providers such as Akamai, Cloudflare, or Cisco. & $\bullet$ The organization has implemented a Zero Trust Network Access (ZTNA) security architecture. & \citedata{ZTNAwiki2020} & & 6& 2.4.1 & Implement Transport Layer Encryption & Encryption & Security & The organization has implemented current and up-to-date encryption protocols such as Transport Layer Security (TLS). It is always desirable to have TLS-compliant endpoints to safeguard against man-in-the-middle attacks, and bi-directional encryption of message data to protect against tampering. & $\bullet$ The organization has implemented a current and up-to-date transport layer encryption protocol. & \citedata{de2017api, familiar2015iot, gadge2018microservice, hofman2014technical, preibisch2018api} & & 6& 2.4.3 & Implement Certificate Management & Encryption & Security & The organization has the ability to manage its TLS certificates. This involves monitoring and managing the acquisition and deployment of certificates, and tracking the renewal, usage, and expiration of SSL/TLS certificates. & $\bullet$ The organization has the ability to manage its TLS certificates. & \citedata{de2017api,hohenstein2018architectural,sine2015api,thielens2013apis,gadge2018microservice} & & 6& 3.1.2 & Implement Load Balancing & Resource Management & Performance & The organization has implemented load balancing to distribute API traffic to the back-end services. Various load balancing algorithms may be supported. Based on the selected algorithm, the requests must be routed to the appropriate resource that is hosting the API.
Load balancing also improves the overall performance of the API. & $\bullet$ The organization has implemented load balancing. & \citedata{biehl2015api,ciavotta2017microservice,de2017api,gadge2018microservice,gamez2015towards,montesi2016circuit,nakamura2017fujitsu,Xu_2019,Zhao_2018} & & 6& 3.1.5 & Implement Scaling & Resource Management & Performance & The organization has the ability to scale the amount of available resources up or down depending on traffic and API usage in a reactive manner. This may be done either manually or automatically, through the use of a load balancer. & $\bullet$ The organization has implemented the 'Implement Load Balancing' (3.1.2) practice. \newline $\bullet$ The organization has the ability to scale the amount of available resources up or down. & \citedata{akbulut2019software,jacobson2011apis,gadge2018microservice,hofman2014technical} & & 6& 3.1.6 & Implement Failover Policies & Resource Management & Performance & The organization has the ability to mitigate outages through the implementation of failover policies. This may be done by automatically deploying a service to a standby data center if the primary system fails, or is shut down for servicing. By being able to perform a failover, the particular service is guaranteed to be operational at one of the data centers. This is an extremely important function for critical systems that require always-on accessibility. & $\bullet$ The organization has the ability to mitigate outages through the implementation of failover policies. & \citedata{Barracuda2020} & & 6& 3.1.10 & Implement Predictive Scaling & Resource Management & Performance & The organization has the ability to scale the amount of available resources up or down depending on traffic and API usage in a proactive manner. This may be done automatically, through the use of a load balancer, based on insights gained from predictive analytics. & $\bullet$ The organization has implemented the 'Implement Load Balancing' (3.1.2) and 'Enable Predictive Analytics' (4.3.9) practices. \newline $\bullet$ The organization has implemented predictive scaling. & None. & & 6& 3.2.1 & Set Timeout Policies & Traffic Management & Performance & The organization is able to set timeout policies, by detecting and customizing the amount of time that is allowed to pass before a connection times out and is closed. Using timeout policies, the organization is able to ensure that the API always responds within a given amount of time, even if a long-running process hangs. This is important in high-availability systems where response performance is crucial, so that errors can be dealt with cleanly. & $\bullet$ The organization is able to set timeout policies on their API(s). & \citedata{tykTimeout} & & 6& 3.2.2 & Implement Request Caching & Traffic Management & Performance & The organization utilizes caching as a mechanism to optimize performance. As consumers of the API make requests on the same URI, the cached response can be used to respond instead of forwarding those requests to the back-end server. Thus, caching can help to improve an API's performance through reduced latency and network traffic. & $\bullet$ The organization utilizes caching as a mechanism to optimize performance.
& \citedata{biehl2015api,de2017api,gadge2018microservice,gamez2015towards,indrasiri2018developing,patni2017pro,preibisch2018api,vsnuderl2018rate,vijayakumar2018practical,hofman2014technical,Zhao_2018} & & 6& 3.2.3 & Perform Request Rate Limiting & Traffic Management & Performance & The organization has a mechanism in place with which limits may be imposed on the number of requests or faulty calls API consumers are allowed to make. Requests made within the specified limit are routed successfully to the target system. Those beyond the limit are rejected. & $\bullet$ The organization has a rate limiting mechanism in place for their API(s). & \citedata{de2017api,gamez2015towards,jacobson2011apis,lourencco2019framework,raivio2011towards,jayathilaka2015eager,vsnuderl2018rate,hofman2014technical,gadge2018microservice} & & 6& 3.2.4 & Perform Request Rate Throttling & Traffic Management & Performance & The organization has a mechanism in place with which API requests may be throttled down, without the connection being closed. This can help to improve the overall performance and reduce impacts during peak hours. It helps to ensure that the API infrastructure is not slowed down by high volumes of requests from a certain group of customers or apps. & $\bullet$ The organization has a rate throttling mechanism in place for their API(s). & \citedata{de2017api,fremantle2015web,familiar2015iot,gadge2018microservice,hohenstein2018architectural,indrasiri2018developing,jacobson2011apis,thielens2013apis,weir2015oracle} & & 6& 3.2.5 & Manage Quota & Traffic Management & Performance & The organization has policies in place regarding the number of API calls that an app is allowed to make to the back end over a given time interval. Calls exceeding the quota limit may be throttled or halted. The quota allowed for an app depends on the business policy and monetization model of the API. A common purpose for a quota is to divide developers into categories, each of which has a different quota and thus a different relationship with the API. & $\bullet$ The organization has implemented the 'Perform Request Rate Limiting' (3.2.3) practice or 'Perform Request Rate Throttling' (3.2.4) practice.\newline $\bullet$ The organization has quota policies for their API(s) in place. & \citedata{de2017api} & & 6& 3.2.6 & Apply Data Volume Limits & Traffic Management & Performance & The organization has a mechanism in place with which the amount of data consumers of their API(s) are allowed to consume in one call may be limited. This can help to improve the overall performance and reduce impacts during peak hours. It helps to ensure that the API infrastructure is not slowed down by calls that transport unnecessarily large volumes of data. & $\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline $\bullet$ The organization has a data volume limiting mechanism in place. & \citedata{DropboxDatalimiting} & & 6& 3.2.9 & Prioritize Traffic & Traffic Management & Performance & The organization is able to give a higher priority to processing API calls based on certain customer characteristics and/or classes. This priority may be based on their subscription, customer relationships, or agreements made in the SLA. & $\bullet$ The organization is able to prioritize traffic based on customer characteristics and/or classes.
&\citedata{de2017api} & & 6& 4.1.1 & Monitor API Health & Monitoring & Observability & The organization is able to perform health monitoring on its API(s), possibly through an API management platform, external monitoring tool/dashboard, functional testing or custom scripts and plugins. This should return basic information such as the operational status of the API, indicating its ability to connect to dependent services. & $\bullet$ The organization is able to perform health monitoring on its API(s). & \citedata{averdunkHealth, gadge2018microservice} & & 6& 4.1.3 & Monitor API Performance & Monitoring & Observability & The organization is able to perform performance monitoring on its API(s), possibly through an API management platform, external monitoring tool/dashboard, functional testing or custom scripts and plugins. Doing so should provide performance statistics that track the latency within the platform and the latency for back-end calls. This helps the organization in finding the source of any performance issues reported on any API. & $\bullet$ The organization is able to perform performance monitoring on its API(s). & \citedata{de2017api, Xu_2019} & & 6& 4.1.5 & Monitor Resource Usage & Monitoring & Observability & The organization is able to perform resource monitoring on its API(s), possibly through an API management platform, external monitoring tool/dashboard, functional testing or custom scripts and plugins. Doing so should provide insights into the amount of resources that are consumed as a result of calls made to the API(s). This may be done by measuring hardware metrics such as CPU, disk, memory, and network usage, or by using an indirect approximation of the amount of resources that are consumed by calls. & $\bullet$ The organization is able to perform resource monitoring on its API(s). & \citedata{KubernetesResources} & & 6& 4.2.1 & Log Errors & Logging & Observability & The organization has the ability to internally log errors that are generated as a result of consumption of their APIs. Error logs should typically contain fields that capture information such as the date and time the error has occurred, the error code, and the client IP and port numbers. & $\bullet$ The organization has the ability to internally log errors. & \citedata{andrey_kolychev_konstantin_zaytsev_2019_3256462, de2017api, medjaoui2018continuous} & & 6& 4.2.2 & Log Access Attempts & Logging & Observability & The organization has the ability to generate access logs, in which HTTP requests/responses are logged, to monitor the activities related to an API's usage. Access logs offer insight into who has accessed the API, by including information such as the consumer's IP address. & $\bullet$ The organization is able to perform access logging. & \citedata{wso2Access} & & 6& 4.2.3 & Log Activity & Logging & Observability & The organization has the ability to perform basic logging of API activity, such as access, consumption, performance, and any exceptions. In doing so, it may be determined what initiated various actions to allow for troubleshooting any errors that occur. & $\bullet$ The organization is able to perform activity logging. & \citedata{de2017api, fremantle2015web, gadge2018microservice} & & 6& 4.2.5 & Audit User Activity & Logging & Observability & The organization is able to perform user auditing.
Doing so enables the organization to review historical information regarding API activity, to analyze who accesses an API, when it is accessed, how it is used, and how many calls are made from the various consumers of the API. & $\bullet$ The organization is able to perform user auditing. & \citedata{de2017api, gadge2018microservice} & & 6& 4.3.2 & Report Errors & Analytics & Observability & The organization has the ability to report to consumers any errors that may occur during usage of their API(s). Error reports typically include information such as the error code and text describing why the error has occurred. & $\bullet$ The organization has implemented the 'Log Errors' (4.2.1) practice.\newline $\bullet$ The organization is able to report any errors to consumers. & \citedata{andrey_kolychev_konstantin_zaytsev_2019_3256462, de2017api, medjaoui2018continuous} & & 6& 4.3.3 & Broadcast API Status & Analytics & Observability & The organization broadcasts the status of its API(s) to consumers by providing them with operational information on the API in the form of an external status page, possibly on the developer portal or a website. The function of this status page is to let consumers know what is going on with the API at a technical level at any point in time. & $\bullet$ The organization has implemented the 'Monitor API Health' (4.1.1) practice.\newline $\bullet$ The organization broadcasts the operational status of its API(s) to consumers. & \citedata{sandoval2018} & & 6& 4.3.6 & Generate Custom Analysis Reports & Analytics & Observability & The organization is able to generate custom analysis reports on metrics of choice, possibly through an API management platform or monitoring tool. & $\bullet$ The organization is able to generate custom analysis reports. & \citedata{de2017api} & & 6& 4.3.7 & Set Alerts & Analytics & Observability & The organization has the ability to set and configure alerts that should trigger in case of certain events or thresholds being exceeded. Such events or thresholds may include resource limits being exceeded, or the occurrence of outages. Ideally, the organization is able to configure what persons should be alerted about the event, and through what communication channel they should be contacted. & $\bullet$ The organization has implemented the 'Monitor API Health' (4.1.1), 'Monitor API Performance' (4.1.3), and 'Monitor Resource Usage' (4.1.5) practices.\newline $\bullet$ The organization has the ability to set and configure alerts. & \citedata{UptrendsAlerting} & & 6& 4.3.9 & Enable Predictive Analytics & Analytics & Observability & The organization has the ability to perform predictive analytics, through techniques such as pattern recognition, data mining, predictive modelling, or machine learning, by analyzing current and historical facts to make predictions about future or otherwise unknown events. & $\bullet$ The organization has implemented the 'Monitor API Performance' (4.1.3) and 'Monitor Resource Usage' (4.1.5) practices.\newline $\bullet$ The organization has the ability to perform predictive analytics. & None. & & 6& 5.1.1 & Facilitate Developer Registration & Developer Onboarding & Community & The organization has a mechanism in place with which API consumers are able to register to the API so that they can obtain access credentials. Consumers can then select an API and register their apps to use it. & $\bullet$ The organization has a mechanism in place with which API consumers are able to register to their API(s).
& \citedata{de2017api} & & 6& 5.1.4 & Provide SDK Support & Developer Onboarding & Community & The organization offers API consumers the option to either download client-side SDKs for the API, or generate the SDK themselves from standard API definition formats such as OpenAPI (formerly known as Swagger). These functionalities are usually offered through the developer portal, where app developers often look for device-specific libraries to interact with the services exposed by the API. & $\bullet$ The organization offers API consumers the option to download or generate client-side SDKs for their API(s). & \citedata{de2017api} & & 6& 5.1.5 & Implement Interactive API Console & Developer Onboarding & Community & The organization provides API consumers with an interactive console. Using this console, developers are able to test the behavior of an API. & $\bullet$ The organization provides API consumers with an interactive console. & \citedata{biehl2015api} & & 6& 5.1.8 & Provide Sandbox Environment Support & Developer Onboarding & Community & The organization provides API consumers with an environment that they can use to mimic the characteristics of the production environment and create simulated responses from all APIs the application relies on. & $\bullet$ The organization provides API consumers with a sandbox environment. & \citedata{buidesign, jacobson2011apis, Mueller:2020, patni2017pro} & & 6& 5.2.1 & Establish Communication Channel & Support & Community & The organization has established a communication channel between the API provider and consumer with which support may be provided to the consumer. Possible communication media include email, phone, form, web, community forum, blogs or the developer portal.& $\bullet$ The organization has established one of the following communication channels with consumers of their API(s): email/phone/form/web/ community forum/blog/developer portal. & \citedata{de2017api, jacobson2011apis} & & 6 & 5.2.4 & Manage Support Issues & Support & Community & The organization is able to manage any support issues with their API(s). API consumers must be able to report any issues, bugs or shortcomings related to the API. They should be able to raise support tickets and seek help regarding API usage. Additionally, the API provider must be able to track and prioritize support tickets. & $\bullet$ The organization is able to manage any support issues with their API(s). & \citedata{de2017api, jacobson2011apis} & & 6& 5.2.6 & Dedicate Developer Support Team & Support & Community & The organization employs a dedicated developer support team that offers support to consumers of their API(s). This team should be well-trained and possess knowledge that enables them to assist consumers with any problems or difficulties they may experience during the usage or implementation of the API. & $\bullet$ The organization has implemented the 'Establish Communication Channel' (5.2.1) practice. \newline $\bullet$ The organization employs a dedicated developer support team that offers support to consumers of their API(s). & None. & & 6& 5.3.1 & Use Standard for Reference Documentation & Documentation & Community & The organization provides consumers of their API(s) with basic reference documentation on their website, developer portal or an external, third-party documentation platform. This documentation should document every API call, every parameter, and every result so that consumers are informed of the API's functionality.
Additionally, it must be specified using a documentation framework such as Swagger, RAML, API Blueprint, WADL, Mashery ioDocs, Doxygen, ASP.NET API Explorer, Apigee Console To-Go, Enunciate, Miredot, Dexy, Docco or TurnAPI. & $\bullet$ The organization provides consumers of their API(s) with basic reference documentation.\newline $\bullet$ The organization utilizes one of the following (or comparable) documentation tools to specify its API documentation: Swagger (OpenAPI), RAML, API Blueprint, WADL, Mashery ioDocs, Doxygen, ASP.NET API Explorer, Apigee Console To-Go, Enunciate, Miredot, Dexy, Docco or TurnAPI. & \citedata{de2017api, jacobson2011apis, medjaoui2018continuous} & & 6& 5.3.3 & Provide Start-up Documentation \& Code Samples & Documentation & Community & The organization provides consumers of their API(s) with start-up documentation on their website, developer portal or an external, third-party documentation platform. This type of documentation explains key concepts by summarizing the reference documentation, accelerating understanding as a result. Optionally, a list of Frequently Asked Questions and code samples that may be readily used in apps to invoke the API may be included. & $\bullet$ The organization has implemented the 'Use Standard for Reference Documentation' (5.3.1) practice. \newline $\bullet$ The organization provides consumers of their API(s) with start-up documentation. & \citedata{de2017api, jacobson2011apis} & & 6& 5.3.5 & Create Video Tutorials & Documentation & Community & The organization is able to create video tutorials in order to provide consumers with visual information that details how to use the API and integrate it into their applications. & $\bullet$ The organization is able to create video tutorials. & None. & & 6& 5.4.1 & Maintain Social Media Presence & Community Engagement & Community & The organization is able to maintain their social media presence on platforms such as Facebook or Twitter. This may involve activities such as reporting on the API's status, announcing news and updates, responding to questions, or reacting to feedback. & $\bullet$ The organization is able to maintain their social media presence on platforms such as Facebook or Twitter. & None. & & 6& 5.4.3 & Provide Community Forum & Community Engagement & Community & The organization provides (potential) consumers of their API(s) with a community forum, possibly through a website or API management platform. This forum may assist in building and interconnecting a developer community, by providing them with a central hub they can use to communicate with one another and the organization. Additionally, it may serve as a repository with guides on API usage, documentation and support. & $\bullet$ The organization provides API consumers with a community forum. & \citedata{de2017api} & & 6& 5.4.4 & Provide Developer Portal & Community Engagement & Community & The organization provides (potential) consumers of their API(s) with a developer portal. A developer portal provides the platform for an API provider to communicate with the developer community. Additionally, it typically offers functionality such as user registration and login, user management, documentation, API key management, test console and dashboards. & $\bullet$ The organization has implemented a developer portal.
& \citedata{de2017api, fremantle2015web, medjaoui2018continuous, sine2015api} & & 6& 5.4.7 & Organize Events & Community Engagement & Community & The organization is actively involved in organizing or participating in events that are aimed towards engaging and motivating the developer community to incorporate their API(s) into their applications. This may include events such as hackathons, conferences, or workshops. & $\bullet$ The organization is actively involved in organizing or participating in developer community events. & None. & & 6& 5.4.9 & Dedicate Evangelist & Community Engagement & Community & The organization employs a dedicated API evangelist. This individual is responsible for evangelizing the API by gathering consumer feedback, and promoting the organization's API(s) by creating samples, demos, training materials and performing other support activities aimed towards maximizing the developer experience. & $\bullet$ The organization employs a dedicated API evangelist. & None. & & 6& 5.5.1 & Enable API Discovery & Portfolio Management & Community & The organization provides potential consumers of their API(s) with a mechanism to obtain information, such as documentation and metadata, about their API(s). This mechanism may take the shape of an external website, hub or repository that consumers can freely browse through. & $\bullet$ The organization has a mechanism in place with which their API(s) may be discovered. & \citedata{biehl2015api, hofman2014technical} & & 6& 5.5.4 & Provide API Catalog & Portfolio Management & Community & The organization provides API consumers with an API Catalog. This is a searchable catalog of APIs. An API catalog is also sometimes referred to as an API registry. API consumers should be able to search the catalog based on various metadata and tags. The catalog should document the API functionality, its interface, start-up documentation, terms and conditions, reference documentation, and so forth.& $\bullet$ The organization has implemented the 'Enable API Discovery' (5.5.1) practice. \newline $\bullet$ The organization provides API consumers with a searchable API catalog. & \citedata{de2017api, lourencco2019framework, vijayakumar2018practical, hofman2014technical, medjaoui2018continuous} & & 6& 5.5.5 & Bundle APIs & Portfolio Management & Community & The organization is able to combine two or more APIs into a bundle. This is a collection of API products that is presented to developers as a group, and typically associated with one or more rate plans for monetization. & $\bullet$ The organization is able to combine two or more APIs into a bundle. & \citedata{apigeebundling} & & 6& 6.1.1 & Publish Informal SLA & Service-Level Agreements & Commercial & The organization has the ability to publish and agree upon an informal, bare-bones SLA with consumers of their API(s). This type of SLA is minimalistic and loose in terms of the nature and number of agreements it contains, as well as the consequences attached to these agreements should they be violated. This type of SLA is satisfactory for organizations that provide non-critical services and that have close relationships with their consumers and partners. & $\bullet$ The organization has the ability to publish and agree upon an informal SLA with consumers. & None. & & 6& 6.1.3 & Provide SLA & Service-Level Agreements & Commercial & The organization has the ability to provide and agree upon a formal, elaborate SLA with consumers of their API(s).
This type of SLA is extensive and strict in terms of the nature and number of agreements it contains, as well as the consequences attached to these agreements should they be violated. Typically, agreements regarding the guaranteed uptime of the API on a monthly or yearly basis are included in this type of SLA, along with guaranteed response times in the event of incidents, as well as policies regarding privacy, security, and possibly rate and data quotas. Additionally, when providing a formal SLA, the organization should have a plan in place that details what course of action should be taken in the event that agreements fail to be upheld. & $\bullet$ The organization has the ability to provide and agree upon a formal SLA with consumers. & \citedata{de2017api} & & 6& 6.1.6 & Proactively Monitor SLAs & Service-Level Agreements & Commercial & The organization is able to proactively monitor metrics that are relevant in checking whether the agreements made with API consumers are adhered to. Such metrics may include availability, performance and functional correctness. & $\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline $\bullet$ The organization is able to perform SLA monitoring. & \citedata{moizSLA} & & 6& 6.1.7 & Customize Personalized SLA & Service-Level Agreements & Commercial & The organization has the ability to provide consumers of their API(s) with personalized SLAs. This type of SLA is suitable for intensive consumers that utilize services offered by the API in such a way that requires customized agreements as compared to those that are offered as part of the organization's standard SLA. For example, some consumers may require minimal latency and response times for their calls, want to make large numbers of calls, or demand API uptime approaching 100\%. Additionally, a personalized SLA may be required due to the consumer being located in a different geographic location than other consumers, requiring customized agreements with regards to privacy laws and regulations. & $\bullet$ The organization has implemented the 'Provide SLA' (6.1.3) practice.\newline $\bullet$ The organization has the ability to provide consumers of their API(s) with personalized SLAs. & \citedata{manualSLA} & & 6& 6.2.6 & Adopt Subscription-based Monetization Model & Monetization Strategy & Commercial & The organization has adopted a monetization model that is based on subscriptions. With this model, API consumers pay a flat monthly fee and are allowed to make a certain number of API calls per month. & $\bullet$ The organization has implemented the 'Implement Subscription Management System' (6.3.2) and 'Manage Quota' (3.2.5) practices. \newline $\bullet$ The organization has adopted a monetization model that is based on subscriptions. & \citedata{budzynskiMonetization} & & 6& 6.2.8 & Adopt Tier-Based Monetization Model & Monetization Strategy & Commercial & The organization has adopted a monetization model that is based on tiered access. Typically, each tier has its own set of services and allowances for access to API resources, with increasing prices for higher tiers. & $\bullet$ The organization has implemented the 'Prioritize Traffic' (3.2.9) and 'Manage Quota' (3.2.5) practices. \newline $\bullet$ The organization utilizes a monetization model that is based on tiered access.
& \citedata{redhatMonetization, budzynskiMonetization} & & 6& 6.2.9 & Adopt Freemium Monetization Model & Monetization Strategy & Commercial & The organization has adopted a monetization model that is based on freemium functionalities and access. This involves providing consumers with a limited part of the services and functionalities the API offers as a whole. Consumers that wish to utilize all services and functionalities are required to have an active, paid subscription to the API. & $\bullet$ The organization utilizes a monetization model that is based on freemium functionalities and access. & \citedata{redhatMonetization, budzynskiMonetization} & & 6& 6.2.10 & Adopt Metering-Based Monetization Model & Monetization Strategy & Commercial & The organization utilizes a monetization model that is based on metering. With this model, API consumers pay for the amount of resources they use. This may be measured in terms of bandwidth, storage or the number of calls made. & $\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline $\bullet$ The organization utilizes a monetization model that is based on metering. & \citedata{redhatMonetization, budzynskiMonetization} & & 6& 6.3.2 & Implement Subscription Management System & Account Management & Commercial & The organization has a system in place with which it is able to manage the existing subscriptions of consumers of their API. A subscription management system provides support for billing on a recurring basis, as well as insight into active subscriptions. & $\bullet$ The organization has implemented a subscription management system. & \citedata{fremantle2015web, preibisch2018api, raivio2011towards} & & 6& 6.3.7 & Report on API Program Business Value & Account Management & Commercial & The organization is able to generate business value reports associated with their API(s). Business value reports gauge the monetary value associated with the API program. Monetization reports of API usage provide information on the revenue generated from the API. Value-based reports should also be able to measure customer engagements. Engagements can be measured by the number of unique users, the number of developers registered, the number of active developers, the number of apps built using the APIs, the number of active apps, and many other items. Optionally, these metrics may be visualized in the form of dashboards, so that they may then easily be shared and presented to relevant internal stakeholders to communicate the API program's business value. & $\bullet$ The organization has implemented the 'Generate Custom Analysis Reports' (4.3.6) practice. \newline $\bullet$ The organization is able to generate business value reports associated with their API(s). & \citedata{de2017api}& & 6& 6.3.8 & Provide Subscription Report to Customer & Account Management & Commercial & The organization is able to generate subscription reports for consumers of their API(s). These reports contain metrics gathered through internal monitoring and analytics. Such metrics may include the number of calls made, performance, and status regarding remaining allowed quotas. & $\bullet$ The organization has implemented the 'Generate Custom Analysis Reports' (4.3.6) and 'Implement Subscription Management System' (6.3.2) practices. \newline $\bullet$ The organization is able to generate subscription reports for consumers of their API(s).
& \citedata{de2017api}& & 6& 6.3.9 & Proactively Suggest Optimizations to Customers & Account Management & Commercial & The organization has the ability to train and help customers in using their API(s) as effectively and efficiently as possible. This may be in the best interest of both parties, as optimizing inefficient calls may positively impact traffic load on the API infrastructure. & $\bullet$ The organization has implemented the 'Monitor API Performance' (4.1.3) and 'Monitor Resource Usage' (4.1.5) practices. \newline $\bullet$ The organization is able to proactively suggest optimizations to consumers of their API(s). & \citedata{buidesign, de2017api}& & 6& }
\dataheight=9
\def\returnData(#1){\expandafter\checkMyData(#1)\cachedata}
\newcounter{deTeller} \newcounter{volgendeStart} \newcounter{volgendeStop}
\setcounter{deTeller}{1}
\setcounter{volgendeStart}{\value{deTeller}}
\newcounter{tempCount} \newcounter{groteLoop} \newcounter{loop} \newcounter{loopPlusEen} \newcounter{loopMinEen} \newcounter{stopTeller} \newcounter{oldStopTeller}
\forloop{groteLoop}{1}{\value{groteLoop}<21}{
\setcounter{oldStopTeller}{0}
\setcounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{6}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{2}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{7}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{5}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
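% NB: the surrounding \forloop appears to emit one table per capability (it runs 20 times).
% Each \addtocounter{stopTeller}{n} step presumably encodes the number of practices in the
% next capability, so stopTeller accumulates cumulative record counts; volgendeStop then
% ends up holding the cumulative count for the capability that contains the current record
% index (deTeller). The inner \forloop further below then typesets the practices of that
% capability one record at a time, advancing deTeller as it goes.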
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{5}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{loopPlusEen}{\value{loop}} \setcounter{loopMinEen}{\value{loop}} \addtocounter{loopPlusEen}{1} \addtocounter{loopPlusEen}{-1}
\begin{table}[ht!] \footnotesize \begin{tabular}{|p{.1cm}|p{.1cm}|ll|ll|} \hline \multirow{15}{*}{\rotatebox[origin=c]{90}{\returnData(\value{deTeller},4)}} & \multirow{15}{*}{\rotatebox[origin=c]{90}{\returnData(\value{deTeller},3)}} & \forloop{loop}{\value{volgendeStart}}{\value{loop}<\value{volgendeStop}}{ \textbf{Practice Code}: & \returnData(\value{deTeller},1) & \textbf{Practice Name}: & \returnData(\value{deTeller},2) \\\cline{3-6} &&\multicolumn{4}{p{15.5cm}|}{\textbf{\textit{Description: }}\returnData(\value{deTeller},5)}\\\cline{3-6} &&\multicolumn{4}{p{15.5cm}|}{\textbf{\textit{Implemented when:}} \newline \returnData(\value{deTeller},6)}\\\cline{3-6} &&\multicolumn{4}{p{15.5cm}|}{Literature: \returnData(\value{deTeller},7)}\\\cline{3-6} &&\multicolumn{4}{|p{15.5cm}}{}\\\cline{3-6} && \addtocounter{deTeller}{1} } \setcounter{volgendeStart}{\value{deTeller}} \textbf{Practice Code}: & \returnData(\value{deTeller},1) & \textbf{Practice Name}: & \returnData(\value{deTeller},2) \\\cline{3-6} &&\multicolumn{4}{p{15.5cm}|}{\textbf{\textit{Description: }}\returnData(\value{deTeller},5)}\\\cline{3-6} &&\multicolumn{4}{p{15.5cm}|}{\textbf{\textit{Implemented when:}} \newline \returnData(\value{deTeller},6)}\\\cline{3-6} &&\multicolumn{4}{p{15.5cm}|}{Literature: \returnData(\value{deTeller},7)}\\\hline \end{tabular} \end{table} \addtocounter{deTeller}{1} }
\newpage \section{Version 0.1} \label{sec:version01} This version was populated using the primary source~\cite{de2017api}. It consisted of four focus areas. Further details are omitted because of the intermediate state of the model. \begin{table}[h] \centering \begin{tabular}{l|c} Focus Area & Number of capabilities\\ \hline \textbf{Developer Enablement} & 4 \\ \textbf{Security and Communication} & 5 \\ \textbf{Lifecycle} & 2 \\ \textbf{Auditing and Analysis} & 3 \\ \end{tabular} \caption{API-m-FAMM version 0.1} \label{tab:version01} \end{table} \section{Version 0.2} \label{sec:version02} This version was populated using the SLR~\cite{mathijssen2020identification}. The relocation of practices and capabilities was primarily driven by the decision to split the \textit{security and communication} focus area up into two separate focus areas: \textit{security} and \textit{communication}. This decision was made because security was found to be a substantial and integral topic of API management in itself.
Moreover, it was decided that the communication focus area, which was later renamed to \textit{performance}, comprises capabilities such as \textit{service routing} that are unrelated to security. Furthermore, the decision was made to split the \textit{auditing and analytics} focus area up into technical management, which was later renamed to \textit{monitoring}, and business-side, which was later renamed to \textit{commercial}. This was done due to the difference in nature between capabilities such as \textit{monetization} and \textit{analytics}, which were originally grouped together. This difference was further compounded by the decision to split the traffic management capability into two separate capabilities, with one capturing the business-level aspect of this capability and the other encompassing operational aspects. The former capability was then moved to the new commercial focus area along with the monetization capability, while the latter was moved to the performance focus area. \begin{table}[h] \centering \begin{tabular}{l|c} Focus Area & Number of capabilities\\ \hline \textbf{Community Engagement} & 4 \\ \textbf{Security} & 2 \\ \textbf{Communication} & 2 \\ \textbf{Lifecycle} & 5 \\ \textbf{Technical Management} & 4 \\ \textbf{Business Side} & 3 \\ \end{tabular} \caption{API-m-FAMM version 0.2} \label{tab:version02} \end{table} \section{Version 0.3} \label{sec:version03} More information was needed to determine whether practices and capabilities were suited to be included in the model with regards to their scope and relevance. In order to resolve this, the collection of practices and capabilities was verified by using information gathered from grey literature, such as online blog posts, websites, commercial API management platform documentation and third-party tooling. Doing so resulted in the following changes made with regards to the contents of the API-m-FAMM: \begin{itemize} \item \textit{Removal} of several practices that were found to be irrelevant, redundant, or too granular. For example, \textit{filtering spam calls}, which was originally uncovered as part of the SLR, was found to be redundant as this practice is already covered by practices such as \textit{DoS protection} and \textit{rate limiting}. Consequently, such practices were removed. \item \textit{Addition} of several practices that were newly identified. For example, \textit{predictive analytics} was found to be a practice that is offered by multiple commercial API management platform providers. Similarly, \textit{including change logs} was found to be a practice that is recommended by practitioners as a best practice when updating APIs. Consequently, such practices were added to the API-m-FAMM. \item \textit{Merging} of several practices that were found to overlap or to be too granular to stand on their own. For example, practices that were originally uncovered through the SLR, such as \textit{email-based support}, \textit{phone-based support}, and \textit{form-based support}, were found to be redundant, as no significant difference with regards to their maturity could be discerned among these practices. Consequently, these practices were merged into one practice: \textit{establish communication channel}. \item \textit{Splitting} of practices that were found to be compound, combining elements that were thought to warrant separate, individual practices.
For example, the \textit{black or whitelist IP addresses} practice was split up into the \textit{blacklist IP addresses} and \textit{whitelist IP addresses} practices, because these were found to be relevant practices on their own.
\item \textit{Relocation} of practices to different capabilities than those they were originally assigned to. For example, the \textit{OAuth 2.0 authorization} practice was moved from the \textit{authentication} capability to the newly introduced \textit{authorization} capability, as OAuth is considered to be an authorization protocol.
\item \textit{Renaming} of several practices, as well as updating and formulating practice descriptions that were previously missing or incomplete. For example, the \textit{provide code samples} practice was renamed to \textit{provide FAQ with code samples}, because it was found that these two practices often go hand in hand. Additionally, this practice's description was updated.
\item \textit{Identification} of dependencies among practices, either among practices within the same capability or among practices across different capabilities or focus areas. Some dependencies were found to be relatively straightforward, such as the \textit{multiple API versioning strategy} practice depending on the implementation of the \textit{maintain multiple APIs} practice. However, dependencies between practices belonging to different capabilities, such as \textit{quota management} depending on \textit{rate limiting} or \textit{rate throttling}, were also identified.
\item \textit{Arrangement} of practices based on their interrelated maturity with regards to the other practices in the capability they are assigned to. At this point in time, this was performed on a mostly subjective and empirical basis, and it should thus be regarded as a first attempt to discern practices with regards to their relative maturity.
\item \textit{Formulation} of implementation conditions corresponding to each practice, which are aimed at providing practitioners with an overview of the necessary conditions that must be met before a practice may be marked as implemented.
\end{itemize}
The number of practices and capabilities that were added, removed, merged, split, relocated, or renamed as a result of the supplemental material validation process and the aforementioned discussion session is shown in Table~\ref{tab:ResultsSupplemental} below. However, it should be noted that some practices that were added as a result of the online verification process were later removed as a result of the discussion session. As such, the numbers corresponding to the \textit{added} and \textit{removed} operations presented in Table~\ref{tab:ResultsSupplemental} are slightly inflated.
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\textbf{Component} & \textbf{Added} & \textbf{Removed} & \textbf{Merged} & \textbf{Split} & \textbf{Relocated} & \textbf{Renamed}\\
\hline
Practice & 17 & 27 & 39 & 4 & 12 & 93 \\
Capability & 1 & 1 & 1 & 0 & 1 & 2 \\
\end{tabular}
\caption{Number of practices and capabilities added, removed, merged, split, relocated, or renamed as a result of the supplemental material validation process and the discussion session.}
\label{tab:ResultsSupplemental}
\end{table}
At this stage of the design process, the model is grounded in literature, and it is verified and supplemented by using grey literature.
As a result of these activities, the initial body of 114 practices and 39 capabilities that was extracted through the SLR was refined and narrowed down to 87 practices and 23 capabilities, which are divided among six focus areas. The contents of this version of the API-m-FAMM can be found in \emph{version2} of this published source document on arXiv~\cite{mathijssen2021source}. The general structure of the API-m-FAMM version 0.3 is presented in Figure~\ref{fig:api-m-famm03}. As shown, each individual practice is assigned to a maturity level within its respective capability. Additionally, it should be noted that practices cannot depend on practices that have a higher maturity level, even when these belong to another capability. For example, practice 1.4.4 is dependent on the implementation of practice 1.2.3, resulting in a higher maturity level being assigned to the former of these practices. Figure~\ref{fig:api-m-famm03} also shows that at this stage, 17 practices were added in addition to those extracted through the SLR. Furthermore, 14 new practices were introduced as a result of merging 39 former practices, as shown in Table~\ref{tab:ResultsSupplemental}. Moreover, descriptions based on grey literature were formulated for 18 practices for which adequate descriptions could not be identified in academic literature. Lastly, 6 practices are accompanied by descriptions that were formulated by the researchers themselves, based on empirical knowledge. Even though suitable descriptions could not be identified for these practices in academic or grey literature, they were included in this version of the API-m-FAMM because they were hypothesized to be relevant for practitioners. Among other things, this hypothesis is tested through expert interviews, which are part of the next phase in constructing the API-m-FAMM.
\begin{figure*}
\centering
\includegraphics[page=1, clip, trim=0cm 0cm 0cm 0cm, width=\textwidth]{Figures/API-m-FAMMv0.3.pdf}
\caption{Version 0.3 of the API-m-FAMM and the focus areas, capabilities, and practices it consists of. Additionally, it is shown which capabilities and practices were newly introduced between API-m-FAMM v0.2 and v0.3, as well as for which practices descriptions were formulated based on supplemental material. Please consult the legend on the top left-hand side of the figure for more information regarding the differently shaped and/or colored components.}
\label{fig:api-m-famm03}
\end{figure*}
\section{Version 0.4}
\label{sec:version04}
Eleven expert interviews were conducted. During these interviews, many additions and changes in terms of the API-m-FAMM's structure and contents were suggested by experts, who were encouraged to elaborate on their motivation regarding these suggestions. By transcribing and processing the recordings of all interviews, the numerous suggestions that were made by experts to either add, remove, merge, split, relocate, or rename focus areas, capabilities, and practices were compiled. The number of times these suggestions occurred is shown in Table \ref{tab:EvaluationChanges} below, grouped by the type of suggested change as well as the type of component they apply to. Additionally, these changes are visually represented in their entirety in Figure \ref{fig:api-m-famm04a}, along with the number of experts that suggested for a specific change to be made. Evidently, the number of practices that were suggested to be added is relatively high.
It should be noted that while a large part of these practices were explicitly mentioned by experts, some were also indirectly extracted from the transcripts as a result of comments the experts had made. Additionally, no suggestions are rejected at this point; hence, all suggestions that were made by experts are taken into account and incorporated into Table \ref{tab:EvaluationChanges} and Figure \ref{fig:api-m-famm04a}.
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\textbf{Component} & \textbf{Added} & \textbf{Removed} & \textbf{Merged} & \textbf{Split} & \textbf{Relocated} & \textbf{Renamed}\\
\hline
\textbf{Practice} & 50 & 5 & 3 & 3 & 9 & 3 \\
\textbf{Capability} & 7 & 0 & 0 & 2 & 2 & 2 \\
\textbf{Focus Area} & 1 & 0 & 0 & 0 & 0 & 3\\
\end{tabular}
\caption{Number of practices, capabilities, and focus areas that were suggested to be added, removed, merged, split, relocated or renamed by experts during interviews.}
\label{tab:EvaluationChanges}
\end{table}
\begin{figure*}
\centering
\includegraphics[page=1, clip, trim=7cm 4cm 9cm 0.5cm, width=0.8\textwidth]{Figures/API-m-FAMMv0.4a.pdf}
\caption{API-m-FAMM version 0.3 plus all suggested changes that were made by experts during interviews. Please consult the legend on the left-hand side of the figure for more information regarding the manner in which the colored outlines should be interpreted. Practices and capabilities that were not directly categorized by the expert during interviews are placed in the 'undecided' box on the top-left hand side.}
\label{fig:api-m-famm04a}
\end{figure*}
After having compiled all suggestions made by experts, extensive discussion sessions are held among all authors to analyze, discuss, and interpret them. All suggested changes to either a focus area itself, or to the capabilities or practices it consists of, are then analyzed and interpreted with the help of the transcribed arguments that were provided by experts during the interviews. As a result, numerous modifications are made to the API-m-FAMM, which are visualized in their entirety in Figure \ref{fig:api-m-famm04b}. Additionally, some fundamental decisions are made with regards to the scope and contents of the API-m-FAMM.
\begin{itemize}
\item Firstly, it was decided that all practices that are contained in the model should be implementable \textit{without} the usage of an API management platform. This decision was made for several reasons. First of all, it was found that among the organizations at which the consulted experts are employed, only a small portion actively utilizes a third-party platform to manage their API(s). When asked, experts belonging to the category that had not incorporated an API management platform into their organizations cited arguments such as wanting to avoid vendor lock-in, high costs, or simply not having a need for many of the functionalities provided by such management platforms. Oftentimes, the latter argument was tied to the organization currently exclusively using internal APIs, thus removing the need for a management platform to manage and expose any partner or public APIs altogether. Considering that these arguments may reasonably be expected to also apply to other organizations wishing to consult the API-m-FAMM to evaluate and improve upon their API management related practices, any practices or capabilities that were found to be directly tied to the usage of an API management platform were removed from the model.
For example, this was the case for the \textit{Visual Data Mapping} practice, which is exclusively provided by the \textit{Axway} API management platform\footnote{\url{https://www.axway.com/en/products/api-management}}, as well as for the practices corresponding to the newly suggested \textit{Error Handling} capability, which are implementable through the use of the \textit{Apigee} platform\footnote{\url{https://cloud.google.com/apigee/api-management?hl=nl}}. An additional reason for excluding such capabilities and practices is that they are likely to evolve throughout the coming years, which would in turn require the API-m-FAMM to be updated as well. In order to prevent this, the API-m-FAMM and the practices it comprises should be platform-independent. Lastly, the purpose of the API-m-FAMM is not to guide practitioners in selecting an appropriate commercial API management platform for their organization. Instead, the API-m-FAMM aims to guide organizations in assessing and evaluating their current maturity in terms of those processes that are considered to be best practices and are at the core of API management, so that they may then develop a strategy towards implementing practices that are currently not implemented and are desirable for further maturing the organization in terms of API management.
\item Secondly, many practices were deemed too granular, too specific, or irrelevant to be included. Consequently, such practices were either removed or merged into a practice that is composed of these smaller practices. Examples of practices that were found to be too granular include newly suggested practices such as \textit{Event Participation}, \textit{Event Hosting}, and \textit{Organize Hackathons}. Additionally, since determining a difference among these practices in terms of their maturity was found to be unfeasible, they were instead merged into the \textit{Organize Events} practice and included in its description.
\item Thirdly, some practices that describe a specific protocol were renamed to be more generic. For example, the former \textit{OAuth 2.0 Authorization} practice was renamed to \textit{Standardized Authorization Protocol}, with a referral to the OAuth 2.0 protocol being included in its description instead. This was done to ensure that the API-m-FAMM remains functional and applicable in the future, since it is likely that new protocols will be developed and adopted across the industry. These concerns also applied to suggested practices corresponding to individual authentication methods, such as client certificate and SAML authentication, which were ultimately merged into the \textit{Implement Authentication Protocol} practice and included in its description. An additional reason for doing so in the case of these authentication methods is that they each have their individual strengths and weaknesses, with one not always necessarily being 'better' or more mature than the other. Furthermore, some methods may be more appropriate for some use cases than others.
\item Furthermore, some capabilities and their corresponding practices were excluded from the model because they were thought to apply to most organizations in general, including organizations that are not necessarily involved with API management. An example of this is the \textit{Financial Management} capability that was suggested to be added.
Considering that practices such as \textit{Automated Billing}, \textit{Third-Party Payment Provider Integration}, and \textit{Revenue Sharing} are best practices that apply to commercially oriented organizations in general, they were removed. This decision was made to ensure that the contents of the API-m-FAMM are exclusively composed of practices that are directly tied to API management.
\item During interviews focused on the \textit{Lifecycle} focus area, experts were asked to elaborate on the manner in which their organization has implemented \textit{Governance}. Based on the answers given, however, it became clear that capturing governance-related processes in the form of practices is not feasible. This may largely be attributed to the observation that such processes seem to be inherent to specific characteristics of the organization, such as its culture, size, usage of a third-party API management platform, as well as the number of APIs that are used or exposed by the organization. Some practices were suggested for addition, such as \textit{Define Naming Conventions}, \textit{Define Best Practices}, and \textit{Define Integration Patterns}. However, after having discussed these with experts in subsequent interviews, it was decided that these practices are too abstract and insufficiently concrete in comparison with other practices, considering that they may be interpreted in different ways by practitioners due to the varying organizational characteristics mentioned earlier. Hence, the \textit{Governance} capability that was originally part of the \textit{Lifecycle} focus area was removed, along with the \textit{Design-time Governance} and \textit{Run-time Governance} practices it was composed of.
\item A valuable suggestion that was made by experts is the addition of monitoring in terms of the amount of resources that calls to the API consume, such as CPU, disk, memory, and network usage. Considering that this monitoring perspective was previously missing alongside performance and health monitoring, and that it was suggested by multiple experts independently from one another, the \textit{Resource Monitoring} practice was newly added. Similarly, this resource perspective was also found to be missing from the \textit{Traffic Management} capability, alongside the \textit{Request Limiting} and \textit{Request Throttling} practices. Hence, the \textit{Data Volume Limiting} practice was newly added.
\item Another fundamental change that was made to the API-m-FAMM is the renaming of the former \textit{Monitoring} focus area to \textit{Observability}. This renaming was independently suggested by two experts, who argued that observability better describes the focus area, considering that the \textit{Analytics} capability was split into two capabilities: \textit{Monitoring} and \textit{Analytics}. This decision was made because experts were of the opinion that monitoring is concerned with gathering (real-time) metrics related to the API's health, performance, and resource usage, while analytics is concerned with aggregating these metrics so that insights may be formed and subsequent action may be taken based on them. As a result, the monitoring capability was added, and practices related to either monitoring or analytics were moved to the capability they are associated with.
\item Moreover, some practices that were originally posed from a passive perspective were changed with the intention of being conducted in an active manner.
For example, the \textit{Include Changelogs} practice was renamed to \textit{Distribute Changelogs}, and its description was changed so that its focus shifts from the passive inclusion of changelogs in the reference documentation to the active distribution of changelogs to consumers of the API. Similarly, the \textit{Provide API Status Page} practice was renamed to \textit{Broadcast API Status}, and its description was changed to signify that the operational status of the API is broadcast to consumers in an active manner, as opposed to providing an API status page in a passive fashion. These changes were made because, when phrased in a passive manner, these practices were deemed insufficiently relevant for inclusion in the API-m-FAMM, considering that the level of maturity required to implement them is too low compared to other practices. When phrased from an active perspective, however, these practices can be considered best practices that an organization should strive to implement.
\item Finally, a major fundamental change was made with regards to the \textit{Lifecycle Control} capability. While practices belonging to this capability, such as \textit{API Endpoint Creation}, \textit{API Publication}, and \textit{Import Pre-existing API}, are considered to be an integral aspect of API management in both literature and industry, the decision was made to exclude these practices from the API-m-FAMM. This choice was made due to the fact that being able to design, create, publish, and deploy an API is a precondition for implementing all other practices the model consists of. Moreover, during interviews it became clear that it was difficult for experts to rank these practices in terms of their maturity, considering that they are often performed in chronological order.
\end{itemize}
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=7cm 4cm 2cm 0.5cm, width=0.8\textwidth]{Figures/API-m-FAMMv0.4b.pdf}
\caption{API-m-FAMM v0.4, including all suggested changes that were made by experts during interviews, as well as the manner in which they were subsequently interpreted and applied by the researchers. Please consult the legend on the top left-hand side of the figure for more information regarding the manner in which the colored outlines and fills should be interpreted.}
\label{fig:api-m-famm04b}
\end{figure*}
Next, the practices are assigned to individual maturity levels. This is done by using the results of the maturity ranking exercises from the interviews. First, however, all dependencies between practices are identified; these are depicted in Figure \ref{API-m-FAMM Dependencies}. In this context, a dependency entails that the practice in question may only be implemented after the one or more practices it depends on have been implemented. These dependencies may occur: (1) between practices within the same capability; (2) between practices that are assigned to different capabilities within the same focus area; or (3) between practices that are assigned to different capabilities and focus areas. In total, 34 dependencies are identified by analyzing literature stemming from the SLR and online supplemental material, as well as input received through expert interviews and the discussion sessions that were conducted among the researchers.
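To make this bookkeeping concrete, the sketch below (hypothetical Python, not part of the API-m-FAMM artifacts; the practice-to-capability assignments are illustrative) shows one way in which such dependencies could be encoded and classified into the three types listed above:
\begin{verbatim}
from collections import Counter

# Each practice maps to its (focus area, capability); a pair (p, q) in
# deps means that practice p depends on practice q.
practices = {
    "Implement Scaling":           ("Performance", "Resource Management"),
    "Implement Load Balancing":    ("Performance", "Resource Management"),
    "Enable Predictive Analytics": ("Observability", "Analytics"),
    "Performance Monitoring":      ("Observability", "Monitoring"),
    "Adopt Metering-based Monetization Model":
                                   ("Commercial", "Monetization Strategies"),
    "Resource Monitoring":         ("Observability", "Monitoring"),
}
deps = [
    ("Implement Scaling", "Implement Load Balancing"),
    ("Enable Predictive Analytics", "Performance Monitoring"),
    ("Adopt Metering-based Monetization Model", "Resource Monitoring"),
]

def dependency_type(p, q):
    # Classify a dependency into one of the three types described above.
    (fa_p, cap_p), (fa_q, cap_q) = practices[p], practices[q]
    if (fa_p, cap_p) == (fa_q, cap_q):
        return "within capability"
    return "within focus area" if fa_p == fa_q else "between focus areas"

print(Counter(dependency_type(p, q) for p, q in deps))
# -> each of the three types occurs once in this toy subset
\end{verbatim}
The three example dependencies above correspond to the ones discussed below; the full model records 34 of them.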
The number of dependencies identified is shown for each focus area in Table \ref{tab:DependenciesTable}, as well as for each of the three dependency types mentioned.
\begin{table}[h]
\centering
\begin{tabular}{l c c c r}
\hline
\textbf{Focus Area} & \textbf{Within Capability} & \textbf{Within Focus Area} & \textbf{Between Focus Areas} & \textbf{Total} \\
\hline
Community & 3 & 0 & 0 & 3 \\
Security & 2 & 0 & 0 & 2 \\
Lifecycle Management & 3 & 1 & 2 & 6 \\
Observability & 0 & 6 & 0 & 6 \\
Performance & 4 & 0 & 2 & 6 \\
Commercial & 2 & 1 & 8 & 11 \\
\hline
\textbf{Total} & 14 & 8 & 12 & 34
\end{tabular}
\caption{The number of identified dependencies per focus area and per dependency type.}
\label{tab:DependenciesTable}
\end{table}
As an example of a dependency between practices within the same capability, implementation of the \textit{Implement Load Balancing} practice is required before the \textit{Implement Scaling} practice may be implemented. An example of a dependency between practices that are assigned to different capabilities within the same focus area is the dependency between \textit{Enable Predictive Analytics} and \textit{Performance Monitoring}. The former practice belongs to the \textit{Analytics} capability, while the latter practice belongs to the \textit{Monitoring} capability, but both capabilities are contained within the \textit{Observability} focus area. An example of a dependency between practices that are assigned to different capabilities and focus areas may be observed in the case of the dependency between the \textit{Adopt Metering-based Monetization Model} and \textit{Resource Monitoring} practices. The former practice is assigned to the \textit{Monetization Strategies} capability within the \textit{Commercial} focus area, while the latter practice is assigned to the \textit{Monitoring} capability within the \textit{Observability} focus area.
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=1cm 3cm 8cm 1cm, width=0.7\textwidth]{Figures/API-m-FAMMv0.4dependencies.pdf}
\caption{The API-m-FAMM v0.4 after all changes had been applied, showing all dependencies that were identified between practices. In order to improve legibility, practices are not ranked in terms of their maturity in this figure.}
\label{API-m-FAMM Dependencies}
\end{figure*}
After having identified all dependencies between practices, all 34 practices that have one or more dependencies are juxtaposed in a matrix. This is done by adhering to the constraint that practices cannot depend on practices that have a higher maturity level. As a result, the foundation of the API-m-FAMM is formed, with practices ranging from maturity levels 1 to 10. Using this structure as a base, all other practices are subsequently assigned to individual maturity levels within their respective capabilities. These assignments are performed by using the results of the maturity ranking exercises that were performed by experts as one of the main sources of input. Using the \textit{Logging} capability as an example again, the interpretation of such a maturity ranking exercise is visualized in Figure \ref{Maturity_Ranking_Interpretation}. In this figure, it can be seen that the \textit{Activity Logging}, \textit{Access Logging}, and \textit{User Auditing} practices were ranked by three experts in terms of their perceived maturity. An additional practice, \textit{Application Logging}, was suggested for addition.
However, this practice was removed because the decision was made to exclude applications as a level of abstraction from the API-m-FAMM, which is why it is outlined in red. Additionally, the decision was made to include the \textit{Error Logging} practice and move it to the \textit{Logging} capability. Hence, this practice is outlined in green and is included in this ranking exercise, along with the capability it was originally categorized with by the expert. Furthermore, the \textit{Error Reporting} practice was moved to the \textit{Analytics} capability (as can be seen in Figure \ref{fig:api-m-famm04b}), which is why it is outlined in purple and excluded from this maturity ranking exercise. Lastly, the remaining three practices that were suggested to be added are excluded, along with the \textit{Error Handling} capability as a whole, which is denoted by the red outlines.
\begin{figure}[h]
\centering
\includegraphics[page=1, clip, trim=1cm 0cm 1cm 0cm, width=0.7\textwidth]{Figures/API-m-FAMMv0.4maturityranking.pdf}
\caption{Conceptual overview representing a rough approximation of the way in which the experts' maturity rankings were interpreted and used as a starting point for performing the maturity level assignments.}
\label{Maturity_Ranking_Interpretation}
\end{figure}
Arrows are included that range from the lowest maturity at which a practice has been ranked to the highest. Dotted lines are attached to each practice and connected to these arrows with a small circle, in order to highlight and compare the maturity assignments of each expert with one another. Subsequently, dashed lines are used to indicate a rough estimate of the average of these assignments, which are then mapped onto the maturity levels. However, it should be noted that Figure \ref{Maturity_Ranking_Interpretation} was made for illustrative purposes, in order to provide the reader with a conceptual idea of the manner in which the maturity assignments were performed. In practice, the maturity assignment of practices was done in a pragmatic manner, through discussion sessions among the researchers during which the experts' varying maturity rankings and their accompanying motivations and arguments were discussed and interpreted. Based on the outcome of these discussions, decisions were then made to assign practices to individual maturity levels, while taking the experts' opinions and maturity rankings into account. Finally, all practices are renamed to fit a uniform syntactic structure, which starts with a verb, followed by one or more nouns. For example, \textit{User Auditing} is renamed to \textit{Audit Users}, and \textit{Resource Monitoring} is renamed to \textit{Monitor Resource Usage}. Furthermore, the descriptions of the practices that are included in the API-m-FAMM after all changes have been applied are updated. When possible, this is done using information and input provided by experts during interviews. Ultimately, these activities produced a new, updated version of the API-m-FAMM, which is shown in Figure \ref{API-m-FAMM_2.4} and consists of 6 focus areas, 20 capabilities, and 81 practices. The practice descriptions are available through \emph{version3} of this published source document on arXiv~\cite{mathijssen2021source}.
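As an aside, the dependency constraint by itself already induces a lower bound on each practice's maturity level. A minimal sketch (hypothetical Python; the actual assignments additionally reflect the experts' rankings and the discussion sessions described above) that derives such bounds from the dependency graph:
\begin{verbatim}
from functools import lru_cache

# deps[p]: practices that must be implemented before practice p
# (an illustrative subset of the 34 identified dependencies).
deps = {
    "Implement Scaling": ["Implement Load Balancing"],
    "Enable Predictive Analytics": ["Performance Monitoring"],
    "Adopt Metering-based Monetization Model": ["Monitor Resource Usage"],
}

@lru_cache(maxsize=None)
def min_level(practice):
    # A dependent practice must sit strictly above everything it depends
    # on; practices without dependencies may start at maturity level 1.
    return 1 + max((min_level(q) for q in deps.get(practice, [])), default=0)

for p in deps:
    print(p, "-> at least maturity level", min_level(p))
\end{verbatim}
Bounds of this kind only fix the relative ordering enforced by dependencies; the final placement on levels 1 to 10 remains an expert-informed decision.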
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=0.5cm 0.5cm 0.5cm 0.5cm, width=\textwidth]{Figures/API-m-FAMMv0.4.pdf}
\caption{API-m-FAMM v0.4, which includes the assignment of all practices to their respective maturity levels, ranging from level 1 to level 10.}
\label{API-m-FAMM_2.4}
\end{figure*}
\section{Version 0.5}
\label{sec:version05}
After having updated the API-m-FAMM to incorporate all findings from the interviews, a second evaluation cycle was conducted. This was done as a means of evaluating and verifying whether experts agree with the fundamental decisions that were made, as well as of gathering feedback on the way suggestions made by experts were interpreted and on the maturity levels that practices had been assigned to. This second evaluation cycle consists of unstructured interviews with three experts originating from the same sample of experts that were interviewed during the first evaluation cycle. During these interviews, the changes made as a result of the previous evaluation cycle, as well as the newly introduced maturity assignments, were presented and discussed. Since all experts agreed with the fundamental decisions that were made, no further major adjustments were made to the API-m-FAMM as a result of this evaluation cycle.
\section{Version 1.0}
\label{sec:version10}
The final phase of the API-m-FAMM construction, the \emph{Deploy} phase, was executed through case studies. These case studies were conducted by evaluating six software products. Some additional changes were made to practices as a result of the discussion sessions with practitioners after the evaluation. One practice was removed altogether, and the descriptions of six practices were modified. Specifically, the following changes were made:
\begin{itemize}
\item \textbf{Perform Request Rate Limiting}: this practice was extended to also comprise error limiting. In the case of AFAS Profit, this is implemented by placing consumers on a temporary denylist when they perform an excessive number of faulty calls within a predefined time span.
\item \textbf{Prevent Sensitive Data Exposure}: this practice was removed. During discussions, this practice caused confusion due to the observation that it is already captured by the \textit{Implement Transport Layer Encryption} and \textit{Decouple Internal \& External Data Model} practices. Additionally, after further investigation this practice was deemed to be out of scope, considering that its scope involves app data storage in general, as opposed to API management.
\item \textbf{Implement Predictive Scaling}: the description of this practice was modified. Originally, the description mentioned that this practice may be implemented 'manually or automatically', which caused confusion because these methods are already captured in the \textit{Implement Scaling} practice. Because predictive scaling is envisioned by practitioners and the researchers to be done automatically, the manual element was removed from the description.
\item \textbf{Monitor Resource Usage}: the description of this practice was expanded. During discussions, it became clear that monitoring resources does not always specifically involve metrics such as CPU and disk usage. Instead, rough approximations may be used to determine resource usage, which is why the description was expanded to clarify this.
\end{itemize}
In addition to these modifications, a small number of changes were made as a result of practitioners identifying errors such as typos. The final version of the model can be seen in Figure~\ref{fig:api-m-famm}.
\clearpage
\bibliographystyledata{elsarticle-num}
\bibliographydata{apimanagement}
\clearpage
\bibliographystyle{elsarticle-num}
\section{Introduction}
Keyword spotting (KWS) is a frequently used technique in spoken data processing whose goal is to detect selected words or phrases in speech. It can be applied off-line for fast search in recorded utterances (e.g. telephone calls analysed by police~\cite{p1}), large spoken corpora (like broadcast archives~\cite{p2}), or data collected by call-centres~\cite{p3}. There are also on-line applications, namely for instant alerting, used in media monitoring~\cite{p4} or in keyword activated mobile services~\cite{p5}. The performance of a KWS system is evaluated from two viewpoints. The primary one is detection reliability, which aims at missing as few of the keywords occurring in the audio signal as possible, i.e. at achieving a low miss detection (MD) rate, while keeping the number of false alarms (FA) as low as possible. The second criterion is speed, as most applications either require instant reactions or are aimed at huge data volumes (thousands of hours), where it is appreciated if the search takes only a small fraction of the data's duration. The latter aspect is often referred to as a real-time (RT) factor and should be significantly smaller than 1. There are several approaches to solve the KWS task~\cite{p6}. The simplest and often the fastest one, usually denoted as an \textit{acoustic approach}, utilizes a strategy similar to continuous speech recognition but with a limited vocabulary made of the keywords only. The sounds corresponding to other speech and noise are modelled and captured by filler units~\cite{p7}. An \textit{LVCSR approach} requires a large-vocabulary continuous speech recognition (LVCSR) system that transcribes the audio first and after that searches for the keywords in its text output or in its internal decoder hypotheses arranged in \textit{word lattices}~\cite{p8}. This strategy takes into account both words from a large representative lexicon as well as inter-word context captured by a language model (LM). However, it is always slower and fails if the keywords are not in the lexicon and/or in the LM. A \textit{phoneme lattice approach} operates on a similar principle but with phonemes (usually represented by triphones) as the basic units. The keywords are searched for within the phoneme lattices~\cite{p9}. The crucial part of all three major approaches consists in assigning a \textit{confidence score} to keyword candidates and setting thresholds for their acceptance or rejection. The basic strategies can be combined to get the best properties of each, as shown e.g. in~\cite{p10,p11}, and in general, they adopt a two-pass scheme. The introduction of deep neural networks (DNN) into the speech processing domain has resulted in a significant improvement of acoustic models and therefore also in the accuracy of LVCSR and phoneme based KWS systems. Various architectures have been proposed and tested, such as feedforward DNNs~\cite{p12}, convolutional (CNN)~\cite{p13} and recurrent ones (RNN)~\cite{p14}. A combination of the Long Short-Term Memory (LSTM) version of the latter with the Connectionist Temporal Classification (CTC) method, which is an alternative to the classic hidden Markov model (HMM) approach, has become popular, too. The CTC provides the location and scoring measure for any arbitrary phone sequence, as presented e.g. in~\cite{p15}.
Moreover, modern machine learning strategies, such as training data augmentation or transfer learning, have made it possible to train KWS systems also for various signal conditions~\cite{p16} and languages with low data resources~\cite{p17}. The KWS system presented here is a combination of several aforementioned approaches and techniques. It allows for searching any arbitrary keyword(s) using an HMM word-and-filler decoder that accepts acoustic models based on various types of DNNs, including feedforward sequential memory networks, which are an efficient alternative to RNNs~\cite{p20}. An audio signal is processed and searched within a single pass in a frame synchronous manner, which means that no intermediate data (such as lattices) need to be precomputed and stored. This allows for a very short processing time (under 0.01 RT) in an off-line mode. Moreover, the execution time can be further reduced if the same signal is searched repeatedly with a different keyword list. The system can operate also in an on-line mode, where keyword alerts are produced with a small latency. In the following text, we will focus mainly on the speed optimization of the algorithms, which is the main and original contribution of this paper.
\section{Brief Description of Presented Keyword Spotting System}
The system models acoustic events in an audio signal by HMMs. Their smallest units are states. Phonemes and noises are modelled as 3-state sequences and the keywords as concatenations of the corresponding phoneme models. All different 3-state models (i.e. physical triphones in a tied-state triphone model) also serve as the fillers. Hence, any audio signal can be modelled either as a sequence of the fillers or, in the presence of any of the keywords, as a sequence of the fillers and the keyword models. During data processing, the most probable sequences are continuously built by the Viterbi decoder, and if they contain keywords, these are located and further managed. The complete KWS system is composed of three basic modules. All run in a frame synchronous manner. The first one, a \textit{signal processing} module, takes a frame of the signal and computes log-likelihoods for all the HMM states. The second one, a \textit{state processing} module, controls Viterbi recombinations for all active keyword and filler states. The third one, a \textit{spot managing} module, focuses on the last states of the keyword/filler models, computes differences in the accumulated scores of the keywords and the best filler sequences, evaluates their confidence scores, and passes those with scores higher than a threshold on for further processing. This scheme assures that the data is processed almost entirely in the forward direction, with minimum need for look-back and storage of already processed data.
\section{KWS Speed and Memory Optimizations}
\label{sec:approach}
The presented work significantly extends the scheme proposed in~\cite{p18}. Therefore, we will use a similar notation here when explaining the optimizations in the three modules. The core of the system is a Viterbi decoder that handles keywords $w$ and fillers $v$ in the same way, i.e. as generalized units $u$.
\subsection{Signal Processing Module}
It computes likelihoods for each state (senone) using a trained neural network. This is a standard operation which can be implemented either on a CPU or on a GPU. In the latter case, the computation may be more than 1000 times faster. Yet, we propose another option for a significant reduction of the KWS execution time.
The speed of the decoder depends on the number of units that must be processed in each frame. We cannot change the number of keywords, but let us see what can be done with the fillers. Usually, their list is made of all different physical triphones, which means a size of several thousand items. If monophones were used instead, the number of fillers would be equal to their number, i.e. it would be smaller by two orders of magnitude and the decoder would run much faster, but obviously with worse performance. We propose an optional alternative solution that takes advantage of both approaches. We model the words and fillers by something we call quasi-monophones, which can be thought of as triphone states mapped to a monophone structure. In each frame, every quasi-monophone state gets the highest likelihood of the mapped states. This simple triphone-to-monophone conversion can be easily implemented as an additional layer of the neural network that just takes the max values from the mapped nodes in the previous layer. The benefit is that the decoder handles a much smaller number of different states and, in particular, fillers. In the experimental section, we demonstrate the impact of this arrangement on the KWS system's speed and performance.
\subsection{State Processing Module}
The decoder controls a propagation of accumulated scores between adjacent states. At each frame $t$, a new score $d$ is computed for each state $s$ of unit $u$ by adding the log-likelihood $L$ (provided by the previous module) to the higher of the scores in the predecessor states:
\begin{equation}
d(u,s,t) = L(s,t)+\max_{i=0,1}[d(u,s-i,t-1)]
\end{equation}
Let us denote the score in the unit's end state $s_E$ as
\begin{equation}
D(u,t) = d(u,s_E,t)
\end{equation}
and let $T(u,t)$ be the frame where this unit's instance started. Further, we denote two values $d_{best}$ and $D_{best}$:
\begin{equation}
d_{best}(t) = \max_{u,s}[d(u,s,t)]
\end{equation}
\begin{equation}
D_{best}(t) = \max_{u}[D(u,t)]
\end{equation}
The former value serves primarily for pruning; the latter is propagated to the initial states $s_1$ of all units in the next frame:
\begin{equation}
d(u,s_{1},t+1) = L(s_1,t+1)+\max[D_{best}(t),d(u,s_{1},t)]
\end{equation}
\subsection{Spot Managing Module}
This module computes acoustic scores $S$ for all words $w$ that reached their last states. This is done by subtracting the following two accumulated scores:
\begin{equation} \label{eq6}
S(w,t) = D(w,t) - D_{best}(T(w,t)-1)
\end{equation}
The word score $S(w,t)$ needs to be compared with the score $S(v_{string},t)$ that would be achieved by the best filler string $v_{string}$ starting in frame $T(w,t)$ and ending in frame $t$:
\begin{equation} \label{eq7}
R(w,t) = S(v_{string},t) - S(w,t)
\end{equation}
In~\cite{p18}, the first term in eq.~\ref{eq7} is computed by applying the Viterbi algorithm within the given frame span to the fillers only. Here, we propose to approximate its value by this simple difference:
\begin{equation} \label{eq8}
S(v_{string},t) \cong D_{best}(t) - D_{best}(T(w,t)-1)
\end{equation}
The left side of eq.~\ref{eq8} equals the right one exactly if the Viterbi backtracking path passes through frame $T(w,t)$, which can be quickly checked. A large experimental evaluation showed that this happens in more than 90\% of cases. In the remaining ones, the difference is so small that it has a negligible impact on further steps.
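To make the underlying bookkeeping concrete, the following sketch (illustrative Python, not the actual C implementation used in our experiments) shows one frame of the state recombination described above, including the two running maxima $d_{best}$ and $D_{best}$ on which eq.~\ref{eq8} relies:
\begin{verbatim}
import numpy as np

def process_frame(L, units, D_best_prev):
    # One frame of the state recombination described above.
    #   L[s]        ... log-likelihood of HMM state s in the current frame
    #   units       ... generalized units u (keywords and fillers); each
    #                   carries 'states' (its HMM state ids) and 'd'
    #                   (accumulated per-state scores of the previous frame)
    #   D_best_prev ... D_best(t-1), propagated into the initial states
    # Start-frame bookkeeping T(u,t) and pruning are omitted for brevity.
    d_best, D_best = -np.inf, -np.inf
    for u in units:
        d_prev, states = u['d'], u['states']
        d_new = np.empty_like(d_prev)
        # the initial state is entered either by its self-loop or from
        # the best end state of the previous frame
        d_new[0] = L[states[0]] + max(D_best_prev, d_prev[0])
        # every other state keeps its own score or takes over the score
        # of its left neighbour
        for s in range(1, len(states)):
            d_new[s] = L[states[s]] + max(d_prev[s], d_prev[s - 1])
        u['d'] = d_new
        d_best = max(d_best, d_new.max())  # used for pruning
        D_best = max(D_best, d_new[-1])    # best end-state score
    return d_best, D_best
\end{verbatim}
Storing $D_{best}(t)$ together with $d_{best}(t)$ for every frame is also exactly what enables the repeated-run mode described below.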
Hence, by substituting from eq.~\ref{eq6} and eq.~\ref{eq8} into eq.~\ref{eq7}, we get:
\begin{equation} \label{eq9}
R(w,t) = D_{best}(t) - D(w,t)
\end{equation}
The value of $R(w,t)$ is related to the confidence of word $w$ being detected in the given frame span. We just need to normalize it and convert it to a human-understandable scale where the value 100 means the highest possible confidence. We do it in the following way:
\begin{equation} \label{eq10}
C(w,t) = 100 - k\frac{R(w,t)}{(t-T(w,t))N_S(w)}
\end{equation}
The $R$ value is divided by the word duration (in frames) and by the word's number of HMM states $N_S$; the result is multiplied by a constant $k$ before being subtracted from 100. The constant influences the range of the confidence values. We set it so that the values are easily interpretable by KWS system users (see section~\ref{sec:evalres}). The previous analysis shows that the spot managing module can be made very simple and fast. In each frame, it just computes eq.~\ref{eq9} and~\ref{eq10}, and the candidates with confidence scores higher than a set threshold are registered in a time-sliding buffer (10 to 20 frames long). A simple filter running over the buffer content detects the keyword instance with the highest score and sends it to the output.
\subsection{Optimized Repeated Run}
In many practical applications, the same audio data is searched repeatedly, usually with different keyword lists (e.g. during police investigations). In this case, the KWS system can run significantly faster if we store all likelihoods and two additional values ($d_{best}$ and $D_{best}$) per frame. In the repeated run, the signal processing part is skipped and the decoder can process only the keywords, because all information needed for optimal pruning and confidence calculation is covered by the two above-mentioned values.
\section{System and Data for Evaluation}
\subsection{KWS System}
The KWS system used in the experiments is written in C and runs on a PC (Intel Core i7-9700K). In some tasks we also employ a GPU (GeForce RTX 2070 SUPER) for the likelihood computation. We tested two types of acoustic models (AM) based on neural networks. Both accept 16 kHz audio signals, segmented into 25 ms long frames and preprocessed into 40 filter bank coefficients. The first uses a 5-layer feedforward DNN trained on some 1000 hours of Czech data (a mix of read and broadcast speech). The second AM utilizes a bidirectional feedforward sequential memory network (BFSMN) similar to that described in~\cite{p20}. We have been using it as an effective alternative to RNNs. In our case, it has 11 layers, each covering 4 left and 4 right temporal contexts. This AM was trained on the same source data augmented by about 400 hours of (originally) clean speech that passed through different codecs~\cite{p21}. For both types of NNs we trained triphone AMs; for the second, we also trained a monophone and a quasi-monophone version.
\subsection{Dataset for Evaluation}
\label{sec:dataset}
Three large datasets have been prepared for the evaluation experiments, each covering a different type of speech (see also Table~\ref{tab:dataset}). The Interview dataset contains 10 complete Czech TV shows with two persons talking in a studio. The Stream dataset is made of 30 shows from the Internet TV Stream. We selected shows with heavy background noise, e.g. Hudebni Masakry (Music Massacres in English).
The Call dataset covers 53 telephone communications with call-centers (in separate channels) and is a mix of spontaneous (client) and mainly controlled (operator) speech. All recordings have been carefully annotated, with time information (10 ms resolution) added to each word.
\begin{table}[ht]
\centering
\caption{Datasets for evaluation and their main parameters.}\label{tab:dataset}
\begin{tabular}{|l|c|c|c|c|}
\hline
\bfseries Dataset & \bfseries Speech type & \bfseries Signal type & \bfseries Total duration [min] & \bfseries \# keywords\\
\hline \hline
Interview & planned & studio & 272 & 3524 \\
\hline
Stream & informal & heavy noise & 157 & 1454 \\
\hline
Call & often spontaneous & telephone & 613 & 2935 \\
\hline
\end{tabular}
\end{table}
\section{Experimental Evaluation}
\label{sec:eval}
\subsection{Keyword List}
Our goal was to test the system under realistic conditions and, at the same time, to get statistically conclusive results. A keyword list of 156 word lemmas with 555 derived forms was prepared for the experiments. For example, in the case of the keyword ``David'', we included its derived forms ``David'', ``Davida'', ``Davidem'', ``Davidovi'', etc., in order to avoid false alarms caused by words being substrings of others. The list was made by combining the 80 most frequent words occurring in each of the datasets, of which some were common and some appeared only in one set. The searched word forms had to be at least 4 phonemes long. The mean length of the listed word forms was 6.9 phonemes. The phonetic transcriptions were automatically extracted from a 500k-word lexicon used in our LVCSR system.
\subsection{Filler Lists}
The list of fillers was created automatically for each acoustic model. The triphone DNN model generated 9210 fillers and the triphone BFSMN produced 10455 of them. In contrast to these large numbers, the monophone and quasi-monophone BFSMN models had only 48 fillers (representing 40 phonemes + 8 noises).
\subsection{Evaluation Conditions and Metrics}
A word was considered correctly detected if the spotted word form belonged to the same lemma as the word occurring in the transcription at the same instant, with a tolerance of ±0.5 s. Otherwise, it was counted as a false alarm. For each experiment we computed Missed Detection (MD) and False Alarm (FA) rates as a function of the acceptance threshold value, and drew a Detection Error Tradeoff (DET) diagram with the Equal Error Rate (EER) point marked.
\subsection{Evaluation Results}
\label{sec:evalres}
The Interview dataset was used as development data, on which we experimented with various models, system arrangements, and also user preferences. In accordance with them, the internal constant $k$ in eq.~\ref{eq10} was set so that a confidence score of 75 lies close to the EER point. The first part of the experiments focused on the accuracy of the created acoustic models. We tested the triphone DNN and 3 versions of the BFSMN one. Their performance is illustrated by the DET curves in Fig.~\ref{fig1}, where the EER values are also displayed. It is evident that the BFSMN-tri model performs significantly better than the DNN one, which is mainly due to its wider context span. This is also a reason why even its monophone version has performance comparable to the DNN-tri one. The proposed quasi-monophone BFSMN model shows the second-best performance, but the gap between it and the best one is not that crucial, especially if we take into account its additional benefits that will be discussed later.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.52]{tsd1046a.eps}
\caption{KWS results for the Interview dataset in the form of DET curves drawn for the 4 investigated neural network structures.}
\label{fig1}
\end{figure}
Similar trends can also be seen in Fig.~\ref{fig2} and Fig.~\ref{fig3}, where we compare the same models (excl. the monophone BFSMN) on the Stream and Call datasets. In both cases, the performance of all the models was worse (when compared to that on the Interview set), as can be seen from the positions of the curves and the EER values. This is due to the character of the speech and the signal quality, as explained in section~\ref{sec:dataset}. Yet, we can notice the positive effect of training the BFSMN models on the augmented data (with various codecs), especially on the Call dataset. Again, the gap between the best triphone and the proposed quasi-monophone version does not seem critical.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.52]{tsd1046b.eps}
\caption{DET curves compared for 3 models on the Stream dataset}
\label{fig2}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.52]{tsd1046c.eps}
\caption{DET curves compared for 3 models on the Call dataset}
\label{fig3}
\end{figure}
Now, we shall focus on the execution time of the proposed scheme. As explained in section~\ref{sec:approach}, the three modules of the KWS system can be split into two parts: the first with the signal processing module, the second with the remaining two. Both can run together on a PC (in a single thread), or, if extremely fast execution is required, the former can be implemented on a GPU. We tested both approaches and measured their RT factors. Similar measurements (across all three datasets) were also performed for the second part, for all the proposed variants and operation modes (see Table~\ref{tab:times} for the results). The total RT factor is obtained by adding the values for the selected options in each of the two parts; e.g., GPU likelihood computation combined with the quasi-monophone decoder yields $0.0005 + 0.002 \approx 0.0025$ RT.
\begin{table}[ht!]
\centering
\caption{Execution times for the proposed KWS variants expressed as RT factors.}\label{tab:times}
\begin{tabular}{|l|c|}
\hline
\bfseries System part, variant, mode & \bfseries Real-Time factor\\
\hline \hline
\multicolumn{2}{|c|}{Part 1 (signal proc. module)} \\
\hline
on CPU & 0.12 \\
\hline
on GPU & 0.0005 \\
\hline
\multicolumn{2}{|c|}{Part 2 (rest of KWS system)} \\
\hline
triphone BFSMN & 0.012 \\
\hline
quasi-mono BFSMN & 0.002 \\
\hline
triphone BFSMN, repeated & 0.009 \\
\hline
quasi-mono BFSMN, repeated & 0.001 \\
\hline
\end{tabular}
\end{table}
Let us recall that the proposed quasi-monophone model performs slightly worse, but it offers two practical benefits: a) a speed that can get close to 0.001 RT (if a GPU is used for the likelihood computation) and b) small disk memory consumption in the case of repeated runs (with different keywords), because only $48\times3+2=146$ float numbers per frame need to be stored. Moreover, the speed of the proposed KWS system is only slightly influenced by the number of keywords. A test made with 10,000 keywords (instead of the 555 used in the main experiments) showed only a twofold slowdown.
\section{Conclusion}
In this contribution we focus mainly on the speed aspect of a modern KWS system, but at the same time we aim at the best performance available thanks to the advances in deep neural networks. The used BFSMN architecture has several benefits for practical usage.
In contrast to the more popular RNNs, it can be trained efficiently and quickly on a large amount (several thousand hours) of audio, and at the same time it yields performance comparable to more complex RNNs and LSTMs, as shown in~\cite{p20}. Its phoneme accuracy is high (due to its large internal context), so that it fits both acoustic KWS systems and standard speech-to-text LVCSR systems. The latter means that it is well suited for a tandem KWS scheme where a user requires that the sections with detected keywords are immediately transcribed by an LVCSR system. In our arrangement this can be done very effectively by reusing some of the precomputed data. (Let us recall that if we use the quasi-monophones, their values are just the max values from the original triphone neural network, and hence both acoustic models can be implemented by the same network with an additional layer.) The results presented in section~\ref{sec:eval} allow for designing an optimal configuration that takes into account the three main factors: accuracy, speed, and cost. If the main priority is accuracy and not speed, the KWS system can run on a standard PC and process data with an RT factor of about 0.1. When very large amounts of recordings must be processed within a very short time, the addition of a GPU and the adoption of the proposed quasi-monophone approach allow the job to be completed in a time that can be up to three orders of magnitude shorter than the audio duration. We evaluated the performance on Czech datasets, as these were available with precise, human-checked transcriptions. Obviously, the proposed architecture is language-independent, and we plan to utilize it for other languages investigated in our project.
\subsubsection*{Acknowledgments.}
This work was supported by the Technology Agency of the Czech Republic (Project No. TH03010018).
\section{Introduction}
Let $\mathbb{N}$ be the set of all nonnegative integers. For any sequence of positive integers $A=\{a_1<a_2<\cdots\}$, let $P(A)$ be the subset sum set of $A$, that is,
$$P(A)=\left\{\sum_{i}\varepsilon_i a_i:\sum_{i}\varepsilon_i<\infty, \varepsilon_i\in\{0,1\}\right\}.$$
Here we note that $0\in P(A)$. In 1970, Burr \cite{Burr} posed the following problem: which sets $S$ of integers are equal to $P(A)$ for some $A$? Concerning the existence of such a set $S$, he mentioned that if the complement of $S$ grows sufficiently rapidly, such as $b_1>x_0$ and $b_{n+1}\ge b_n^2$, then there exists a set $A$ such that $P(A)=\mathbb{N}\setminus\{b_1,b_2,\cdots\}$. This result, however, is unpublished. In 1996, Hegyv\'{a}ri \cite{Hegyvari} proved the following result.
\begin{theorem}\cite[Theorem 1]{Hegyvari} If $B=\{b_1<b_2<\cdots\}$ is a sequence of integers with $b_1\ge x_0$ and $b_{n+1}\ge5b_n$ for all $n\ge1$, then there exists a sequence $A$ of positive integers for which $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
In 2012, Chen and Fang \cite{ChenFang} obtained the following results.
\begin{theorem}\cite[Theorem 1]{ChenFang} Let $B=\{b_1<b_2<\cdots\}$ be a sequence of integers with $b_1\in\{4,7,8\}\cup\{b:b\ge11\}$ and $b_{n+1}\ge3b_n+5$ for all $n\ge1$. Then there exists a sequence $A$ of positive integers for which $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
\begin{theorem}\cite[Theorem 2]{ChenFang} \label{ChenFang2} Let $B=\{b_1<b_2<\cdots\}$ be a sequence of positive integers with $b_1\in\{3,5,6,9,10\}$, or $b_2=3b_1+4$, or $b_1=1$ and $b_2=9$, or $b_1=2$ and $b_2=15$. Then there is no sequence $A$ of positive integers for which $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
Later, Chen and Wu \cite{ChenWu} further improved this result. By observing Chen and Fang's results, we know that the critical value of $b_2$ is $3b_1+5$. In this paper, we study the problem of the critical values of $b_k$; we call this the problem of critical values in Burr's problem. In 2019, Fang and Fang \cite{FangFang2019} considered the critical value of $b_3$ and proved the following result.
\begin{theorem}\cite[Theorem 1.1]{FangFang2019} If $A$ and $B=\{1<b_1<b_2<\cdots\}$ are two infinite sequences of positive integers with $b_2=3b_1+5$ such that $P(A)=\mathbb{N}\setminus B$, then $b_3\ge4b_1+6$. Furthermore, there exist two infinite sequences of positive integers $A$ and $B=\{1<b_1<b_2<\cdots\}$ with $b_2=3b_1+5$ and $b_3=4b_1+6$ such that $P(A)=\mathbb{N}\setminus B$.
\end{theorem}
Recently, Fang and Fang \cite{FangFang2020} introduced the following definition. For given positive integers $b$ and $k\ge3$, define $c_{k}(b)$ successively as follows: (i) let $c_k=c_{k}(b)$ be the least integer $r$ for which there exist two infinite sets of positive integers $A$ and $B=\{b_1<b_2<\cdots<b_{k-1}<b_{k}<\cdots\}$ with $b_1=b$, $b_2=3b+5$, $b_i=c_i$ $(3\le i<k)$, and $b_k=r$ such that $P(A)=\mathbb{N}\setminus B$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>b+1$; (ii) if such $A,B$ do not exist, define $c_k=+\infty$. In \cite{FangFang2020}, Fang and Fang proved the following result.
\begin{theorem}\cite[Theorem 1.1]{FangFang2020} For given positive integer $b\in\{1,2,4,7,8\}\cup\{b':b'\ge 11,b'\in\mathbb{N}\}$, we have
$$c_{2k-1}=(3b+6)(k-1)+b,~~c_{2k}=(3b+6)(k-1)+3b+5,~~k=1,2,\dots.$$
\end{theorem}
Naturally, we ask whether, for any integers $b_1$ and $b_2\ge 3b_1+5$, one can determine the critical value of $b_3$, rather than only for $b_2=3b_1+5$.
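As a quick computational probe of this question, the following sketch (illustrative Python; the finite set $A$ below is one admissible prefix, chosen along the lines of the construction in the proof of Lemma~\ref{lem:2.1} below) lists the integers that are not subset sums for $b_1=4$ and $b_2=18>3b_1+5$:
\begin{verbatim}
def subset_sums(A):
    # P(A) for a finite A, built incrementally
    sums = {0}
    for a in A:
        sums |= {s + a for s in sums}
    return sums

# One admissible prefix of an infinite set A with P(A) = N \ B
# for b_1 = 4, b_2 = 18 (cf. the construction in Lemma 2.1)
A = [1, 2, 5, 6, 8, 19, 38]
S = subset_sums(A)
print([n for n in range(sum(A) + 1) if n not in S])
# -> [4, 18, 23, 37, 42, 56, 61, 75]
\end{verbatim}
The gaps $4, 18, 23, 37, 42, 56, \dots$ already exhibit the arithmetic pattern $(v+1)k+u$, $(v+1)k+v$ established in Theorem~\ref{thm:1.1} below.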
The above problem was posed by Fang and Fang in \cite{FangFang2019}. Recently, the present authors \cite{WuYan} answered it.

\begin{theorem}\cite{WuYan} \label{thm:1.6}If $A$ and $B=\{b_1<b_2<\cdots\}$ are two infinite sequences of positive integers with $b_2\ge 3b_1+5$ such that $P(A)=\mathbb{N}\setminus B$, then $b_3\ge b_2+b_1+1$. \end{theorem}

\begin{theorem}\cite{WuYan} \label{thm:1.7}For any positive integers $b_1\in\{4,7,8\}\cup[11,+\infty)$ and $b_2\ge 3b_1+5$, there exist two infinite sequences of positive integers $A$ and $B=\{b_1<b_2<\cdots\}$ with $b_3=b_2+b_1+1$ such that $P(A)=\mathbb{N}\setminus B$. \end{theorem}

In this paper, we go on to consider the critical value of $b_k$ for any integers $b_1$ and $b_2\ge 3b_1+5$. Motivated by the definition of Fang and Fang, we also introduce the following definition. For given positive integers $u$, $v\ge 3u+5$ and $k\ge3$, let $e_1=u$, $e_2=v$ and let $e_k=e_k(u,v)$ be the least integer $r$ for which there exist two infinite sets of positive integers $A$ and $B=\{b_1<b_2<\cdots<b_{k-1}<b_k<\cdots\}$ with $b_i=e_i(1\le i<k)$ and $e_k=r$ such that $P(A)=\mathbb{N}\setminus B$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$. If such sets $A,B$ do not exist, define $e_k=+\infty$. In this paper, we obtain the following results.

\begin{theorem}\label{thm:1.1} For given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$, we have \begin{equation}\label{eq:c} e_{2k+1}=(v+1)k+u,~~e_{2k+2}=(v+1)k+v,~~k=0,1,\dots. \end{equation} \end{theorem}

\begin{corollary}\label{thm:1.2} For given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$, we have $$e_{k}=e_{k-1}+e_{k-2}-e_{k-3},$$ where $e_0=-1$, $e_1=u$, $e_2=v$. \end{corollary}

If $u\in\{3,5,6,9,10\}$, or $u=1,v=9\ge 3u+5$, or $u=2, v=15\ge 3u+5$, then by Theorem \ref{ChenFang2} we know that such a sequence $A$ does not exist. So we only consider the case $u\in\{4,7,8\}\cup\{u:u\ge11\}$. In fact, we found Corollary \ref{thm:1.2} first, but in the proof of Theorem \ref{thm:1.1} we follow Fang and Fang's method. Some of the techniques are similar to those of \cite{WuYan}. For the convenience of the reader, we provide all the details of the proof.

\section{Proof of Theorem \ref{thm:1.1}} For given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$ and $v\ge 3u+5$, we define $$d_{2k+1}=(v+1)k+u,~~d_{2k+2}=(v+1)k+v,~~k=0,1,\dots.$$

\begin{lemma}\label{lem:2.1} Given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$. Then there exists an infinite set $A$ of positive integers such that $P(A)=\mathbb{N}\setminus\{d_1,d_2,\dots\}$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$. \end{lemma}

\begin{proof} Let $s$ and $r$ be nonnegative integers with $$v+1=(u+1)+(u+2)+\cdots+(u+s)+r,~~0\le r\le u+s.$$ Since $v\ge 3u+5$, it follows that $s\ge3$. Note that $u\ge4$. Then there exist integers $r_2,\dots,r_s$ such that \begin{equation}\label{eq:2.1} r=r_2+\cdots+r_s+\varepsilon(r),~~0\le r_2\le\cdots\le r_s\le u-1, \end{equation} where $\varepsilon(r)=0$ if $r=0$, otherwise $\varepsilon(r)=1$. If there is an index $3\le j\le s$ such that $r_j-r_{j-1}=u-1$, we replace $r_j$ and $r_{j-1}$ by $r_j-1$ and $r_{j-1}+1$. Then \eqref{eq:2.1} still holds and $r_j-r_{j-1}\le u-2$ for any index $3\le j\le s$.
We cite a result from \cite{ChenFang}: there exists a set of positive integers $A_1\subseteq[0,u-1]$ such that $$P(A_1)=[0,u-1].$$ Let $$a_1=u+1,~~a_s=u+s+r_s+\varepsilon(r),~~a_{t}=u+t+r_t,~~2\le t\le s-1.$$ Then $$a_{t-1}<a_{t}\le a_{t-1}+u,~~2\le t\le s$$ and so $$P(A_1\cup\{a_1,\dots,a_s\})=[0,a_{2}+\cdots+a_{s}+2u]\setminus\{u,a_{2}+\cdots+a_{s}+u\}.$$ Since $$a_{2}+\cdots+a_{s}+u=(u+2+r_2)+\cdots+(u+s+r_s+\varepsilon(r))+u=v,$$ it follows that \begin{equation}\label{eq:2.2} P(A_1\cup\{a_1,\dots,a_s\})=[0,u+v]\setminus\{u,v\}. \end{equation} Let $a_{s+n}=(v+1)n$ for $n=1,2,\dots$. We proceed by induction on $k\ge1$ to prove that \begin{equation}\label{eq:2.3} P(A_1\cup\{a_1,\dots,a_{s+k}\})=[0,\sum_{i=1}^k a_{s+i} +u+v]\setminus\{d_1,d_2,\dots,d_{2m-1},d_{2m}\}, \end{equation} where $m=k(k+1)/2+1$. By \eqref{eq:2.2}, it is clear that $$P(A_1\cup\{a_1,\dots,a_{s+1}\})=[0,a_{s+1}+u+v]\setminus\{d_1,d_2,d_3,d_4\},$$ which implies that \eqref{eq:2.3} holds for $k=1$. Assume that \eqref{eq:2.3} holds for some $k-1\ge1$, that is, \begin{equation}\label{eq:2.4} P(A_1\cup\{a_1,\dots,a_{s+k-1}\})=[0,\sum_{i=1}^{k-1} a_{s+i} +u+v]\setminus\{d_1,d_2,\dots,d_{2m'-1},d_{2m'}\}, \end{equation} where $m'=k(k-1)/2+1$. Then \begin{eqnarray*} a_{s+k}+P(A_1\cup\{a_1,\dots,a_{s+k-1}\}) =[(v+1)k,\sum_{i=1}^{k} a_{s+i} +u+v]\setminus D, \end{eqnarray*} where $$D=\left\{d_{2k+1},d_{2k+2},\dots,d_{k(k+1)+1},d_{k(k+1)+2}\right\}.$$ Since $d_{2k+1}\le d_{k(k-1)+3}=d_{2m'+1}$, it follows that \begin{equation*} P(A_1\cup\{a_1,\dots,a_{s+k}\})=[0,\sum_{i=1}^k a_{s+i} +u+v]\setminus\{d_1,d_2,\dots,d_{2m-1},d_{2m}\}, \end{equation*} where $m=k(k+1)/2+1$, which implies that \eqref{eq:2.3} holds. Let $A=A_1\cup\{a_1,a_2,\dots\}$. Such an $A$ satisfies Lemma \ref{lem:2.1}. This completes the proof of Lemma \ref{lem:2.1}. \end{proof}

\begin{lemma}\label{lem:2.2}\cite[Lemma 1]{ChenFang} Let $A=\{a_1<a_2 <\cdots\}$ and $B=\{b_1<b_2 <\cdots\}$ be two sequences of positive integers with $b_1>1$ such that $P(A)=\mathbb{N}\backslash B$. Let $a_k<b_1<a_{k+1}$. Then $$P(\{a_1,\cdots,a_i\})=[0,c_i], ~~i=1,2,\cdots,k,$$ where $c_1=1$, $c_2=3$, $c_{i+1}=c_i+a_i+1~(1\leq i\leq k-1)$, $c_k=b_1-1$ and $c_i+1\geq a_i+1~(1 \leq i \leq k-1)$. \end{lemma}

\begin{lemma}\label{lem:2.3} Given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$. If $A$ is an infinite set of positive integers such that $$P(A)=\mathbb{N}\setminus \{d_1<d_2<\cdots<d_{k-1}<b_{k}<\cdots\}$$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$, then there exists a subset $A_1\subseteq A$ such that $$P(A_1)=[0,d_1+d_2]\setminus\{d_1,d_2\}$$ and $\min \{A\setminus A_1\} >u+1$. \end{lemma}

\begin{proof} Let $A=\{a_1<a_2<\cdots\}$ be an infinite set of positive integers such that $$P(A)=\mathbb{N}\setminus \{d_1<d_2<\cdots<d_{k-1}<b_{k}<\cdots\}$$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$. It follows from Lemma \ref{lem:2.2} that $$P(\{a_1,\cdots,a_k\})=[0,u-1],$$ where $k$ is the index such that $a_k<u<a_{k+1}$. Since $v\ge 3u+5>u+1$, it follows that $u+1\in P(A)$. Hence, $a_{k+1}=u+1$.
Then $$P(\{a_1,\cdots,a_{k+1}\})=[0,2u]\setminus\{u\}.$$ Noting that $a_{k+t}>a_{k+1}=u+1$ for any $t\ge 2$, we have $$a_{k+t}\le a_1+\cdots+a_{k+t-1}+1=a_{k+2}+\cdots+a_{k+t-1}+2u+1.$$ Then $$P(\{a_1,\cdots,a_{k+2}\})=[0,a_{k+2}+2u]\setminus\{u,a_{k+2}+u\}.$$ If $a_{k+2}+\cdots+a_{k+t-1}+u\ge a_{k+t}$ and $a_{k+2}+\cdots+a_{k+t-1}\neq a_{k+t}$ for all integers $t\ge3$, then $$P(\{a_1,\cdots,a_{k+t}\})=[0,a_{k+2}+\cdots+a_{k+t}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+t}+u\}.$$ Then $d_2\ge a_{k+2}+\cdots+a_{k+t}+u$ for any integer $t\ge3$, which is impossible since $d_2$ is a given integer. So there are some integers $3\le t_1<t_2<\cdots$ such that $a_{k+2}+\cdots+a_{k+t_i-1}+u< a_{k+t_i}$ or $a_{k+2}+\cdots+a_{k+t_i-1}= a_{k+t_i}$, and $$P(\{a_1,\cdots,a_{k+t_1-1}\})=[0,a_{k+2}+\cdots+a_{k+t_1-1}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+t_1-1}+u\}.$$ If $a_{k+2}+\cdots+a_{k+t_1-1}+u< a_{k+t_1}$, then $d_2=a_{k+2}+\cdots+a_{k+t_1-1}+u$ and $$P(\{a_1,\cdots,a_{k+t_1-1}\})=[0,d_1+d_2]\setminus\{d_1,d_2\},~~a_{k+t_1}>u+1.$$ So the proof is finished. If $a_{k+2}+\cdots+a_{k+t_1-1}= a_{k+t_1}$, then $$P(\{a_1,\cdots,a_{k+t_1}\})=[0,a_{k+2}+\cdots+a_{k+t_1}+2u]\setminus\{u,a_{k+t_1}+u,a_{k+2}+\cdots+a_{k+t_1}+u\}.$$ If $a_{k+t_1+1}>a_{k+t_1}+u$, then $$d_2=a_{k+t_1}+u=a_{k+2}+\cdots+a_{k+t_1-1}+u$$ and $$a_{k+2}+\cdots+a_{k+t_1-1}+2u=d_1+d_2.$$ Therefore, $$P(\{a_1,\cdots,a_{k+t_1-1}\})=[0,d_1+d_2]\setminus\{d_1,d_2\},~~a_{k+t_1}>u+1.$$ So the proof is finished. If $a_{k+t_1+1}\le a_{k+t_1}+u$, then $$P(\{a_1,\cdots,a_{k+t_1+1}\})=[0,a_{k+2}+\cdots+a_{k+t_1+1}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+t_1+1}+u\}.$$ By the definition of $t_2$ and $a_{k+t_1+1}\le a_{k+t_1}+u$ we know that $t_2\neq t_1+1$. Noting that $a_{k+2}+\cdots+a_{k+t-1}+u\ge a_{k+t}$ and $a_{k+2}+\cdots+a_{k+t-1}\neq a_{k+t}$ for any integer $t_1<t<t_2$, we have $$P(\{a_1,\cdots,a_{k+t_2-1}\})=[0,a_{k+2}+\cdots+a_{k+t_2-1}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+t_2-1}+u\}.$$ Treating $t_2$ in the same way as $t_1$, we conclude that either there is a subset $A_1\subseteq A$ such that $P(A_1)=[0,d_1+d_2]\setminus\{d_1,d_2\}$ and $\min\{A\setminus A_1\}>u+1$, or there exists an infinite sequence of positive integers $l_i\ge3$ such that $$P(\{a_1,\cdots,a_{k+l_i}\})=[0,a_{k+2}+\cdots+a_{k+l_i}+2u]\setminus\{u,a_{k+2}+\cdots+a_{k+l_i}+u\}.$$ Since $d_2$ is a given integer, the second case is impossible. This completes the proof of Lemma \ref{lem:2.3}. \end{proof}

\begin{lemma}\label{lem:2.4} Given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$. Let $A$ be an infinite set of positive integers such that $$P(A)=\mathbb{N}\setminus \{d_1<d_2<\cdots<d_{k}<b_{k+1}<\cdots\}$$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$ and let $A_1$ be a subset of $A$ such that $$P(A_1)=[0,u+v]\setminus\{d_1,d_2\}$$ and $\min\{A\setminus A_1\}>u+1$. Write $A\setminus A_1=\{a_1<a_2<\cdots\}$. Then $v+1\mid a_i$ for $i=1,2,\dots, m$, and $$P(A_1\cup\{a_1,\dots,a_m\})=[0,\sum_{i=1}^{m}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n}\},$$ where $m$ is the index such that $$\sum_{i=1}^{m-1}a_i+v<d_k\le \sum_{i=1}^{m}a_i+v$$ and $$d_{n}=\sum_{i=1}^m a_i+v.$$ \end{lemma}

\begin{proof} We proceed by induction on $k\ge3$. For $k=3$, by $a_1>u+1$ we know that $$v<d_3=u+v+1\le a_1+v,$$ that is, $m=1$. It is enough to prove that $v+1\mid a_1$ and $$P(A_1\cup\{a_1\})=[0,a_1+u+v]\setminus\{d_1,d_2,\dots,d_{n}\},$$ where $d_{n}=a_1+v$.
Since $d_3\notin P(A)$ and $[0,v-1]\setminus\{u\}\subseteq P(A_1)$ and $$a_1\le \sum_{\substack{a'<a_1\\a'\in A}}a'+1=\sum_{a'\in A_1}a'+1=u+v+1=d_3< a_1+v,$$ it follows that $d_3=a_1+u$, that is, $a_1=v+1$. Since $$P(A_1)=[0,u+v]\setminus\{d_1,d_2\},$$ it follows that $$a_1+P(A_1)=[a_1,a_1+u+v]\setminus\{a_1+d_1,a_1+d_2\}.$$ Then $$P(A_1\cup\{a_1\})=[0,a_1+u+v]\setminus\{d_1,d_2,d_3,d_4\},$$ where $d_4=a_1+v$. Suppose that $ v+1\mid a_i$ for $i=1,2,\dots,m$ and \begin{equation}\label{eq0} P(A_1\cup\{a_1,\dots,a_{m}\})=[0,\sum_{i=1}^{m}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n}\}, \end{equation} where $m$ is the index such that $$\sum_{i=1}^{m-1}a_i+v<d_{k-1}\le \sum_{i=1}^{m}a_i+v$$ and $$d_{n}=\sum_{i=1}^{m} a_i+v.$$ If $d_{k-1}< \sum_{i=1}^{m}a_i+v$, then $d_{k}\le \sum_{i=1}^{m}a_i+v$. Then the proof is finished. If $d_{k-1}=\sum_{i=1}^{m}a_i+v$, then $d_{k}=\sum_{i=1}^{m}a_i+v+u+1$. It follows that $$\sum_{i=1}^{m}a_i+v<d_{k}\le \sum_{i=1}^{m+1}a_i+v.$$ It is enough to prove that $ v+1\mid a_{m+1}$ and $$P(A_1\cup\{a_1,\dots,a_{m+1}\})=[0,\sum_{i=1}^{m+1}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n'}\},$$ where $$d_{n'}=\sum_{i=1}^{m+1} a_i+v.$$ Since $a_{m+1}\neq d_k$ and $$a_{m}<a_{m+1}\le \sum_{\substack{a<a_{m+1}\\a\in A}} a+1=\sum_{i=1}^{m}a_i+u+v+1=d_{k},$$ it follows that there exists a positive integer $T$ such that $$a_{m+1}< (v+1)T+u\le a_{m+1}+v+1$$ and $$(v+1)T+u\le d_k.$$ Note that $d_i=(v+1)T+u\notin P(A)$ for some $i\le k$ and $[1,v+1]\setminus\{u,v\}\subseteq P(A_1)$. Hence, $(v+1)T+u=a_{m+1}+u$ or $(v+1)T+u=a_{m+1}+v$. If $(v+1)T+u=a_{m+1}+u$, then $a_{m+1}=(v+1)T$. If $(v+1)T+u=a_{m+1}+v$, then $a_{m+1}=(v+1)(T-1)+u+1$. Since $v \ge 3u+5$, it follows that $$a_{m+1}+u<(v+1)(T-1)+v<a_{m+1}+v.$$ Note that $[u+1,v-1]\subseteq P(A_1)$. Then $(v+1)(T-1)+v\in P(A)$, which is impossible. Hence, $v+1\mid a_{m+1}$. Moreover, $a_{m+1}=(v+1)T$. Since $$a_{m+1}+P(A_1\cup\{a_1,\dots,a_{m}\})=[a_{m+1},\sum_{i=1}^{m+1}a_i+u+v]\setminus\{a_{m+1}+d_1,\dots,a_{m+1}+d_{n}\}$$ and $$a_{m+1}+d_1=(v+1)T+u\le d_k=\sum_{i=1}^{m}a_i+v+u+1=d_{n+1},$$ it follows from \eqref{eq0} that $$P(A_1\cup\{a_1,\dots,a_{m+1}\})=[0,\sum_{i=1}^{m+1}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n'}\},$$ where $$d_{n'}=a_{m+1}+d_n=\sum_{i=1}^{m+1} a_i+v.$$ This completes the proof of Lemma \ref{lem:2.4}. \end{proof}

\begin{lemma}\label{lem:2.5} Given positive integers $u\in\{4,7,8\}\cup\{u:u\ge11\}$, $v\ge 3u+5$ and $k\ge3$. If $A$ is an infinite set of positive integers such that $$P(A)=\mathbb{N}\setminus \{d_1<d_2<\cdots<d_{k}<b_{k+1}<\cdots\}$$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$, then $b_{k+1}\ge d_{k+1}$. \end{lemma}

\begin{proof} By Lemma \ref{lem:2.3} we know that there exists $A_1\subseteq A$ such that $$P(A_1)=[0,u+v]\setminus\{d_1,d_2\}$$ and $\min\{A\setminus A_1\}>u+1$. Write $A\setminus A_1=\{a_1<a_2<\cdots\}$. By Lemma \ref{lem:2.4} we know that $v+1\mid a_i$ for $i=1,2,\dots, m$ and $$P(A_1\cup\{a_1,\dots,a_m\})=[0,\sum_{i=1}^{m}a_i+u+v]\setminus\{d_1,d_2,\dots,d_{n}\},$$ where $m$ is the index such that $$\sum_{i=1}^{m-1}a_i+v<d_k\le \sum_{i=1}^{m}a_i+v$$ and $$d_{n}=\sum_{i=1}^m a_i+v.$$ If $d_k<\sum_{i=1}^m a_i+v$, then $d_{k+1}\le\sum_{i=1}^m a_i+v=d_{n}$. Hence, $k+1\le n$. Thus, $b_{k+1}\ge d_{k+1}$.
If $d_k=\sum_{i=1}^m a_i+v$, then $d_{k+1}=\sum_{i=1}^m a_i+u+v+1$ and \begin{equation}\label{eq:2.5} P(A_1\cup\{a_1,\dots,a_m\})=[0,\sum_{i=1}^{m}a_i+u+v]\setminus\{d_1,\dots,d_{k}\} \end{equation} and \begin{equation}\label{eq:2.6} a_{m+1}+P(A_1\cup\{a_1,\dots,a_m\})=[a_{m+1},\sum_{i=1}^{m+1}a_i+u+v]\setminus\{a_{m+1}+d_1,\dots,a_{m+1}+d_{k}\}. \end{equation} Note that $$a_{m}<a_{m+1}\le \sum_{\substack{a<a_{m+1}\\a\in A}} a+1=\sum_{i=1}^{m}a_i+u+v+1=d_{k+1}.$$ Since $a_{m+1}\neq d_{k}$, we distinguish two cases according to the value of $a_{m+1}$.

{\bf Case 1}: $d_{k}<a_{m+1}\le d_{k+1}$. It follows from \eqref{eq:2.5} and \eqref{eq:2.6} that $$b_{k+1}\ge a_{m+1}+d_1\ge d_{k}+d_{1}+1=\sum_{i=1}^{m}a_i+v+u+1=d_{k+1}.$$

{\bf Case 2}: $a_{m}<a_{m+1}< d_{k}$. Similarly to the proof of Lemma \ref{lem:2.4}, we know that there exists a positive integer $T$ such that $$a_{m+1}=(v+1)T,~~a_{m+1}+d_1=(v+1)T+u\le d_k.$$ It follows from \eqref{eq:2.5} and \eqref{eq:2.6} that \begin{equation}\label{eq:2.7} P(A_1\cup\{a_1,\dots,a_{m+1}\})=[0,\sum_{i=1}^{m+1}a_i+u+v]\setminus\{d_1,\dots,d_{n'}\}, \end{equation} where $$d_{n'}=a_{m+1}+d_k.$$ Then $n'\ge k+1$. Thus $b_{k+1}\ge d_{k+1}$. \end{proof}

\emph{Proof of Theorem \ref{thm:1.1}:} It follows from Theorem \ref{thm:1.6} and Theorem \ref{thm:1.7} that $e_3=(v+1)+u$. For $k\ge3$, suppose that $A$ is an infinite set of positive integers such that $$P(A)=\mathbb{N}\setminus \{e_1<e_2<\cdots<e_{k}<b_{k+1}<\cdots\}$$ and $a\le \sum_{\substack{a'<a\\a'\in A}}a'+1$ for all $a\in A$ with $a>u+1$, where $e_i(1\le i\le k)$ is defined in \eqref{eq:c}. By Lemma \ref{lem:2.5} we have $b_{k+1}\ge d_{k+1}$. By Lemma \ref{lem:2.1} we know that $d_{k+1}$ is attained, that is, $e_{k+1}=d_{k+1}$. This completes the proof of Theorem \ref{thm:1.1}.

\noindent\textbf{Acknowledgments.} This work was supported by the National Natural Science Foundation of China, Grant No.~11771211, and NUPTSF, Grant No.~NY220092.
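As a concluding numerical remark (ours, not the authors'), Theorem \ref{thm:1.1} can be verified directly in a small case. For $u=4$, $v=17$ the construction of Lemma \ref{lem:2.1} gives $A=\{1,2\}\cup\{5,6,7\}\cup\{18n:n\ge1\}$, and the excluded values should be $e_{2k+1}=18k+4$ and $e_{2k+2}=18k+17$.

\begin{verbatim}
# Check of Theorem 1.1 for u = 4, v = 17 on a finite truncation of A.
def subset_sums(A):
    reachable = {0}
    for a in A:
        reachable |= {s + a for s in reachable}
    return reachable

u, v = 4, 17
A = [1, 2, 5, 6, 7] + [18 * n for n in (1, 2, 3)]
missing = sorted(set(range(sum(A) + 1)) - subset_sums(A))
expected = sorted({18 * k + u for k in range(7)} |
                  {18 * k + v for k in range(7)})
print(missing == expected)   # -> True: the gaps are 4, 17, 22, 35, ..., 125
\end{verbatim}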
\section{Introduction} In \cite{Unf} free equations of spin-0 and spin-1/2 matter fields in 2+1-dimensional anti-de Sitter (AdS) space were reformulated in the form of certain covariant constantness conditions (``unfolded form''). Being equivalent to the standard one, such a formulation is useful at least in two respects. It leads to a simple construction of a general solution of the free equations and gives important hints on how to describe non-linear dynamics exhibiting infinite-dimensional higher-spin symmetries. In \cite{Unf} it was also observed that the proposed construction admits a natural realization in terms of the Heisenberg-Weyl oscillator algebra for the case of massless fields. Based on this realization, non-linear dynamics of massless matter fields interacting through higher-spin gauge fields was then formulated in \cite{Eq} in all orders in interactions. In the present paper we address the question of how one can extend the oscillator realization of the massless equations of \cite{Unf} to the case of an arbitrary mass of matter fields. We show that the relevant algebraic construction is provided by the deformed oscillator algebra suggested in \cite{Quant}, with the deformation parameter related to the parameter of mass. In a forthcoming publication by two of the authors \cite{Fut} the results of this paper will be used for the analysis of non-linear dynamics of matter fields in 2+1 dimensions, interacting through higher-spin gauge fields. The 2+1-dimensional model considered in this paper can be regarded as a toy model exhibiting some of the general properties of the physically more important higher-spin gauge theories in higher dimensions $d\geq 4$.

\section{Preliminaries} We describe the 2+1-dimensional AdS space in terms of the Lorentz connection one-form $\omega^{\alpha\beta}=dx^\nu \omega_\nu{}^{\alpha\beta}(x)$ and the dreibein one-form $h^{\alpha\beta}= dx^\nu h_\nu{}^{\alpha\beta} (x)$. Here $x^\nu$ are space-time coordinates $(\nu =0,1,2)$ and $\alpha,\beta,\ldots =1,2$ are spinor indices, which are raised and lowered with the aid of the symplectic form $\epsilon_{\alpha\beta}=-\epsilon_{\beta\alpha}$, $A^{\alpha}=\epsilon^{\alpha\beta}A_{\beta}$, $A_{\alpha}=A^{\beta}\epsilon_{\beta\alpha}$, $\epsilon_{12}=\epsilon^{12}=1$. The AdS geometry can be described by the equations \begin{equation} \label{d omega} d\omega_{\alpha\beta}=\omega_{\alpha\gamma}\wedge\omega_\beta{}^\gamma+ \lambda^2h_{\alpha\gamma}\wedge h_\beta{}^\gamma\,, \end{equation} \begin{equation} \label{dh} dh_{\alpha\beta}=\omega_{\alpha\gamma}\wedge h_\beta{}^\gamma+ \omega_{\beta\gamma}\wedge h_\alpha{}^\gamma \, , \end{equation} which have the form of zero-curvature conditions for the $o(2,2)\sim sp(2)\oplus sp(2)$ Yang-Mills field strengths. Here $\omega_{\alpha\beta}$ and $h_{\alpha\beta}$ are symmetric in $\alpha$ and $\beta$. For the space-time geometric interpretation of these equations one has to assume that the dreibein $h_\nu{}^{\alpha\beta}$ is a non-degenerate $3\times 3$ matrix. Then (\ref{dh}) reduces to the zero-torsion condition, which expresses the Lorentz connection via the dreibein $h_\nu{}^{\alpha\beta}$, and (\ref{d omega}) implies that the Riemann tensor 2-form $R_{\alpha\beta}= d\omega_{\alpha\beta}-\omega_{\alpha\gamma}\wedge\omega_\beta{}^\gamma$ acquires the AdS form \begin{equation} \label{R} R_{\alpha\beta}= \lambda^2h_{\alpha\gamma}\wedge h_\beta{}^\gamma \end{equation} with $\lambda^{-1}$ identified with the AdS radius.
In \cite{Unf} it was shown that one can reformulate the free field equations for matter fields in 2+1 dimensions in terms of the generating function $C(y|x)$ \begin{equation} \label{C0} C(y|x)= \sum_{n=0}^\infty \frac1{n!}C_{\alpha_1 \ldots\alpha_n}(x) y^{\alpha_1}\ldots y^{\alpha_n}\, \end{equation} in the following ``unfolded'' form \begin{equation} \label{DC mod} DC=h^{\alpha\beta} \left[a(N) \frac{\partial}{\partial y^\alpha }\frac{\partial}{\partial y^\beta}+ b(N) y_\alpha\frac{\partial}{\partial y^\beta}+ e(N) y_\alpha y_\beta \right]C \, , \end{equation} where $D$ is the Lorentz covariant differential \begin{equation} \label{lorcov} D=d-\omega^{\alpha\beta}y_\alpha \frac{\partial}{\partial y^\beta}\, \end{equation} and $N$ is the Euler operator \begin{equation} N\equiv y^\alpha\frac{\partial}{\partial y^\alpha} \, . \end{equation} The integrability conditions of the equations (\ref{DC mod}) (i.e. the consistency with $d^2 =0$) require the functions $a,b$ and $e$ to satisfy the following restrictions \cite{Unf} \begin{equation} \label{consist} \alpha(n)=0 \mbox{\qquad for \, $n\ge 0$ ,\qquad $\gamma(n)=0$ \qquad for \quad $n\ge 2$ ,} \end{equation} $$ \beta(n)=0\mbox{ \qquad for \quad $n\ge 1$ ,} $$ where \begin{equation} \alpha(N)=a(N)\left[(N+4)b(N+2)-Nb(N)\right]\,, \end{equation} \begin{equation} \gamma(N)=e(N)\left[(N+2)b(N)-(N-2)b(N-2)\right]\,, \end{equation} \begin{equation} \beta(N)=(N+3)a(N)e(N+2)-(N-1)e(N)a(N-2)+b^2(N)-\lambda^2\, . \end{equation} It was shown in \cite{Unf} that, under the condition $a(n)\ne 0$ $\forall$ $n\ge 0$ and up to the freedom of field redefinitions $C\rightarrow \tilde{C} =\varphi (N) C$, $\varphi(n) \neq 0 \quad \forall n\in {\bf Z}^{+}$, there exist two one-parameter classes of independent solutions of~(\ref{consist}), $$ a(n)=1\,,\qquad b(n)=0\,, \qquad e(n)=\frac14\lambda^2-\frac{M^2} {2(n+1)(n-1)}\, ,\qquad n\,\mbox{--even}\,, $$ \begin{equation} \label{cob} a(n)=b(n)=e(n)=0\,,\qquad n\,\mbox{--odd}\,, \end{equation} and $$ a(n)=b(n)=e(n)=0\,,\qquad n\,\mbox{--even}\,, $$ \begin{equation} \label{cof} a(n)=1\,,\qquad b(n)=\frac{\sqrt2M}{n(n+2)}\,,\qquad e(n)=\frac14\lambda^2-\frac{M^2}{2n^2}\, ,\qquad n\,\mbox{--odd}\,, \end{equation} with an arbitrary parameter $M$. As a result, the system (\ref{DC mod}) reduces to two independent infinite chains of equations for bosons and fermions, described by multispinors with even and odd numbers of indices, respectively. To elucidate the physical content of these equations one has to identify the lowest components of the expansion (\ref{C0}), $C(x)$ and $C_\alpha (x)$, with the physical spin-0 boson and spin-1/2 fermion matter fields, respectively, and to check, first, that the system (\ref{DC mod}) amounts to the physical massive Klein-Gordon and Dirac equations, \begin{equation} \label{M K-G} \Box C=\left(\frac32\lambda^2-M^2\right)C\,, \end{equation} \begin{equation} \label{D} h^\nu{}_\alpha{}^\beta D_{\nu}C_\beta=\frac M{\sqrt2} C_\alpha \,, \end{equation} and, second, that all other equations in (\ref{DC mod}) express all higher multispinors via higher derivatives of the matter fields $C$ and $C_{\alpha}$, imposing no additional constraints on the latter. Note that the D'Alembertian is defined as usual, \begin{equation} \label{dal} \Box =D^{\mu}D_{\mu} \,, \end{equation} where $D_{\mu}$ is the full background covariant derivative involving the zero-torsion Christoffel connection defined through the metric postulate $D_{\mu}h_{\nu}^{\alpha\beta}=0$.
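The conditions (\ref{consist}) can be checked symbolically for the solutions (\ref{cob}) and (\ref{cof}); the following short sympy sketch (our own check, not part of the original text) does this on the even (bosonic) and odd (fermionic) chains, where $a(n)=1$ on the chain so that only $\beta$, and for fermions also $\alpha$ and $\gamma$, are non-trivial.

\begin{verbatim}
import sympy as sp

n, lam, M = sp.symbols('n lam M')

# bosonic branch (even n): a = 1, b = 0
e_b = lambda m: lam**2/4 - M**2/(2*(m + 1)*(m - 1))
beta_b = (n + 3)*e_b(n + 2) - (n - 1)*e_b(n) - lam**2
print(sp.simplify(beta_b))                                   # -> 0

# fermionic branch (odd n): a = 1
b_f = lambda m: sp.sqrt(2)*M/(m*(m + 2))
e_f = lambda m: lam**2/4 - M**2/(2*m**2)
alpha_f = (n + 4)*b_f(n + 2) - n*b_f(n)
gamma_f = e_f(n)*((n + 2)*b_f(n) - (n - 2)*b_f(n - 2))
beta_f = (n + 3)*e_f(n + 2) - (n - 1)*e_f(n) + b_f(n)**2 - lam**2
print([sp.simplify(x) for x in (alpha_f, gamma_f, beta_f)])  # -> [0, 0, 0]
\end{verbatim}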
The inverse dreibein $h^\nu{}_{\alpha\beta}$ is defined as in \cite{Unf}, \begin{equation} h_\nu{}^{\alpha\beta}h^\nu{}_{\gamma\delta}= \frac12(\delta^\alpha_\gamma\delta^\beta_\delta+\delta^\alpha_ \delta\delta^\beta_\gamma)\,. \end{equation} Note also that the indices $\mu$, $\nu$ are raised and lowered by the metric tensor $$ g_{\mu\nu}=h_\mu{}^{\alpha\beta}h_\nu{}_{\alpha\beta} \,. $$ As emphasized in \cite{Unf}, the equations (\ref{DC mod}) provide a particular example of covariant constantness conditions \begin{equation} \label{dC} dC_i =A_i{}^j C_j \end{equation} with the gauge fields $A_i{}^j =A^a(T_a)_i{}^j$ obeying the zero-curvature conditions \begin{equation} \label{dA} dA^a=U^a_{bc}A^b \wedge A^c \,, \end{equation} where $U^a_{bc}$ are the structure coefficients of the Lie (super)algebra which gives rise to the gauge fields $A^a$ (cf.\ (1), (2)). Then the requirement that the integrability conditions for (\ref{dC}) hold is equivalent to the requirement that $(T_a)_i{}^j$ form some matrix representation of the gauge algebra. Thus, the problem consists in finding an appropriate representation of the space-time symmetry group which leads to the correct field equations. As a result, after the equations are rewritten in this ``unfolded form'', one can write down their general solution in a pure gauge form $A(x)=-g^{-1}(x) dg(x)$, $C(x)=T(g^{-1})(x) C_0$, where $C_0$ is an arbitrary $x$ - independent element of the representation space. This general solution has the structure of a covariantized Taylor-type expansion \cite{Unf}. For the problem under consideration the relevant (infinite-dimensional) representation of the AdS algebra is characterized by the coefficients (\ref{cob}) and (\ref{cof}).

\section{Operator Realization for Arbitrary Mass} Let us now describe an operator algebra that leads automatically to the correct massive field equations of the form (\ref{DC mod}). Following \cite{Quant}, we introduce oscillators obeying the commutation relations \begin{equation} \label{y mod} [\hat{y}_\alpha,\hat{y}_\beta]=2i\epsilon_{\alpha\beta}(1+\nu k)\, , \end{equation} where $\alpha ,\beta =1,2$, $k$ is the Klein operator anticommuting with $\hat{y}_\alpha$, \begin{equation} \label{k} k\hat{y}_\alpha=-\hat{y}_\alpha k\, , \qquad k^2 =1 \end{equation} and $\nu$ is a free parameter. The main property of these oscillators is that the bilinears \begin{equation} \label{Q} T_{\alpha\beta} =\frac{1}{4i} \{\hat{y}_\alpha ,\hat{y}_\beta\} \end{equation} fulfill the standard $sp(2)$ commutation relations \begin{equation} \label{sp(2) com} [T_{\alpha\beta},T_{\gamma\delta}]= \epsilon_{\alpha\gamma}T_{\beta\delta}+ \epsilon_{\beta\delta}T_{\alpha\gamma}+ \epsilon_{\alpha\delta}T_{\beta\gamma}+ \epsilon_{\beta\gamma}T_{\alpha\delta} \end{equation} as well as \begin{equation} \label{oscom} [T_{\alpha\beta} ,\hat{y}_{\gamma}]= \epsilon_{\alpha\gamma}\hat{y}_{\beta}+ \epsilon_{\beta\gamma}\hat{y}_{\alpha}\, \end{equation} for any $\nu$. Note that a specific realization of this kind of oscillators was considered by Wigner \cite{Wig}, who addressed the question of whether it is possible to modify the oscillator commutation relations in such a way that the relation $[H, a_\pm ]=\pm a_\pm $ remains valid. This relation is a particular case of (\ref{oscom}) with $H=T_{12}$ and $a_\pm =y_{1,2}$.
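The deformed relations (\ref{y mod}) and (\ref{k}) admit a simple finite-dimensional illustration (ours, for orientation only). On a truncated Fock space take $a|m\rangle=\sqrt{\mu_m}\,|m-1\rangle$ with $\mu_{2j}=2j$, $\mu_{2j+1}=2j+1+\nu$ and $k=(-1)^N$; then $\hat y_1=a^++a$ and $\hat y_2=i(a^+-a)$ obey $[\hat y_1,\hat y_2]=2i(1+\nu k)$ and $k\hat y_\alpha=-\hat y_\alpha k$ away from the truncation boundary.

\begin{verbatim}
import numpy as np

nu, dim = 0.37, 40
mu = np.array([m + nu * (m % 2) for m in range(dim)])
a = np.diag(np.sqrt(mu[1:]), k=1)        # annihilation operator
ad = a.T.copy()                          # creation operator
kl = np.diag([(-1.0) ** m for m in range(dim)])

y1 = ad + a
y2 = 1j * (ad - a)
comm = y1 @ y2 - y2 @ y1
target = 2j * (np.eye(dim) + nu * kl)
inner = slice(0, dim - 1)                # the truncation spoils the top level
print(np.allclose(comm[inner, inner], target[inner, inner]))  # True
print(np.allclose(kl @ y1, -y1 @ kl))                         # True
\end{verbatim}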
The property (\ref{sp(2) com}) allows us to realize the $o(2,2)$ gravitational fields as \begin{equation} \label{W} W_{gr} (x)= \omega +\lambda h ;\qquad \omega\equiv\frac1{8i}\omega^{\alpha\beta}\{\hat{y}_\alpha, \hat{y}_\beta\} \, , \quad h\equiv\frac1{8i}h^{\alpha\beta}\{\hat{y}_\alpha,\hat{y}_\beta\} \psi \, , \end{equation} where $\psi$ is an additional central involutive element, \begin{equation} \psi^2=1\,,\qquad [\psi,\hat{y}_{\alpha}]=0\,,\qquad [\psi,k]=0\,, \end{equation} which is introduced to describe the 3d AdS algebra $o(2,2)\sim sp(2)\oplus sp(2)$ spanned by the generators \begin{equation} \label{al} L_{\alpha\beta}=\frac1{4i}\{\hat{y}_\alpha, \hat{y}_\beta\}\,,\qquad \, P_{\alpha\beta}=\frac1{4i}\{\hat{y}_\alpha, \hat{y}_\beta\}\psi\,. \end{equation} Now the equations (\ref{d omega}) and (\ref{dh}) describing the vacuum anti-de Sitter geometry acquire the form \begin{equation} \label{va} dW_{gr} =W_{gr} \wedge W_{gr}\, . \end{equation} Let us introduce the operator-valued generating function $C(\hat{y},k,\psi|x)$ \begin{equation} \label{hatC} C(\hat{y},k,\psi|x)=\sum_{A,B=0,1} \sum_{n=0}^\infty \frac 1{n!} \lambda^{-[\frac n2]} C^{AB}_{\alpha_1 \ldots\alpha_n}(x) k^A \psi^B\hat{y}^{\alpha_1}\ldots \hat{y}^{\alpha_n}\, , \end{equation} where $C^{AB}_{\alpha_1 \ldots\alpha_n}$ are totally symmetric tensors (which implies the Weyl ordering with respect to $\hat{y}_{\alpha}$). It is easy to see that the following two types of equations, \begin{equation} \label{aux} DC=\lambda[h,C] \, , \end{equation} and \begin{equation} \label{D hatC} DC=\lambda\{h,C\} \, , \end{equation} where \begin{equation} DC\equiv dC-[\omega,C] \, , \end{equation} are consistent (i.e. the integrability conditions are satisfied as a consequence of the vacuum conditions (\ref{va})). Indeed, (\ref{aux}) corresponds to the adjoint action of the space-time algebra (\ref{al}) on the algebra of modified oscillators. The equations (\ref{D hatC}) correspond to another representation of the space-time symmetry, which we call the twisted representation. The fact that one can replace the commutator by the anticommutator in the term proportional to the dreibein is a simple consequence of the property that the AdS algebra possesses an involutive automorphism changing the sign of the AdS translations. In the particular realization used here it is induced by the automorphism $\psi\to -\psi$. There is an important difference between these two representations. The first one, involving the commutator, decomposes into an infinite direct sum of finite-dimensional representations of the space-time symmetry algebra. Moreover, because of the property (\ref{oscom}) this representation is $\nu$-independent and therefore is equivalent to the representation with $\nu=0$, which was shown in \cite{Unf} to describe an infinite set of auxiliary (topological) fields. The twisted representation, on the other hand, is just the infinite-dimensional representation needed for the description of matter fields (in what follows we will use the symbol $C$ only for the twisted representation). To see this one has to carry out a component analysis of the equations (\ref{D hatC}), which consists in operator reorderings bringing all terms into the Weyl-ordered form with respect to $\hat{y}_\alpha$.
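Before proceeding, we note (a step we spell out for the reader; it is implicit in the text above) that the equivalence of (\ref{va}) with (\ref{d omega}) and (\ref{dh}) is immediate: writing $h=\psi\,\hat h$ with $\hat h\equiv\frac1{8i}h^{\alpha\beta}\{\hat y_\alpha,\hat y_\beta\}$ and using that $\psi$ is central with $\psi^2=1$, one has $$ dW_{gr}-W_{gr}\wedge W_{gr}=\left(d\omega-\omega\wedge\omega-\lambda^2\hat h\wedge\hat h\right)+\lambda\psi\left(d\hat h-\omega\wedge\hat h-\hat h\wedge\omega\right)\,, $$ so that the $\psi$-even and $\psi$-odd parts of (\ref{va}) reproduce the operator counterparts of (\ref{d omega}) and (\ref{dh}), respectively.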
Carrying out this component analysis, one finds that (\ref{D hatC}) takes the form of the equation (\ref{DC mod}) with the following values of the coefficients $a(n)$, $b(n)$ and $e(n)$: \begin{eqnarray} \label{a} \lefteqn{a(n)=\frac{i\lambda}2 \left[1+\nu k\frac{1+(-1)^n}{(n+2)^2-1} \right.} \nonumber\\ & & \left.{}-\frac{\nu^2}{(n+2)^2((n+2)^2-1)} \left((n+2)^2- \frac{1-(-1)^n}2 \right)\right]\,, \end{eqnarray} \begin{equation} \label{b} b(n)=-\nu k\lambda\,\frac{1-(-1)^n}{2n(n+2)}\,, \end{equation} \begin{equation} \label{e} e(n)=-\frac{i\lambda}2\, . \end{equation} As expected, these expressions satisfy the conditions~(\ref{consist}). Now let us remind ourselves that due to the presence of the Klein operator $k$ we have a doubled number of fields compared to the analysis at the beginning of this section. One can project out the irreducible subsets with the aid of the two projectors $P_\pm$, \begin{equation} C_\pm\equiv P_\pm C\, ,\qquad P_\pm\equiv\frac{1\pm k}2\, . \end{equation} As a result we get the following component form of eq.~(\ref{DC mod}) with the coefficients (\ref{a})-(\ref{e}), \begin{equation} \label{chainbos+-} DC^{\pm}_{\alpha(n)}=\frac i2\left[\left(1-\frac{\nu(\nu\mp2)}{(n+1)(n+3)} \right) h^{\beta\gamma}C^{\pm}_{\beta\gamma\alpha(n)}- \lambda^2n(n-1)h_{\alpha\alpha}C^{\pm}_{\alpha(n-2)}\right] \end{equation} for even $n$, and \begin{eqnarray} \label{chainferm+-} DC^{\pm}_{\alpha(n)} & = & \frac i2\left(1-\frac{\nu^2}{(n+2)^2}\right) h^{\beta\gamma}C^{\pm}_{\beta\gamma\alpha(n)} \pm \frac {\nu\lambda}{n+2}h_{\alpha}{}^{\beta}C^{\pm}_{\beta\alpha(n-1)} \nonumber\\ & & {}-\frac i2 \lambda^2n(n-1)h_{\alpha\alpha}C^{\pm}_{\alpha(n-2)} \end{eqnarray} for odd $n$. Here we use the notation $C_{\alpha(n)}=C_{\alpha_1,\dots,\alpha_n}$ and assume full symmetrization of the indices denoted by $\alpha$. As was shown in \cite{Unf}, the D'Alembertian corresponding to eq.~(\ref{DC mod}) has the following form \begin{eqnarray} \label{D'Al} \Box C & = & \Biggl[(N+3)(N+2)a(N)e(N+2)+ \nonumber\\ & & \left.+N(N-1)e(N)a(N-2)-\frac12N(N+2)b^2(N)\right]C\, . \end{eqnarray} Insertion of~(\ref{a})-(\ref{e}) into~(\ref{D'Al}) yields \begin{equation} \label{L M} \Box C_\pm =\left[\lambda^2\frac{N(N+2)}2+\lambda^2\frac32- M^2_\pm \right]C_\pm\,, \end{equation} with \begin{equation} \label{M} M^2_\pm =\lambda^2\frac{\nu(\nu\mp 2)}2\, ,\qquad n\mbox{ -even,} \end{equation} \begin{equation} \label{M f} M^2_\pm =\lambda^2\frac{\nu^2}2\, ,\qquad n\mbox{ -odd.} \end{equation} Thus, it is shown that the modification (\ref{y mod}) allows one to describe matter fields \footnote{Let us remind the reader that the physical matter field components are singled out by the conditions $NC_\pm=0$ in the bosonic sector and $NC_\pm=C_\pm$ in the fermionic sector} with an arbitrary mass parameter related to $\nu$. This construction generalizes in a natural way the realization of the equations for massless matter fields in terms of the ordinary ($\nu=0$) oscillators proposed in \cite{Unf}. An important comment, however, is that this construction does not necessarily lead to non-vanishing coefficients $a(n)$. Consider, for example, expression~(\ref{a}) for the bosonic part of $C_{+}$, i.e., set $k=1\,,\,n=2m$, where $m$ is some integer, \begin{equation} \label{a1} a(2m)=\frac{i\lambda}2 \left[1-\frac{\nu(\nu-2)}{(2m+1)(2m+3)}\right]\, . \end{equation} We observe that $a(2l)=0$ at $\nu=\pm 2(l+1)+1 $. It is not difficult to see that some of the coefficients $a(n)$ vanish if and only if $\nu=2k+1$ for some integer $k$.
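Indeed, the degenerate values can be found in one line (a step we add for the reader): $$ a(2m)=0 \;\Longleftrightarrow\; \nu(\nu-2)=(2m+1)(2m+3) \;\Longleftrightarrow\; (\nu-1)^2=(2m+2)^2 \;\Longleftrightarrow\; \nu=2m+3 \;\;\mbox{or}\;\; \nu=-(2m+1)\,, $$ which is always an odd integer; analogous computations apply to the $C_-$ projection and to odd $n$.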
This conclusion is in agreement with the results of~\cite{Quant}, where it was shown that for these values of $\nu$ the enveloping algebra of the relations (\ref{y mod}), $Aq(2;\nu |{\bf C})$, possesses ideals. Thus, strictly speaking, for $\nu =2k+1$ the system of equations derived from the operator realization (\ref{D hatC}) is different from that considered in \cite{Unf}. The specific features of the degenerate systems with $\nu=2k+1$ will be discussed in Section 5. In \cite{BWV} it was shown that the algebra $Aq(2,\nu )$ is isomorphic to the factor algebra $U(osp(1,2))/I(C_2 -\nu^2 )$, where $U(osp(1,2))$ is the enveloping algebra of $osp(1,2)$, while $I(C_2 -\nu^2 )$ is the ideal spanned by all elements of the form $$ (C_2-\nu^2)\, x\,, \qquad \forall x\in U(osp(1,2)) \,, $$ where $C_2$ is the quadratic Casimir operator of $osp(1,2)$. From this observation it follows in particular that the oscillator realization described above is explicitly supersymmetric. In fact it is $N=2$ supersymmetric \cite{BWV}, with the generators of $osp(2,2)$ of the form $$ T_{\alpha\beta}=\frac1{4i}\{\hat{y}_\alpha,\hat{y}_\beta \}\,,\quad Q_\alpha =\hat{y}_\alpha\,,\quad S_\alpha =\hat{y}_\alpha k\,,\quad J=k+\nu \,. $$ This observation guarantees that the system of equations under consideration possesses $N=2$ global supersymmetry. It is this $N=2$ supersymmetry which leads to the doubled number of boson and fermion fields in the model.

\section{Bosonic Case and U(o(2,1))} In the purely bosonic case one can proceed in terms of bosonic operators, avoiding the doubling of fields caused by supersymmetry. To this end, let us use the orthogonal realization of the AdS algebra $o(2,2)\sim o(2,1)\oplus o(2,1)$. Let $T_a$ be the generators of $o(2,1)$, \begin{equation} \label{comr} [T_a,T_b]=\epsilon_{ab}{}^c T_c \,, \end{equation} where $\epsilon_{abc}$ is a totally antisymmetric 3d tensor, $\epsilon_{012}=1$, and Latin indices are raised and lowered by the Killing metric of $o(2,1)$, $$ A^a=\eta^{ab}A_b\,,\qquad \eta=diag(1,-1,-1) \,. $$ Let the background gravitational field have the form \begin{equation} \label{W T} W_\mu=\omega_\mu{}^a T_a +\tilde\lambda\psi h_\mu{}^a T_a\,, \end{equation} where $\psi$ is a central involutive element, \begin{equation} \psi^2=1,\qquad [\psi, T_a]=0\,, \end{equation} and let $W$ obey the zero-curvature conditions (\ref{va}). Note that the inverse dreibein $h^\mu{}_a$ is normalized so that \begin{equation} h_\mu{}^a h^\mu{}^b=\eta^{ab} \,. \end{equation} Let $T_a$ be restricted by the following additional condition on the quadratic Casimir operator \begin{equation} \label{tr} C_2\equiv T_a T^a=\frac18\left(\frac32-\frac{M^2}{\tilde\lambda^2}\right)\,. \end{equation} We introduce the dynamical 0-form $C$ as a function of $T_a$ and $\psi$ \begin{equation} \label{CT} C=\sum_{n=0}^\infty\sum_{A=0,1}\frac1{n!}\psi^A C_A{}^{a_1\ldots a_n}(x) T_{a_1}\ldots T_{a_n}\, , \end{equation} where $C_A{}^{a_1\ldots a_n}$ are totally symmetric traceless tensors. Equivalently, one can say that $C$ takes values in the algebra $A_M \oplus A_M$ where $A_M = U(o(2,1))/I_{(C_2-\frac18(\frac32-\frac{M^2}{\tilde\lambda^2}))}$. Here $U(o(2,1))$ is the enveloping algebra for the relations (\ref{comr}) and $I_{(C_2-\frac18(\frac32-\frac{M^2}{\tilde\lambda^2}))}$ is the ideal spanned by all elements of the form $$ \left[C_2-\frac18\left(\frac32-\frac{M^2}{\tilde\lambda^2}\right)\right]\,x \,,\qquad \forall x\in U(o(2,1)) \,.
$$ We can then write down the equation analogous to~(\ref{D hatC}) in the form \begin{equation} \label{DC T} D_{\mu}C=\tilde\lambda\psi h_{\mu}{}^a\{T_a,C\}\, , \end{equation} where \begin{equation} D_{\mu}C=\partial_{\mu}C-\omega_{\mu}{}^a[T_a,C]\, . \end{equation} Acting on both sides of eq.~(\ref{DC T}) with the full covariant derivative $D^{\mu}$, defined through the metric postulate $D_{\mu}(h^a_{\nu}T_a)=0$ under the condition that the Christoffel connection is symmetric, one can derive \begin{equation} \Box C_n=\frac12\tilde\lambda^2\left[2n(n+1)+\frac32- \frac{M^2}{\tilde\lambda^2} \right]C_n \,, \end{equation} where $C_n$ denotes an $n$-th power monomial in (\ref{CT}). We see that this result coincides with (\ref{L M}) at $N=2n$ and \begin{equation} \label{ll} \lambda^2=\frac12\tilde\lambda^2 \,. \end{equation} Also one can check that the zero-curvature conditions for the gauge fields (\ref{W}) and (\ref{W T}) are equivalent to each other provided that (\ref{ll}) is true. The explicit relationships are $$ \omega_\mu{}^{\alpha\beta}=-\frac12\omega_\mu{}^a\sigma_a^{\alpha\beta}\,,\quad h_\mu{}^{\alpha\beta}=-\frac1{\sqrt2}h_\mu{}^a\sigma_a^{\alpha\beta}\,,\quad T_a=-\frac1{16i}\sigma_a^{\alpha\beta}\{\hat{y}_\alpha,\hat{y}_\beta\}\,, $$ where $\sigma_a^{\alpha\beta}=(I,\sigma_1,\sigma_3)$, and $\sigma_1\,,\sigma_3$ are the symmetric Pauli matrices. One can also check that, as expected, eq.~(\ref{DC T}) possesses the same degenerate points in $M$ as eq.~(\ref{DC mod}) does according to~(\ref{a1}).

\section{Degenerate Points} In this section we briefly discuss the specific features of the equation~(\ref{D hatC}) at singular points in $\nu$. Let us substitute the expansion~(\ref{hatC}) into~(\ref{DC mod}) with the coefficients defined by~(\ref{a})-(\ref{e}) and project (\ref{DC mod}) onto the subspace of bosons $C_{+}$ by setting $k=1$ and $n$ to be even. Then we get in the component form \begin{equation} \label{chain} DC_{\alpha(n)}=\frac i2\left[\left(1-\frac{\nu(\nu-2)}{(n+1)(n+3)} \right) h^{\beta\gamma}C_{\beta\gamma\alpha(n)}- \lambda^2n(n-1)h_{\alpha\alpha}C_{\alpha(n-2)}\right] \,. \end{equation} In the general case (i.e., $\nu\ne 2l+1$, $l$ integer) this chain of equations starts from the scalar component and is equivalent to the dynamical equation~(\ref{M K-G}) with $M^2=\lambda^2\frac{\nu(\nu-2)}2$, supplemented either by relations expressing higher multispinors via higher derivatives of $C$ or by identities which express the fact that higher derivatives are symmetric. At $\nu=2l+1$ the first term on the r.h.s. of~(\ref{chain}) vanishes for $n=2(\pm l-1)$. Since $n$ is non-negative, let us choose for definiteness the solution with $n=2(l-1)$, $l>0$. One observes that the rank-$2l$ component is no longer expressed by~(\ref{chain}) via derivatives of the scalar $C$, thus becoming an independent dynamical variable. Instead, the equation (\ref{chain}) tells us that the (appropriately AdS covariantized) $l$-th derivative of the scalar field $C$ vanishes. As a result, at the degenerate points the system of equations (\ref{chain}) acquires a non-decomposable triangle-type form, with a finite subsystem of equations for the set of multispinors $C_{\alpha (2n)},$ $n<l$, and an infinite system of equations for the dynamical field $C_{\alpha (2l)}$ and higher multispinors, which contains (derivatives of) the original field $C$ as a sort of source on the right-hand side.
The subsystem for the lower multispinors describes a system analogous to that of the topological fields (\ref{aux}), which can contain at most a finite number of degrees of freedom. In fact, this system should be dynamically trivial by the unitarity requirements (there are no finite-dimensional unitary representations of the space-time symmetry groups) \footnote{The only exception is when the degeneracy takes place on the lowest level and the representation turns out to be trivial (constant).}. Physically, this is equivalent to imposing appropriate boundary conditions at infinity which must kill these degrees of freedom because, having only a finite number of non-vanishing derivatives, these fields exhibit polynomial growth at space-time infinity (except for the case of a constant field $C$). Thus one can factor out the decoupled lowest components, arriving at a system of equations which starts from the field $C_{\alpha (2l)}$. These systems are dynamically non-trivial and correspond to certain gauge systems. For example, one can show that the first degenerate point $\nu=3$ just corresponds to 3d electrodynamics. To see this one can introduce a two-form \begin{equation} F=h^{\alpha}{}_{\gamma}\wedge h^{\gamma\beta} C_{\alpha\beta}\, \end{equation} and verify that the infinite part of the system (\ref{chain}) with $n\ge2$ (i.e. with the scalar field factored out) is equivalent to the Maxwell equations \begin{equation} dF=0\,,\qquad d\,{}^* F=0\, \end{equation} supplemented with an infinite chain of higher Bianchi identities (here ${}^* F$ denotes the form dual to $F$). Note that, for our normalization of the mass, electrodynamics turns out to be massive with the mass $M^2=\frac32\lambda^2$, which vanishes in the flat limit $\lambda\to 0 $. A more detailed analysis of this formulation of electrodynamics and its counterparts corresponding to higher degenerate points will be given in \cite{Fut}. Now let us note that there exists an alternative formulation of the dynamics of matter fields which is equivalent to the original one of \cite{Unf} for all $\nu$ and is based on the co-twisted representation $\tilde C$. Namely, let us introduce a non-degenerate invariant form \begin{equation} \langle C,\tilde C \rangle = \int d^3x \sum_{n=0}^\infty \frac 1{(2n)!}C_{\alpha(2n)}\tilde C^{\alpha(2n)}\, \end{equation} confining ourselves for simplicity to the purely bosonic case in the sector $C_{+}$. The covariant differential corresponding to the twisted representation $C$ of $o(2,2)$ has the form \begin{equation} {\cal D}C= dC-[\omega,C]-\lambda\{h,C\}\, , \end{equation} so that eq.~(\ref{D hatC}) acquires the form ${\cal D}C=0$. The covariant derivative in the co-twisted representation can be obtained from the invariance condition \begin{equation} \langle C,{\cal D}\tilde C \rangle =-\langle {\cal D}C,\tilde C \rangle \,. \end{equation} It has the following explicit form \begin{eqnarray} \lefteqn{{\cal D}\tilde C^{\alpha(n)}=d\tilde C^{\alpha(n)}- n\omega^{\alpha}{}_{\beta}\tilde C^{\beta\alpha(n-1)} }\nonumber\\ & & {}-\frac i2\left[h_{\beta\gamma}\tilde C^{\beta\gamma\alpha(n)}- \lambda^2n(n-1)\left(1-\frac{\nu(\nu-2)}{(n-1)(n+1)}\right) h^{\alpha\alpha}\tilde C^{\alpha(n-2)}\right] \,. \end{eqnarray} As a result, the equation for $\tilde C$ analogous to~(\ref{chain}) reads \begin{equation} \label{co-chain} D\tilde C^{\alpha(n)}=\frac i2\left[ h_{\beta\gamma}\tilde C^{\beta\gamma\alpha(n)}- \lambda^2n(n-1)\left(1-\frac{\nu(\nu-2)}{(n-1)(n+1)}\right) h^{\alpha\alpha}\tilde C^{\alpha(n-2)}\right] \,.
\end{equation} We see that now the term containing the higher multispinor appears with a unit coefficient, while the coefficients in front of the lower multispinors sometimes vanish. The equations (\ref{co-chain}) identically coincide with the equations derived in \cite{Unf}, which are reproduced in Section 2 of this paper. Let us note that the twisted and co-twisted representations are equivalent for all $\nu \neq 2l+1$ because the algebra of deformed oscillators possesses an invariant quadratic form which is non-degenerate for all $\nu \neq 2l+1$ \cite{Quant}. For $\nu = 2l+1$ this is no longer the case, since the invariant quadratic form degenerates and therefore the twisted and co-twisted representations turn out to be formally inequivalent. Two questions are now in order. First, what is the physical difference between the equations corresponding to the twisted and co-twisted representations at the degenerate points, and second, which of these two representations can be used in an interacting theory? These issues will be considered in more detail in \cite{Fut}. Here we just mention that at the free field level the two formulations are still physically equivalent and in fact turn out to be dual to each other. For example, for the case of electrodynamics the scalar field component $C$ in the co-twisted representation can be interpreted as a magnetic potential such that ${}^*F = dC$. A non-trivial question then is whether such a formulation can be extended to any consistent local interacting theory. Naively one can expect that the formulation in terms of the twisted representation has better chances to be extended beyond the linear problem. It will be shown in \cite{Fut} that this is indeed the case.

\section{Conclusion} In this paper we suggested a simple algebraic method of formulating free field equations for massive spin-0 and spin-1/2 matter fields in 2+1-dimensional AdS space in the form of covariant constantness conditions for certain infinite-dimensional representations of the space-time symmetry group. An important advantage of this formulation is that it allows one to describe in a simple way the structure of the global higher-spin symmetries. These symmetries are described by parameters which take values in the infinite-dimensional algebra of functions of all the generating elements $y_\alpha$, $k$ and $\psi$, i.e. $\varepsilon=\varepsilon(y_\alpha,k,\psi |x)$. The full transformation law has the form \begin{equation} \label{trans} \delta C = \varepsilon C - C\tilde\varepsilon\,, \end{equation} where \begin{equation} \tilde\varepsilon(y_\alpha,k,\psi |x)=\varepsilon(y_\alpha,k,-\psi |x) \end{equation} and the dependence of $\varepsilon $ on $x$ is fixed by the equation \begin{equation} d \varepsilon=W_{gr} \varepsilon - \varepsilon W_{gr} \,, \end{equation} which is integrable as a consequence of the zero-curvature conditions (\ref{va}) and therefore admits a unique solution in terms of an arbitrary function $\varepsilon_0 (y_\alpha,k,\psi)=\varepsilon (y_\alpha,k,\psi|x_0)$ for an arbitrary point of space-time $x_0$. It is obvious that the equations (\ref{D hatC}) are indeed invariant with respect to the transformations (\ref{trans}). Explicit knowledge of the structure of the global higher-spin symmetry is one of the main results obtained in this paper. In \cite{Fut} it will serve as a starting point for the analysis of higher-spin interactions of matter fields in 2+1 dimensions.
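For the reader's convenience we spell out this invariance check (our own two-line computation). Writing (\ref{D hatC}) as $dC=W_{gr}C-C\widetilde{W}_{gr}$ with $\widetilde{W}_{gr}(\psi)=W_{gr}(-\psi)$, and using $d\varepsilon=W_{gr}\varepsilon-\varepsilon W_{gr}$ together with its $\psi\to-\psi$ image $d\tilde\varepsilon=\widetilde{W}_{gr}\tilde\varepsilon-\tilde\varepsilon\widetilde{W}_{gr}$, one finds $$ d(\delta C)=(d\varepsilon)\, C+\varepsilon\, dC-dC\,\tilde\varepsilon-C\, d\tilde\varepsilon =W_{gr}\,\delta C-\delta C\,\widetilde{W}_{gr}\,, $$ so that $\delta C$ obeys the same equation as $C$ does.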
An interesting feature of the higher-spin symmetries demonstrated in this paper is that their form depends on the particular dynamical system under consideration. Indeed, the higher-spin algebras with different $M^2 (\nu )$ are pairwise non-isomorphic. This is obvious from the identification of the higher-spin symmetries with certain factor-algebras of the enveloping algebras of space-time symmetry algebras along the lines of Section 4. Ordinary space-time symmetries, in their turn, can be identified with (maximal) finite-dimensional subalgebras of the higher-spin algebras which do not depend on dynamical parameters like $\nu$ (cf.\ (\ref{y mod})). The infinite-dimensional algebras isomorphic to those considered in Section 4 were originally introduced in \cite{BBS,H} as candidates for 3d bosonic higher-spin algebras, while the superalgebras of deformed oscillators described in Section 3 were suggested in \cite{Quant} as candidates for 3d higher-spin superalgebras. Using all these algebras and the definition of the supertrace given in \cite{Quant}, it was possible to write a Chern-Simons action for the 3d higher-spin gauge fields, which are all dynamically trivial in the absence of matter fields (in a topologically trivial situation). Originally this was done by Blencowe \cite{bl} for the case of the Heisenberg algebra (i.e. $\nu =0$). It was not clear, however, what physical meaning to attach to the ambiguity in a continuous parameter like $\nu$ parametrizing pairwise non-isomorphic 3d higher-spin algebras. In this paper we have shown that different symmetries are realized on different matter multiplets, which leads to the conclusion that higher-spin symmetries depend on the particular physical model under consideration.

\section*{Acknowledgements} The research described in this article was supported in part by the European Community Commission under the contract INTAS, Grant No.94-2317 and by the Russian Foundation for Basic Research, Grant No.96-01-01144.
\section{Introduction} Since the early work of Wigner \cite{Wig53}, random matrix theory (RMT) has been applied with success in many domains of physics~\cite{Meh91}. Initially developed for nuclear physics, RMT has proved to provide an adequate description of any situation involving chaos. It has been found that the spectra of many quantum systems are very close to one of four archetypal situations described by four statistical ensembles. For the few integrable models this is the ensemble of diagonal random matrices, while for non-integrable systems this can be the Gaussian Orthogonal Ensemble (GOE), the Gaussian Unitary Ensemble (GUE), or the Gaussian Symplectic Ensemble (GSE), depending on the symmetries of the model under consideration. In recent years several quantum spin Hamiltonians have been investigated from this point of view. It has been found \cite{PoZiBeMiMo93,HsAdA93} that 1D systems for which the Bethe ansatz applies have a level spacing distribution close to a Poissonian (exponential) distribution, $P(s) = \exp(-s)$, whereas if the Bethe ansatz does not apply, the level spacing distribution is described by the Wigner surmise for the Gaussian orthogonal ensemble (GOE): \begin{equation} \label{e:wigner} P(s) = \frac{\pi}{2} s \exp( -\pi s^2 / 4) \;. \end{equation} Similar results have been found for 2D quantum spin systems \cite{MoPoBeSi93,vEGa94,BrAdA96}. Other statistical properties have also been analyzed, showing that the description of the spectrum of the quantum spin system by a statistical ensemble is valid not only for the level spacings but also for quantities involving more than two eigenvalues. In a recent letter \cite{hm4} we proposed the extension of random matrix theory analysis to models of classical statistical mechanics (vertex and spin models), studying the transfer matrix of the eight-vertex model as an example. The underlying idea is that, if there actually exists a close relation between integrability and the Poissonian character of the distribution, it could be better understood in a framework which makes Yang--Baxter integrability and its key structures (commutation of transfer matrices depending on spectral parameters) crystal clear: one wants to switch from the quantum Hamiltonian framework to the transfer matrix framework. We now present the complete results of our study of transfer matrices and a detailed description of the numerical method. This work is split into two papers: the first one describes the numerical methods and the results on the eight-vertex model, the second one treats the case of discrete spin models with the example of the Ising model in two and three dimensions and the standard Potts model with three states. We will analyze a possible connection between the statistical properties of the entire spectrum of the model's transfer matrix and Yang--Baxter integrability. A priori, such a connection is not certain to exist, since only the few eigenvalues of largest modulus have a physical significance, while we are looking for properties of the entire spectrum. However, our numerical results show a connection which we will discuss. We will also give an extension of the so-called ``disorder variety'' to the asymmetric eight-vertex model, where the partition function can be summed up without Yang--Baxter integrability. We then present an infinite discrete symmetry group of the model and an infinite set of algebraic varieties stable under this group. Finally, we test all these varieties from the point of view of RMT analysis.
This paper is organized as follows: in Sec.~\ref{s:numeric} we recall the machinery of RMT, and we give some details about the numerical methods we use. Sec.~\ref{s:8v} is devoted to the eight-vertex model. We list the cases where the partition function can be summed up, and give some new analytical results concerning the disorder variety and the automorphy group of the asymmetric eight-vertex model. The numerical results of the analysis of the spectrum of transfer matrices are presented in Sec.~\ref{s:results8v}. The last section concludes with a discussion.

\section{Numerical Methods of RMT} \label{s:numeric} \subsection{Unfolding of the Spectrum} In RMT analysis one considers the spectrum of the (quantum) Hamiltonian, or of the transfer matrix, as a collection of numbers, and one looks for some possibly universal statistical properties of this collection of numbers. Obviously, the raw spectrum will not have any universal properties. For example, Fig.~\ref{f:density} shows schematically three densities of eigenvalues: for a 2d Hubbard model, for an eight-vertex model and for the Gaussian Orthogonal Ensemble. They have clearly nothing in common. To find universal properties, one has to perform a kind of renormalization of the spectrum; this is the so-called unfolding operation. This amounts to making the {\em local} density of eigenvalues equal to unity everywhere in the spectrum. In other words, one has to subtract the regular part from the integrated density of states and consider only the fluctuations. This can be achieved by different means; however, there is no rigorous prescription, and the best criterion is the insensitivity of the final result to the method employed or to the parameters (for ``reasonable'' variation). Throughout this paper, we call $E_i$ the raw eigenvalues and $\epsilon_i$ the corresponding unfolded eigenvalues. Thus the requirement is that the local density of the $\epsilon_i$'s is one. We need to compute an averaged integrated density of states $\bar\rho(E)$ from the actual integrated density of states: \begin{equation} \rho(E)={1\over N}\int_{-\infty}^E \sum_i{\delta(e-E_i)}\,de \;, \end{equation} and then we take $\epsilon_i = N \bar\rho(E_i)$. To compute $\bar\rho(E)$ from $\rho(E)$, we have performed a running average: we choose some odd integer $2r+1$ of the order of 9--25 and then replace each eigenvalue $E_i$ by a local average: \begin{equation} E_i^\prime = {1\over 2r+1} \sum_{j=i-r}^{i+r} E_j \;, \end{equation} and $\bar\rho(E)$ is approximated by the linear interpolation between the points of coordinates $(E_i^\prime,i)$. We compared the results with other methods: one can replace each delta peak in $\rho(E)$ by a Gaussian with a properly chosen mean square deviation. Another method is to discard the low frequency components in a Fourier transform of $\rho(E)$. A detailed explanation and tests of these methods of unfolding are given in Ref.~\cite{BrAdA97}. Note also that for very peculiar spectra it is necessary to break the spectrum into parts and to unfold each part separately. Also, the extremal eigenvalues are discarded since they induce finite-size effects. It turns out that, of the three methods, the running average unfolding is the best suited in the context of transfer matrices, and it is also the fastest.
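A minimal sketch of this running-average unfolding (our own illustration; the window size and the fake input spectrum are arbitrary) reads:

\begin{verbatim}
import numpy as np

def unfold(E, r=7):
    """Running-average unfolding: map raw eigenvalues to unit local density."""
    E = np.sort(np.asarray(E, dtype=float))
    N = len(E)
    # local average over a window of 2r+1 eigenvalues (shrunk at the edges)
    Ep = np.array([E[max(0, i - r):i + r + 1].mean() for i in range(N)])
    # N*rho_bar interpolated linearly through the points (E'_i, i)
    eps = np.interp(E, Ep, np.arange(N, dtype=float))
    return eps[r:N - r]     # discard extremal eigenvalues (finite-size effects)

spacings = np.diff(unfold(np.random.randn(2000)))
print(spacings.mean())       # close to 1 by construction
\end{verbatim}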
\subsection{Symmetries} For quantum Hamiltonians, it is well known that it is necessary to sort the eigenvalues with respect to their quantum numbers, and to compare only eigenvalues of states belonging to the same quantum numbers. This is due to the fact that eigenstates with different symmetries are essentially uncorrelated. The same holds for transfer matrices. In general, a transfer matrix $T$ of a classical statistical mechanics lattice model (vertex model) depends on several parameters (Boltzmann weights $w_i$). Due to the lattice symmetries, or to other symmetries (permutation of colors and so on), there exist operators $S$, acting on the same space as the transfer matrix and {\em independent of the parameters}, which commute with $T$: $[T(\{w_i\}),S] = 0$. It is then possible to find invariant subspaces of $T$ which are also independent of the parameters. Projection onto these invariant subspaces amounts to block-diagonalizing $T$ and to splitting the single spectrum of $T$ into the many spectra of the individual blocks. The construction of the projectors is done with the help of the character table of the irreducible representations of the symmetry group. Details can be found in \cite{BrAdA97,hmth}. As we will discuss in the next sections, we always restricted ourselves to symmetric transfer matrices. Consequently the blocks are also symmetric and there are only {\em real} eigenvalues. The diagonalization is performed using standard methods of linear algebra (contained in the LAPACK library). The construction of the transfer matrix and the determination of its symmetries depend on the model and are detailed in Sec.~\ref{s:transfer} for the eight-vertex model.

\subsection{Quantities Characterizing the Spectrum} \label{s:quantities} Once the spectrum has been obtained and unfolded, various statistical properties of the spectrum are investigated. The simplest one is the distribution $P(s)$ of the spacings $s=\epsilon_{i+1}-\epsilon_i$ between two consecutive unfolded eigenvalues. This distribution will be compared to an exponential and to the Wigner law (\ref{e:wigner}). Usually, a simple visual inspection is sufficient to recognize the presence of level repulsion, the main property for non-integrable models. However, to quantify the ``degree'' of level repulsion, it is convenient to use a parameterized distribution which interpolates between the Poisson law and the Wigner law. From the many possible distributions we have chosen the Brody distribution \cite[ch.\ 16.8]{Meh91}: \begin{mathletters} \begin{equation} P_\beta(s) = c_1\, s^\beta\, \exp\left(-c_2 s^{\beta+1}\right) \end{equation} with \begin{equation} c_2=\left[\Gamma\left({\beta+2\over\beta+1}\right)\right]^{1+\beta} \quad\mbox{and}\quad c_1=(1+\beta)c_2 \;. \end{equation} \end{mathletters} For $\beta=0$, this is a simple exponential for the Poisson ensemble, and for $\beta=1$, one recovers the Wigner surmise for the GOE. This distribution turns out to be convenient since its indefinite integral can be expressed with elementary functions. It has been widely used in the literature, except when special distributions were expected, as at the metal-insulator transition \cite{VaHoScPi95}. Minimizing the quantity \begin{equation} \phi(\beta) = \int_0^\infty(P_\beta(s)-P(s))^2 \,ds \end{equation} yields a value of $\beta$ characterizing the degree of level repulsion of the distribution $P(s)$. We have always found $\phi(\beta)$ small. When $-0.1<\beta<0.1$, the distribution is close to a Poisson law, while for $0.5<\beta<1.2$ the distribution is close to the Wigner surmise. If a distribution is found to be close to the Wigner surmise (or the Poisson law), this does not mean that the GOE (or the Diagonal Matrices Ensemble) correctly describes the spectrum.
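In practice the minimization of $\phi(\beta)$ can be done on a histogram estimate of $P(s)$; a short sketch (ours, with arbitrary binning choices) is:

\begin{verbatim}
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize_scalar

def brody(s, beta):
    c2 = gamma((beta + 2) / (beta + 1)) ** (beta + 1)
    return (1 + beta) * c2 * s ** beta * np.exp(-c2 * s ** (beta + 1))

def fit_beta(spacings, bins=50, smax=4.0):
    P, edges = np.histogram(spacings, bins=bins, range=(0, smax), density=True)
    mids = 0.5 * (edges[1:] + edges[:-1])
    phi = lambda b: np.sum((brody(mids, b) - P) ** 2)  # discretized phi(beta)
    return minimize_scalar(phi, bounds=(0.0, 1.5), method='bounded').x

poisson_like = np.random.exponential(1.0, 20000)
print(fit_beta(poisson_like))   # close to 0; a GOE-like spectrum gives ~1
\end{verbatim}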
If a distribution is found to be close to the Wigner surmise (or the Poisson law), this does not mean that the GOE (or the Diagonal Matrices Ensemble) describes the spectrum correctly. Therefore it is of interest to compute functions involving higher-order correlations, such as the spectral rigidity \cite{Meh91}: \begin{equation} \Delta_3(E) = \left\langle \frac{1}{E} \min_{a,b} \int_{\alpha-E/2}^{\alpha+E/2} {\left( N(\epsilon)-a \epsilon -b\right)^2 d\epsilon} \right\rangle_\alpha \;, \end{equation} where $\langle\dots\rangle_\alpha$ denotes an average over the whole spectrum. This quantity measures the deviation from equal spacing. For a totally rigid spectrum, as that of the harmonic oscillator, one has $\Delta_3^{\rm osc}(E) = 1/12$; for an integrable (Poissonian) system one has $\Delta_3^{\rm Poi}(E) = E/15$; while for the Gaussian Orthogonal Ensemble one has $\Delta_3^{\rm GOE}(E) = \frac{1}{\pi^2} (\log(E) - 0.0687) + {\cal O}(E^{-1})$. It has been found that the spectral rigidity of quantum spin systems follows $\Delta_3^{\rm Poi}(E)$ in the integrable case and $\Delta_3^{\rm{GOE}}(E)$ in the non-integrable case. However, in both cases, even though $P(s)$ is in good agreement with RMT, deviations from RMT occur for $\Delta_3(E)$ at some system dependent point $E^*$. This stems from the fact that the rigidity $\Delta_3(E)$ probes correlations beyond nearest neighbours, in contrast to $P(s)$.
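In practice $\Delta_3(E)$ is evaluated from the staircase function $N(\epsilon)$ of the unfolded eigenvalues; a discretized sketch (sampling the integral on a grid and averaging over sliding windows, with grid and window counts chosen purely for illustration) is:
\begin{verbatim}
def rigidity(eps, E, n_windows=200, n_grid=400):
    """Spectral rigidity Delta_3(E): mean-square deviation of the
    unfolded staircase N(eps) from its best linear fit, averaged
    over windows of length E sliding through the spectrum."""
    eps = np.sort(eps)
    vals = []
    for alpha in np.linspace(eps[0], eps[-1] - E, n_windows):
        x = np.linspace(alpha, alpha + E, n_grid)
        Nx = np.searchsorted(eps, x)                 # staircase N(eps)
        A = np.vstack([x, np.ones_like(x)]).T
        coef, *_ = np.linalg.lstsq(A, Nx, rcond=None)
        vals.append(np.mean((Nx - A @ coef) ** 2))   # = (1/E) * integral
    return np.mean(vals)
\end{verbatim}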
\section{The Asymmetric Eight-Vertex Model on a Square Lattice} \label{s:8v} \subsection{Generalities} We will focus in this section on the asymmetric eight-vertex model on a square lattice. We use the standard notations of Ref.~\cite{Bax82}. The eight-vertex condition specifies that only those vertices are allowed which have an even number of arrows pointing to the center of the vertex. Fig.~\ref{f:vertices} shows the eight vertices with their corresponding Boltzmann weights. The partition function per site depends on these eight homogeneous variables (or equivalently on seven independent values): \begin{equation} Z(a,a',b,b',c,c',d,d')\;. \end{equation} It is customary to arrange the eight (homogeneous) Boltzmann weights in a $4 \times 4$ $R$-matrix: \begin{eqnarray} \label{e:Rmat} {\cal R} \, = \, \left( \begin {array}{cccc} a&0&0&d\\ 0&b& c&0\\ 0&c^\prime&b^\prime&0\\ d^\prime&0&0& a^\prime \end {array} \right) \end{eqnarray} The entry ${\cal R}_{i j}$ is the Boltzmann weight of the vertex defined by the four digits of the binary representation of the two indices $i$ and $j$. The row index corresponds to the east and south edges and the column index corresponds to the west and north edges: \[ {\cal R}_{i j}={\cal R}_{\mu\alpha}^{\nu\beta} =w(\mu,\alpha|\beta,\nu) \] \[ \begin{picture}(40,30)(-20,-15) \put(-10,0){\line(1,0){20}} \put(0,-10){\line(0,1){20}} \put(-20,-3){$\mu$} \put(-3,13){$\beta$} \put(13,-3){$\nu$} \put(-3,-20){$\alpha$} \end{picture} \] When the Boltzmann weights are unchanged by negating all four edge values the model is said to be {\em symmetric}; otherwise it is {\em asymmetric}. This should not be confused with the symmetry of the transfer matrix. Let us now discuss a general symmetry property of the model. A combinatorial argument \cite{Bax82} shows that for any lattice without dangling ends, the two parameters $c$ and $c^\prime$ can be taken equal, and that, for most regular lattices (including the periodic square lattice considered in this work), $d$ and $d^\prime$ can also be taken equal (gauge transformation \cite{GaHi75}). Specifically, one has: \begin{equation}\label{e:gauge} Z(a,a',b,b',c,c',d,d') = Z(a,a',b,b',\sqrt{cc'},\sqrt{cc'},\sqrt{dd'},\sqrt{dd'}) \;. \end{equation} We will therefore always take $c=c'$ and $d=d'$ in the numerical calculations. In the following, when $c'$ and $d'$ are not mentioned it is implicitly meant that $c'=c$ and $d'=d$. Let us finally recall that the asymmetric eight-vertex model is equivalent to an Ising spin model on a square lattice including next-nearest-neighbor interactions on the diagonals and four-spin interactions around a plaquette (IRF model) \cite{Bax82,Kas75}. However, this equivalence is not exact on a finite lattice since the $L\times M$ plaquettes do not form a basis (to have a cycle basis, one must take any $L\times M -1$ plaquettes plus one horizontal and one vertical cycle). \subsection{The Row-To-Row Transfer Matrix} \label{s:transfer} Our aim is to study the full spectrum of the transfer matrix. More specifically, we investigate the properties of the row-to-row transfer matrix which corresponds to building up a periodic $L\times M$ rectangular lattice by adding rows of length $L$. The transfer matrix $T_L$ is a $2^L\times 2^L$ matrix and the partition function becomes: \begin{equation} Z(a,a',b,b',c,d) = {\rm Tr} \, [T_L(a,a',b,b',c,d)]^M\;. \label{e:Ztrace} \end{equation} However, there are many other possibilities to build up the lattice, each corresponding to another form of transfer matrix: it just has to lead to the same partition function. Other widely used examples are diagonal(-to-diagonal) and corner transfer matrices \cite{Bax82}. The index of the row-to-row transfer matrix enumerates the $2^L$ possible configurations of one row of $L$ vertical bonds. We choose a binary coding: \begin{equation} \alpha=\sum_{i=0}^{L-1} \alpha_i2^i \equiv | \alpha_0,\dots,\alpha_{L-1} \rangle \end{equation} with $\alpha_i\in\{0,1\}$, 0 corresponding to arrows pointing up or to the right and 1 to the other directions. One entry $T_{\alpha,\beta}$ thus describes the contribution to the partition function of two neighboring rows having the configurations $\alpha$ and $\beta$: \begin{equation} T_{\alpha,\beta} = \sum_{\{\mu\}} \prod_{i=0}^{L-1} w(\mu_i,\alpha_i | \beta_i,\mu_{i+1}) \;. \label{e:Tab} \end{equation} With our binary notation, the eight-vertex condition means that $w(\mu_i,\alpha_i | \beta_i,\mu_{i+1})=0$ if the sum $\mu_i+\alpha_i+\beta_i+\mu_{i+1}$ is odd. Therefore, the sum (\ref{e:Tab}) reduces to exactly two terms: once $\mu_0$ is chosen (two possibilities), $\mu_1$ is uniquely determined since $\alpha_0$ and $\beta_0$ are fixed, and so on. For periodic boundary conditions, the entry $T_{\alpha,\beta}$ is zero if the sum of all $\beta_i$ and $\alpha_i$ is odd. This property naturally splits the transfer matrix into two blocks: entries between row configurations with an even number of up arrows and entries between configurations with an odd number of up arrows.
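Eq.~(\ref{e:Tab}) is straightforward to implement for small $L$. The following sketch (Python/NumPy; dense matrices, purely illustrative) reads the weights $w(\mu,\alpha|\beta,\nu)$ off the $R$-matrix (\ref{e:Rmat}) using one consistent reading of the index convention stated above (a reading which reproduces the symmetry properties discussed in the next subsubsection), and exploits the two-term structure of the sum:
\begin{verbatim}
import numpy as np

def transfer_matrix(L, a, ap, b, bp, c, d):
    """Row-to-row transfer matrix T_L, Eq. (e:Tab), with c'=c, d'=d
    and periodic boundary conditions; dense 2^L x 2^L, small L only."""
    R = np.array([[a, 0., 0., d],
                  [0., b,  c, 0.],
                  [0., c, bp, 0.],
                  [d, 0., 0., ap]])
    w = lambda mu, al, be, nu: R[2*al + nu, 2*be + mu]
    dim = 2 ** L
    T = np.zeros((dim, dim))
    for alpha in range(dim):
        A = [(alpha >> i) & 1 for i in range(L)]
        for beta in range(dim):
            B = [(beta >> i) & 1 for i in range(L)]
            if (sum(A) + sum(B)) % 2:          # parity selection rule
                continue
            s = 0.0
            for mu0 in (0, 1):                 # the two allowed terms
                mu, prod = mu0, 1.0
                for i in range(L):
                    nu = (mu + A[i] + B[i]) % 2   # eight-vertex condition
                    prod *= w(mu, A[i], B[i], nu)
                    mu = nu
                s += prod          # mu closes on mu0 for even parity
            T[alpha, beta] = s
    return T
\end{verbatim}
On such small examples one checks, for instance, that $T$ is symmetric precisely when $c=d$ (cf.\ paragraph (i) below): \texttt{np.allclose(T, T.T)} returns \texttt{True} for \texttt{transfer\_matrix(6, 2., 1.1, .8, .6, .7, .7)} but not when the last two arguments differ.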
\subsubsection{Symmetries of the Transfer Matrix} \label{s:c=d} Let us now discuss various symmetry properties of the transfer matrix. (i) When one exchanges the rows $\alpha$ and $\beta$, the vertices of type $a$, $a'$, $b$, and $b'$ remain unchanged while the vertices of type $c$ and $d$ are exchanged with one another. Thus for $c=d$ the transfer matrix $T_L(a,a',b,b',c,d)$ is symmetric. In general the symmetry of the row-to-row transfer matrix holds for $c=d'$ and $d=c'$. In terms of the equivalent IRF Ising model, the condition $c=d$ means that the two diagonal interactions $J$ and $J'$ (cf.\ Ref.~\cite{Bax82}) are the same: the Ising model is isotropic and therefore its row-to-row transfer matrix is symmetric, too. This coincidence is remarkable since the equivalence between the asymmetric eight-vertex model and the Ising model is not exact on a finite lattice, as already mentioned. (ii) We now consider the effect of permutations of lattice sites preserving the neighboring relations. Denote by $S$ a translation operator defined by: \begin{equation} S|\alpha_0,\alpha_1,\dots,\alpha_{L-1} \rangle = |\alpha_1,\dots,\alpha_{L-1},\alpha_0\rangle \;. \end{equation} Then we have: \begin{equation} \langle\alpha S^{-1}|T_L(a,a',b,b',c,d)|S\beta\rangle = \langle\alpha|T_L(a,a',b,b',c,d)|\beta\rangle \;, \end{equation} and therefore: \begin{equation} [T_L(a,a',b,b',c,d),S] = 0 \;. \end{equation} For the reflection operator $R$ defined by: \begin{equation} R|\alpha_0,\alpha_1,\dots,\alpha_{L-1} \rangle = |\alpha_{L-1},\dots,\alpha_{1},\alpha_0\rangle \;, \end{equation} we have: \begin{equation} \langle\alpha R^{-1}|T_L(a,a',b,b',c,d)|R\beta\rangle = \langle\alpha|T_L(a,a',b,b',d,c)|\beta\rangle \;. \end{equation} Thus $R$ commutes with $T$ only in the symmetric case $c=d$: \begin{equation} [T_L(a,a',b,b',c,c),R] = 0 \;. \end{equation} Combination of the translations $S$ and the reflection $R$ leads to the dihedral group ${\cal D}_L$. These are all the general lattice symmetries in the square lattice case. The one-dimensional nature of the group ${\cal D}_L$ reflects the dimensionality of the rows added to the lattice by a multiplication by $T$. This is general: the symmetries of the transfer matrices of $d$-dimensional lattice models are the symmetries of ($d-1$)-dimensional space. The translational invariance in the last space direction has already been exploited with the use of the transfer matrix itself, leading to Eq.~(\ref{e:Ztrace}). (iii) Lastly, we look at symmetries due to operations on the dynamic variables themselves. There is a priori no continuous symmetry in this model, in contrast with the Heisenberg quantum chain which has a continuous $SU(2)$ spin symmetry. But one can define an operator $C$ reversing all arrows: \begin{equation} C|\alpha_0,\alpha_1,\dots,\alpha_{L-1} \rangle = |1-\alpha_0, 1-\alpha_1,\dots,1-\alpha_{L-1}\rangle \;. \end{equation} This leads to an exchange of primed and unprimed Boltzmann weights: \begin{equation} \langle\alpha C^{-1}|T_L(a,a',b,b',c,d)|C\beta\rangle = \langle\alpha|T_L(a',a,b',b,c,d)|\beta\rangle \;. \end{equation} Thus for the symmetric eight-vertex model (Baxter model) the symmetry operator $C$ commutes with the transfer matrix: \begin{equation} [T_L(a,a,b,b,c,d),C] = 0 \;. \end{equation} \subsubsection{Projectors} Once the symmetries have been identified, it is simple to construct, on the space of row configurations, the projectors associated with each irreducible representation of the group ${\cal D}_L$ (details can be found in \cite{BrAdA97,hmth}). When $L$ is even, there are four representations of dimension 1 and $L/2-1$ representations of dimension 2 (i.e.\ in all there are $L/2+3$ projectors). When $L$ is odd, there are two one-dimensional representations and $(L-1)/2$ representations of dimension 2, in all $(L-1)/2 + 2$ projectors. For the symmetric model with $a=a'$ and $b=b'$, there is an extra ${\cal Z}_2$ symmetry which doubles the number of projectors. Using the projectors block-diagonalizes the transfer matrix, leaving a collection of small matrices to diagonalize instead of the large one; the commutation relations above can also be checked directly on small examples, as in the following sketch.
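Using \texttt{transfer\_matrix} from the previous sketch (again with illustrative weights and a small size $L$), the symmetry operators and their commutation properties read:
\begin{verbatim}
def shift_op(L):
    """Translation S: |a0,a1,...,a_{L-1}>  ->  |a1,...,a_{L-1},a0>."""
    dim = 2 ** L
    S = np.zeros((dim, dim))
    for al in range(dim):
        S[(al >> 1) | ((al & 1) << (L - 1)), al] = 1.0   # cyclic bit shift
    return S

def refl_op(L):
    """Reflection R: |a0,...,a_{L-1}>  ->  |a_{L-1},...,a0>."""
    dim = 2 ** L
    P = np.zeros((dim, dim))
    for al in range(dim):
        bits = [(al >> i) & 1 for i in range(L)]
        P[sum(bit << (L - 1 - i) for i, bit in enumerate(bits)), al] = 1.0
    return P

L = 6
S, P = shift_op(L), refl_op(L)
T = transfer_matrix(L, 1.2, 0.7, 0.5, 0.9, 0.8, 0.3)   # c != d
print(np.linalg.norm(T @ S - S @ T))   # ~ 0: [T,S] = 0 always
print(np.linalg.norm(T @ P - P @ T))   # nonzero for c != d
T = transfer_matrix(L, 1.2, 0.7, 0.5, 0.9, 0.8, 0.8)   # c = d
print(np.linalg.norm(T @ P - P @ T))   # ~ 0: [T,R] = 0 for c = d
\end{verbatim}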
For example, for $L=16$, the total row-to-row transfer matrix has the linear size $2^L=65536$, while the projected blocks have linear sizes between 906 and 2065 (see also Tabs.~\ref{t:aj14} and \ref{t:aj16}). As already mentioned, the block projection not only saves computing time for the diagonalization but is also necessary to sort the eigenvalues with respect to the symmetry of the corresponding eigenstates. In summary, when $c=d$, the row-to-row transfer matrix is symmetric, leading to a real spectrum, and its symmetries have been identified. This is a fortunate situation since the restriction $c=d$ neither prevents nor enforces Yang--Baxter integrability, as will be explained in the following section. \subsection{Integrability of the Eight-Vertex Model} We now summarize the cases where the partition function of the eight-vertex model can be analyzed and possibly computed. These are the symmetric eight-vertex model, the asymmetric six-vertex model, the free-fermion variety and some ``disorder solutions''. \subsubsection{The Symmetric Eight-Vertex Model} \label{ss:s8v} Firstly, in the absence of an `electric field', i.e.\ when $a=a'$, $b=b'$, $c=c'$, and $d=d'$, the transfer matrix can be diagonalized using the Bethe ansatz or the Yang--Baxter equations \cite{Bax82}. This case is called the symmetric eight-vertex, or Baxter, model \cite{Bax82}. One finds that two row-to-row transfer matrices $T_L(a,b,c,d)$ and $T_L(\bar a,\bar b,\bar c,\bar d)$ commute if: \begin{mathletters} \begin{eqnarray} \Delta(a,b,c,d) &=& \Delta(\bar a, \bar b, \bar c, \bar d) \\ \Gamma(a,b,c,d) &=& \Gamma(\bar a, \bar b, \bar c, \bar d) \end{eqnarray} \end{mathletters} with: \begin{mathletters} \label{e:gd} \begin{eqnarray} \Gamma(a,b,c,d) & = & {ab-cd \over ab + cd} \;,\\ \Delta(a,b,c,d) & = & {a^2+b^2-c^2-d^2 \over 2(ab+cd)} \;. \end{eqnarray} \end{mathletters} Note that these necessary conditions are valid for {\em any} lattice size $L$. One also gets the {\em same} conditions for the column-to-column transfer matrices of this model. Thus the commutation relations lead to a foliation of the parameter space in elliptic curves given by the intersection of the two quadrics Eq.~(\ref{e:gd}), that is, to an elliptic parameterization (in the so-called principal regime \cite{Bax82}): \begin{mathletters} \label{e:parabax} \begin{eqnarray} a &=& \rho {\:{\rm sn}\,}(\eta-\nu) \\ b &=& \rho {\:{\rm sn}\,}(\eta+\nu) \\ c &=& \rho {\:{\rm sn}\,}(2\eta) \\ d &=& -\rho\, k {\:{\rm sn}\,}(2\eta){\:{\rm sn}\,}(\eta-\nu) {\:{\rm sn}\,}(\eta+\nu) \end{eqnarray} \end{mathletters} where ${\:{\rm sn}\,}$ denotes the Jacobian elliptic function and $k$ its modulus. It is also well known that the transfer matrix $T(a,b,c,d)$ commutes with the Hamiltonian of the anisotropic Heisenberg chain \cite{Sut70}: \begin{equation} {\cal H} = -\sum_i \left( J_x \sigma^x_i\sigma^x_{i+1} + J_y \sigma^y_i\sigma^y_{i+1} + J_z \sigma^z_i\sigma^z_{i+1} \right) \end{equation} if: \begin{equation} \label{e:heisass} 1:\Gamma(a,b,c,d):\Delta(a,b,c,d) = J_x:J_y:J_z \;. \end{equation} This means that, given the three coupling constants $J_x$, $J_y$, and $J_z$ of a Heisenberg Hamiltonian, there exist infinitely many quadruplets $(a,b,c,d)$ of parameters such that: \begin{equation} [T(a,b,c,d),{\cal H}(J_x,J_y,J_z)] = 0 \;. \end{equation} Indeed the three constants $J_x$, $J_y$, and $J_z$ determine uniquely $\eta$ and $k$ in the elliptic parameterization (\ref{e:parabax}) and the spectral parameter $\nu$ can take any value, thus defining a continuous one-parameter family.
Not only do $T$ and ${\cal H}$ commute for arbitrary values of the parameter $\nu$, but ${\cal H}$ is also related to the logarithmic derivative of $T$ at $\nu=\eta$. In this work, we examine only regions with the extra condition $c=d$, to ensure that $T$ is symmetric and thus that the spectrum is real. Using the symmetries of the eight-vertex model, one finds that the model $(a,b,c,d)$ with $c=d$, mapped into its principal regime, gives a model $(\bar a, \bar b, \bar c,\bar d)$ with $\bar a = \bar b$. In terms of the elliptic parameterization this means ${\:{\rm sn}\,}(\eta-\nu)={\:{\rm sn}\,}(\eta+\nu)$, or $\nu=0$. In summary, in the continuous one-parameter family of commuting transfer matrices $T(\nu)$ corresponding to a given value of $\Delta$ and $\Gamma$, there are two special values of the spectral parameter $\nu$: $\nu=\eta$ is related to the Heisenberg Hamiltonian ${\cal H}(1,\Gamma,\Delta)$, and for $\nu=0$ the transfer matrix $T(\nu)$ is symmetric (up to a gauge transformation). \subsubsection{Six-Vertex Model} The six-vertex model is a special case of the eight-vertex model: one disallows the last two vertex configurations of Fig.~\ref{f:vertices}, which means $d=d'=0$. Both the symmetric and the asymmetric six-vertex model have been analyzed using the Bethe ansatz or the Yang--Baxter equations \cite{Bax82,LiWu72,Nol92}. We did not examine this situation any further since the condition $c=d$ needed to have a real spectrum (see paragraph \ref{s:c=d}(i)) leads to a trivial case. \subsubsection{Free-Fermion Condition} Another case where the asymmetric eight-vertex model can be solved is when the Boltzmann weights verify the so-called {\em free-fermion} condition: \begin{equation} \label{e:ff} aa'+ bb' = cc'+ dd' \end{equation} Under condition (\ref{e:ff}) the model reduces to a quantum problem of free fermions and the partition function can thus be computed \cite{FaWu70,Fel73c}. The free-fermion asymmetric eight-vertex model is Yang--Baxter integrable; however, the parameterization of the Yang--Baxter equations is more involved than in the situation described in Section \ref{ss:s8v}: the row-to-row and column-to-column commutations correspond to two different foliations of the parameter space in algebraic surfaces. It is also known that the asymmetric eight-vertex free-fermion model can be mapped onto a checkerboard Ising model. In Appendix \ref{a:ff} we give the correspondence between the vertex model and the spin model. The partition function per site of the model can be expressed in terms of elliptic functions $E$ which are not (due to the complexity of the parameterization of the Yang--Baxter equations) straightforwardly related to the two sets of surfaces parameterizing the Yang--Baxter equations, or even to the canonical elliptic parameterization of the generic (non-free-fermion) asymmetric eight-vertex model (see Eqs.~(\ref{e:paraas}) below; see also \cite{BeMaVi92}). The elliptic modulus of these elliptic functions $E$ is given in Appendix \ref{a:ff} as a function of the checkerboard Ising variables, as well as in terms of the homogeneous Boltzmann weights ($a$, $a'$, $b$, $b'$, $c$, $c'$, $d$, and $d'$) of the free-fermion asymmetric eight-vertex model. Finally, we remark that the restriction $c=d$ is compatible with condition (\ref{e:ff}) and that, in contrast with the asymmetric six-vertex model, the asymmetric free-fermion model provides a case where the row-to-row transfer matrix of the model is symmetric.
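As an illustration of the commuting one-parameter families of Section \ref{ss:s8v}, the following sketch (Python/SciPy; the values of $\eta$, $k$, $\nu_{1,2}$ and the size $L$ are arbitrary illustrative choices, and \texttt{transfer\_matrix} is the function sketched in Sec.~\ref{s:transfer}) builds two symmetric-model transfer matrices on the same elliptic curve, i.e.\ with the same $\Gamma$ and $\Delta$, and checks that they commute up to rounding:
\begin{verbatim}
from scipy.special import ellipj

def baxter_weights(eta, nu, k, rho=1.0):
    """Weights from the elliptic parameterization (e:parabax);
    scipy's ellipj takes the parameter m = k**2."""
    sn = lambda u: ellipj(u, k * k)[0]
    a = rho * sn(eta - nu)
    b = rho * sn(eta + nu)
    c = rho * sn(2 * eta)
    d = -rho * k * sn(2 * eta) * sn(eta - nu) * sn(eta + nu)
    return a, b, c, d

eta, k, L = 0.7, 0.5, 6
a1, b1, c1, d1 = baxter_weights(eta, 0.15, k)
a2, b2, c2, d2 = baxter_weights(eta, 0.40, k)
T1 = transfer_matrix(L, a1, a1, b1, b1, c1, d1)
T2 = transfer_matrix(L, a2, a2, b2, b2, c2, d2)
print(np.linalg.norm(T1 @ T2 - T2 @ T1))   # expected ~ 0
\end{verbatim}
On the same numbers one can also verify that $\Gamma$ and $\Delta$ of Eq.~(\ref{e:gd}) are indeed independent of $\nu$.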
\subsubsection{Disorder Solutions} If the parameters $a$, $a^\prime$, $b$, $b^\prime$, $c$, and $d$ are chosen such that the $R$-matrix (\ref{e:Rmat}) has an eigenvector which is a pure tensorial product: \begin{equation} \label{e:condeso} \cal{R} \left( \begin{array}{c} 1 \\ p \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ q \end{array} \right) = \lambda \left( \begin{array}{c} 1 \\ p \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ q \end{array} \right) \end{equation} then the vector: \begin{equation} \label{e:vectorpq} \left( \begin{array}{c} 1 \\ p \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ q \end{array} \right) \otimes \cdots \otimes \left( \begin{array}{c} 1 \\ p \end{array} \right) \otimes \left( \begin{array}{c} 1 \\ q \end{array} \right) \end{equation} ($2 L$ factors) is an eigenvector of the diagonal(-to-diagonal) transfer matrix $\tilde T_L$, usually simply called the diagonal transfer matrix. The corresponding eigenvalue is $\Lambda=\lambda^{2L}$, with \begin{equation} \lambda = {aa'-bb'+cc'-dd' \over (a+a')-(b+b')} \; . \label{e:lambda} \end{equation} However, the eigenvalue $\Lambda$ may, or may not, be the eigenvalue of largest modulus. This corresponds to the existence of so-called disorder solutions \cite{JaMa85} for which some dimensional reduction of the model occurs \cite{GeHaLeMa87}. Condition (\ref{e:condeso}) is simple to express; it reads: \begin{eqnarray} \label{e:condeso2} \lefteqn{A^2 + B^2 + C^2 + D^2 + 2 A B - 2 A D - 2 B C - 2 C D= } \nonumber \\ & & (A+B-C-D) (a + b) (a^\prime + b^\prime) -(A-D)(b^2 + b^{\prime 2}) - (B-C) (a^2 +a^{\prime 2}) \end{eqnarray} where $A=aa^\prime$, $B=bb^\prime$, $C=cc^\prime$, and $D=dd^\prime$. Note that in the symmetric case $a=a^\prime$, $b=b^\prime$, $c=c^\prime$, and $d=d^\prime$, Eq.~(\ref{e:condeso2}) factorizes as: \begin{equation} (a - b + d - c) (a - b + d + c) (a - b - d - c) (a - b - d + c) = 0 \nonumber \end{equation} which is the product of terms giving two disorder varieties and two critical varieties of the Baxter model. It is known that the symmetric model has four disorder varieties (one of them, $a+b+c+d=0$, is not in the physical domain of the parameter space) and four critical varieties \cite{Bax82}. The missing varieties can be obtained by replacing ${\cal R}$ by ${\cal R}^2$ in Eq.~(\ref{e:condeso}). In our numerical calculations we have always found for the asymmetric eight-vertex model that $\Lambda$ is either the eigenvalue of largest modulus or the eigenvalue of smallest modulus. Finally, note that condition (\ref{e:condeso2}) does {\em not} correspond to a solution of the Yang--Baxter equations. This can be understood since disorder conditions like (\ref{e:condeso2}) are not invariant under the action of the infinite discrete symmetry group $\Gamma$ presented in the next subsection, whereas the solutions of the Yang--Baxter equations are actually invariant under the action of this group \cite{BMV:vert,Ma86}. On the other hand, similarly to the Yang--Baxter equations, the ``disorder solutions'' can be seen to correspond to families of commuting diagonal transfer matrices $\tilde T_L$ on a subspace $V$ of the $2^{2L}$-dimensional space on which $\tilde T_L$ acts: \begin{equation} [\tilde T_L(a,a',b,b',c,d), \tilde T_L(\bar a,\bar a',\bar b,\bar b',\bar c,\bar d)] \bigm|_V =0\;, \end{equation} where the subscript $V$ means that the commutation is only valid on the subspace $V$. Actually, this subspace is the one-dimensional subspace spanned by the vector (\ref{e:vectorpq}).
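Writing the product vector of Eq.~(\ref{e:condeso}) as $(1,q,p,pq)$, the eigenvalue problem splits into the two $2\times2$ blocks of the $R$-matrix: condition (\ref{e:condeso}) states (over the complex numbers) that the block built on $a$, $a'$, $d$ and the block built on $b$, $b'$, $c$ share an eigenvalue $\lambda$; eliminating $\lambda^2$ between the two characteristic polynomials indeed reproduces Eq.~(\ref{e:lambda}). This gives a very simple numerical test (a sketch for the case $c'=c$, $d'=d$ used in our calculations):
\begin{verbatim}
def disorder_lambda(a, ap, b, bp, c, d, tol=1e-10):
    """Eigenvalues shared by the two 2x2 blocks of the R-matrix; the
    list is non-empty exactly on the disorder variety (e:condeso2),
    and the shared value then reproduces lambda of Eq. (e:lambda)."""
    lamD = np.linalg.eigvals(np.array([[a, d], [d, ap]]))
    lamC = np.linalg.eigvals(np.array([[b, c], [c, bp]]))
    return [l1 for l1 in lamD for l2 in lamC if abs(l1 - l2) < tol]
\end{verbatim}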
The notion of transfer matrices commuting only on a subspace $V$ can clearly have valuable consequences for the calculation of the eigenvectors and eigenvalues, and hopefully of the partition function per site. One sees that the Yang--Baxter integrability and the disorder-solution ``calculability'' are two limiting cases where $V$ corresponds, respectively, to the entire space on which $\tilde T_L$ acts and to a single vector, namely Eq.~(\ref{e:vectorpq}). \subsection{Some Exact Results on the Asymmetric Eight-Vertex Model} \label{ss:exares} When the Boltzmann weights of the model do not verify any of the conditions of the preceding section, the partition function of the model has not yet been calculated. However, some analytical results can be stated. Algebraic varieties of the parameter space can be exhibited which have very special symmetry properties. The intersections of these algebraic varieties with the critical manifolds of the model are candidates for multicritical points \cite{hm1}. We have tested the properties of the spectrum of the transfer matrices on these loci of the phase space. There actually exists an infinite discrete group of symmetries of the parameter space of the asymmetric eight-vertex model (and, beyond it, of the sixteen-vertex model \cite{BeMaVi92}). The critical manifolds of the model have to be compatible with this group of symmetries, and this is also true for any exact property of the model: for instance, if the model is Yang--Baxter integrable, the Yang--Baxter equations are compatible with this infinite symmetry group \cite{BMV:vert}. However, it is crucial to recall that this symmetry group is not restricted to the Yang--Baxter integrability. It is a symmetry group of the model {\em beyond} the integrable framework and provides for instance a canonical elliptic foliation of the parameter space of the model (see the concept of quasi-integrability \cite{BeMaVi92}). The group is generated by simple transformations of the homogeneous parameters of the model: the matrix inversion $I$ and some representative geometrical symmetries, as for example the geometrical symmetry of the square lattice, which amounts to a simple exchange of $c$ and $d$: \[ t_1 \left ( \begin {array}{cccc} a &0 &0 &d \\ 0 &b &c &0 \\ 0 &c &b'&0 \\ d &0 &0 &a' \end {array} \right ) = \left ( \begin {array}{cccc} a&0&0&c\\ 0&b&d&0\\ 0&d&b^\prime&0\\ c&0&0& a^\prime \end {array} \right ) \] Combining $I$ and $t_1$ yields an infinite discrete group $\Gamma$ of symmetries of the parameter space \cite{Ma86}. This group is isomorphic to the infinite dihedral group (up to a semi-direct product with ${\cal Z}_2$). An infinite-order generator of the non-trivial part of this group is for instance $t_1 \cdot I $. In the parameter space of the model this generator yields an infinite set of points located on elliptic curves. The analysis of the orbits of the group $\Gamma$ for the asymmetric eight-vertex model yields (a finite set of) elliptic curves given by: \begin{equation} \label{e:paraas} \frac{(a a' + b b' - c c' - d d')^2}{a a' b b'} ={\rm const} ,\quad \frac{a a' b b'}{c c' d d'} = {\rm const} \end{equation} and \[ \frac{a}{a'} = {\rm const} ,\quad \frac{b}{b'} = {\rm const} ,\quad \frac{c}{c'} = {\rm const} ,\quad \frac{d}{d'} = {\rm const}. \] In the limit of the symmetric eight-vertex model one recovers the well-known elliptic curves (\ref{e:gd}) of the Baxter model, given by the intersection of two quadrics.
Recalling the parameterization (\ref{e:parabax}), one sees that $t_1\cdot I$, the infinite-order generator of $\Gamma$, is actually represented as a shift by $\eta$ of the spectral parameter: $\nu \rightarrow \nu+\eta$. The group $\Gamma$ is generically infinite; however, if some conditions on the parameters hold, it degenerates into a finite group. These conditions define algebraic varieties on which the model has a high degree of symmetry. The location of multicritical points seems to correspond to enhanced symmetries, namely to the algebraic varieties where the symmetry group $\Gamma$ degenerates into a finite group \cite{hm1}. Such conditions of commensuration of the shift $\eta$ with one of the two periods of the elliptic functions have occurred many times in the literature of theoretical physics (Tutte--Behara numbers, rational values of the central charge and of critical exponents \cite{MaRa83}). Furthermore, from the conformal field theory literature, one may expect free-fermion parastatistics on these algebraic varieties of enhanced symmetry \cite{DaDeKlaMcCoMe93}. It is thus natural to concentrate on them. We have therefore determined an {\em infinite} number of these algebraic varieties, which are remarkably {\em codimension-one} varieties of the parameter space. Their explicit expressions quickly become very large in terms of the homogeneous parameters of the asymmetric eight-vertex model; however, they are remarkably simple in terms of some algebraic invariants generalizing those of the Baxter model, namely: \begin{mathletters} \begin{eqnarray} J_x & = & \sqrt{ a a' b b'} + \sqrt{c c' d d'} \\ J_y & = & \sqrt{ a a' b b' }- \sqrt{c c' d d'} \\ J_z & = &{{ a a' + b b' - c c' - d d'}\over {2}}. \end{eqnarray} \end{mathletters} Note that, in the symmetric subcase, one recovers Eqs.~(\ref{e:heisass}). In terms of these well-suited homogeneous variables, it is possible to extend the ``shift doubling'' ($ \eta \rightarrow 2 \eta$) and ``shift tripling'' ($ \eta \rightarrow 3 \eta$) transformations of the Baxter model to the asymmetric eight-vertex model. One gets for the shift doubling transformation: \begin{mathletters} \label{doubling} \begin{eqnarray} J_x' & = & J_z^2 J_y^2 - J_x^2 J_y^2- J_z^2 J_x^2 \\ J_y' & = & J_z^2 J_x^2 - J_x^2 J_y^2- J_z^2 J_y^2 \\ J_z' & = & J_x^2 J_y^2 - J_z^2 J_x^2- J_z^2 J_y^2 \end{eqnarray} \end{mathletters} and for the shift tripling transformation: \begin{mathletters} \label{three} \begin{eqnarray} J_x'' & = & \left (-2 J_z^2J_y^2J_x^4 -3 J_y^4J_z^4+2 J_y^2J_z^4J_x^2+J_y^4 J_x^4+2 J_y^4J_z^2J_x^2+J_z^4J_x^4 \right ) \cdot J_x \\ J_y'' & = & \left (2 J_z^2J_y^2J_x^4 -3 J_z^4J_x^4+J_y^4J_x^4 -2 J_y^4J_z^2J_x^2+J_y^4J_z^4 +2 J_y^2J_z^4J_x^2 \right ) \cdot J_y \\ J_z'' & = & \left (J_y^4J_z^4+2 J_y^4J_z^2J_x^2 -3 J_y^4J_x^4 -2 J_y^2J_z^4J_x^2+2 J_z^2J_y^2J_x^4 +J_z^4J_x^4 \right ) \cdot J_z \end{eqnarray} \end{mathletters} The simplest codimension-one finite-order varieties are: $J_x=0$, $J_y=0$, or $J_z=0$. One remarks that $J_z=0$ is nothing but the free-fermion condition (\ref{e:ff}), which is thus a condition for $\Gamma$ to be finite. Another simple example is: \begin{equation} J_y J_z - J_x J_y - J_x J_z = 0, \end{equation} and the relations obtained by all permutations of $x$, $y$, and $z$. Using the two polynomial transformations (\ref{doubling}) and (\ref{three}), one can easily get an {\em infinite number} of codimension-one algebraic varieties of finite order; the two maps are also easy to iterate numerically, as in the sketch below.
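A minimal numerical sketch of the two maps (Python/NumPy; the test point $J$ is arbitrary) follows. Since both compositions represent the shift $\eta\rightarrow 6\eta$, one expects ``doubling then tripling'' and ``tripling then doubling'' to agree up to an overall scale of the homogeneous variables, which the last lines check:
\begin{verbatim}
def shift_double(J):
    """Shift doubling eta -> 2*eta, Eqs. (doubling)."""
    x2, y2, z2 = J[0]**2, J[1]**2, J[2]**2
    return np.array([z2*y2 - x2*y2 - z2*x2,
                     z2*x2 - x2*y2 - z2*y2,
                     x2*y2 - z2*x2 - z2*y2])

def shift_triple(J):
    """Shift tripling eta -> 3*eta, Eqs. (three)."""
    x, y, z = J
    x2, y2, z2 = x*x, y*y, z*z
    return np.array([
        (-2*z2*y2*x2*x2 - 3*y2*y2*z2*z2 + 2*y2*z2*z2*x2
         + y2*y2*x2*x2 + 2*y2*y2*z2*x2 + z2*z2*x2*x2) * x,
        (2*z2*y2*x2*x2 - 3*z2*z2*x2*x2 + y2*y2*x2*x2
         - 2*y2*y2*z2*x2 + y2*y2*z2*z2 + 2*y2*z2*z2*x2) * y,
        (y2*y2*z2*z2 + 2*y2*y2*z2*x2 - 3*y2*y2*x2*x2
         - 2*y2*z2*z2*x2 + 2*z2*y2*x2*x2 + z2*z2*x2*x2) * z])

J = np.array([0.7, 1.3, 0.4])
u = shift_double(shift_triple(J))
v = shift_triple(shift_double(J))
print(np.allclose(u / v, (u / v)[0]))   # proportional: eta -> 6*eta
\end{verbatim}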
The demonstration that the codimension-one algebraic varieties built in such a way are actually finite-order conditions of $\Gamma$ will be given elsewhere. Some low-order varieties are given in Appendix \ref{a:invariant}. In the next section, the lower-order varieties are tested from the viewpoint of the statistical properties of the transfer matrix spectrum. \section{Results of the RMT Analysis} \label{s:results8v} \subsection{General Remarks} The phase space of the asymmetric eight-vertex model with the constraint $c=d$ (ensuring symmetric transfer matrices and thus real spectra) is a four-dimensional space (five homogeneous parameters $a$, $a'$, $b$, $b'$, and $c$). Many particular algebraic varieties of this four-dimensional space have been presented in the previous section and will now be analyzed from the random matrix theory point of view. We will present the full distribution of eigenvalue spacings and the spectral rigidity at some representative points. Then we will analyze the behavior of the eigenvalue spacing distribution along different paths in the four-dimensional parameter space. These paths will be defined by keeping some Boltzmann weights constant and parameterizing the others by a single parameter $t$. We have generated transfer matrices for various linear sizes, up to $L=16$ vertices, leading to transfer matrices of size up to $65536\times65536$. Tables~\ref{t:aj14} and \ref{t:aj16} give the dimensions of the different invariant subspaces for $L=14$ and $L=16$. Note that the size of the blocks to diagonalize increases exponentially with the linear size $L$. The behavior in the various subspaces is not significantly different. Nevertheless, the statistics is better for larger blocks since the influence of the boundary of the spectrum and finite-size effects are smaller. To get better statistics we have also averaged the results of several blocks for the same linear size $L$. \subsection{Near the Symmetric Eight-Vertex Model} Fig.~\ref{f:pds} presents the probability distribution of the eigenvalue spacings for three different sets of Boltzmann weights, which are listed in Tab.~\ref{t:pdscases}. Fig.~\ref{f:pds}a) corresponds to a point of the symmetric eight-vertex model while the other cases (b) and (c) are results for the asymmetric eight-vertex model. The data points result from about 4400 eigenvalue spacings coming from the ten even subspaces for $L=14$ which are listed in Tab.~\ref{t:aj14}. For the symmetric model (a), using the symmetry under reversal of all arrows, these blocks can once more be split into two sub-blocks of equal size. The broken lines show the exponential and the Wigner distribution as the exact results for the diagonal random matrix ensemble (i.e.\ independent eigenvalues) and for the $2\times2$ GOE matrices. In Fig.~\ref{f:pds}a) the data points fit an exponential very well, whereas in Figs.~\ref{f:pds}b) and \ref{f:pds}c) they are close to the Wigner surmise. In the latter cases we have also added the best-fitting Brody distribution, with the parameter $\beta$ listed in Tab.~\ref{t:pdscases}. The agreement with the Wigner distribution is better for case (c), where the asymmetry expressed by the ratio $a/a'$ is larger. We have also calculated the spectral rigidity to test how accurately the spectra of transfer matrices are described by the spectra of the mathematical random matrix ensembles.
We present in Fig.~\ref{f:d3} the spectral rigidity $\Delta_3(E)$ for the same points in parameter space, corresponding to integrability and to non-integrability, as in Fig.~\ref{f:pds}. The two limiting cases corresponding to Poissonian distributed eigenvalues (solid line) and to GOE distributed eigenvalues (dashed line) are also shown. For the integrable point the agreement between the numerical data and the expected rigidity is very good. For the non-integrable case the departure of the rigidity from the expected behavior appears at $E \approx 2$ in case (b) and at $E\approx6$ in case (c) (in units of eigenvalue spacings), indicating that the RMT analysis is only valid at short scales. Such behavior has already been seen in quantum spin systems \cite{MoPoBeSi93,BrAdA96}. We stress that the numerical results concerning the rigidity depend much more on the unfolding than the results concerning the spacing distribution. Summarizing the results for the eigenvalue spacing distribution and the rigidity, we have found very good agreement with the Poissonian ensemble for the symmetric eight-vertex model (a), good agreement with the GOE for the asymmetric model (c), and some intermediate behavior for the asymmetric eight-vertex model (b). The difference between the behavior in the cases (b) and (c) can be explained by the larger asymmetry in case (c): case (b) is closer to the integrable symmetric eight-vertex model. To study the proximity to the integrable model, we have determined the `degree' of level repulsion $\beta$ by fitting the Brody distribution to the statistics along a path ($a=t$, $a'=4/t$, $b=b'=4/5$, $c=\sqrt{5/8}$) joining the cases (a) and (c), for different lattice sizes. The result is shown in Fig.~\ref{f:fss}; the details about the number of blocks and eigenvalue spacings used in the distributions are listed in Tab.~\ref{t:fss}. A finite-size effect is seen: we always find $\beta\approx0$ for the symmetric model at $a/a'=1$, and increasing the system size leads to a better coincidence with the Wigner distribution ($\beta=1$) for the non-integrable asymmetric model $a\not=a'$. So in the limit $L\rightarrow \infty$ we claim that a $\delta$-peak develops at the symmetric point $a=a'=2$. We have also found that the size effects are really controlled by the length $L$ and not by the size of the block. However, our finite-size analysis is only qualitative. There is an uncertainty on $\beta$ of about $\pm0.1$. There are two possible sources for this uncertainty. The first one is a simple statistical effect and could be reduced by increasing the number of spacings. The second one is a more inherent problem due to the empirical parameters in the unfolding procedure. This source of errors cannot be suppressed by increasing the size $L$. For a quantitative analysis of the finite-size effects it would be necessary to have a high accuracy on $\beta$ and to vary $L$ over a large range, which is not possible because of the exponential growth of the block sizes with~$L$. To test a possible extension of the critical variety $a=b+c+d$ outside the symmetric region $a=a'$, $b=b'$, we have performed similar calculations along the path ($a=t$, $a'=4/t$, $b=b'=4/5$, $c=3/5$) crossing the symmetric integrable variety at a critical point $t=2$. The results are the same: we did not find any kind of Poissonian behavior when $t \neq 2$. We have tested one single path and not the whole neighborhood around the Baxter critical variety.
This would be necessary if one really wants to test a possible relation between Poissonian behavior and criticality rather than integrability (both properties being often related). The possible relation between Poissonian behavior and criticality will be discussed for spin models in the second paper. We conclude, from all these calculations, that the analysis of the properties of the unfolded spectrum of transfer matrices provides an efficient way to detect integrable models, as already known for the energy spectrum of quantum systems \cite{PoZiBeMiMo93,HsAdA93,MoPoBeSi93,vEGa94}. \subsection{Case of Poissonian Behavior for the Asymmetric Eight-Vertex Model} We now investigate the phase space far from the Baxter model. We define paths in the asymmetric region which cross the varieties introduced above but which do not cross the Baxter model. These paths and their intersections with the different varieties are summarized in Tab.~\ref{t:paths}. Fig.~\ref{f:beta1} corresponds to the path (a) ($a=4/5$, $a'=5/4$, $b=b'=t$, $c=1.3$). This defines a path which crosses the free-fermion variety at the solution of Eq.~(\ref{e:ff}), $t=t_{\rm ff}=\sqrt{2.38}$, and the disorder variety at the two solutions of Eq.~(\ref{e:condeso}), $t=t_{\rm di}^{\rm max}\approx1.044$ and $t=t_{\rm di}^{\rm min}\approx1.0056$ (the subscript ``di'' stands for disorder). See Tab.~\ref{t:paths} for the intersections with the other varieties. We have numerically found that, at the point $t=t_{\rm di}^{\rm max}$, the eigenvalue (\ref{e:lambda}) is the one of largest modulus, whereas at $t=t_{\rm di}^{\rm min}$ it is the eigenvalue of smallest modulus (hence the superscripts max and min). The results shown are obtained using the representation $R=0$ for $L=16$ (see Tab.~\ref{t:aj16}). After unfolding and discarding boundary states we are left with a distribution of about 1100 eigenvalue spacings. One clearly sees that, most of the time, $\beta$ is of the order of one, signaling that the spacing distribution is very close to the Wigner distribution, except for $t=t_{\rm ff}\approx1.54$ and for $t$ close to the disorder solutions, where $\beta$ is close to zero. This is not surprising for $t=t_{\rm ff}$ since the model is Yang--Baxter integrable at this point. The value $\beta(t_{\rm ff})$ is slightly negative: this is related to a `level attraction' already noted in \cite{hm4}. The downward peak is very sharp and a good approximation of the $\delta$ peak that we expect for an infinite size. At $t=t_{\rm di}^{\rm min}$ and $t = t_{\rm di}^{\rm max}$ the model is {\em not} Yang--Baxter integrable. We cannot numerically resolve these two nearby points. Therefore, we now study paths where the two disorder solutions are clearly distinct. For Fig.~\ref{f:beta2} they are both below the free-fermion point, while for Fig.~\ref{f:beta3} the free-fermion point is between the two disorder solution points. Each of the Figs.~\ref{f:beta3} and \ref{f:beta2} shows the results for two paths which differ only by an exchange of the two weights $a\leftrightarrow a'$. In Fig.~\ref{f:beta3} one clearly sees a peak, with $\beta$ slightly negative, at the free-fermion point $t=0.8$ and another one at the disorder solution point $t=t_{\rm di}^{\rm max}\approx1.46$ for both curves, but no peak at the second disorder solution point $t=t_{\rm di}^{\rm min}\approx0.55$. It is remarkable that only the point $t=t_{\rm di}^{\rm max}$ yields the eigenvalue of largest modulus for the diagonal(-to-diagonal) transfer matrix.
Consequently, one has the partition function per site of the model at this point. At the point $t=t_{\rm di}^{\rm min}$, where the partition function is not known, we find level repulsion. However, only for path (c) is the degree of level repulsion $\beta$ close to unity, while for path (b) it saturates at a much smaller value. Another difference between the cases (b) and (c) is a minimum in the curve of $\beta(t)$ for path (c) at $t\approx 1.8$ which is not seen for path (b). We do not have a theoretical explanation for these phenomena yet: these points are not located on any of the varieties presented in this paper. We stress that an explanation cannot straightforwardly be found in the framework of the symmetry group $\Gamma$ presented here, since $a$ and $a'$ appear only through the product $aa'$. It also cannot be found in the Yang--Baxter framework, since $a$ and $a'$ are on the same footing in the Yang--Baxter equations. In Fig.~\ref{f:beta2} the curves of $\beta$ for the two paths (d) and (e) again coincide very well at the free-fermion point $t=t_{\rm ff}\approx1.61$. But the behavior is very different for $t<1$, where the solutions of the disorder variety are located. For the path (d) neither of the two disorder points is seen on the curve $\beta(t)$, which is almost stationary near a value around 0.6. This means that some eigenvalue repulsion occurs, but the entire distribution is not very close to the Wigner surmise. On the contrary, for path (e) the spacing distribution is very close to a Poissonian distribution ($\beta(t) \approx 0$) when $t$ is between the two disorder points. This suggests that the status of the eigenvalue spectrum on the disorder variety of the asymmetric eight-vertex model is not simple: a more systematic study would help to clarify the situation. We now summarize the results of Figs.~\ref{f:beta1}--\ref{f:beta2}: generally, the statistical properties of the transfer matrix spectra of the asymmetric eight-vertex model are close to those of the GOE, except on some algebraic varieties. We always find a very sharp peak with $\beta\rightarrow 0$ at the free-fermion point, often reaching $\beta\approx-0.2$. All other points with $\beta \rightarrow 0$ are found to be solutions of the disorder variety (\ref{e:condeso2}) of the asymmetric eight-vertex model. \subsection{Special Algebraic Varieties} To conclude this section we discuss the special algebraic varieties of the symmetry group $\Gamma$. As explained in subsection \ref{ss:exares}, it is possible to construct an infinite number of algebraic varieties where the generator is of finite order $n$: $(t_1\cdot I)^n = {\rm Id}$, and thus $\Gamma$ is of finite order. As an example, the solutions for $n=6$ and $n=16$ are given in Appendix \ref{a:invariant}. We have actually calculated a third variety, the expression of which is too long to be given ($n=8$). We give in Tab.~\ref{t:paths} the values of the parameter $t$ for which each path crosses each variety $t_{\rm fo}^6$, $t_{\rm fo}^8$, $t_{\rm fo}^{16}$ (the subscript ``fo'' stands for finite order and the superscript is the order $n$). It is easy to verify on the different curves that no tendency to Poissonian behavior occurs at these points. We therefore give a negative answer to the question of a special status of {\em generic} points of the algebraic finite-order varieties with respect to the statistical properties of the transfer matrix spectra.
However, one can still imagine that subvarieties of these finite-order varieties could have Poissonian behavior and be candidates for free parafermions or multicritical points. \section{Conclusion and Discussion} We have found that the entire spectrum of the symmetric row-to-row transfer matrices of the eight-vertex model of lattice statistical mechanics is sensitive to the Yang--Baxter integrability of the model. The GOE provides a satisfactory description of the spectrum of non-Yang--Baxter-integrable models: the eigenvalue spacing distribution and the spectral rigidity up to an energy scale of several spacings are in agreement with the Wigner surmise and the rigidity of the GOE matrix spectra. This accounts for ``eigenvalue repulsion''. In contrast, for Yang--Baxter integrable models, the unfolded spectrum has many features of a set of independent numbers: the spacing distribution is Poissonian and the rigidity is linear over a large energy scale. This accounts for ``eigenvalue independence''. However, we have also exhibited a non-Yang--Baxter-integrable disorder solution of the asymmetric eight-vertex model, for parts of which the spectrum is clearly Poissonian, too. This suggests that the Wignerian nature of the spectrum is not completely controlled by the Yang--Baxter integrability alone, but possibly by a more general notion of ``calculability'', perhaps based on the existence of a family of transfer matrices commuting on the same subspace. We have also found some ``eigenvalue attraction'' for some Yang--Baxter integrable models, namely for most points of the free-fermion variety. These results may come as a surprise, since one does not a priori expect properties involving all the $2^L$ eigenvalues when only the few eigenvalues of largest modulus have a physical significance. However, the eigenvalues of small modulus control the finite-size effects, and it is well known that, for example, the critical behavior (critical exponents) can be deduced from the finite-size effects. The nature of the eigenvalue spacing distribution being an effective criterion, we have also used it to test, unsuccessfully, various special manifolds, including the vicinity of the critical variety of the Baxter model. We will present in a forthcoming publication a similar study of spin models (rather than vertex models). In particular it is interesting to analyze the spectrum on a critical, but not Yang--Baxter integrable, algebraic variety of codimension one, as can be found in the $q=3$ Potts model on a triangular lattice with three-spin interactions \cite{WuZi81}. However, this leads to models whose transfer matrix cannot be made symmetric. This will require a particular study of complex spectra, which is much more complicated. In particular the eigenvalue repulsion becomes two-dimensional, and to investigate the eigenvalue spacing distribution, one has to analyze the distances between eigenvalues in two dimensions. \acknowledgments We would like to thank Henrik Bruus for many discussions concerning random matrix theory.
\section{Guidelines} It is well known that the effects on the physics of a field, due to a much heavier field coupled to the former, are not detectable at energies comparable to the lighter mass. More precisely, the Appelquist-Carazzone (AC) theorem~\cite{ac} states that for a Green's function with only light external legs, the effects of the heavy loops are either absorbable in a redefinition of the bare couplings or suppressed by powers of $k/M$, where $k$ is the energy scale characteristic of the Green's function (presumably comparable to the light mass), and $M$ is the heavy mass. However, the AC theorem does not allow one to make any clear prediction when $k$ becomes close to $M$ and, in this region, one should expect some non-perturbative effect due to the onset of new physics. \par In the following we shall make use of Wilson's renormalization group (RG) approach to discuss the physics of the light field from the infrared region up to and beyond the mass of the heavy field. Incidentally, the RG technique has already been employed to prove the AC theorem~\cite{girar}. The RG establishes the flow equations of the various coupling constants of the theory for any change in the observational energy scale; moreover, the improved RG equations, originally derived by F.J. Wegner and A. Houghton~\cite{weg}, where the mixing of all couplings (relevant and irrelevant) generated by the blocking procedure is properly taken into account, should allow one to handle the non-perturbative features arising when the heavy mass threshold is crossed. \par We shall discuss the simple case of two coupled scalar fields and, since we are interested in the modifications of the parameters governing the light field due to the heavy loops, we shall consider the functional integration of the heavy field only. The action at a given energy scale $k$ is \begin{equation} S_k(\phi,\psi)=\int d^4 x~\left ({1\over 2} \partial_\mu \phi \partial^\mu \phi+ {1\over 2} W(\phi,\psi) \partial_\mu \psi \partial^\mu \psi+ U(\phi,\psi) \right ), \label{eq:acteff} \end{equation} with polynomial $W$ and $U$ \begin{equation} U(\phi,\psi)=\sum_{m,n}{{G_{2m,2n} \psi^{2m}\phi^{2n}}\over {(2n)!(2m)!}}; ~~~~~~~~~~~~~~~~~~~~ W(\phi,\psi)=\sum_{m,n}{{H_{2m,2n}\psi^{2m}\phi^{2n}} \over {(2n)!(2m)!}}. \label{eq:svil} \end{equation} Since we want to focus on the light field, which we choose to be $\psi$, we have simply set to 1 the wave function renormalization of $\phi$. In the following we shall analyse the symmetric phase of the theory with vanishing vacuum energy $G_{0,0}=0$. \par We do not discuss here the procedure employed~\cite{jan} to deduce the coupled RG equations for the couplings in Eq.~\ref{eq:svil}, because it is thoroughly explained in the quoted reference. Since it is impossible to handle an infinite set of equations, a truncation in the sums in Eq.~\ref{eq:svil} is required: we keep in the action only terms that do not exceed the sixth power in the fields and their derivatives. Moreover we choose the initial condition for the RG equations at a fixed ultraviolet scale $\Lambda$ where we set $H_{0,0}=1$, $G_{0,4}= G_{2,2}=G_{4,0}=0.1$ and $G_{0,6}=G_{2,4}=G_{4,2}=G_{6,0}= H_{2,0}=H_{0,2}=0$, and the flow of the various couplings is determined as a function of $t=\ln \left (k/ \Lambda\right )$, for negative $t$. \begin{figure} \psfig{figure=fig1.ps,height=4.3cm,width=12.cm,angle=90} \caption{ (a): $G_{0,2}(t)/\Lambda^2$ (curve (1)) and $10^6\cdot G_{2,0}(t)/\Lambda^2$ (curve (2)) vs $t=\ln\left({{k}/{\Lambda}}\right)$.
\break (b): $G_{2,2}(t)$ (1), $G_{0,4}(t)$ (2), $G_{4,0}(t)$ (3) vs $t$. \label{fig:funo}} \end{figure} \par In Fig.~\ref{fig:funo}(a) $G_{0,2}(t)/\Lambda^2$ (curve (1)) and $10^6 \cdot G_{2,0}(t)/\Lambda^2$ (curve (2)) are plotted. Clearly the heavy and the light masses become stable going toward the IR region and their value at $\Lambda$ has been chosen in such a way that the stable IR values are $M\equiv\sqrt {G_{0,2}(t=-18)}\sim 10^{-4}\cdot \Lambda$ and $m\equiv\sqrt{G_{2,0}(t=-18)}\sim 2\cdot 10^{-7}\cdot\Lambda$. So, in principle, there are three scales: $\Lambda$ ($t=0$), the heavy mass $M$ ($t\sim -9.2$), and the light mass $m$ ($t\sim -16.1$). In Fig.~\ref{fig:funo}(b) the three renormalizable dimensionless couplings are shown; the clear change around $t=-9.2$, that is $k \sim M$, is evident and the curves become flat below this value. The other four non-renormalizable couplings included in $U$ are plotted in Fig.~\ref{fig:fdue}(a), in units of $\Lambda$. Again everything is flat below $M$, and the values of the couplings in the IR region coincide with their perturbative Feynman-diagram estimate at the one-loop level; it is easy to realize that they are proportional to $1/M^2$, which, in units of $\Lambda$, is a large number. Thus the large values in Fig.~\ref{fig:fdue}(a) are just due to the scale employed and, since these four couplings, for any practical purpose, must be compared to the energy scale at which they are calculated, it is physically significant to plot them in units of the running cutoff $k$: the corresponding curves are displayed in Fig.~\ref{fig:fdue}(b); in this case the couplings are strongly suppressed below $M$. \begin{figure} \psfig{figure=fig2.ps,height=4.3cm,width=12.cm,angle=90} \caption{ (a): $G_{6,0}(t)\cdot \Lambda^2$ (1), $G_{0,6}(t)\cdot \Lambda^2$ (2), $G_{4,2}(t)\cdot \Lambda^2$ (3) and $G_{2,4}(t)\cdot \Lambda^2$ (4) vs $t$.\break (b): $G_{6,0}(t)\cdot k^2$ (1), $G_{0,6}(t)\cdot k^2$ (2), $G_{4,2}(t)\cdot k^2$ (3) and $G_{2,4}(t)\cdot k^2$ (4) vs $t$. \label{fig:fdue}} \end{figure} \par It must be remarked that there is no change in the couplings when the light mass threshold is crossed. This is a consequence of having integrated the heavy field only: in this case one could check directly from the equations governing the coupling-constant flow that a shift in the initial value $G_{2,0}(t=0)$ has the only effect (as long as one remains in the symmetric phase) of modifying $G_{2,0}(t)$, leaving the other curves unchanged. Therefore the results obtained are independent of $m$ and do not change even if $m$ becomes much larger than $M$. An example of the heavy mass dependence is shown in Fig.~\ref{fig:ftre}(a), where $G_{6,0}(t)$ is plotted, in units of the running cutoff $k$, for three different values of $G_{0,2}(t=0)$, which correspond respectively to $M/\Lambda\sim 2\cdot 10^{-6}$, (1), $M/\Lambda\sim 10^{-4}$, (2) and $M/\Lambda\sim 0.33$, (3). Note, in each curve, the change of slope when the $M$ scale is crossed. $H_{0,0}=1,~~H_{0,2}=0$ is a constant solution of the corresponding equations for these two couplings; on the other hand $H_{2,0}$ is not constant and it is plotted in units of the running cutoff $k$ in Fig.~\ref{fig:ftre}(b), for the three values of $M$ quoted above.
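Although the coupled flow equations themselves are given in Ref.~\cite{jan} and are not reproduced here, the overall numerical procedure can be sketched schematically as follows (Python/SciPy; \texttt{rg\_flow} is an explicit placeholder for the actual Wegner--Houghton beta functions, and the initial values of $G_{2,0}$ and $G_{0,2}$ must additionally be tuned to reproduce the quoted infrared masses):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# couplings kept after truncating U and W at sixth power in the fields
NAMES = ["G20", "G02", "G40", "G22", "G04",
         "G60", "G42", "G24", "G06", "H00", "H20", "H02"]

def rg_flow(t, g):
    """Placeholder for the Wegner-Houghton flow equations of Ref. [jan];
    dg/dt depends on all couplings g and on k = Lambda * exp(t)."""
    return np.zeros_like(g)         # to be replaced by the actual betas

g0 = dict.fromkeys(NAMES, 0.0)
g0.update(H00=1.0, G40=0.1, G22=0.1, G04=0.1)  # conditions at Lambda (t=0)
sol = solve_ivp(rg_flow, (0.0, -18.0),         # integrate towards the IR
                [g0[n] for n in NAMES], dense_output=True)
\end{verbatim}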
\begin{figure} \psfig{figure=fig3.ps,height=4.3cm,width=12.cm,angle=90} \caption{ (a): $G_{6,0}(t)\cdot k^2$ vs $t$ for $M/\Lambda \sim 2\cdot 10^{-6}$ (1), $\sim 10^{-4}$ (2), $\sim 0.33$ (3).\break (b): $H_{2,0}(t)\cdot k^2$ for the three values of $M/\Lambda$ quoted in (a). \label{fig:ftre}} \end{figure} \par In conclusion, according to the AC theorem all couplings are constant at low energies and a change in the UV physics can only shift their values in the IR region. Remarkably, for increasing $t$, no trace of UV physics shows up until one reaches $M$, which acts as a UV cutoff for the low energy physics. Moreover, below $M$, no non-perturbative effect appears due to the non-renormalizable couplings, which vanish rapidly in units of $k$. Their behavior above $M$ is to some extent constrained by the renormalizability condition fixed at $t=0$, as clearly shown in Fig.~\ref{fig:ftre}(a) (3). Finally, the peak of $H_{2,0}$ at $k\sim M$ in Fig.~\ref{fig:ftre}(b), whose width and height are practically unchanged in the three examples, is a signal of non-locality of the theory limited to the region around $M$. \section*{Acknowledgments} A.B. gratefully acknowledges Fondazione A. Della Riccia and INFN for financial support. \section*{References}
\section{Introduction} \begin{figure} \vspace*{13pt} \vspace*{6.7truein} \special{psfile=figa.ps voffset= 240 hoffset= -40 hscale=50 vscale=50 angle = 0} \special{psfile=figb.ps voffset= 240 hoffset= 220 hscale=50 vscale=50 angle = 0} \special{psfile=figc.ps voffset= -40 hoffset= 90 hscale=50 vscale=50 angle = 0} \caption{Histogram of the number of models that yield a particular prediction for $m_{\nu_{\mu}}^2- m_{\nu_{e}}^2$ assuming (a) small angle and (b) large angle solution to solar neutrino problem. In (c) we solve the solar neutrino problem via small angle $e$--$\tau$ oscillations and check whether this is compatible with the LSND result. } \label{fig} \end{figure} In the Standard Model of elementary particles (SM) both lepton number ($L$) and baryon number ($B$) are conserved due to an accidental symmetry, {\sl i.e.} there is no renormalizable, gauge-invariant term that would break the symmetry. In the minimal supersymmetric extension of the SM (MSSM) the situation is different. Due to the variety of scalar partners the MSSM allows for a host of new interactions, many of which violate $B$ or $L$. Since neither $B$ nor $L$ violation has been observed in present collider experiments, these couplings are constrained from above. More constraints arise from neutrino physics or cosmology. Thus, all lepton and baryon number violating interactions are often eliminated by imposing a discrete, multiplicative symmetry called $R$-parity,\cite{r-parity} $R_p \equiv (-1)^{2S+3B+L}$, where $S$ is the spin. One very attractive feature of $R_p$ conserving models is that the lightest supersymmetric particle (LSP) is stable and a good cold dark matter candidate.\cite{cdm} However, while the existence of a dark matter candidate is a very desirable prediction, it does not prove $R_p$ conservation and one should consider more general models. Here, we will investigate the scenario where $R_p$ is broken explicitly via the terms\cite{suzuki} $W = \mu_i L_i H$, where $H$ is the Higgs coupling to up-type fermions and $L_i$ ($i = 1,2,3$) are the left-handed lepton doublets. Clearly, these Higgs-lepton mixing terms violate lepton number. As a result, Majorana masses will be generated for one neutrino at tree-level and for the remaining two neutrinos at the one-loop level. These masses were calculated in the framework of minimal supergravity in ref.~\citenum{npb} and the numerical results will be briefly summarized here. There are three $R_p$-violating parameters which can be used to fix 1) the tree-level neutrino mass, 2) the $\mu$--$\tau$ mixing angle and 3) the $e$--$\mu$ mixing angle. The question of whether e.g. the solar\cite{solarn} and the atmospheric\cite{atmosphericn} neutrino puzzles can be solved simultaneously depends on the prediction of $m_{\nu_\mu}^2-m_{\nu_e}^2$. In fig.~1 we have scanned the entire SUSY parameter space consisting of the Higgsino (gaugino) mass parameter, $\mu$ ($m_{1/2}$), the trilinear scalar interaction parameter $A_0$, and the ratio of Higgs VEVs, $\tan\beta$. The universal scalar mass parameter $m_0$ is fixed by minimizing the potential. Plotted is the number of models yielding a particular prediction for $m_{\nu_\mu}^2-m_{\nu_e}^2$ for (a) sin$^2 2 \theta_{e \nu_\mu} = 0.008$ and (b) sin$^2 2 \theta_{e \nu_\mu} = 1$. We fix $m_{\nu_\tau}=0.1$~eV and sin$^2 2 \theta_{\mu \nu_\tau} = 1$ in order to solve the atmospheric neutrino problem. (A schematic of such a parameter scan is sketched below.)
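The scan can be summarized by the following schematic sketch (Python/NumPy; the sampling ranges are purely illustrative, and \texttt{predict\_dm2} stands for the full one-loop computation of ref.~\citenum{npb}, returning \texttt{None} when a collider or dark matter constraint fails):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def predict_dm2(mu, m_half, A0, tan_beta):
    """Placeholder: fix m0 by minimizing the potential, compute the
    neutrino masses (ref. [npb]) and apply collider/DM constraints;
    return m_nu_mu^2 - m_nu_e^2 in eV^2, or None if excluded."""
    return None                    # actual physics not reproduced here

preds = []
for _ in range(100000):
    dm2 = predict_dm2(mu=rng.uniform(-1000, 1000),   # GeV, illustrative
                      m_half=rng.uniform(50, 1000),  # GeV
                      A0=rng.uniform(-1000, 1000),   # GeV
                      tan_beta=rng.uniform(2, 50))
    if dm2 is not None and dm2 > 0:
        preds.append(dm2)
counts, bins = np.histogram(np.log10(np.array(preds)), bins=40,
                            range=(-12, -2))         # as in Fig. 1
\end{verbatim}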
We see that both the long wave-length oscillation (LWO)\cite{lwo} ($m_{\nu_\mu}^2-m_{\nu_e}^2=10^{-10}$~eV$^2$) and the MSW effect\cite{msw-effect} ($m_{\nu_\mu}^2-m_{\nu_e}^2=10^{-5}$~eV$^2$) can be accommodated. In fig.~1(c) we solve the solar neutrino problem via $e$--$\tau$ oscillations and we fix sin$^2 2 \theta_{e \nu_\mu} = 0.004$ in order to accommodate the LSND result.\cite{lsnd} We see that most models are already ruled out by collider constraints and even more by dark matter (DM) constraints. However, a very small (but non-zero) number of models yields a prediction compatible with the LSND result (the dotted line is the lower limit of LSND). \noindent{\bf Acknowledgements} This work was supported in part by the DOE under Grants No. DE-FG03-91-ER40674 and by the Davis Institute for High Energy Physics. \vskip0.3cm \noindent{\bf References}
\section{Introduction} In the first part of this paper we shall investigate a special case of relative continuity of symplectically adjoint maps of a symplectic space. By this, we mean the following. Suppose that $(S,\sigma)$ is a symplectic space, i.e.\ $S$ is a real-linear vector space with an anti-symmetric, non-degenerate bilinear form $\sigma$ (the symplectic form). A pair $V,W$ of linear maps of $S$ will be called {\it symplectically adjoint} if $\sigma(V\phi,\psi) = \sigma(\phi,W\psi)$ for all $\phi,\psi \in S$. Let $\mu$ and $\mu'$ be two scalar products on $S$ and assume that, for each pair $V,W$ of symplectically adjoint linear maps of $(S,\sigma)$, the boundedness of both $V$ and $W$ with respect to $\mu$ implies their boundedness with respect to $\mu'$. Such a situation we refer to as {\it relative $\mu - \mu'$ continuity of symplectically adjoint maps} (of $(S,\sigma)$). A particular example of symplectically adjoint maps is provided by the pair $T,T^{-1}$ whenever $T$ is a symplectomorphism of $(S,\sigma)$. (Recall that a symplectomorphism of $(S,\sigma)$ is a bijective linear map $T : S \to S$ which preserves the symplectic form, $\sigma(T\phi,T\psi) = \sigma(\phi,\psi)$ for all $\phi,\psi \in S$.) In the more specialized case to be considered in the present work, which will soon be indicated to be relevant in applications, we show that a certain distinguished relation between a scalar product $\mu$ on $S$ and a second one, $\mu'$, is sufficient for the relative $\mu - \mu'$ continuity of symplectically adjoint maps. (We give further details in Chapter 2, and in the next paragraph.) The result will be applied in Chapter 3 to answer a couple of open questions concerning the algebraic structure of the quantum theory of the free scalar field in arbitrary globally hyperbolic spacetimes: the local definiteness, local primarity and Haag-duality in representations of the local observable algebras induced by quasifree Hadamard states, as well as the determination of the type of the local von Neumann algebras in such representations. Technically, what needs to be proved in our approach to this problem is the continuity of the temporal evolution of the Cauchy-data of solutions of the scalar Klein-Gordon equation \begin{equation} (\nabla^a \nabla_a + r)\varphi = 0 \end{equation} in a globally hyperbolic spacetime with respect to a certain topology on the Cauchy-data space. (Here, $\nabla$ is the covariant derivative of the metric $g$ on the spacetime, and $r$ an arbitrary real-valued, smooth function.) The Cauchy-data space is a symplectic space on which the said temporal evolution is realized by symplectomorphisms. It turns out that the classical ``energy-norm'' of solutions of (1.1), which is given by a scalar product $\mu_0$ on the Cauchy-data space, and the topology relevant for the required continuity statement (the ``Hadamard one-particle space norm''), induced by a scalar product $\mu_1$ on the Cauchy-data space, are precisely in the relation for which our result on relative $\mu_0 - \mu_1$ continuity of symplectically adjoint maps applies. Since the continuity of the Cauchy-data evolution in the classical energy norm, i.e.\ $\mu_0$, is well-known, the desired continuity in the $\mu_1$-topology follows. The argument just described may be viewed as the prime example of application of the relative continuity result.
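For later reference, we note the one-line verification that a symplectomorphism $T$ of $(S,\sigma)$ and its inverse indeed form a symplectically adjoint pair: since $T$ preserves $\sigma$, $$ \sigma(T\phi,\psi) = \sigma\big(T\phi,T(T^{-1}\psi)\big) = \sigma(\phi,T^{-1}\psi)\,, \quad \phi,\psi \in S\,, $$ so that $V = T$ and $W = T^{-1}$ satisfy the defining relation $\sigma(V\phi,\psi) = \sigma(\phi,W\psi)$; in particular, all continuity statements for symplectically adjoint pairs derived below apply to symplectomorphisms and their inverses.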
In fact, the relation between $\mu_0$ and $\mu_1$ is abstracted from the relation between the classical energy-norm and the one-particle space norms arising from ``frequency-splitting'' procedures in the canonical quantization of (linear) fields. This relation has been made precise in a recent paper by Chmielowski [11]. It provides the starting point for our investigation in Chapter 2, where we shall see that one can associate with a dominating scalar product $\mu \equiv \mu_0$ on $S$ in a canonical way a positive, symmetric operator $|R_{\mu}|$ on the $\mu$-completion of $S$, and a family of scalar products $\mu_s$, $s > 0$, on $S$, defined as $\mu$ with $|R_{\mu}|^s$ as an operator kernel. Using abstract interpolation, it will be shown that then relative $\mu_0 - \mu_s$ continuity of symplectically adjoint maps holds for all $0 \leq s \leq 2$. The relative $\mu_0 - \mu_1$ continuity arises as a special case. In fact, it turns out that the indicated interpolation argument may even be extended to an apparently more general situation from which the relative $\mu_0 - \mu_s$ continuity of symplectically adjoint maps derives as a corollary, see Theorem 2.2. Chapter 3 will be concerned with the application of the result of Thm.\ 2.2 as indicated above. In the preparatory Section 3.1, some notions of general relativity will be summarized, along with the introduction of some notation. Section 3.2 contains a brief synopsis of the notions of local definiteness, local primarity and Haag-duality in the context of quantum field theory in curved spacetime. In Section 3.3 we present the $C^*$-algebraic quantization of the KG-field obeying (1.1) on a globally hyperbolic spacetime, following [16]. Quasifree Hadamard states will be described in Section 3.4 according to the definition given in [45]. In the same section we briefly summarize some properties of Hadamard two-point functions, and derive, in Proposition 3.5, the result concerning the continuity of the Cauchy-data evolution maps in the topology of the Hadamard two-point functions which was mentioned above. It will be seen in the last Section 3.5 that this leads, in combination with results obtained earlier [64,65,66], to Theorem 3.6 establishing detailed properties of the algebraic structure of the local von Neumann observable algebras in representations induced by quasifree Hadamard states of the Klein-Gordon field over an arbitrary globally hyperbolic spacetime. \section{Relative Continuity of Symplectically Adjoint Maps} \setcounter{equation}{0} Let $(S,\sigma)$ be a symplectic space. A (real-linear) scalar product $\mu$ on $S$ is said to {\it dominate} $\sigma$ if the estimate \begin{equation} |\sigma(\phi,\psi)|^2 \leq 4 \cdot \mu(\phi,\phi)\,\mu(\psi,\psi)\,, \quad \phi,\psi \in S\,, \end{equation} holds; the set of all scalar products on $S$ which dominate $\sigma$ will be denoted by ${\sf q}(S,\sigma)$. Given $\mu \in {\sf q}(S,\sigma)$, we write $H_{\mu} \equiv \overline{S}^{\mu}$ for the completion of $S$ with respect to the topology induced by $\mu$, and denote by $\sigma_{\mu}$ the $\mu$-continuous extension, guaranteed to uniquely exist by (2.1), of $\sigma$ to $H_{\mu}$. The estimate (2.1) then extends to $\sigma_{\mu}$ and all $\phi,\psi \in H_{\mu}$. This entails that there is a uniquely determined, $\mu$-bounded linear operator $R_{\mu} : H_{\mu} \to H_{\mu}$ with the property \begin{equation} \sigma_{\mu}(x,y) = 2\,\mu(x,R_{\mu}y)\,, \quad x,y \in H_{\mu}\,.
\end{equation} The antisymmetry of $\sigma_{\mu}$ entails for the $\mu$-adjoint $R_{\mu}^*$ of $R_{\mu}$ \begin{equation} R_{\mu}^* = - R_{\mu}\,, \end{equation} and by (2.1) one finds that the operator norm of $R_{\mu}$ is bounded by 1, $||\,R_{\mu}\,|| \leq 1$. The operator $R_{\mu}$ will be called the {\it polarizator} of $\mu$. In passing, two things should be noticed here: \\[6pt] (1) $R_{\mu}|S$ is injective since $\sigma$ is a non-degenerate bilinear form on $S$, but $R_{\mu}$ need not be injective on all of $H_{\mu}$, as $\sigma_{\mu}$ may be degenerate. \\[6pt] (2) In general, it is not the case that $R_{\mu}(S) \subset S$. \\[6pt] Further properties of $R_{\mu}$ will be explored below. Let us first focus on two significant subsets of ${\sf q}(S,\sigma)$ which are intrinsically characterized by properties of the corresponding $\sigma_{\mu}$ or, equivalently, the $R_{\mu}$. The first is ${\sf pr}(S,\sigma)$, called the set of {\it primary} scalar products on $(S,\sigma)$, where $\mu \in {\sf q}(S,\sigma)$ is in ${\sf pr}(S,\sigma)$ if $\sigma_{\mu}$ is a symplectic form (i.e.\ non-degenerate) on $H_{\mu}$. In view of (2.2) and (2.3), one can see that this is equivalent to either (and hence, both) of the following conditions: \begin{itemize} \item[(i)] \quad $R_{\mu}$ is injective, \item[(ii)] \quad $R_{\mu}(H_{\mu})$ is dense in $H_{\mu}$. \end{itemize} The second important subset of ${\sf q}(S,\sigma)$ is denoted by ${\sf pu}(S,\sigma)$ and defined as consisting of those $\mu \in {\sf q}(S,\sigma)$ which satisfy the {\it saturation property} \begin{equation} \mu(\phi,\phi) = \sup_{\psi \in S\backslash \{0\} } \, \frac{|\sigma(\phi,\psi)|^2}{4 \mu(\psi,\psi) } \,,\ \ \ \phi \in S \,. \end{equation} The set ${\sf pu}(S,\sigma)$ will be called the set of {\it pure} scalar products on $(S,\sigma)$. It is straightforward to check that $\mu \in {\sf pu}(S,\sigma)$ if and only if $R_{\mu}$ is a unitary anti-involution, or complex structure, i.e.\ $R_{\mu}^{-1} = R_{\mu}^*$, $R_{\mu}^2 = - 1$. Hence ${\sf pu}(S,\sigma) \subset {\sf pr}(S,\sigma)$. \\[10pt] Our terminology reflects well-known relations between properties of quasifree states on the (CCR-) Weyl-algebra of a symplectic space $(S,\sigma)$ and properties of $\sigma$-dominating scalar products on $S$, which we shall briefly recapitulate. We refer to [1,3,5,45,49] and also references quoted therein for proofs and further discussion of the following statements. Given a symplectic space $(S,\sigma)$, one can associate with it uniquely (up to $C^*$-algebraic equivalence) a $C^*$-algebra ${\cal A}[S,\sigma]$, which is generated by a family of unitary elements $W(\phi)$, $\phi \in S$, satisfying the canonical commutation relations (CCR) in exponentiated form, \begin{equation} W(\phi)W(\psi) = {\rm e}^{-i\sigma(\phi,\psi)/2}W(\phi + \psi)\,, \quad \phi,\psi \in S\,. \end{equation} The algebra ${\cal A}[S,\sigma]$ is called the {\it Weyl-algebra}, or {\it CCR-algebra}, of $(S,\sigma)$. It is not difficult to see that if $\mu \in {\sf q}(S,\sigma)$, then one can define a state (i.e., a positive, normalized linear functional) $\omega_{\mu}$ on ${\cal A}[S,\sigma]$ by setting \begin{equation} \omega_{\mu} (W(\phi)) : = {\rm e}^{- \mu(\phi,\phi)/2}\,, \quad \phi \in S\,. \end{equation} Any state on the Weyl-algebra ${\cal A}[S,\sigma]$ which can be realized in this way is called a {\it quasifree state}.
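As a simple illustration of the notions ${\sf q}$, ${\sf pr}$, ${\sf pu}$ introduced above (an elementary two-dimensional example, included only to fix ideas), consider $S = {\bf R}^2$ with $\sigma(\phi,\psi) := \phi_1\psi_2 - \phi_2\psi_1$ and the scalar products $$ \mu^{(c)}(\phi,\psi) := c\,(\phi_1\psi_1 + \phi_2\psi_2)\,, \quad c > 0\,. $$ Writing $J(\psi_1,\psi_2) := (\psi_2,-\psi_1)$, one has $\sigma(\phi,\psi) = 2\mu^{(c)}(\phi,\frac{1}{2c}J\psi)$, and the Cauchy-Schwarz inequality shows that the domination property (2.1) holds precisely for $c \geq 1/2$; in that case $$ R_{\mu^{(c)}} = \frac{1}{2c}\,J\,, \quad |R_{\mu^{(c)}}| = \frac{1}{2c}\cdot 1\,, \quad U_{\mu^{(c)}} = J\,. $$ Since $R_{\mu^{(c)}}$ is injective, every $\mu^{(c)}$ with $c \geq 1/2$ lies in ${\sf pr}(S,\sigma)$, whereas $R_{\mu^{(c)}}^2 = -\frac{1}{4c^2}\cdot 1 = -1$, i.e.\ $\mu^{(c)} \in {\sf pu}(S,\sigma)$, holds exactly for $c = 1/2$. We now return to the correspondence between quasifree states and dominating scalar products.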
Conversely, given any quasifree state $\omega_{\mu}$ on ${\cal A}[S,\sigma]$, one can recover its $\mu \in {\sf q}(S,\sigma)$ as \begin{equation} \mu(\phi,\psi) = - {\sf Re}\left. \frac{\partial}{\partial t} \frac{\partial}{\partial \tau} \right|_{t = \tau = 0} \omega_{\mu} (W(t\phi)W(\tau \psi))\,, \quad \phi,\psi \in S\,. \end{equation} So there is a one-to-one correspondence between quasifree states on ${\cal A}[S,\sigma]$ and dominating scalar products on $(S,\sigma)$. \\[10pt] Let us now recall the following terminology. To a state $\omega$ on a $C^*$-algebra $\cal B$ there corresponds (uniquely up to unitary equivalence) a triple $({\cal H}_{\omega},\pi_{\omega},\Omega_{\omega})$, called the GNS-representation of $\omega$ (see e.g.\ [5]), characterized by the following properties: ${\cal H}_{\omega}$ is a complex Hilbertspace, $\pi_{\omega}$ is a representation of $\cal B$ by bounded linear operators on ${\cal H}_{\omega}$ with cyclic vector $\Omega_{\omega}$, and $\omega(B) = \langle \Omega_{\omega},\pi_{\omega}(B)\Omega_{\omega} \rangle $ for all $B \in \cal B$. Hence one is led to associate with $\omega$ and $\cal B$ naturally the $\omega$-induced von Neumann algebra $\pi_{\omega}({\cal B})^-$, where the bar means taking the closure with respect to the weak operator topology in the set of bounded linear operators on ${\cal H}_{\omega}$. One refers to $\omega$ (resp., $\pi_{\omega}$) as {\it primary} if $\pi_{\omega}({\cal B})^- \cap \pi_{\omega}({\cal B})' = {\bf C} \cdot 1$ (so the center of $\pi_{\omega}({\cal B})^-$ is trivial), where the prime denotes taking the commutant, and as {\it pure} if $\pi_{\omega}({\cal B})' = {\bf C}\cdot 1$ (i.e.\ $\pi_{\omega}$ is irreducible --- this is equivalent to the statement that $\omega$ is not a (non-trivial) convex sum of different states). In the case where $\omega_{\mu}$ is a quasifree state on a Weyl-algebra ${\cal A}[S,\sigma]$, it is known that (cf.\ [1,49]) \begin{itemize} \item[(I)] $\omega_{\mu}$ is primary if and only if $\mu \in {\sf pr}(S,\sigma)$, \item[(II)] $\omega_{\mu}$ is pure if and only if $\mu \in {\sf pu}(S,\sigma)$. \end{itemize} ${}$\\ We return to the investigation of the properties of the polarizator $R_{\mu}$ for a dominating scalar product $\mu$ on a symplectic space $(S,\sigma)$. It possesses a polar decomposition \begin{equation} R_{\mu} = U_{\mu} |R_{\mu}| \end{equation} on the Hilbertspace $(H_{\mu},\mu)$, where $U_{\mu}$ is an isometry and $|R_{\mu}|$ is symmetric and has non-negative spectrum. Since $R_{\mu}^* = - R_{\mu}$, $R_{\mu}$ is normal and thus $|R_{\mu}|$ and $U_{\mu}$ commute. Moreover, one has $|R_{\mu}| U_{\mu}^* = - U_{\mu} |R_{\mu}|$, and hence $|R_{\mu}|$ and $U_{\mu}^*$ commute as well. One readily observes that $(U_{\mu}^* + U_{\mu})|R_{\mu}| = 0$. By the spectral calculus, the commutativity can be generalized to the statement that, whenever $f$ is a real-valued, continuous function on the real line, then \begin{equation} [f(|R_{\mu}|),U_{\mu}] = 0 = [f(|R_{\mu}|),U_{\mu}^*] \,, \end{equation} where the brackets denote the commutator. In a recent work [11], Chmielowski noticed that if one defines for $\mu \in {\sf q}(S,\sigma)$ the bilinear form \begin{equation} \tilde{\mu}(\phi,\psi) := \mu (\phi,|R_{\mu}| \psi)\,, \quad \phi,\psi \in S, \end{equation} then it holds that $\tilde{\mu} \in {\sf pu}(S,\sigma)$. The proof of this is straightforward. That $\tilde{\mu}$ dominates $\sigma$ will be seen in Proposition 2.1 below.
To check the saturation property (2.4) for $\tilde{\mu}$, it suffices to observe that, for given $\phi \in H_{\mu}$, the inequality in the following chain of expressions \begin{eqnarray*} \frac{1}{4} | \sigma_{\mu}(\phi,\psi) |^2 & = & |\mu(\phi,U_{\mu} |R_{\mu}| \psi)|^2 \ = \ |\mu(\phi,-U_{\mu}^*|R_{\mu}|\psi) |^2 \\ & = & |\mu(|R_{\mu}|^{1/2}U_{\mu}\phi,|R_{\mu}|^{1/2}\psi)|^2 \\ & \leq & \mu(|R_{\mu}|^{1/2}U_{\mu}\phi,|R_{\mu}|^{1/2}U_{\mu}\phi) \cdot \mu(|R_{\mu}|^{1/2}\psi, |R_{\mu}|^{1/2}\psi) \end{eqnarray*} becomes an equality upon choosing $\psi \in H_{\mu}$ so that $|R_{\mu}|^{1/2}\psi$ is parallel to $|R_{\mu}|^{1/2}U_{\mu} \phi$. Therefore one obtains for all $\phi \in S$ \begin{eqnarray*} \sup_{\psi \in S\backslash\{0\}}\, \frac{|\sigma(\phi,\psi)|^2} {4 \mu(\psi,|R_{\mu}| \psi) } & = & \mu(|R_{\mu}|^{1/2}U_{\mu} \phi,|R_{\mu}|^{1/2}U_{\mu} \phi) \\ & = & \mu(U_{\mu}|R_{\mu}|^{1/2}\phi,U_{\mu} |R_{\mu}|^{1/2} \phi) \\ & = & \tilde{\mu}(\phi,\phi)\,, \end{eqnarray*} which is the required saturation property. Following Chmielowski, the scalar product $\tilde{\mu}$ on $S$ associated with $\mu \in {\sf q}(S,\sigma)$ will be called the {\it purification} of $\mu$. It appears natural to associate with $\mu \in {\sf q}(S,\sigma)$ the family $\mu_s$, $s > 0$, of symmetric bilinear forms on $S$ given by \begin{equation} \mu_s(\phi,\psi) := \mu(\phi,|R_{\mu}|^s \psi)\,, \quad \phi,\psi \in S\,. \end{equation} We will use the convention that $\mu_0 = \mu$. Observe that $\tilde{\mu} = \mu_1$. This leads to the following proposition. \begin{Proposition} ${}$\\[6pt] (a) $\mu_s$ is a scalar product on $S$ for each $s \geq 0$. \\[6pt] (b) $\mu_s$ dominates $\sigma$ for $0 \leq s \leq 1$. \\[6pt] (c) Suppose that there is some $s \in (0,1)$ such that $\mu_s \in {\sf pu}(S,\sigma)$. Then $\mu_r = \mu_1$ for all $r > 0$. If it is in addition assumed that $\mu \in {\sf pr}(S,\sigma)$, then it follows that $\mu_r = \mu_1$ for all $r \geq 0$, i.e.\ in particular $\mu = \tilde{\mu}$. \\[6pt] (d) If $\mu_s \in {\sf q}(S,\sigma)$ for some $s > 1$, then $\mu_r = \mu_1$ for all $r > 0$. Assuming additionally $\mu \in {\sf pr}(S,\sigma)$, one obtains $\mu_r = \mu_1$ for all $r \geq 0$, entailing $\mu = \tilde{\mu}$.\\[6pt] (e) The purifications of the $\mu_s$, $0 < s < 1$, are equal to $\tilde{\mu}$: We have $\widetilde{\mu_s} = \tilde{\mu} = \mu_1$ for all $0 < s < 1$. \end{Proposition} {\it Proof.} (a) According to (b), $\mu_s$ dominates $\sigma$ for each $0 \leq s \leq 1$, thus it is a scalar product whenever $s$ is in that range. For $1 \leq s < \infty$, it is known that $\mu(\phi,|R_{\mu}|^s \phi) \geq \mu(\phi,|R_{\mu}| \phi)^s$ for all vectors $\phi \in H_{\mu}$ of unit length ($\mu(\phi,\phi) = 1$), cf.\ [60 (p.\ 20)]. Since $\mu_1 = \tilde{\mu}$ dominates the non-degenerate form $\sigma$, one has $\mu_1(\phi,\phi) > 0$ for all nonzero $\phi \in S$, and this shows that $\mu_s(\phi,\phi) \neq 0$ for all nonzero $\phi$ in $S$, $s \geq 0$. \\[6pt] (b) For $s$ in the indicated range there holds the following estimate: \begin{eqnarray*} \frac{1}{4} |\sigma(\phi,\psi)|^2 & = & |\mu(\phi,U_{\mu}|R_{\mu}| \psi)|^2 \ = \ |\mu(\phi,-U_{\mu}^*|R_{\mu}| \psi )|^2 \\ & = & | \mu(|R_{\mu}|^{s/2}U_{\mu} \phi, |R_{\mu}|^{1 - s/2} \psi) |^2 \\ & \leq & \mu(U_{\mu} |R_{\mu}|^{s/2}\phi,U_{\mu}|R_{\mu}|^{s/2} \phi) \cdot \mu(|R_{\mu}|^{s/2}\psi,|R_{\mu}|^{2(1-s)}|R_{\mu}|^{s/2} \psi) \\ & \leq & \mu_s(\phi,\phi)\cdot \mu_s(\psi,\psi)\,, \quad \phi,\psi \in S\,. \end{eqnarray*} Here, we have used that $|R_{\mu}|^{2(1-s)} \leq 1$.
\\[6pt] (c) If $(\phi_n)$ is a $\mu$-Cauchy-sequence in $H_{\mu}$, then it is, by continuity of $|R_{\mu}|^{s/2}$, also a $\mu_s$-Cauchy-sequence in $H_s$, the $\mu_s$-completion of $S$. In this way we obtain a natural map $j : H_{\mu} \to H_s$. Notice that $j(\psi) = \psi$ for all $\psi \in S$, so $j$ has dense range; however, one has \begin{equation} \mu_s(j(\phi),j(\psi)) = \mu(\phi,|R_{\mu}|^s \psi) \end{equation} for all $\phi,\psi \in H_{\mu}$. Therefore $j$ need not be injective. Now let $R_s$ be the polarizator of $\mu_s$. Then we have \begin{eqnarray*} 2\mu_s(j(\phi),R_s j(\psi))\ = \ \sigma_{\mu}(\phi,\psi) & = & 2 \mu(\phi,R_{\mu}\psi) \\ & = & 2 \mu(\phi,|R_{\mu}|^sU_{\mu}|R_{\mu}|^{1-s}\psi) \\ & = & 2 \mu_s(j(\phi),j(U_{\mu}|R_{\mu}|^{1-s})\psi) \,,\quad \phi,\psi \in H_{\mu}\,. \end{eqnarray*} This yields \begin{equation} R_s {\mbox{\footnotesize $\circ$}} j = j {\mbox{\footnotesize $\circ$}} U_{\mu}|R_{\mu}|^{1-s} \end{equation} on $H_{\mu}$. Since by assumption $\mu_s$ is pure, we have $R_s^2 = -1$ on $H_s$, and thus $$ j = - R_s j U_{\mu}|R_{\mu}|^{1-s} = - j(U_{\mu}|R_{\mu}|^{1-s})^2 \,.$$ By (2.12) we may conclude $$ |R_{\mu}|^{2s} = - U_{\mu} |R_{\mu}| U_{\mu} |R_{\mu}| = U_{\mu}^*U_{\mu}|R_{\mu}|^2 = |R_{\mu}|^2 \,, $$ which entails $|R_{\mu}|^s = |R_{\mu}|$. Since $|R_{\mu}| \leq 1$, we see that for $s \leq r \leq 1$ we have $$ |R_{\mu}| = |R_{\mu}|^s \geq |R_{\mu}|^r \geq |R_{\mu}| \,,$$ hence $|R_{\mu}|^r = |R_{\mu}|$ for $s \leq r \leq 1$. Whence $|R_{\mu}|^r = |R_{\mu}|$ for all $r > 0$. This proves the first part of the statement. For the second part we observe that $\mu \in {\sf pr}(S,\sigma)$ implies that $|R_{\mu}|$, and hence also $|R_{\mu}|^s$ for $0 < s < 1$, is injective. Then the equation $|R_{\mu}|^s = |R_{\mu}|$ implies that $|R_{\mu}|^s(|R_{\mu}|^{1-s} - 1) = 0$, and by the injectivity of $|R_{\mu}|^s$ we may conclude $|R_{\mu}|^{1-s} =1$. Since $s$ was assumed to be strictly less than 1, it follows that $|R_{\mu}|^r = 1$ for all $r \geq 0$; in particular, $|R_{\mu}| =1$. \\[6pt] (d) Assume that $\mu_s$ dominates $\sigma$ for some $s > 1$, i.e.\ it holds that $$ 4|\mu(\phi,U_{\mu}|R_{\mu}|\psi)|^2 = |\sigma_{\mu}(\phi,\psi)|^2 \leq 4\cdot \mu(\phi,|R_{\mu}|^s\phi)\cdot \mu(\psi,|R_{\mu}|^s\psi)\,, \quad \phi,\psi \in H_{\mu}\,, $$ which implies, choosing $\phi = U_{\mu} \psi$, the estimate $$ \mu(\psi,|R_{\mu}| \psi) \leq \mu(\psi,|R_{\mu}|^s \psi) \,, \quad \psi \in H_{\mu}\,,$$ i.e.\ $|R_{\mu}| \leq |R_{\mu}|^s$. On the other hand, $|R_{\mu}| \geq |R_{\mu}|^r \geq |R_{\mu}|^s$ holds for all $1 \leq r \leq s$ since $|R_{\mu}| \leq 1$. This implies $|R_{\mu}|^r = |R_{\mu}|$ for all $r > 0$. For the second part of the statement one uses the same argument as given in (c). \\[6pt] (e) In view of (2.13) it holds that \begin{eqnarray*} |R_s|^2j & = & - R_s^2 j\ =\ - R_s j U_{\mu} |R_{\mu}|^{1-s} \\ & = & - j U_{\mu} |R_{\mu}|^{1-s}U_{\mu}|R_{\mu}|^{1-s}\ =\ - j U_{\mu}^2 (|R_{\mu}|^{1-s})^2 \,. \end{eqnarray*} Iterating this one has for all $n \in {\bf N}$ $$ |R_s|^{2n} j = (-1)^n j U_{\mu}^{2n}(|R_{\mu}|^{1-s})^{2n}\,. $$ Inserting this into relation (2.12) yields for all $n \in {\bf N}$ \begin{eqnarray} \mu_s(j(\phi),|R_s|^{2n}j(\psi)) & = & \mu(\phi, |R_{\mu}|^s (-1)^n U_{\mu}^{2n}(|R_{\mu}|^{1-s})^{2n}\psi) \\ & = & \mu(\phi,|R_{\mu}|^s(|R_{\mu}|^{1-s})^{2n}\psi)\,,\quad \phi,\psi \in H_{\mu}\,. \nonumber \end{eqnarray} For the last equality we used that $U_{\mu}$ commutes with $|R_{\mu}|^{1-s}$ and $U_{\mu}^2|R_{\mu}| = - |R_{\mu}|$.
Now let $(P_n)$ be a sequence of polynomials on the interval $[0,1]$ converging uniformly to the square root function on $[0,1]$. From (2.14) we infer that $$ \mu_s(j(\phi),P_n(|R_s|^2)j(\psi)) = \mu(\phi,|R_{\mu}|^s P_n((|R_{\mu}|^{1-s})^2) \psi)\,, \quad \phi, \psi \in H_{\mu} $$ for all $n \in {\bf N}$, which in the limit $n \to \infty$ gives $$ \mu_s(j(\phi),|R_s|j(\psi)) = \mu(\phi,|R_{\mu}|\psi)\,, \quad \phi,\psi \in H_{\mu}\,, $$ as desired. $\Box$ \\[10pt] Proposition 2.1 underlines the special role of $\tilde{\mu} = \mu_1$. Clearly, one has $\tilde{\mu} = \mu$ iff $\mu \in {\sf pu}(S,\sigma)$. Chmielowski has proved another interesting connection between $\mu$ and $\tilde{\mu}$ which we briefly mention here. Suppose that $\{T_t\}$ is a one-parametric group of symplectomorphisms of $(S,\sigma)$, and let $\{\alpha_t\}$ be the automorphism group on ${\cal A}[S,\sigma]$ induced by it via $\alpha_t(W(\phi)) = W(T_t\phi)$, $\phi \in S,\ t \in {\bf R}$. An $\{\alpha_t\}$-invariant quasifree state $\omega_{\mu}$ on ${\cal A}[S,\sigma]$ is called {\it regular} if the unitary group which implements $\{\alpha_t\}$ in the GNS-representation $({\cal H}_{\mu},\pi_{\mu},\Omega_{\mu})$ of $\omega_{\mu}$ is strongly continuous and leaves no non-zero vector in the one-particle space of ${\cal H}_{\mu}$ invariant. Here, the one-particle space is spanned by all vectors of the form $\left. \frac{d}{dt} \right|_{t = 0} \pi_{\mu}(W(t\phi))\Omega_{\mu}$, $\phi \in S$. It is proved in [11] that, if $\omega_{\mu}$ is a regular quasifree KMS-state for $\{\alpha_t\}$, then $\omega_{\tilde{\mu}}$ is the unique regular quasifree groundstate for $\{\alpha_t\}$. As explained in [11], the passage from $\mu$ to $\tilde{\mu}$ can be seen as a rigorous form of ``frequency-splitting'' methods employed in the canonical quantization of classical fields for which $\mu$ is induced by the classical energy norm. We shall come back to this in the concrete example of the Klein-Gordon field in Sec.\ 3.4. It should be noted that the purification map $\tilde{\cdot} : {\sf q}(S,\sigma) \to {\sf pu}(S,\sigma)$, $\mu \mapsto \tilde{\mu}$, assigns to a quasifree state $\omega_{\mu}$ on ${\cal A}[S,\sigma]$ the pure quasifree state $\omega_{\tilde{\mu}}$ which is again a state on ${\cal A}[S,\sigma]$. This is different from the well-known procedure of assigning to a state $\omega$ on a $C^*$-algebra ${\cal A}$, whose GNS representation is primary, a pure state $\omega_0$ on ${\cal A}^{\circ} \otimes {\cal A}$. (${\cal A}^{\circ}$ denotes the opposite algebra of ${\cal A}$, cf.\ [75].) That procedure was introduced by Woronowicz and is an abstract version of similar constructions for quasifree states on CCR- or CAR-algebras [45,54,75]. Whether the purification map $\omega_{\mu} \mapsto \omega_{\tilde{\mu}}$ can be generalized from quasifree states on CCR-algebras to a procedure of assigning to (a suitable class of) states on a generic $C^*$-algebra pure states on that same algebra, is in principle an interesting question, which however we shall not investigate here. \begin{Theorem} ${}$\\[6pt] (a) Let $H$ be a (real or complex) Hilbertspace with scalar product $\mu(\,.\,,\,.\,)$, $R$ a (not necessarily bounded) normal operator in $H$, and $V,W$ two $\mu$-bounded linear operators on $H$ which are $R$-adjoint, i.e.\ they satisfy \begin{equation} W{\rm dom}(R) \subset {\rm dom}(R) \quad {\it and} \quad V^*R = R W \quad {\rm on \ \ dom}(R) \,.
\end{equation} Denote by $\mu_s$ the Hermitean form on ${\rm dom}(|R|^{s/2})$ given by $$ \mu_s(x,y) := \mu(|R|^{s/2}x,|R|^{s/2} y)\,, \quad x,y \in {\rm dom}(|R|^{s/2}),\ 0 \leq s \leq 2\,.$$ We write $||\,.\,||_0 := ||\,.\,||_{\mu} := \mu(\,.\,,\,.\,)^{1/2}$ and $||\,.\,||_s := \mu_s(\,.\,,\,.\,)^{1/2}$ for the corresponding semi-norms. Then it holds for all $0 \leq s \leq 2$ that $$ V{\rm dom}(|R|^{s/2}) \subset {\rm dom}(|R|^{s/2}) \quad {\it and} \quad W{\rm dom}(|R|^{s/2}) \subset {\rm dom}(|R|^{s/2}) \,, $$ and $V$ and $W$ are $\mu_s$-bounded for $0 \leq s \leq 2$. More precisely, the estimates \begin{equation} ||\,Vx\,||_0 \leq v\,||\,x\,||_0 \quad {\rm and} \quad ||\,Wx\,||_0 \leq w\,||\,x\,||_0\,, \quad x \in H\,, \end{equation} with suitable constants $v,w > 0$, imply that \begin{equation} ||\,Vx\,||_s \leq w^{s/2}v^{1 -s/2}\,||\,x\,||_s \quad {\rm and} \quad ||\,Wx\,||_s \leq v^{s/2}w^{1-s/2}\,||\,x\,||_s \,, \end{equation} for all $ x \in {\rm dom}(|R|^{s/2})$ and $0 \leq s \leq 2$. \\[6pt] (b)\ \ \ (Corollary of (a))\ \ \ \ Let $(S,\sigma)$ be a symplectic space, $\mu \in {\sf q}(S,\sigma)$ a dominating scalar product on $(S,\sigma)$, and $\mu_s$, $0 < s \leq 2$, the scalar products on $S$ defined in (2.11). Then we have relative $\mu-\mu_s$ continuity of each pair $V,W$ of symplectically adjoint linear maps of $(S,\sigma)$ for all $0 < s \leq 2$. More precisely, for each pair $V,W$ of symplectically adjoint linear maps of $(S,\sigma)$, the estimates (2.16) for all $x \in S$ imply (2.17) for all $x \in S$. \end{Theorem} {\it Remark.} (i) In view of the fact that the operator $R$ of part (a) of the Theorem may be unbounded, part (b) can be extended to situations where it is not assumed that the scalar product $\mu$ on $S$ dominates the symplectic form $\sigma$. \\[6pt] (ii) When it is additionally assumed that $V = T$ and $W = T^{-1}$ with symplectomorphisms $T$ of $(S,\sigma)$, we refer in that case to the situation of relative continuity of the pairs $V,W$ as relative continuity of symplectomorphisms. In Example 2.3 after the proof of Thm.\ 2.2 we show that relative $\tilde{\mu} - \mu$ continuity of symplectomorphisms fails in general. Also, it is not the case that relative $\mu - \mu'$ continuity of symplectomorphisms holds if $\mu'$ is an arbitrary element in ${\sf pu}(S,\sigma)$ which is dominated by $\mu$ ($||\,\phi\,||_{\mu'} \leq {\rm const.}||\,\phi\,||_{\mu}$, $\phi \in S$), see Example 2.4 below. This shows that the special relation between $\mu$ and $\tilde{\mu}$ (resp., $\mu$ and the $\mu_s$) expressed in (2.11,2.15) is important for the derivation of the Theorem. \\[10pt] {\it Proof of Theorem 2.2.} (a)\ \ \ In a first step, let it be supposed that $R$ is bounded. From the assumed relation (2.15) and its adjoint relation $R^*V = W^* R^*$ we obtain, for $\epsilon' > 0$ arbitrarily chosen, \begin{eqnarray*} V^* (|R|^2 + \epsilon' 1) V & = & V^*RR^* V + \epsilon' V^*V \ = \ RWW^*R^* + \epsilon' V^*V \\ & \leq & w^2 |R|^2 + \epsilon' v^21 \ \leq \ w^2(|R|^2 + \epsilon 1) \end{eqnarray*} with $\epsilon := \epsilon'v^2/w^2$. This entails for the operator norms $$ ||\,(|R|^2 + \epsilon' 1 )^{1/2}V \,|| \ \leq\ w\,||\,(|R|^2 + \epsilon 1)^{1/2}\,|| \,, $$ and since $(|R|^2 + \epsilon 1)^{1/2}$ has a bounded inverse, $$ ||\,(|R|^2 + \epsilon' 1)^{1/2} V (|R|^2 + \epsilon 1 )^{-1/2} \,||\ \leq\ w\,. $$ On the other hand, one clearly has $$ ||\,(|R|^2 + \epsilon' 1)^0 V (|R|^2 + \epsilon 1)^0\,|| \ =\ ||\,V\,||\ \leq\ v\,. 
$$ Now these estimates are preserved if $R$ and $V$ are replaced by their complexified versions on the complexified Hilbertspace $H \oplus iH = {\bf C} \otimes H$. Thus, identifying if necessary $R$ and $V$ with their complexifications, a standard interpolation argument (see Appendix A) can be applied to yield $$ ||\,(|R|^2 + \epsilon' 1)^{\alpha} V (|R|^2 + \epsilon 1)^{-\alpha} \,||\ \leq\ w^{2\alpha} v^{1 - 2\alpha} $$ for all $0 \leq \alpha \leq 1/2$. Notice that this inequality holds uniformly in $\epsilon' > 0$. Therefore we may conclude that $$ ||\,|R|^{2\alpha}Vx \,||_0\ \leq\ w^{2\alpha}v^{1 - 2\alpha} \,||\,|R|^{2\alpha}x\,||_0 \,, \quad x \in H\,,\ 0 \leq \alpha \leq 1/2\,,$$ which is the required estimate for $V$. The analogous bound for $W$ is obtained through replacing $V$ by $W$ in the given arguments. Now we have to extend the argument to the case that $R$ is unbounded. Without loss of generality we may assume that the Hilbertspace $H$ is complex, otherwise we complexify it and with it all the operators $R$,$V$,$W$, as above, thereby preserving their assumed properties. Then let $E$ be the spectral measure of $R$, and denote by $R_r$ the operator $E(B_r)RE(B_r)$ where $B_r := \{z \in {\bf C}: |z| \leq r\}$, $r > 0$. Similarly define $V_r$ and $W_r$. From the assumptions it is seen that $V_r^*R_r = R_rW_r$ holds for all $r >0$. Applying the reasoning of the first step we arrive, for each $0 \leq s \leq 2$, at the bounds $$ ||\,V_r x\,||_s \leq w^{s/2}v^{1-s/2}\,||\,x\,||_s \quad {\rm and} \quad ||\,W_r x\,||_s \leq v^{s/2}w^{1-s/2}\,||\,x\,||_s \,,$$ which hold uniformly in $r >0$ for all $x \in {\rm dom}(|R|^{s/2})$. From this, the statement of the Theorem follows.\\[6pt] (b) This is just an application of (a), identifying $H_{\mu}$ with $H$, $R_{\mu}$ with $R$ and $V,W$ with their bounded extensions to $H_{\mu}$. $\Box$ \\[10pt] {\bf Example 2.3} We exhibit a symplectic space $(S,\sigma)$ with $\mu \in {\sf pr}(S,\sigma)$ and a symplectomorphism $T$ of $(S,\sigma)$ where $T$ and $T^{-1}$ are continuous with respect to $\tilde{\mu}$, but not with respect to $\mu$. \\[6pt] Let $S := {\cal S}({\bf R},{\bf C})$, the Schwartz space of rapidly decreasing test functions on ${\bf R}$, viewed as real-linear space. By $\langle \phi,\psi \rangle := \int \overline{\phi} \psi \,dx$ we denote the standard $L^2$ scalar product. As a symplectic form on $S$ we choose $$ \sigma(\phi,\psi) := 2 {\sf Im}\langle \phi,\psi \rangle\,, \quad \phi,\psi \in S\,. $$ Now define on $S$ the strictly positive, essentially selfadjoint operator $A\phi := -\frac{d^2}{dx^2}\phi + \phi$, $\phi \in S$, in $L^2({\bf R})$. Its closure will again be denoted by $A$; it is bounded below by $1$. A real-linear scalar product $\mu$ will be defined on $S$ by $$ \mu(\phi,\psi) := {\sf Re}\langle A\phi,\psi \rangle\,, \quad \phi,\psi \in S. $$ Since $A$ has lower bound $1$, clearly $\mu$ dominates $\sigma$, and one easily obtains $R_{\mu} = - i A^{-1}$, $|R_{\mu}| = A^{-1}$. Hence $\mu \in {\sf pr}(S,\sigma)$ and $$ \tilde{\mu}(\phi,\psi) = {\sf Re}\langle \phi,\psi \rangle\,, \quad \phi,\psi \in S\,.$$ Now consider the operator $$ T : S \to S\,, \quad \ \ (T\phi)(x) := {\rm e}^{-ix^2}\phi(x)\,, \ \ \ x \in {\bf R},\ \phi \in S\,. $$ Obviously $T$ leaves the $L^2$ scalar product invariant, and hence also $\sigma$ and $\tilde{\mu}$. The inverse of $T$ is just $(T^{-1}\phi)(x) = {\rm e}^{i x^2}\phi(x)$, which of course leaves $\sigma$ and $\tilde{\mu}$ invariant as well.
However, $T$ is not continuous with respect to $\mu$. To see this, let $\phi \in S$ be some non-vanishing smooth function with compact support, and define $$ \phi_n(x) := \phi(x -n)\,, \quad x \in {\bf R}, \ n \in {\bf N}\,. $$ Then $\mu(\phi_n,\phi_n) = {\rm const.} > 0$ for all $n \in {\bf N}$. We will show that $\mu(T\phi_n,T\phi_n)$ diverges for $n \to \infty$. We have \begin{eqnarray} \mu(T\phi_n,T\phi_n) & = & \langle A T\phi_n,T\phi_n \rangle \geq \int \overline{(T\phi_n)'}(T\phi_n)' \, dx \\ & \geq & \int (4x^2|\phi_n(x)|^2 + |\phi_n'(x)|^2)\, dx - \int 4 |x \phi_n'(x)\phi_n(x)|\,dx \,,\nonumber \end{eqnarray} where the primes indicate derivatives and we have used that $$ |(T\phi_n)'(x)|^2 = 4x^2|\phi_n(x)|^2 + |\phi_n'(x)|^2 + 4\cdot {\sf Im}(ix \overline{\phi_n}(x)\phi_n' (x))\,. $$ Using a substitution of variables, one can see that in the last term of (2.18) the positive integral grows like $n^2$ for large $n$, thus dominating eventually the negative integral which grows only like $n$. So $\mu(T\phi_n,T\phi_n) \to \infty$ for $n \to \infty$, showing that $T$ is not $\mu$-bounded. \\[10pt] {\bf Example 2.4} We give an example of a symplectic space $(S,\sigma)$, a $\mu \in {\sf pr}(S,\sigma)$ and a $\mu' \in {\sf pu}(S,\sigma)$, where $\mu$ dominates $\mu'$ and where there is a symplectomorphism $T$ of $(S,\sigma)$ which together with its inverse is $\mu$-bounded, but not $\mu'$-bounded.\\[6pt] We take $(S,\sigma)$ as in the previous example and write for each $\phi \in S$, $\phi_0 := {\sf Re}\phi$ and $\phi_1 := {\sf Im}\phi$. The real scalar product $\mu$ will be defined by $$ \mu(\phi,\psi) := \langle\phi_0,A\psi_0\rangle + \langle \phi_1, \psi_1 \rangle \,, \quad \phi,\psi \in S\,, $$ where the operator $A$ is the same as in the example before. Since its lower bound is $1$, $\mu$ dominates $\sigma$, and it is not difficult to see that $\mu$ is even primary. The real-linear scalar product $\mu'$ will be taken to be $$ \mu'(\phi,\psi) = {\sf Re}\langle \phi,\psi \rangle\,, \quad \phi,\psi \in S\,.$$ We know from the example above that $\mu' \in {\sf pu}(S,\sigma)$. Also, it is clear that $\mu'$ is dominated by $\mu$. Now consider the real-linear map $T: S \to S$ given by $$ T(\phi_0 + i\phi_1) := A^{-1/2} \phi_1 - i A^{1/2}\phi_0\,, \quad \phi \in S\,.$$ One checks easily that this map is bijective with $T^{-1} = - T$, and that $T$ preserves the symplectic form $\sigma$. Also, $||\,.\,||_{\mu}$ is preserved by $T$ since $$ \mu(T\phi,T\phi) = \langle \phi_1,\phi_1 \rangle + \langle A^{1/2}\phi_0,A^{1/2}\phi_0 \rangle = \mu(\phi,\phi)\,, \quad \phi \in S\,.$$ On the other hand, we have for each $\phi \in S$ $$ \mu'(T\phi,T\phi) = \langle \phi_1,A\phi_1 \rangle + \langle \phi_0, A^{-1}\phi_0 \rangle \,, $$ and this expression is not bounded by a ($\phi$-independent) constant times $\mu'(\phi,\phi)$, since $A$ is unbounded with respect to the $L^2$-norm. \newpage \section{The Algebraic Structure of Hadamard Vacuum Representations} \setcounter{equation}{0} ${}$ \\[20pt] {\bf 3.1 Summary of Notions from Spacetime-Geometry} \\[16pt] We recall that a spacetime manifold consists of a pair $(M,g)$, where $M$ is a smooth, paracompact, four-dimensional manifold without boundaries, and $g$ is a Lorentzian metric for $M$ with signature $(+ - -\, - )$. (Cf.\ [33,52,70], see these references also for further discussion of the notions to follow.) It will be assumed that $(M,g)$ is time-orientable, and moreover, globally hyperbolic. 
The latter means that $(M,g)$ possesses Cauchy-surfaces, where by a Cauchy-surface we always mean a {\it smooth}, spacelike hypersurface which is intersected exactly once by each inextendable causal curve in $M$. It can be shown [15,28] that this is equivalent to the statement that $M$ can be smoothly foliated in Cauchy-surfaces. Here, a foliation of $M$ in Cauchy-surfaces is a diffeomorphism $F: {\bf R} \times \Sigma \to M$, where $\Sigma$ is a smooth 3-manifold so that $F(\{t\} \times \Sigma)$ is, for each $t \in {\bf R}$, a Cauchy-surface, and the curves $t \mapsto F(t,q)$ are timelike for all $q \in\Sigma$. (One can even show that, if global hyperbolicity had been defined by requiring only the existence of a not necessarily smooth or spacelike Cauchy-surface (i.e.\ a topological hypersurface which is intersected exactly once by each inextendable causal curve), then it is still true that a globally hyperbolic spacetime can be smoothly foliated in Cauchy-surfaces, see [15,28].) We shall also be interested in ultrastatic globally hyperbolic spacetimes. A globally hyperbolic spacetime is said to be {\it ultrastatic} if a foliation $F : {\bf R} \times \Sigma \to M$ in Cauchy-surfaces can be found so that $F_*g$ has the form $dt^2 \oplus (- \gamma)$ with a complete ($t$-independent) Riemannian metric $\gamma$ on $\Sigma$. This particular foliation will then be called a {\it natural foliation} of the ultrastatic spacetime. (An ultrastatic spacetime may possess more than one natural foliation, think e.g.\ of Minkowski-spacetime.) The notation for the causal sets and domains of dependence will be recalled: Given a spacetime $(M,g)$ and ${\cal O} \subset M$, the set $J^{\pm}({\cal O})$ (causal future/past of ${\cal O}$) consists of all points $p \in M$ which can be reached by future/past directed causal curves emanating from ${\cal O}$. The set $D^{\pm}({\cal O})$ (future/past domain of dependence of ${\cal O}$) is defined as consisting of all $p \in J^{\pm}({\cal O})$ such that every past/future inextendible causal curve starting at $p$ intersects ${\cal O}$. One writes $J({\cal O}) := J^+({\cal O}) \cup J^-({\cal O})$ and $D({\cal O}) := D^+({\cal O}) \cup D^-({\cal O})$. They are called the {\it causal set}, and the {\it domain of dependence}, respectively, of ${\cal O}$. For ${\cal O} \subset M$, we denote by ${\cal O}^{\perp} := {\rm int}(M \backslash J({\cal O}))$ the {\it causal complement} of ${\cal O}$, i.e.\ the largest {\it open} set of points which cannot be connected to ${\cal O}$ by any causal curve. A set of the form ${\cal O}_G := {\rm int}\,D(G)$, where $G$ is a subset of some Cauchy-surface $\Sigma$ in $(M,g)$, will be referred to as the {\it diamond based on} $G$; we shall also say that $G$ is the {\it base} of ${\cal O}_G$. We note that if ${\cal O}_G$ is a diamond, then ${\cal O}_G^{\perp}$ is again a diamond, based on $\Sigma \backslash \overline{G}$. A diamond will be called {\it regular} if $G$ is an open, relatively compact subset of $\Sigma$ and if the boundary $\partial G$ of $G$ is contained in the union of finitely many smooth, two-dimensional submanifolds of $\Sigma$. Following [45], we say that an open neighbourhood $N$ of a Cauchy-surface $\Sigma$ in $(M,g)$ is a {\it causal normal neighbourhood} of $\Sigma$ if (1) $\Sigma$ is a Cauchy-surface for $N$, and (2) for each pair of points $p,q \in N$ with $p \in J^+(q)$, there is a convex normal neighbourhood ${\cal O} \subset M$ such that $J^-(p) \cap J^+(q) \subset {\cal O}$.
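To fix ideas, it is perhaps worth spelling out these notions in the simplest example (which plays no role in the proofs below): in Minkowski spacetime, take the Cauchy-surface $\Sigma = \{0\} \times {\bf R}^3$ and the open ball $G = \{\vec{x} \in \Sigma : |\vec{x}| < r\}$. An elementary computation of the domain of dependence then gives the open double cone $$ {\cal O}_G = {\rm int}\,D(G) = \{(t,\vec{x}) : |t| + |\vec{x}| < r\}\,, $$ with causal complement ${\cal O}_G^{\perp} = \{(t,\vec{x}) : |\vec{x}| - |t| > r\}$, which is precisely the diamond based on $\Sigma \backslash \overline{G}$; in particular $({\cal O}_G^{\perp})^{\perp} = {\cal O}_G$, and ${\cal O}_G$ is a regular diamond since $\partial G$ is a smooth two-sphere.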
Lemma 2.2 of [45] asserts the existence of causal normal neighbourhoods for any Cauchy-surface $\Sigma$. \\[20pt] {\bf 3.2 Some Structural Aspects of Quantum Field Theory in Curved Spacetime} \\[16pt] In the present subsection, we shall address some of the problems one faces in the formulation of quantum field theory in curved spacetime, and explain the notions of local definiteness, local primarity, and Haag-duality. In doing so, we follow our presentation in [67] quite closely. Standard general references related to the subsequent discussion are [26,31,45,71]. Quantum field theory in curved spacetime (QFT in CST, for short) means that one considers quantum fields propagating in a (classical) curved background spacetime manifold $(M,g)$. In general, such a spacetime need not possess any symmetries, and so one cannot tie the notion of ``particles'' or ``vacuum'' to spacetime symmetries, as one does in quantum field theory in Minkowski spacetime. Therefore, the problem of how to characterize the physical states arises. For the discussion of this problem, the setting of algebraic quantum field theory is particularly well suited. Let us thus summarize some of the relevant concepts of algebraic QFT in CST. Let a spacetime manifold $(M,g)$ be given. The observables of a quantum system (e.g.\ a quantum field) situated in $(M,g)$ then have the basic structure of a map ${\cal O} \to {\cal A(O)}$, which assigns to each open, relatively compact subset ${\cal O}$ of $M$ a $C^*$-algebra ${\cal A(O)}$,\footnote{ Throughout the paper, $C^*$-algebras are assumed to be unital, i.e.\ to possess a unit element, denoted by ${ 1}$. It is further assumed that the unit element is the same for all the ${\cal A(O)}$.} with the properties:\footnote{where $[{\cal A}({\cal O}_1),{\cal A}({\cal O}_2)] = \{A_1A_2 - A_2A_1 : A_j \in {\cal A}({\cal O}_j),\ j =1,2 \}$.} \begin{equation} {\it Isotony:}\quad \quad {\cal O}_1 \subset {\cal O}_2 \Rightarrow {\cal A}({\cal O}_1) \subset {\cal A}({\cal O}_2) \end{equation} \begin{equation} {\it Locality:} \quad \quad {\cal O}_1 \subset {\cal O}_2^{\perp} \Rightarrow [{\cal A}({\cal O}_1),{\cal A}({\cal O}_2)] = \{0 \} \,. \end{equation} A map ${\cal O} \to {\cal A(O)}$ having these properties is called a {\it net of local observable algebras} over $(M,g)$. We recall that the conditions of locality and isotony are motivated by the idea that each ${\cal A(O)}$ is the $C^*$-algebra formed by the observables which can be measured within the spacetime region ${\cal O}$ on the system. We refer to [31] and references given there for further discussion. The collection of all open, relatively compact subsets of $M$ is directed with respect to set-inclusion, and so we can, in view of (3.1), form the smallest $C^*$-algebra ${\cal A} := \overline{ \bigcup_{{\cal O}}{\cal A(O)}}^{||\,.\,||}$ which contains all local algebras ${\cal A(O)}$. For the description of a system we need not only observables but also states. The set ${\cal A}^{*+}_1$ of all positive, normalized linear functionals on ${\cal A}$ is mathematically referred to as the set of {\it states} on ${\cal A}$, but not all elements of ${\cal A}^{*+}_1$ represent physically realizable states of the system. Therefore, given a local net of observable algebras ${\cal O} \to {\cal A(O)}$ for a physical system over $(M,g)$, one must specify the set of physically relevant states ${\cal S}$, which is a suitable subset of ${\cal A}^{*+}_1$.
We have already mentioned in Chapter 2 that every state $\omega \in {\cal A}^{*+}_1$ determines canonically its GNS representation $({\cal H}_{\omega},\pi_{\omega},\Omega_{\omega})$ and thereby induces a net of von Neumann algebras (operator algebras on ${\cal H}_{\omega}$) $$ {\cal O} \to {\cal R}_{\omega}({\cal O}) := \pi_{\omega}({\cal A}({\cal O}))^- \,. $$ Some of the mathematical properties of the GNS representations, and of the induced nets of von Neumann algebras, of states $\omega$ on ${\cal A}$ can naturally be interpreted physically. Thus one obtains constraints on the states $\omega$ which are to be viewed as physical states. Following this line of thought, Haag, Narnhofer and Stein [32] formulated what they called the ``principle of local definiteness'', consisting of the following three conditions to be obeyed by any collection ${\cal S}$ of physical states. \\[10pt] {\bf Local Definiteness:} ${}\ \ \bigcap_{{\cal O} \owns p} {\cal R}_{\omega}({\cal O}) = {\bf C} \cdot { 1}$ for all $\omega \in {\cal S}$ and all $p \in M$. \\[6pt] {\bf Local Primarity:} \ \ For each $\omega \in {\cal S}$, ${\cal R}_{\omega} ({\cal O})$ is a factor. \\[6pt] {\bf Local Quasiequivalence:} For each pair $\omega_1,\omega_2 \in {\cal S}$ and each relatively compact, open ${\cal O} \subset M$, the representations $\pi_{\omega_1} | {\cal A(O)}$ and $\pi_{\omega_2} | {\cal A(O)}$ of ${\cal A(O)}$ are quasiequivalent. \\[10pt] {\it Remarks.} (i) We recall (cf.\ the first Remark in Section 2) that ${\cal R}_{\omega}({\cal O})$ is a factor if ${\cal R}_{\omega}({\cal O}) \cap {\cal R}_{\omega} ({\cal O})' = {\bf C} \cdot { 1}$ where the prime means taking the commutant. We have not stated in the formulation of local primarity for which regions ${\cal O}$ the algebra ${\cal R}_{\omega}({\cal O})$ is required to be a factor. The regions ${\cal O}$ should be taken from a class of subsets of $M$ which forms a base for the topology. \\[6pt] (ii) Quasiequivalence of representations means unitary equivalence up to multiplicity. Another characterization of quasiequivalence is to say that the folia of the representations coincide, where the {\it folium} of a representation $\pi$ is defined as the set of all $\omega \in {\cal A}^{*+}_1$ which can be represented as $\omega(A) = tr(\rho\,\pi(A))$ with a density matrix $\rho$ on the representation Hilbertspace of $\pi$. \\[6pt] (iii) Local definiteness and quasiequivalence together express that physical states have finite (spatio-temporal) energy-density with respect to each other, and local primarity and quasiequivalence rule out local macroscopic observables and local superselection rules. We refer to [31] for further discussion and background material. A further, important property which one expects to be satisfied for physical states $\omega \in {\cal S}$ whose GNS representations are irreducible \footnote{It is easy to see that, in the presence of local primarity, Haag-duality will be violated if $\pi_{\omega}$ is not irreducible.} is \\[10pt] {\bf Haag-Duality:} \ \ ${\cal R}_{\omega}({\cal O}^{\perp})' = {\cal R}_{\omega}({\cal O})$, \\ which should hold for the causally complete regions ${\cal O}$, i.e.\ those satisfying $({\cal O}^{\perp})^{\perp} = {\cal O}$, where ${\cal R}_{\omega}({\cal O}^{\perp})$ is defined as the von Neumann algebra generated by all the ${\cal R}_{\omega} ({\cal O}_1)$ so that $\overline{{\cal O}_1} \subset {\cal O}^{\perp}$.
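The claim made in the last footnote can be verified as follows. Suppose that Haag-duality holds for $\omega$ and that ${\cal O}$ is causally complete. Since both ${\cal R}_{\omega}({\cal O})$ and ${\cal R}_{\omega}({\cal O}^{\perp})$ are generated by subalgebras of $\pi_{\omega}({\cal A})$, one has $$ \pi_{\omega}({\cal A})' \ \subset \ {\cal R}_{\omega}({\cal O})' \cap {\cal R}_{\omega}({\cal O}^{\perp})' \ = \ {\cal R}_{\omega}({\cal O})' \cap {\cal R}_{\omega}({\cal O})\,, $$ where Haag-duality was used in the second step. Hence, if ${\cal R}_{\omega}({\cal O})$ is a factor, then $\pi_{\omega}({\cal A})' = {\bf C}\cdot 1$, i.e.\ $\pi_{\omega}$ is irreducible.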
\\[10pt] We comment that Haag-duality means that the von Neumann algebra ${\cal R}_{\omega}({\cal O})$ of local observables is maximal in the sense that no further observables can be added without violating the condition of locality. It is worth mentioning here that the condition of Haag-duality plays an important role in the theory of superselection sectors in algebraic quantum field theory in Minkowski spacetime [31,59]. For local nets of observables generated by Wightman fields on Minkowski spacetime it follows from the results of Bisognano and Wichmann [4] that a weaker condition of ``wedge-duality'' is always fulfilled, which allows one to pass to a new, potentially larger local net (the ``dual net'') which satisfies Haag-duality. In quantum field theory in Minkowski-spacetime where one is given a vacuum state $\omega_0$, one can define the set of physical states ${\cal S}$ simply as the set of all states on ${\cal A}$ which are locally quasiequivalent (i.e., the GNS representations of the states are locally quasiequivalent to the vacuum-representation) to $\omega_0$. It is obvious that local quasiequivalence then holds for ${\cal S}$. Also, local definiteness holds in this case, as was proved by Wightman [72]. If Haag-duality holds in the vacuum representation (which, as indicated above, can be assumed to hold quite generally), then it does not follow automatically that all pure states locally quasiequivalent to $\omega_0$ will also have GNS representations fulfilling Haag-duality; however, it follows once some regularity conditions are satisfied which have been checked in certain quantum field models [19,61]. So far there seems to be no general physically motivated criterion enforcing local primarity of a quantum field theory in algebraic formulation in Minkowski spacetime. But it is known that many quantum field theoretical models satisfy local primarity. For QFT in CST we in general do not know what a vacuum state is and so ${\cal S}$ cannot be defined in the same way as just described. Yet in some cases (for some quantum field models) there may be a set ${\cal S}_0 \subset {\cal A}^{*+}_1$ of distinguished states, and if this class of states satisfies the four conditions listed above, then the set ${\cal S}$, defined as consisting of all states $\omega_1 \in {\cal A}^{*+}_1$ which are locally quasiequivalent to any (and hence all) $\omega \in {\cal S}_0$, is a good candidate for the set of physical states. For the free scalar Klein-Gordon field (KG-field) on a globally hyperbolic spacetime, the following classes of states have been suggested as distinguished, physically reasonable states \footnote{The following list is not meant to be complete, it comprises some prominent families of states of the KG-field over a generic class of spacetimes for which mathematically sound results are known. Likewise, the indicated references are by no means exhaustive.} \begin{itemize} \item[(1)] (quasifree) states fulfilling local stability [3,22,31,32] \item[(2)] (quasifree) states fulfilling the wave front set (or microlocal) spectrum condition [6,47,55] \item[(3)] quasifree Hadamard states [12,68,45] \item[(4)] adiabatic vacua [38,48,53] \end{itemize} The list is ordered in such a way that the less restrictive condition precedes the stronger one. There are a couple of comments to be made here.
First of all, the specifications (3) and (4) make use of the information that one deals with the KG-field (or at any rate, a free field obeying a linear equation of motion of hyperbolic character), while the conditions (1) and (2) do not require such input and are applicable to general -- possibly interacting -- quantum fields over curved spacetimes. (It should however be mentioned that only for the KG-field is (2) known to be stronger than (1). The relation between (1) and (2) for more general theories is not settled.) The conditions imposed on the classes of states (1), (2) and (3) are related in that they are ultralocal remnants of the spectrum condition requiring a certain regularity of the short distance behaviour of the respective states which can be formulated in generic spacetimes. The class of states (4) is more special and can only be defined for the KG-field (or other linear fields) propagating in Robertson-Walker-type spacetimes. Here a distinguished choice of a time-variable can be made, and the restriction imposed on adiabatic vacua is a regularity condition on their spectral behaviour with respect to that special choice of time. (A somewhat stronger formulation of local stability has been proposed in [34].) It has been found by Radzikowski [55] that for quasifree states of the KG-field over generic globally hyperbolic spacetimes the classes (2) and (3) coincide. The microlocal spectrum condition is further refined and applied in [6,47]. Recently it was proved by Junker [38] that adiabatic vacua of the KG-field in Robertson-Walker spacetimes fulfill the microlocal spectrum condition and thus are, in fact, quasifree Hadamard states. The notion of the microlocal spectrum condition and the just mentioned results related to it draw on pseudodifferential operator techniques, particularly the notion of the wave front set, see [20,36,37]. Quasifree Hadamard states of the KG-field (see definition in Sec.\ 3.4 below) have been investigated for quite some time. One of the early studies of these states is [12]. The importance of these states, especially in the context of the semiclassical Einstein equation, is stressed in [68]. Other significant references include [24,25] and, in particular, [45] where, apparently for the first time, a satisfactory definition of the notion of a globally Hadamard state is given, cf.\ Section 3.4 for more details. In [66] it is proved that the class of quasifree Hadamard states of the KG-field fulfills local quasiequivalence in generic globally hyperbolic spacetimes and local definiteness, local primarity and Haag-duality for the case of ultrastatic globally hyperbolic spacetimes. As was outlined in the beginning, the purpose of the present chapter is to obtain these latter results also for arbitrary globally hyperbolic spacetimes which are not necessarily ultrastatic. It turns out that some of our previous results can be sharpened, e.g.\ the local quasiequivalence specializes in most cases to local unitary equivalence, cf.\ Thm.\ 3.6. For a couple of other results about the algebraic structure of the KG-field as well as other fields over curved spacetimes we refer to [2,6,15,16,17,40,41,46,63,64,65,66,74]. \\[24pt] {\bf 3.3 The Klein-Gordon Field} \\[18pt] In the present section we summarize the quantization of the classical KG-field over a globally hyperbolic spacetime in the $C^*$-algebraic formalism. This follows in major parts the work of Dimock [16], cf.\ also references given there. Let $(M,g)$ be a globally hyperbolic spacetime.
The KG-equation with potential term $r$ is \begin{equation} (\nabla^a \nabla_a + r) \varphi = 0 \end{equation} where $\nabla$ is the Levi-Civita derivative of the metric $g$, the potential function $r \in C^{\infty}(M,{\bf R})$ is arbitrary but fixed, and the sought-for solutions $\varphi$ are smooth and real-valued. Making use of the fact that $(M,g)$ is globally hyperbolic and drawing on earlier results by Leray, it is shown in [16] that there are two uniquely determined, continuous \footnote{ With respect to the usual locally convex topologies on $C_0^{\infty}(M,{\bf R})$ and $C^{\infty}(M,{\bf R})$, cf.\ [13].} linear maps $E^{\pm}: C_0^{\infty}(M,{\bf R}) \to C^{\infty}(M,{\bf R})$ with the properties $$ (\nabla^a \nabla_a + r)E^{\pm}f = f = E^{\pm}(\nabla^a \nabla_a + r)f\,,\quad f \in C_0^{\infty}(M,{\bf R})\,, $$ and $$ {\rm supp}(E^{\pm}f) \subset J^{\pm}({\rm supp}(f))\,,\quad f \in C_0^{\infty}(M,{\bf R})\,. $$ The maps $E^{\pm}$ are called the advanced(+)/retarded(--) fundamental solutions of the KG-equation with potential term $r$ in $(M,g)$, and their difference $E := E^+ - E^-$ is referred to as the {\it propagator} of the KG-equation. One can moreover show that the Cauchy-problem for the KG-equation is well-posed. That is to say, if $\Sigma$ is any Cauchy-surface in $(M,g)$, and $u_0 \oplus u_1 \in C_0^{\infty}(\Sigma,{\bf R}) \oplus C_0^{\infty}(\Sigma,{\bf R})$ is any pair of Cauchy-data on $\Sigma$, then there exists precisely one smooth solution $\varphi$ of the KG-equation (3.3) having the property that \begin{equation} P_{\Sigma}(\varphi) := \varphi | \Sigma \oplus n^a \nabla_a \varphi|\Sigma = u_0 \oplus u_1\,. \end{equation} The vectorfield $n^a$ in (3.4) is the future-pointing unit normalfield of $\Sigma$. Furthermore, one has ``finite propagation speed'', i.e.\ when the supports of $u_0$ and $u_1$ are contained in a subset $G$ of $\Sigma$, then ${\rm supp}(\varphi) \subset J(G)$. Notice that compactness of $G$ implies that $J(G) \cap \Sigma'$ is compact for any Cauchy-surface $\Sigma'$. The well-posedness of the Cauchy-problem is a consequence of the classical energy-estimate for solutions of second order hyperbolic partial differential equations, cf.\ e.g.\ [33]. To formulate it, we introduce further notation. Let $\Sigma$ be a Cauchy-surface for $(M,g)$, and $\gamma_{\Sigma}$ the Riemannian metric, induced by the ambient Lorentzian metric, on $\Sigma$. Then denote the Laplacian operator on $C_0^{\infty}(\Sigma,{\bf R})$ corresponding to $\gamma_{\Sigma}$ by $\Delta_{\gamma_{\Sigma}}$, and define the {\it classical energy scalar product} on $C_0^{\infty}(\Sigma,{\bf R}) \oplus C_0^{\infty}(\Sigma,{\bf R})$ by \begin{equation} \mu_{\Sigma}^E(u_0 \oplus u_1, v_0 \oplus v_1 ) := \int_{\Sigma} (u_0 (- \Delta_{\gamma_{\Sigma}} + 1)v_0 + u_1 v_1) \, d\eta_{\Sigma} \,, \end{equation} where $d\eta_{\Sigma}$ is the metric-induced volume measure on $\Sigma$. As a special case of the energy estimate presented in [33] one then obtains \begin{Lemma} (Classical energy estimate for the KG-field.) Let $\Sigma_1$ and $\Sigma_2$ be a pair of Cauchy-surfaces in $(M,g)$ and $G$ a compact subset of $\Sigma_1$.
Then there are two positive constants $c_1$ and $c_2$ so that there holds the estimate \begin{equation} c_1\,\mu^E_{\Sigma_1}(P_{\Sigma_1}(\varphi),P_{\Sigma_1} (\varphi)) \leq \mu^E_{\Sigma_2}(P_{\Sigma_2}(\varphi), P_{\Sigma_2}(\varphi)) \leq c_2 \, \mu^E_{\Sigma_1}(P_{\Sigma_1}(\varphi),P_{\Sigma_1} (\varphi)) \end{equation} for all solutions $\varphi$ of the KG-equation (3.3) which have the property that the supports of the Cauchy-data $P_{\Sigma_1}(\varphi)$ are contained in $G$. \footnote{ {\rm The formulation given here is to some extent more general than the one appearing in [33] where it is assumed that $\Sigma_1$ and $\Sigma_2$ are members of a foliation. However, the more general formulation can be reduced to that case.}} \end{Lemma} We shall now indicate that the space of smooth solutions of the KG-equation (3.3) has the structure of a symplectic space, locally as well as globally, which comes in several equivalent versions. To be more specific, observe first that the Cauchy-data space $$ {\cal D}_{\Sigma} := C_0^{\infty}(\Sigma,{\bf R}) \oplus C_0^{\infty}(\Sigma,{\bf R}) $$ of an arbitrary given Cauchy-surface $\Sigma$ in $(M,g)$ carries a symplectic form $$ \delta_{\Sigma}(u_0 \oplus u_1, v_0 \oplus v_1) := \int_{\Sigma}(u_0v_1 - v_0u_1)\,d\eta_{\Sigma}\,.$$ It will also be observed that this symplectic form is dominated by the classical energy scalar product $\mu^E_{\Sigma}$. Another symplectic space is $S$, the set of all real-valued $C^{\infty}$-solutions $\varphi$ of the KG-equation (3.3) with the property that, given any Cauchy-surface $\Sigma$ in $(M,g)$, their Cauchy-data $P_{\Sigma}(\varphi)$ have compact support on $\Sigma$. The symplectic form on $S$ is given by $$ \sigma(\varphi,\psi) := \int_{\Sigma}(\varphi n^a \nabla_a\psi -\psi n^a \nabla_a \varphi)\,d\eta_{\Sigma} $$ which is independent of the choice of the Cauchy-surface $\Sigma$ on the right hand side over which the integral is formed; $n^a$ is again the future-pointing unit normalfield of $\Sigma$. One clearly finds that for each Cauchy-surface $\Sigma$ the map $P_{\Sigma} : S \to {\cal D}_{\Sigma}$ establishes a symplectomorphism between the symplectic spaces $(S,\sigma)$ and $({\cal D}_{\Sigma},\delta_{\Sigma})$. A third symplectic space equivalent to the previous ones is obtained as the quotient $K := C_0^{\infty}(M,{\bf R}) /{\rm ker}(E)$ with symplectic form $$ \kappa([f],[h]) := \int_M f(Eh)\,d\eta \,, \quad f,h \in C_0^{\infty}(M,{\bf R})\,, $$ where $[\,.\,]$ is the quotient map $C_0^{\infty}(M,{\bf R}) \to K$ and $d\eta$ is the metric-induced volume measure on $M$. Then define for any open subset ${\cal O} \subset M$ with compact closure the set $K({\cal O}) := [C_0^{\infty}({\cal O},{\bf R})]$. One can see that the space $K$ has naturally the structure of an isotonous, local net ${\cal O} \to K({\cal O})$ of subspaces, where locality means that the symplectic form $\kappa([f],[h])$ vanishes for $[f] \in K({\cal O})$ and $[h] \in K({\cal O}_1)$ whenever ${\cal O}_1 \subset {\cal O}^{\perp}$. Dimock has proved in [16 (Lemma A.3)] that moreover there holds \begin{equation} K({\cal O}_G) \subset K(N) \end{equation} for all open neighbourhoods $N$ (in $M$) of $G$, whenever ${\cal O}_G$ is a diamond. Using this, one obtains that the map $(K,\kappa) \to (S,\sigma)$ given by $[f] \mapsto Ef$ is surjective, and by Lemma A.1 in [16], it is even a symplectomorphism. Clearly, $(K({\cal O}_G),\kappa|K({\cal O}_G))$ is a symplectic subspace of $(K,\kappa)$ for each diamond ${\cal O}_G$ in $(M,g)$.
For any such diamond ${\cal O}_G$ one then obtains, upon viewing it (or its connected components separately), equipped with the appropriate restriction of the spacetime metric $g$, as a globally hyperbolic spacetime in its own right, local versions of the just introduced symplectic spaces and the symplectomorphisms between them. More precisely, if we denote by $S({\cal O}_G)$ the set of all smooth solutions of the KG-equation (3.3) with the property that their Cauchy-data on $\Sigma$ are compactly supported in $G$, then the map $P_{\Sigma}$ restricts to a symplectomorphism $(S({\cal O}_G), \sigma|S({\cal O}_G)) \to ({\cal D}_{G},\delta_{G})$, $\varphi \mapsto P_{\Sigma}(\varphi)$. Likewise, the symplectomorphism $[f] \mapsto Ef$ restricts to a symplectomorphism $(K({\cal O}_G),\kappa|K({\cal O}_G)) \to (S({\cal O}_G),\sigma|S({\cal O}_G))$. To the symplectic space $(K,\kappa)$ we can now associate its Weyl-algebra ${\cal A}[K,\kappa]$, cf.\ Chapter 2. Using the aforementioned local net-structure of the symplectic space $(K,\kappa)$, one arrives at the following result. \begin{Proposition} {\rm [16]}. Let $(M,g)$ be a globally hyperbolic spacetime, and $(K,\kappa)$ the symplectic space, constructed as above, for the KG-eqn.\ with smooth potential term $r$ on $(M,g)$. Its Weyl-algebra ${\cal A}[K,\kappa]$ will be called the {\em Weyl-algebra of the KG-field with potential term $r$ over} $(M,g)$. Define for each open, relatively compact ${\cal O} \subset M$, the set ${\cal A}({\cal O})$ as the $C^*$-subalgebra of ${\cal A}[K,\kappa]$ generated by all the Weyl-operators $W([f])$, $[f] \in K({\cal O})$. Then ${\cal O} \to {\cal A}({\cal O})$ is a net of $C^*$-algebras fulfilling isotony (3.1) and locality (3.2), and moreover {\em primitive causality}, i.e.\ \begin{equation} {\cal A}({\cal O}_G) \subset {\cal A}(N) \end{equation} for all neighbourhoods $N$ (in $M$) of $G$, whenever ${\cal O}_G$ is a (relatively compact) diamond. \end{Proposition} It is worth recalling (cf.\ [5]) that the Weyl-algebras corresponding to symplectically equivalent spaces are canonically isomorphic in the following way: Let $W(x)$, $x \in K$ denote the Weyl-generators of ${\cal A}[K,\kappa]$ and $W_S(\varphi)$, $\varphi \in S$, the Weyl-generators of ${\cal A}[S,\sigma]$. Furthermore, let $T$ be a symplectomorphism between $(K,\kappa)$ and $(S,\sigma)$. Then there is a uniquely determined $C^*$-algebraic isomorphism $\alpha_T : {\cal A}[K,\kappa] \to {\cal A}[S,\sigma]$ given by $\alpha_T(W(x)) = W_S(Tx)$, $x \in K$. This shows that if we had associated e.g.\ with $(S,\sigma)$ the Weyl-algebra ${\cal A}[S,\sigma]$ as the algebra of quantum observables of the KG-field over $(M,g)$, we would have obtained an equivalent net of observable algebras (connected to the previous one by a net isomorphism, see [3,16]), rendering the same physical information. \\[24pt] {\bf 3.4 Hadamard States} \\[18pt] We have indicated above that quasifree Hadamard states are distinguished by their short-distance behaviour which allows the definition of expectation values of energy-momentum observables with reasonable properties [26,68,69,71]. If $\omega_{\mu}$ is a quasifree state on the Weyl-algebra ${\cal A}[K,\kappa]$, then we call $$ \lambda(x,y) := \mu(x,y) + \frac{i}{2}\kappa(x,y)\,, \quad x,y \in K\,, $$ its {\it two-point function} and $$ \Lambda(f,h) := \lambda([f],[h])\,, \quad f,h \in C_0^{\infty}(M,{\bf R})\,, $$ its {\it spatio-temporal} two-point function.
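For orientation, we recall how $\lambda$ arises concretely from $\omega_{\mu}$, assuming the Weyl-relations normalized as $W(x)W(y) = {\rm e}^{-\frac{i}{2}\kappa(x,y)}\,W(x+y)$ and the quasifree state given by the standard generating functional $\omega_{\mu}(W(x)) = {\rm e}^{-\mu(x,x)/2}$, $x \in K$ (a common convention for quasifree states): a short computation then yields $$ \lambda(x,y) = - \frac{\partial}{\partial t}\frac{\partial}{\partial s}\, \omega_{\mu}(W(tx)W(sy))\,\Big|_{t = s = 0}\,, \quad x,y \in K\,, $$ so that $\lambda$ is just the expectation value of the product of two field operators in the GNS-representation of $\omega_{\mu}$.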
In Chapter 2 we have seen that a quasifree state is entirely determined through specifying $\mu \in {\sf q}(K,\kappa)$, which is equivalent to the specification of the two-point function $\lambda$. Sometimes the notation $\lambda_{\omega}$ or $\lambda_{\mu}$ will be used to indicate the quasifree state $\omega$ or the dominating scalar product $\mu$ which is determined by $\lambda$. For a quasifree Hadamard state, the spatio-temporal two-point function is of a special form, called Hadamard form. The definition of Hadamard form which we give here follows that due to Kay and Wald [45]. Let $N$ be a causal normal neighbourhood of a Cauchy-surface $\Sigma$ in $(M,g)$. Then a smooth function $\chi : N \times N \to [0,1]$ is called {\it $N$-regularizing} if it has the following property: There is an open neighbourhood, $\Omega_*$, in $N \times N$ of the set of pairs of causally related points in $N$ such that $\overline{\Omega_*}$ is contained in a set $\Omega$ to be described presently, and $\chi \equiv 1$ on $\Omega_*$ while $\chi \equiv 0$ outside of $\overline{\Omega}$. Here, $\Omega$ is an open neighbourhood in $M \times M$ of the set of those $(p,q) \in M \times M$ which are causally related and have the property that (1) $J^+(p) \cap J^-(q)$ and $J^+(q) \cap J^-(p)$ are contained within a convex normal neighbourhood, and (2) $s(p,q)$, the square of the geodesic distance between $p$ and $q$, is a well-defined, smooth function on $\Omega$. (One observes that there are always sets $\Omega$ of this type which contain a neighbourhood of the diagonal in $M \times M$, and that an $N$-regularizing function depends on the choice of the pair of sets $\Omega_*,\Omega$ with the stated properties.) It is not difficult to check that $N$-regularizing functions always exist for any causal normal neighbourhood; a proof of that is e.g.\ given in [55]. Then denote by $U$ the square root of the Van Vleck-Morette determinant, and by $v_m$, $m \in {\bf N}_0$ the sequence determined by the Hadamard recursion relations for the KG-equation (3.3), see [23,27] and also [30] for their definition. They are all smooth functions on $\Omega$.\footnote{For any choice of $\Omega$ with the properties just described.} Now set for $n \in {\bf N}$, $$ V^{(n)}(p,q) := \sum_{m = 0}^n v_m(p,q)(s(p,q))^m \,, \quad (p,q) \in \Omega\,, $$ and, given a smooth time-function $T: M \to {\bf R}$ increasing towards the future, define for all $\epsilon > 0$ and $(p,q) \in \Omega$, $$ Q_T(p,q;\epsilon) := s(p,q) - 2 i\epsilon (T(p) - T(q)) - \epsilon^2 \,,$$ and $$G^{T,n}_{\epsilon}(p,q) := \frac{1}{4\pi^2}\left( \frac{U(p,q)}{Q_T(p,q;\epsilon)} + V^{(n)}(p,q)\ln(Q_T(p,q; \epsilon)) \right) \,, $$ where $\ln$ is the principal branch of the logarithm. With this notation, one can give the \begin{Definition}{\rm [45]}.
A ${\bf C}$-valued bilinear form $\Lambda$ on $C_0^{\infty}(M,{\bf R})$ is called an {\em Hadamard form} if, for a suitable choice of a causal normal neighbourhood $N$ of some Cauchy-surface $\Sigma$, and for suitable choices of an $N$-regularizing function $\chi$ and a future-increasing time-function $T$ on $M$, there exists a sequence $H^{(n)} \in C^n(N \times N)$, so that \begin{equation} \Lambda(f,h) = \lim_{\epsilon \to 0+} \int_{M\times M} \Lambda^{T,n}_{\epsilon}(p,q)f(p)h(q)\, d\eta(p) \,d\eta(q) \end{equation} for all $f,h \in C_0^{\infty}(N,{\bf R})$, where \footnote{ The set $\Omega$ on which the functions forming $G^{T,n}_{\epsilon}$ are defined and smooth is here taken to coincide with the $\Omega$ with respect to which $\chi$ is defined.} \begin{equation} \Lambda^{T,n}_{\epsilon}(p,q) := \chi(p,q)G^{T,n}_{\epsilon}(p,q) + H^{(n)}(p,q)\,, \end{equation} and if, moreover, $\Lambda$ is a global bi-parametrix of the KG-equation (3.3), i.e.\ it satisfies $$ \Lambda((\nabla^a\nabla_a + r)f,h) = B_1(f,h)\quad {\it and} \quad \Lambda(f,(\nabla^a\nabla_a + r)h) = B_2(f,h) $$ for all $f,h \in C_0^{\infty}(M)$, where $B_1$ and $B_2$ are given by smooth integral kernels on $M \times M$.\footnote{ We point out that statement (b) of Prop.\ 3.4 is wrong if the assumption that $\Lambda$ is a global bi-parametrix is not made. In this respect, Def.\ C.1 of [66] is imprecisely formulated as the said assumption is not stated. There, like in several other references, it has been implicitly assumed that $\Lambda$ is a two-point function and thus a bi-solution of (3.3), i.e.\ a bi-parametrix with $B_1 = B_2 \equiv 0$.} \end{Definition} Based on results of [24,25], it is shown in [45] that this is a reasonable definition. The findings of these works will be collected in the following \begin{Proposition} ${}$\\[6pt] (a) If $\Lambda$ is of Hadamard form on a causal normal neighbourhood $ N$ of a Cauchy-surface $\Sigma$ for some choice of a time-function $T$ and some $N$-regularizing function $\chi$ (i.e.\ (3.9),(3.10) hold with suitable $H^{(n)} \in C^n(N \times N)$), then so it is for any other time-function $T'$ and $N$-regularizing $\chi'$. (This means that these changes can be compensated by choosing another sequence $H'^{(n)} \in C^n( N \times N)$.) \\[6pt] (b) (Causal Propagation Property of the Hadamard Form)\\ If $\Lambda$ is of Hadamard form on a causal normal neighbourhood $ N$ of some Cauchy-surface $\Sigma$, then it is of Hadamard form in any causal normal neighbourhood $ N'$ of any other Cauchy-surface $\Sigma'$. \\[6pt] (c) Any $\Lambda$ of Hadamard form is a regular kernel distribution on $C_0^{\infty}(M \times M)$. \\[6pt] (d) There exist pure, quasifree Hadamard states (these will be referred to as {\em Hadamard vacua}) on the Weyl-algebra ${\cal A}[K,\kappa]$ of the KG-field in any globally hyperbolic spacetime. The family of quasifree Hadamard states on ${\cal A}[K,\kappa]$ spans an infinite-dimensional subspace of the continuous dual space of ${\cal A}[K,\kappa]$.
\\[6pt] (e) The dominating scalar products $\mu$ on $K$ arising from quasifree Hadamard states $\omega_{\mu}$ induce locally the same topology, i.e.\ if $\mu$ and $\mu'$ are arbitrary such scalar products and ${\cal O} \subset M$ is open and relatively compact, then there are two positive constants $a,a'$ such that $$ a\, \mu([f],[f]) \leq \mu'([f],[f]) \leq a'\,\mu([f],[f])\,, \quad [f] \in K({\cal O})\,.$$ \end{Proposition} {\it Remark.} Observe that this definition of Hadamard form rules out the occurrence of spacelike singularities, meaning that the Hadamard form $\Lambda$ is, when tested on functions $f,h$ in (3.9) whose supports are acausally separated, given by a $C^{\infty}$-kernel. For that reason, the definition of Hadamard form as stated above is also called {\it global} Hadamard form (cf.\ [45]). A weaker definition of Hadamard form would be to prescribe (3.9),(3.10) only for sets $N$ which, e.g., are members of an open covering of $M$ by convex normal neighbourhoods, and thereby to require the Hadamard form locally. In the case that $\Lambda$ is the spatio-temporal two-point function of a state on ${\cal A}[K,\kappa]$ and thus dominates the symplectic form $\kappa$ ($|\kappa([f],[h])|^2 \leq 4\,\Lambda(f,f)\Lambda(h,h)$), it was recently proved by Radzikowski that if $\Lambda$ is locally of Hadamard form, then it is already globally of Hadamard form [56]. However, if $\Lambda$ does not dominate $\kappa$, this need not hold [29,51,56]. Radzikowski's proof makes use of a characterization of Hadamard forms in terms of their wave front sets which was mentioned above. A definition of Hadamard form which is less technical in appearance has recently been given in [44]. We should add that the usual Minkowski-vacuum of the free scalar field with constant, non-negative potential term is, of course, an Hadamard vacuum. This holds, more generally, also for ultrastatic spacetimes, see below. \\[10pt] {\it Notes on the proof of Proposition 3.4.} The property (a) is proved in [45]. The argument for (b) is essentially contained in [25] and in the generality stated here it is completed in [45]. An alternative proof using the ``propagation of singularities theorem'' for hyperbolic differential equations is presented in [55]. Also property (c) is proved in [45 (Appendix B)] (cf.\ [66 (Prop.\ C.2)]). The existence of Hadamard vacua (d) is proved in [24] (cf.\ also [45]); the second part of the statement has been observed in [66] (and, in slightly different formulation, already in [24]). Statement (e) has been shown to hold in [66 (Prop.\ 3.8)]. \\[10pt] In order to prepare the formulation of the next result, in which we will apply our result of Chapter 2, we need to collect some more notation. Suppose that we are given a quasifree state $\omega_{\mu}$ on the Weyl-algebra ${\cal A}[K,\kappa]$ of the KG-field over some globally hyperbolic spacetime $(M,g)$, and that $\Sigma$ is a Cauchy-surface in that spacetime. Then we denote by $\mu_{\Sigma}$ the dominating scalar product on $({\cal D}_{\Sigma},\delta_{\Sigma})$ which is, using the symplectomorphism between $(K,\kappa)$ and $({\cal D}_{\Sigma},\delta_{\Sigma})$, induced by the dominating scalar product $\mu$ on $(K,\kappa)$, i.e.\ \begin{equation} \mu_{\Sigma}(P_{\Sigma}Ef,P_{\Sigma}Eh) = \mu([f],[h])\,, \quad [f],[h] \in K\,. \end{equation} Conversely, to any $\mu_{\Sigma} \in {\sf q}({\cal D}_{\Sigma},\delta_{\Sigma})$ there corresponds via (3.11) a $\mu \in {\sf q}(K,\kappa)$.
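As a simple illustration of this correspondence, added here for orientation (it results from a standard Fourier-transform computation which we do not reproduce): for the KG-field with constant potential term $r \equiv m^2 > 0$ over Minkowski spacetime and $\Sigma = \{0\} \times {\bf R}^3$, the usual Minkowski-vacuum induces via (3.11) on $({\cal D}_{\Sigma},\delta_{\Sigma})$ the dominating scalar product $$ \mu_{\Sigma}(u_0 \oplus u_1,v_0 \oplus v_1) = \frac{1}{2} \left( \langle u_0,(-\Delta + m^2)^{1/2}v_0 \rangle + \langle u_1,(-\Delta + m^2)^{-1/2}v_1 \rangle \right) \,, $$ where $\langle\,.\,,\,.\,\rangle$ denotes the scalar product of $L^2({\bf R}^3,d^3x)$; this ``frequency-splitting'' form will reappear, for general ultrastatic spacetimes, in eqn.\ (3.12) below.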
Next, consider a complete Riemannian manifold $(\Sigma,\gamma)$, with corresponding Laplacian $\Delta_{\gamma}$, and as before, consider the operator $ -\Delta_{\gamma} +1$ on $C_0^{\infty}(\Sigma,{\bf R})$. Owing to the completeness of $(\Sigma,\gamma)$ this operator is, together with all its powers, essentially selfadjoint in $L^2_{{\bf R}}(\Sigma,d\eta_{\gamma})$ [10], and we denote its selfadjoint extension by $A_{\gamma}$. Then one can introduce the {\it Sobolev scalar products} of $m$-th order, $$ \langle u,v \rangle_{\gamma,m} := \langle u, A_{\gamma}^m v \rangle\,, \quad u,v \in C_0^{\infty}(\Sigma,{\bf R}),\ m \in {\bf R}\,, $$ where $\langle\,.\,,\,.\,\rangle$ on the right hand side is the scalar product of $L^2_{{\bf R}}(\Sigma, d\eta_{\gamma})$. The completion of $C_0^{\infty}(\Sigma,{\bf R})$ in the topology of $\langle\,.\,,\,.\,\rangle_{\gamma,m}$ will be denoted by $H_m(\Sigma,\gamma)$. It turns out that the topology of $H_m(\Sigma,\gamma)$ is locally independent of the complete Riemannian metric $\gamma$, and that composition with diffeomorphisms and multiplication with smooth, compactly supported functions are continuous operations on these Sobolev spaces. (See Appendix B for precise formulations of these statements.) Therefore, whenever $G \subset \Sigma$ is open and relatively compact, the topology which $\langle \,.\,,\,.\, \rangle_{\gamma,m}$ induces on $C_0^{\infty}(G,{\bf R})$ is independent of the particular complete Riemannian metric $\gamma$, and we shall refer to the topology which is thus locally induced on $C_0^{\infty}(\Sigma,{\bf R})$ simply as the (local) {\it $H_m$-topology.} Let us now suppose that we have an ultrastatic spacetime $(\tilde{M},\tilde{g})$, given in a natural foliation as $({\bf R} \times \tilde{\Sigma},dt^2 \oplus (-\gamma))$ where $(\tilde{\Sigma},\gamma)$ is a complete Riemannian manifold. We shall identify $\tilde{\Sigma}$ and $\{0\} \times \tilde{\Sigma}$. Consider again $A_{\gamma}$ = selfadjoint extension of $- \Delta_{\gamma} + 1$ on $C_0^{\infty}(\tilde{\Sigma},{\bf R})$ in $L^2_{{\bf R}}(\tilde{\Sigma},d\eta_{\gamma})$ with $\Delta_{\gamma}$ = Laplacian of $(\tilde{\Sigma},\gamma)$, and the scalar product $\mu^{\circ}_{\tilde{\Sigma}}$ on ${\cal D}_{\tilde{\Sigma}}$ given by \begin{eqnarray} \mu^{\circ}_{\tilde{\Sigma}}(u_0 \oplus u_1,v_0 \oplus v_1) & := & \frac{1}{2} \left ( \langle u_0,A_{\gamma}^{1/2}v_0 \rangle + \langle u_1,A_{\gamma}^{-1/2}v_1 \rangle \right) \\ & = & \frac{1}{2} \left( \langle u_0,v_0 \rangle_{\gamma,1/2} + \langle u_1,v_1 \rangle_{\gamma,-1/2} \right) \nonumber \end{eqnarray} for all $u_0 \oplus u_1,v_0 \oplus v_1 \in {\cal D}_{\tilde{\Sigma}}$. It is now straightforward to check that $\mu^{\circ}_{\tilde{\Sigma}} \in {\sf pu}({\cal D}_{\tilde{\Sigma}},\delta_{\tilde{\Sigma}})$, in fact, $\mu^{\circ}_{\tilde{\Sigma}}$ is the purification of the classical energy scalar product $\mu^E_{\tilde{\Sigma}}$ defined in eqn.\ (3.5). (We refer to [11] for discussion, and also for the treatment of more general situations along similar lines.) What is furthermore central for the derivation of the next result is that $\mu^{\circ}_{\tilde{\Sigma}}$ corresponds (via (3.11)) to an Hadamard vacuum $\omega^{\circ}$ on the Weyl-algebra of the KG-field with potential term $r \equiv 1$ over the ultrastatic spacetime $({\bf R} \times \tilde{\Sigma},dt^2 \oplus (-\gamma))$. This has been proved in [24].
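It may be instructive to make the structure of $\mu^{\circ}_{\tilde{\Sigma}}$ explicit in terms of a one-particle Hilbertspace structure (cf.\ Sect.\ 3.5 for this terminology); the following verification is elementary. Setting $$ {\bf k}^{\circ}(u_0 \oplus u_1) := 2^{-1/2}\left( A_{\gamma}^{1/4}u_0 + i A_{\gamma}^{-1/4}u_1 \right)\,, \quad u_0 \oplus u_1 \in {\cal D}_{\tilde{\Sigma}}\,, $$ viewed as a map into the complex Hilbertspace $L^2(\tilde{\Sigma},d\eta_{\gamma})$ (whose scalar product we take antilinear in the left entry), one finds $$ \langle {\bf k}^{\circ}(u_0 \oplus u_1), {\bf k}^{\circ}(v_0 \oplus v_1) \rangle = \mu^{\circ}_{\tilde{\Sigma}}(u_0 \oplus u_1,v_0 \oplus v_1) + \frac{i}{2}\,\delta_{\tilde{\Sigma}}(u_0 \oplus u_1,v_0 \oplus v_1)\,, $$ which exhibits the ``frequency-splitting'' character of $\mu^{\circ}_{\tilde{\Sigma}}$ directly.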
The state $\omega^{\circ}$ is called the {\it ultrastatic vacuum} for the said KG-field over $({\bf R} \times \tilde{\Sigma} ,dt^2 \oplus (-\gamma))$; it is the unique pure, quasifree ground state on the corresponding Weyl-algebra for the time-translations $(t,q) \mapsto (t + t',q)$ on that ultrastatic spacetime with respect to the chosen natural foliation (cf.\ [40,42]). \\[6pt] {\it Remark.} The passage from $\mu^E_{\tilde{\Sigma}}$ to $\mu^{\circ}_{\tilde{\Sigma}}$, where $\mu^{\circ}_{\tilde{\Sigma}}$ is the purification of the classical energy scalar product, may be viewed as a refined form of ``frequency-splitting'' procedures (or Hamiltonian diagonalization), in order to obtain pure dominating scalar products and hence pure states of the KG-field in curved spacetimes, see [11]. However, in the case that $\tilde{\Sigma}$ is not a Cauchy-surface lying in the natural foliation of an ultrastatic spacetime, but an arbitrary Cauchy-surface in an arbitrary globally hyperbolic spacetime, the $\mu^{\circ}_{\tilde{\Sigma}}$ may fail to correspond to a quasifree Hadamard state --- even though, as the following Proposition demonstrates, $\mu^{\circ}_{\tilde{\Sigma}}$ gives locally on the Cauchy-data space ${\cal D}_{\tilde{\Sigma}}$ the same topology as the dominating scalar products induced on it by any quasifree Hadamard state. More seriously, $\mu^{\circ}_{\tilde{\Sigma}}$ may even correspond to a state which is no longer locally quasiequivalent to any quasifree Hadamard state. For an explicit example demonstrating this in a closed Robertson-Walker universe, and for additional discussion, we refer to Sec.\ 3.6 in [38]. \\[6pt] We shall say that a map $T : {\cal D}_{\Sigma} \to {\cal D}_{\Sigma'}$, with $\Sigma,\Sigma'$ Cauchy-surfaces, is {\it locally continuous} if, for any open, relatively compact $G \subset \Sigma$, the restriction of $T$ to $C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$ is continuous (with respect to the topologies under consideration). \begin{Proposition} Let $\omega_{\mu}$ be a quasifree Hadamard state on the Weyl-algebra ${\cal A}[K,\kappa]$ of the KG-field with smooth potential term $r$ over the globally hyperbolic spacetime $(M,g)$, and $\Sigma,\Sigma'$ two Cauchy-surfaces in $(M,g)$. Then the Cauchy-data evolution map \begin{equation} T_{\Sigma',\Sigma} : = P_{\Sigma'} {\mbox{\footnotesize $\circ$}} P_{\Sigma}^{-1} : {\cal D}_{\Sigma} \to {\cal D}_{\Sigma'} \end{equation} is locally continuous in the $H_{\tau} \oplus H_{\tau -1}$-topology, $0 \leq \tau \leq 1$, on the Cauchy-data spaces, and the topology induced by $\mu_{\Sigma}$ on ${\cal D}_{\Sigma}$ coincides locally (i.e.\ on each $C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$ for $G \subset \Sigma$ open and relatively compact) with the $H_{1/2} \oplus H_{-1/2}$-topology. \end{Proposition} {\it Remarks.} (i) Observe that the continuity statement is reasonably formulated since, as a consequence of the support properties of solutions of the KG-equation with Cauchy-data of compact support (``finite propagation speed'') it holds that for each open, relatively compact $G \subset \Sigma$ there is an open, relatively compact $G' \subset \Sigma'$ with $T_{\Sigma',\Sigma}(C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})) \subset C_0^{\infty}(G',{\bf R}) \oplus C_0^{\infty}(G',{\bf R})$. \\[6pt] (ii) For $\tau =1$, the continuity statement is just the classical energy estimate. It should be mentioned here that the claimed continuity can also be obtained by other methods.
For instance, Moreno [50] proves, under more restrictive assumptions on $\Sigma$ and $\Sigma'$ (among which is their compactness), the continuity of $T_{\Sigma',\Sigma}$ in the topology of $H_{\tau} \oplus H_{\tau -1}$ for all $\tau \in {\bf R}$, by employing an abstract energy estimate for first order hyperbolic equations (under suitable circumstances, the KG-equation can be brought into this form). We feel, however, that our method, using the results of Chapter 2, is physically more appealing and emphasizes much better the ``invariant'' structures involved, quite in keeping with the general approach to quantum field theory. \\[10pt] {\it Proof of Proposition 3.5.} We note that there is a diffeomorphism $\Psi : \Sigma \to \Sigma'$. To see this, observe that we may pick a foliation $F : {\bf R} \times \tilde{\Sigma} \to M$ of $M$ in Cauchy-surfaces. Then for each $q \in \tilde{\Sigma}$, the curves $t \mapsto F(t,q)$ are inextendible, timelike curves in $(M,g)$. Each such curve intersects $\Sigma$ exactly once, at the parameter value $t = \tau(q)$. Hence $\Sigma$ is the set $\{F(\tau(q),q) : q \in \tilde{\Sigma}\}$. As $F$ is a diffeomorphism and $\tau: \tilde{\Sigma} \to {\bf R}$ must be $C^{\infty}$ since, by assumption, $\Sigma$ is a smooth hypersurface in $M$, one can see that $\Sigma$ and $\tilde{\Sigma}$ are diffeomorphic. The same argument shows that $\Sigma'$ and $\tilde{\Sigma}$ and therefore, $\Sigma$ and $\Sigma'$, are diffeomorphic. Now let us first assume that the $g$-induced Riemannian metrics $\gamma_{\Sigma}$ and $\gamma_{\Sigma'}$ on $\Sigma$, resp.\ $\Sigma'$, are complete. Let $d\eta$ and $d\eta'$ be the induced volume measures on $\Sigma$ and $\Sigma'$, respectively. The $\Psi$-transformed measure of $d\eta$ on $\Sigma'$, $\Psi^*d\eta$, is given through \begin{equation} \int_{\Sigma} (u {\mbox{\footnotesize $\circ$}} \Psi) \,d\eta = \int_{\Sigma'} u\,(\Psi^*d\eta)\,, \quad u \in C_0^{\infty}(\Sigma')\,. \end{equation} Then the Radon-Nikodym derivative $(\rho(q))^2 :=(\Psi^*d\eta/d\eta')(q)$, $q \in \Sigma'$, is a smooth, strictly positive function on $\Sigma'$, and it is now easy to check that the linear map $$ \vartheta : ({\cal D}_{\Sigma},\delta_{\Sigma}) \to ({\cal D}_{\Sigma'},\delta_{\Sigma'})\,, \quad u_0 \oplus u_1 \mapsto \rho \cdot (u_0 {\mbox{\footnotesize $\circ$}} \Psi^{-1}) \oplus \rho \cdot (u_1 {\mbox{\footnotesize $\circ$}} \Psi^{-1}) \,, $$ is a symplectomorphism. Moreover, by the result given in Appendix B, $\vartheta$ and its inverse are locally continuous maps in the $H_s \oplus H_t$-topologies on both Cauchy-data spaces, for all $s,t \in {\bf R}$. By the energy estimate, $T_{\Sigma',\Sigma}$ is locally continuous with respect to the $H_1 \oplus H_0$-topology on the Cauchy-data spaces, and the same holds for the inverse $(T_{\Sigma',\Sigma})^{-1} = T_{\Sigma,\Sigma'}$. Hence, the map $\Theta := \vartheta^{-1} {\mbox{\footnotesize $\circ$}} T_{\Sigma',\Sigma}$ is a symplectomorphism of $({\cal D}_{\Sigma},\delta_{\Sigma})$, and $\Theta$ together with its inverse is locally continuous in the $H_1 \oplus H_0$-topology on ${\cal D}_{\Sigma}$. Here we made use of Remark (i) above. Now pick two sets $G$ and $G'$ as in Remark (i), then there is some open, relatively compact neighbourhood $\tilde{G}$ of $\Psi^{-1}(G') \cup G$ in $\Sigma$. We can choose a smooth, real-valued function $\chi$ compactly supported on $\Sigma$ with $\chi \equiv 1$ on $\tilde{G}$. 
It is then straightforward to check that the maps $\chi {\mbox{\footnotesize $\circ$}} \Theta {\mbox{\footnotesize $\circ$}} \chi$ and $\chi {\mbox{\footnotesize $\circ$}} \Theta^{-1} {\mbox{\footnotesize $\circ$}} \chi$ ($\chi$ to be interpreted as multiplication with $\chi$) form a pair of symplectically adjoint maps on $({\cal D}_{\Sigma},\delta_{\Sigma})$ which are bounded with respect to the $H_1 \oplus H_0$-topology, i.e.\ with respect to the norm of $\mu_{\Sigma}^E$. At this point we use Theorem 2.2(b); consequently, $\chi {\mbox{\footnotesize $\circ$}} \Theta {\mbox{\footnotesize $\circ$}} \chi$ and $\chi {\mbox{\footnotesize $\circ$}} \Theta^{-1}{\mbox{\footnotesize $\circ$}} \chi$ are continuous with respect to the norms of the $(\mu^E_{\Sigma})_s$, $0 \leq s \leq 2$. Inspection shows that $$ (\mu^E_{\Sigma})_s (u_0 \oplus u_1,v_0 \oplus v_1) = \frac{1}{2} \left( \langle u_0,A_{\gamma_{\Sigma}}^{1-s/2}v_0 \rangle + \langle u_1,A_{\gamma_{\Sigma}}^{-s/2}v_1 \rangle \right) $$ for $0 \leq s \leq 2$. From this it is now easy to see that $\Theta$ restricted to $C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$ is continuous in the topology of $H_{\tau} \oplus H_{\tau -1}$, $0 \leq \tau \leq 1$, since $\chi {\mbox{\footnotesize $\circ$}} \Theta {\mbox{\footnotesize $\circ$}} \chi(u_0 \oplus u_1) = \Theta(u_0 \oplus u_1)$ for all $u_0 \oplus u_1 \in C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$ by the choice of $\chi$. Using that $\Theta = \vartheta^{-1}{\mbox{\footnotesize $\circ$}} T_{ \Sigma',\Sigma}$ and that $\vartheta$ is locally continuous with respect to all the $H_s \oplus H_t$-topologies, $s,t \in {\bf R}$, on the Cauchy-data spaces, we deduce that $T_{\Sigma',\Sigma}$ is locally continuous in the $H_{\tau} \oplus H_{\tau -1}$-topology, $0 \leq \tau \leq 1$, as claimed. If the $g$-induced Riemannian metrics $\gamma_{\Sigma}$, $\gamma_{\Sigma'}$ are not complete, one can make them into complete ones $\hat{\gamma}_{\Sigma} := f \cdot \gamma_{\Sigma}$, $\hat{\gamma}_{\Sigma'} := h \cdot \gamma_{\Sigma'}$ by multiplying them with suitable smooth, strictly positive functions $f$ on $\Sigma$ and $h$ on $\Sigma'$ [14]. Let $d\hat{\eta}$ and $d\hat{\eta}'$ be the volume measures corresponding to the new metrics. Consider then the density functions $(\phi_1)^2 := (d\eta/d\hat{\eta})$, $(\phi_2)^2 := (d\hat{\eta}'/d\eta')$, which are $C^{\infty}$ and strictly positive, and define $({\cal D}_{\Sigma},\hat{\delta}_{\Sigma})$, $({\cal D}_{\Sigma'},\hat{\delta}_{\Sigma'})$ and $\hat{\vartheta}$ like their unhatted counterparts but with $d\hat{\eta}$ and $d\hat{\eta}'$ in place of $d\eta$ and $d\eta'$. Likewise define $\hat{\mu}^E_{\Sigma}$ with respect to $\hat{\gamma}_{\Sigma}$. Then $\hat{T}_{\Sigma',\Sigma} := \phi_2 {\mbox{\footnotesize $\circ$}} T_{\Sigma',\Sigma} {\mbox{\footnotesize $\circ$}} \phi_1$ (understanding that $\phi_1,\phi_2$ act as multiplication operators) and its inverse are symplectomorphisms between $({\cal D}_{\Sigma},\hat{\delta}_{\Sigma})$ and $({\cal D}_{\Sigma'},\hat{\delta}_{\Sigma'})$ which are locally continuous in the $H_1 \oplus H_0$-topology. Now we can apply the argument above showing that $\hat{\Theta} = \hat{\vartheta}^{-1} {\mbox{\footnotesize $\circ$}} \hat{T}_{\Sigma',\Sigma}$ and, hence, $\hat{T}_{\Sigma',\Sigma}$ is locally continuous in the $H_{\tau} \oplus H_{\tau -1}$-topology for $0 \leq \tau \leq 1$.
The same follows then for $T_{\Sigma',\Sigma} = \phi_2^{-1} {\mbox{\footnotesize $\circ$}} \hat{T}_{\Sigma',\Sigma} {\mbox{\footnotesize $\circ$}} \phi_1^{-1}$. For the proof of the second part of the statement, we note first that in [24] it is shown that there exists another globally hyperbolic spacetime $(\hat{M},\hat{g})$ of the form $\hat{M} = {\bf R} \times \Sigma$ with the following properties: \\[6pt] (1) $\Sigma_0 : = \{0\} \times \Sigma$ is a Cauchy-surface in $(\hat{M}, \hat{g})$, and a causal normal neighbourhood $N$ of $\Sigma$ in $M$ coincides with a causal normal neighbourhood $\hat{N}$ of $\Sigma_{0}$ in $\hat{M}$, in such a way that $\Sigma = \Sigma_0$ and $g = \hat{g}$ on $N$. \\[6pt] (2) For some $t_0 < 0$, the $(-\infty,t_0) \times \Sigma$-part of $\hat{M}$ lies properly to the past of $\hat{N}$, and on that part, $\hat{g}$ takes the form $dt^2 \oplus (- \gamma)$ where $\gamma$ is a complete Riemannian metric on $\Sigma$. \\[6pt] This means that $(\hat{M},\hat{g})$ is a globally hyperbolic spacetime which equals $(M,g)$ on a causal normal neighbourhood of $\Sigma$ and becomes ultrastatic to the past of it. Then consider the Weyl-algebra ${\cal A}[\hat{K},\hat{\kappa}]$ of the KG-field with potential term $\hat{r}$ over $(\hat{M},\hat{g})$, where $\hat{r} \in C^{\infty}(\hat{M},{\bf R})$ agrees with $r$ on the neighbourhood $\hat{N} = N$ and is identically equal to $1$ on the $(-\infty,t_0) \times \Sigma$-part of $\hat{M}$. Now observe that the propagators $E$ and $\hat{E}$ of the respective KG-equations on $(M,g)$ and $(\hat{M},\hat{g})$ coincide when restricted to $C_0^{\infty}(N,{\bf R})$. Therefore one obtains an identification map $$ [f] = f + {\rm ker}(E) \mapsto [f]\,\hat{{}} = f + {\rm ker}(\hat{E}) \,, \quad f \in C_0^{\infty}(N,{\bf R}) \,,$$ between $K(N)$ and $\hat{K}(\hat{N})$ which preserves the symplectic forms $\kappa$ and $\hat{\kappa}$. Without danger we may write this identification as an equality, $K(N) = \hat{K}(\hat{N})$. This identification map between $(K(N),\kappa|K(N))$ and $(\hat{K}(\hat{N}),\hat{\kappa}|\hat{K}(\hat{N}))$ lifts to a $C^*$-algebraic isomorphism between the corresponding Weyl-algebras \begin{eqnarray} {\cal A}[K(N),\kappa|K(N)]& =& {\cal A}[\hat{K}(\hat{N}),\hat{\kappa}| \hat{K}(\hat{N})]\,, \nonumber \\ W([f])& =& \hat{W}([f]\,\hat{{}}\,)\,,\ \ \ f \in C_0^{\infty}(N,{\bf R})\,. \end{eqnarray} Here we followed our just indicated convention to abbreviate this identification as an equality. Now we have $D(N) = M$ in $(M,g)$ and $D(\hat{N}) = \hat{M}$ in $(\hat{M},\hat{g})$, implying that $K(N) = K$ and $\hat{K}(\hat{N}) = \hat{K}$. Hence ${\cal A}[K(N),\kappa|K(N)] = {\cal A}[K,\kappa]$ and the same for the ``hatted'' objects. Thus (3.15) gives rise to an identification between ${\cal A}[K,\kappa]$ and ${\cal A}[\hat{K},\hat{\kappa}]$, and so the quasifree Hadamard state $\omega_{\mu}$ induces a quasifree state $\omega_{\hat{\mu}}$ on ${\cal A}[\hat{K},\hat{\kappa}]$ with \begin{equation} \hat{\mu}([f]\,\hat{{}},[h]\,\hat{{}}\,) = \mu([f],[h])\,, \quad f,h \in C_0^{\infty}(N,{\bf R}) \,. \end{equation} This state is also an Hadamard state since we have \begin{eqnarray*} \Lambda(f,h)& =& \mu([f],[h]) + \frac{i}{2}\kappa([f],[h]) \\ & = & \hat{\mu}([f]\,\hat{{}}\,,[h]\,\hat{{}}\,) + \frac{i}{2} \hat{\kappa}([f]\,\hat{{}}\,,[h]\,\hat{{}}\,)\,, \quad f,h \in C_0^{\infty}(N,{\bf R})\,, \end{eqnarray*} and $\Lambda$ is, by assumption, of Hadamard form.
However, due to the causal propagation property of the Hadamard form this means that $\hat{\mu}$ is the dominating scalar product on $(\hat{K},\hat{\kappa})$ of a quasifree Hadamard state on ${\cal A}[\hat{K},\hat{\kappa}]$. Now choose some $t < t_0$, and let $\Sigma_t = \{t\} \times \Sigma$ be the Cauchy-surface in the ultrastatic part of $(\hat{M},\hat{g})$ corresponding to this value of the time-parameter of the natural foliation. As remarked above, the scalar product \begin{equation} \mu^{\circ}_{\Sigma_t}(u_0 \oplus u_1,v_0 \oplus v_1) = \frac{1}{2}\left( \langle u_0,v_0 \rangle_{\gamma,1/2} + \langle u_1,v_1 \rangle_{\gamma,-1/2} \right)\,, \quad u_0 \oplus u_1,v_0 \oplus v_1 \in {\cal D}_{\Sigma_t} \,, \end{equation} is the dominating scalar product on $({\cal D}_{\Sigma_t},\delta_{\Sigma_t})$ corresponding to the ultrastatic vacuum state $\omega^{\circ}$ over the ultrastatic part of $(\hat{M},\hat{g})$, which is an Hadamard vacuum. Since the dominating scalar products of all quasifree Hadamard states yield locally the same topology (Prop.\ 3.4(e)), it follows that the dominating scalar product $\hat{\mu}_{\Sigma_t}$ on $({\cal D}_{\Sigma_{t}},\delta_{\Sigma_t})$, which is induced (cf.\ (3.11)) by the dominating scalar product $\hat{\mu}$ of the quasifree Hadamard state $\omega_{\hat{\mu}}$, endows ${\cal D}_{\Sigma_t}$ locally with the same topology as does $\mu^{\circ}_{\Sigma_t}$. As can be read off from (3.17), this is the local $H_{1/2} \oplus H_{-1/2}$-topology. To complete the argument, we note that (cf.\ (3.11,3.13)) $$ \hat{\mu}_{\Sigma_0}(u_0 \oplus u_1,v_0 \oplus v_1) = \hat{\mu}_{\Sigma_t}(T_{\Sigma_t,\Sigma_0}(u_0 \oplus u_1),T_{\Sigma_t,\Sigma_0} (v_0 \oplus v_1))\,, \quad u_0 \oplus u_1,v_0 \oplus v_1 \in {\cal D}_{\Sigma_0}\,.$$ But since $\hat{\mu}_{\Sigma_t}$ induces locally the $H_{1/2} \oplus H_{-1/2}$-topology and since the symplectomorphism $T_{\Sigma_t,\Sigma_0}$ as well as its inverse are locally continuous on the Cauchy-data spaces in the $H_{1/2}\oplus H_{-1/2}$-topology, the last equality entails that $\hat{\mu}_{\Sigma_0}$ induces the local $H_{1/2} \oplus H_{-1/2}$-topology on ${\cal D}_{\Sigma_0}$. In view of (3.16), the Proposition is now proved. $\Box$ \\[24pt] {\bf 3.5 Local Definiteness, Local Primarity, Haag-Duality, etc.} \\[18pt] In this section we prove Theorem 3.6 below on the algebraic structure of the GNS-representations associated with quasifree Hadamard states on the CCR-algebra of the KG-field on an arbitrary globally hyperbolic spacetime $(M,g)$. The results appearing therein extend our previous work [64,65,66]. Let $(M,g)$ be a globally hyperbolic spacetime. We recall that a subset ${\cal O}$ of $M$ is called a {\it regular diamond} if it is of the form ${\cal O} = {\cal O}_G = {\rm int}\,D(G)$ where $G$ is an open, relatively compact subset of some Cauchy-surface $\Sigma$ in $(M,g)$ having the property that the boundary $\partial G$ of $G$ is contained in the union of finitely many smooth, closed, two-dimensional submanifolds of $\Sigma$. We also recall the notation ${\cal R}_{\omega}({\cal O}) = \pi_{\omega}({\cal A}({\cal O}))^-$ for the local von Neumann algebras in the GNS-representation of a state $\omega$. The $C^*$-algebraic net of observable algebras ${\cal O} \to {\cal A}({\cal O})$ will be understood as being that associated with the KG-field in Prop.\ 3.2.
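For instance, in Minkowski spacetime with Cauchy-surface $\Sigma = \{0\} \times {\bf R}^3$ and $G$ an open ball of radius $R$ centered at $x_0 \in {\bf R}^3$, one finds $$ {\cal O}_G = {\rm int}\,D(G) = \{(t,x) : |x - x_0| + |t| < R \}\,, $$ the familiar open double cone. Its base satisfies the regularity condition, since $\partial G$ is a single smooth, closed, two-dimensional submanifold of $\Sigma$ (a 2-sphere), and clearly ${\cal O}_G^{\perp}$ is non-void.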
\begin{Theorem} Let $(M,g)$ be a globally hyperbolic spacetime and ${\cal A}[K,\kappa]$ the Weyl-algebra of the KG-field with smooth, real-valued potential function $r$ over $(M,g)$. Suppose that $\omega$ and $\omega_1$ are two quasifree Hadamard states on ${\cal A}[K,\kappa]$. Then the following statements hold. \\[6pt] (a) The GNS-Hilbertspace ${\cal H}_{\omega}$ of $\omega$ is infinite-dimensional and separable. \\[6pt] (b) The restrictions $\pi_{\omega}|{\cal A}({\cal O})$ and $\pi_{\omega_1}|{\cal A}({\cal O})$ of the GNS-representations to any open, relatively compact ${\cal O} \subset M$ are quasiequivalent. They are even unitarily equivalent when ${\cal O}^{\perp}$ is non-void. \\[6pt] (c) For each $p \in M$ we have local definiteness, $$ \bigcap_{{\cal O} \owns p} {\cal R}_{\omega}({\cal O}) = {\bf C} \cdot 1\, . $$ More generally, whenever $C \subset M$ is a subset of a compact set which is contained in the union of finitely many smooth, closed, two-dimensional submanifolds of an arbitrary Cauchy-surface $\Sigma$ in $M$, then \begin{equation} \bigcap_{{\cal O} \supset C} {\cal R}_{\omega}({\cal O}) = {\bf C} \cdot 1\,. \end{equation} \\[6pt] (d) Let ${\cal O}$ and ${\cal O}_1$ be two relatively compact diamonds, based on Cauchy-surfaces $\Sigma$ and $\Sigma_1$, respectively, such that $\overline{{\cal O}} \subset {\cal O}_1$. Then the split-property holds for the pair ${\cal R}_{\omega}({\cal O})$ and ${\cal R}_{\omega}({\cal O}_1)$, i.e.\ there exists a type ${\rm I}_{\infty}$ factor $\cal N$ such that one has the inclusion $$ {\cal R}_{\omega}({\cal O}) \subset {\cal N} \subset {\cal R}_{\omega} ({\cal O}_1) \,. $$ \\[6pt] (e) Inner and outer regularity \begin{equation} {\cal R}_{\omega}({\cal O}) = \left( \bigcup_{\overline{{\cal O}_I} \subset {\cal O}} {\cal R}_{\omega}({\cal O}_I) \right) '' = \bigcap_{{\cal O}_1 \supset \overline{{\cal O}}} {\cal R}_{\omega}({\cal O}_1) \end{equation} hold for all regular diamonds ${\cal O}$. \\[6pt] (f) If $\omega$ is pure (an Hadamard vacuum), then we have Haag-Duality $$ {\cal R}_{\omega}({\cal O})' = {\cal R}_{\omega}({\cal O}^{\perp}) $$ for all regular diamonds ${\cal O}$. (By the same arguments as in {\rm [65 (Prop.\ 6)]}, Haag-Duality extends to all pure (but not necessarily quasifree or Hadamard) states $\omega$ which are locally normal (hence, by (d), locally quasiequivalent) to any Hadamard vacuum.) \\[6pt] (g) Local primarity holds for all regular diamonds, that is, for each regular diamond ${\cal O}$, ${\cal R}_{\omega}({\cal O})$ is a factor. Moreover, ${\cal R}_{\omega}({\cal O})$ is isomorphic to the unique hyperfinite type ${\rm III}_1$ factor if ${\cal O}^{\perp}$ is non-void. In this case, ${\cal R}_{\omega}({\cal O}^{\perp})$ is also hyperfinite and of type ${\rm III}_1$, and if $\omega$ is pure, ${\cal R}_{\omega}({\cal O}^{\perp})$ is again a factor. Otherwise, if ${\cal O}^{\perp} = \emptyset$, then ${\cal R}_{\omega}({\cal O})$ is a type ${\rm I}_{\infty}$ factor. \end{Theorem} {\it Proof.} The key point in the proof is that, by results which for the cases relevant here are to a large extent due to Araki [1], the above statement can be equivalently translated into statements about the structure of the one-particle space, i.e.\ essentially the symplectic space $(K,\kappa)$ equipped with the scalar product $\lambda_{\omega}$. We shall use, however, the formalism of [40,45].
Following that, given a symplectic space $(K,\kappa)$ and $\mu \in {\sf q}(K,\kappa)$ one calls a real linear map ${\bf k}: K \to H$ a {\it one-particle Hilbertspace structure} for $\mu$ if (1) $H$ is a complex Hilbertspace, (2) the complex linear span of ${\bf k}(K)$ is dense in $H$ and (3) $$ \langle {\bf k}(x),{\bf k}(y) \rangle = \lambda_{\mu}(x,y) = \mu(x,y) + \frac{i}{2}\kappa(x,y) $$ for all $x,y \in K$. It can then be shown (cf.\ [45 (Appendix A)]) that the GNS-representation of the quasifree state $\omega_{\mu}$ on ${\cal A}[K,\kappa]$ may be realized in the following way: ${\cal H}_{\omega_{\mu}} = F_s(H)$, the Bosonic Fock-space over the one-particle space $H$, $\Omega_{\omega_{\mu}}$ = the Fock-vacuum, and $$ \pi_{\omega_{\mu}}(W(x)) = {\rm e}^{i(a({\bf k}(x)) + a^+({\bf k}(x)))^-}\,, \quad x \in K\, ,$$ where $a(\,.\,)$ and $a^+(\,.\,)$ are the Bosonic annihilation and creation operators, respectively. Now it is useful to define the symplectic complement $F^{\tt v} := \{\chi \in H : {\sf Im}\,\langle \chi,\phi \rangle = 0 \ \ \forall \phi \in F \}$ for $F \subset H$, since it is known that \begin{itemize} \item[(i)] ${\cal R}_{\omega_{\mu}}({\cal O})$ is a factor \ \ \ iff\ \ \ $ {\bf k}(K({\cal O}))^- \cap {\bf k}(K({\cal O}))^{\tt v} = \{0\}$, \item[(ii)] ${\cal R}_{\omega_{\mu}}({\cal O})' = {\cal R}_{\omega_{\mu}} ({\cal O}^{\perp})$\ \ \ iff\ \ \ ${\bf k}(K({\cal O}))^{\tt v} = {\bf k}(K({\cal O}^{\perp}))^-$, \item[(iii)] $\bigcap_{{\cal O} \supset C} {\cal R}_{\omega_{\mu}}({\cal O}) = {\bf C} \cdot 1$ \ \ \ iff\ \ \ $\bigcap_{{\cal O} \supset C}{\bf k}(K({\cal O}))^- = \{0\}\,,$ \end{itemize} cf.\ [1,21,35,49,58]. After these preparations we can commence with the proof of the various statements of our Theorem. \\[6pt] (a) Let ${\bf k}: K \to H$ be the one-particle Hilbertspace structure of $\omega$. The local one-particle spaces ${\bf k}(K({\cal O}_G))^-$ of regular diamonds ${\cal O}_G$ based on $G \subset \Sigma$ are topologically isomorphic to the completions of $C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$ in the $H_{1/2} \oplus H_{-1/2}$-topology and these are separable. Hence ${\bf k}(K)^-$, which is generated by the countable family of sets ${\bf k}(K({\cal O}_{G_n}))$, for $G_n$ a sequence of open, relatively compact subsets of $\Sigma$ eventually exhausting $\Sigma$, is also separable. The same holds then for the one-particle Hilbertspace $H$ in which the complex span of ${\bf k}(K)$ is dense, and thus separability is implied for ${\cal H}_{\omega} = F_s(H)$. The infinite-dimensionality is clear. \\[6pt] (b) The local quasiequivalence has been proved in [66] and we refer to that reference for further details. We just indicate that the proof makes use of the fact that the difference $\Lambda - \Lambda_1$ of the spatio-temporal two-point functions of any pair of quasifree Hadamard states is on each causal normal neighbourhood of any Cauchy-surface given by a smooth integral kernel --- as can be directly read off from the Hadamard form --- and this turns out to be sufficient for local quasiequivalence. The statement about the unitary equivalence can be inferred from (g) below, since it is known that every $*$-preserving isomorphism between von Neumann algebras of type III acting on separable Hilbertspaces is given by the adjoint action of a unitary operator which maps the Hilbertspaces onto each other. See e.g.\ Thm.\ 7.2.9 and Prop.\ 9.1.6 in [39]. \\[6pt] (c) Here one uses that there exist Hadamard vacua, i.e.\ pure quasifree Hadamard states $\omega_{\mu}$.
Since by Prop.\ 3.5 the topology of $\mu_{\Sigma}$ in ${\cal D}_{\Sigma}$ is locally that of $H_{1/2} \oplus H_{-1/2}$, one can show as in [66 (Chp.\ 4 and Appendix)] that under the stated hypotheses about $C$ it holds that $\bigcap_{{\cal O} \supset C} {\bf k}(K({\cal O}))^- = \{0\}$ for the one-particle Hilbertspace structures of Hadamard vacua. From the local equivalence of the topologies induced by the dominating scalar products of all quasifree Hadamard states (Prop.\ 3.4(e)), this extends to the one-particle structures of all quasifree Hadamard states. By (iii), this yields the statement (c). \\[6pt] (d) This is proved in [65] under the additional assumption that the potential term $r$ is a positive constant. (The result was formulated in [65] under the hypothesis that $\Sigma = \Sigma_1$, but it is clear that the present statement without this hypothesis is an immediate generalization.) To obtain the general case one needs in the spacetime deformation argument of [65] the modification that the potential term $\hat{r}$ of the KG-field on the new spacetime $(\hat{M},\hat{g})$ is equal to a positive constant on its ultrastatic part while being equal to $r$ in a neighbourhood of $\Sigma$. We have used that procedure already in the proof of Prop.\ 3.5, see also the proof of (f) below where precisely the said modification will be carried out in more detail. \\[6pt] (e) Inner regularity follows simply from the definition of the ${\cal A}({\cal O})$; one deduces that for each $A \in {\cal A}({\cal O})$ and each $\epsilon > 0$ there exists some $\overline{{\cal O}_I} \subset {\cal O}$ and $A_{\epsilon} \in {\cal A}({\cal O}_I)$ so that $||\,A - A_{\epsilon}\,|| < \epsilon$. It is easy to see that inner regularity is a consequence of this property. So we focus now on the outer regularity. Let ${\cal O} = {\cal O}_G$ be based on the subset $G$ of the Cauchy-surface $\Sigma$. Consider the symplectic space $({\cal D}_{\Sigma},\delta_{\Sigma})$ and the dominating scalar product $\mu_{\Sigma}$ induced by $\mu \in {\sf q}(K,\kappa)$, where $\omega_{\mu} = \omega$; the corresponding one-particle Hilbertspace structure we denote by ${\bf k}_{\Sigma}: {\cal D}_{\Sigma} \to H_{\Sigma}$. Then we denote by ${\cal W}({\bf k}_{\Sigma}({\cal D}_G))$ the von Neumann algebra in $B(F_s(H_{\Sigma}))$ generated by the unitary groups of the operators $(a({\bf k}_{\Sigma}(u_0 \oplus u_1)) + a^+({\bf k}_{\Sigma}(u_0 \oplus u_1)))^-$ where $u_0 \oplus u_1$ ranges over ${\cal D}_G := C_0^{\infty}(G,{\bf R}) \oplus C_0^{\infty}(G,{\bf R})$. So ${\cal W}({\bf k}_{\Sigma}({\cal D}_G)) = {\cal R}_{\omega}({\cal O}_G)$. It holds generally that $\bigcap_{G_1 \supset \overline{G}} {\cal W}({\bf k}_{\Sigma} ({\cal D}_{G_1})) = {\cal W}(\bigcap_{G_1 \supset \overline{G}} {\bf k}_{\Sigma}({\cal D}_{G_1})^-)$ [1], hence, to establish outer regularity, we must show that \begin{equation} \bigcap_{G_1 \supset \overline{G}} {\bf k}_{\Sigma}({\cal D}_{G_1})^- = {\bf k}_{\Sigma}({\cal D}_G)^-\,. \end{equation} In [65] we have proved that the ultrastatic vacuum $\omega^{\circ}$ of the KG-field with potential term $\equiv 1$ over the ultrastatic spacetime $(M^{\circ},g^{\circ}) = ({\bf R} \times \Sigma,dt^2 \oplus (-\gamma))$ (where $\gamma$ is any complete Riemannian metric on $\Sigma$) satisfies Haag-duality.
That means, we have \begin{equation} {\cal R}^{\circ}_{\omega^{\circ}}({\cal O}_{\circ})' = {\cal R}^{\circ}_{\omega^{\circ}}({\cal O}_{\circ}^{\perp}) \end{equation} for any regular diamond ${\cal O}_{\circ}$ in $(M^{\circ},g^{\circ})$ which is based on any of the Cauchy-surfaces $\{t\}\times \Sigma$ in the natural foliation, and we have put a ``$\circ$'' on the local von Neumann algebras to indicate that they refer to a KG-field over $(M^{\circ},g^{\circ})$. But since we have inner regularity for ${\cal R}^{\circ}_{\omega^{\circ}}({\cal O}_{\circ}^{\perp})$ --- by the very definition --- the outer regularity of ${\cal R}^{\circ} _{\omega^{\circ}}({\cal O}_{\circ})$ follows from the Haag-duality (3.21). Translated into conditions on the one-particle Hilbertspace structure ${\bf k}^{\circ}_{\Sigma} : {\cal D}_{\Sigma} \to H^{\circ}_{\Sigma}$ of $\omega^{\circ}$, this means that the equality \begin{equation} \bigcap_{G_1 \supset \overline{G}} {\bf k}^{\circ}_{\Sigma} ({\cal D}_{G_1})^- = {\bf k}^{\circ}_{\Sigma}({\cal D}_G)^- \end{equation} holds. Now we know from Prop.\ 3.5 that $\mu_{\Sigma}$ induces locally the $H_{1/2} \oplus H_{-1/2}$-topology on ${\cal D}_{\Sigma}$. However, this coincides with the topology locally induced by $\mu^{\circ}_{\Sigma}$ on ${\cal D}_{\Sigma}$ (cf.\ (3.11)) --- even though $\mu^{\circ}_{\Sigma}$ may, in general, not be viewed as corresponding to an Hadamard vacuum of the KG-field over $(M,g)$. Thus the required relation (3.20) is implied by (3.22). \\[6pt] (f) In view of outer regularity it is enough to show that, given any ${\cal O}_1 \supset \overline{{\cal O}}$, it holds that \begin{equation} {\cal R}_{\omega}({\cal O}^{\perp})' \subset {\cal R}_{\omega}({\cal O}_1)\,. \end{equation} The demonstration of this property relies on a spacetime deformation argument similar to that used in the proof of Prop.\ 3.5. Let $G$ be the base of ${\cal O}$ on the Cauchy-surface $\Sigma$ in $(M,g)$. Then, given any other open, relatively compact subset $G_1$ of $\Sigma$ with $\overline{G} \subset G_1$, we have shown in [65] that there exists an ultrastatic spacetime $(\hat{M},\hat{g})$ with the properties (1) and (2) in the proof of Prop.\ 3.5, and with the additional property that there is some $t < t_0$ such that $$ \left( {\rm int}\,\hat{J}(G) \cap \Sigma_t \right )^- \subset {\rm int}\, \hat{D}(G_1) \cap \Sigma_t\,.$$ Here, $\Sigma_t = \{t\} \times \Sigma$ are the Cauchy-surfaces in the natural foliation of the ultrastatic part of $(\hat{M},\hat{g})$. The hats indicate that the causal set and the domain of dependence are to be taken in $(\hat{M},\hat{g})$. This implies that we can find some regular diamond ${\cal O}^t := {\rm int}\hat{D}(S^t)$ in $(\hat{M},\hat{g})$ based on a subset $S^t$ of $\Sigma_t$ which satisfies \begin{equation} \left( {\rm int}\, \hat{J}(G) \cap \Sigma_t \right)^- \subset S^t \subset {\rm int}\,\hat{D}(G_1) \cap \Sigma_t \,. \end{equation} Setting $\hat{{\cal O}} := {\rm int}\, \hat{D}(G)$ and $\hat{{\cal O}}_1 := {\rm int}\,\hat{D}(G_1)$, one derives from (3.24) the relations \begin{equation} \hat{{\cal O}} \subset {\cal O}^t \subset \hat{{\cal O}}_1 \,. \end{equation} These are equivalent to \begin{equation} \hat{{\cal O}}_1^{\perp} \subset ({\cal O}^t)^{\perp} \subset \hat{{\cal O}}^{\perp} \end{equation} where $\perp$ is the causal complementation in $(\hat{M},\hat{g})$. 
Now as in the proof of Prop.\ 3.5, the given Hadamard vacuum $\omega$ on the Weyl-algebra ${\cal A}[K,\kappa]$ of the KG-field over $(M,g)$ induces an Hadamard vacuum $\hat{\omega}$ on the Weyl-algebra ${\cal A}[\hat{K},\hat{\kappa}]$ of the KG-field over $(\hat{M},\hat{g})$ whose potential term $\hat{r}$ is $1$ on the ultrastatic part of $(\hat{M},\hat{g})$. Then by Prop.\ 6 in [65] we have Haag-duality \begin{equation} \hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}_t}^{\perp}) ' = \hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}_t}) \end{equation} for all regular diamonds $\hat{{\cal O}_t}$ with base on $\Sigma_t$; we have put hats on the von Neumann algebras to indicate that they refer to ${\cal A}[\hat{K},\hat{\kappa}]$. (This was proved in [65] assuming that $(\hat{M},\hat{g})$ is globally ultrastatic. However, with the same argument, based on primitive causality, as we use it next to pass from (3.28) to (3.30), one can easily establish that (3.27) holds if only $\Sigma_t$ is, as here, a member in the natural foliation of the ultrastatic part of $(\hat{M},\hat{g})$.) Since ${\cal O}^t$ is a regular diamond based on $\Sigma_t$, we obtain $$\hat{\cal R}_{\hat{\omega}}(({\cal O}^t)^{\perp})' = \hat{\cal R}_{\hat{\omega}}({\cal O}^t) $$ and thus, in view of (3.25) and (3.26), \begin{equation} \hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}}^{\perp})' \subset \hat{\cal R}_{\hat{\omega}}(({\cal O}^t)^{\perp})' = \hat{\cal R}_{\hat{\omega}}({\cal O}^t) \subset \hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}}_1)\,. \end{equation} Now recall (see proof of Prop.\ 3.5) that $(\hat{M},\hat{g})$ coincides with $(M,g)$ on a causal normal neighbourhood $N$ of $\Sigma$. Primitive causality (Prop.\ 3.2) then entails \begin{equation} \hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}}^{\perp} \cap N)' \subset \hat{\cal R}_{\hat{\omega}}(\hat{{\cal O}}_1 \cap N) \,. \end{equation} On the other hand, $\hat{{\cal O}}^{\perp} = {\rm int} \hat{D}(\Sigma \backslash G)$ and $\hat{{\cal O}}_1$ are diamonds in $(\hat{M},\hat{g})$ based on $\Sigma$. Since $(M,g)$ and $(\hat{M},\hat{g})$ coincide on the causal normal neighbourhood $N$ of $\Sigma$, one obtains that ${\rm int}\,D(\tilde{G}) \cap N = {\rm int}\, \hat{D}(\tilde{G}) \cap N$ for all $\tilde{G} \subset \Sigma$. Hence, with ${\cal O} = {\rm int}\,D(G)$, ${\cal O}_1 = {\rm int}\, D(G_1)$ (in $(M,g)$), we have that (3.29) entails $$ {\cal R}_{\omega}({\cal O}^{\perp} \cap N)' \subset {\cal R}_{\omega}({\cal O}_1 \cap N) $$ (cf.\ the proof of Prop.\ 3.5) where the causal complement $\perp$ is now taken in $(M,g)$. Using primitive causality once more, we deduce that \begin{equation} {\cal R}_{\omega}({\cal O}^{\perp})' \subset {\cal R}_{\omega}({\cal O}_1)\,. \end{equation} The open, relatively compact subset $G_1$ of $\Sigma$ was arbitrary up to the constraint $\overline{G} \subset G_1$. Therefore, we arrive at the conclusion that the required inclusion (3.23) holds for all ${\cal O}_1 \supset \overline{{\cal O}}$. \\[6pt] (g) Let $\Sigma$ be the Cauchy-surface on which ${\cal O}$ is based. For the local primarity one uses, as in (c), the existence of Hadamard vacua $\omega_{\mu}$ and the fact (Prop.\ 3.5) that $\mu_{\Sigma}$ induces locally the $H_{1/2} \oplus H_{-1/2}$-topology; then one may use the arguments of [66 (Chp.\ 4 and Appendix)] to show that due to the regularity of the boundary $\partial G$ of the base $G$ of ${\cal O}$ there holds $$ {\bf k}(K({\cal O}))^- \cap {\bf k}(K({\cal O}))^{\tt v} = \{ 0 \}$$ for the one-particle Hilbertspace structures of Hadamard vacua.
As in the proof of (c), this can be carried over to the one-particle structures of all quasifree Hadamard states since they induce locally on the one-particle spaces the same topology, see [66 (Chp.\ 4)]. We note that for Hadamard vacua the local primarity can also be established using (3.18) together with Haag-duality and primitive causality purely at the algebraic level, without having to appeal to the one-particle structures. The type ${\rm III}_1$-property of ${\cal R}_{\omega}({\cal O})$ is then derived using Thm.\ 16.2.18 in [3] (see also [73]). We note that for some points $p$ in the boundary $\partial G$ of $G$, ${\cal O}$ admits domains which are what are called ``$\beta_p$-causal sets'' in Sect.\ 16.2.4 of [3], as a consequence of the regularity of $\partial G$ and the assumption ${\cal O}^{\perp} \neq \emptyset$. We further note that it is straightforward to prove that the quasifree Hadamard states of the KG-field over $(M,g)$ possess at each point in $M$ scaling limits (in the sense of Sect.\ 16.2.4 in [3], see also [22,32]) which are equal to the theory of the massless KG-field in Minkowski-spacetime. Together with (a) and (c) of the present Theorem this shows that the assumptions of Thm.\ 16.2.18 in [3] are fulfilled, and the ${\cal R}_{\omega}({\cal O})$ are type ${\rm III}_1$-factors for all regular diamonds ${\cal O}$ with ${\cal O}^{\perp} \neq \emptyset$. The hyperfiniteness follows from the split-property (d) and the regularity (e), cf.\ Prop.\ 17.2.1 in [3]. The same arguments may be applied to ${\cal R}_{\omega}({\cal O}^{\perp})$, yielding its type ${\rm III}_1$-property (meaning that in its central decomposition only type ${\rm III}_1$-factors occur) and hyperfiniteness. If $\omega$ is an Hadamard vacuum, then ${\cal R}_{\omega}({\cal O}^{\perp}) = {\cal R}_{\omega}({\cal O})'$ is a factor unitarily equivalent to ${\cal R}_{\omega}({\cal O})$. For the last statement note that ${\cal O}^{\perp} = \emptyset$ implies that the spacetime has a compact Cauchy-surface on which ${\cal O}$ is based. In this case ${\cal R}_{\omega}({\cal O}) = \pi_{\omega}({\cal A}[K,\kappa])''$ (use the regularity of $\partial G$, and (c), (e) and primitive causality). But since $\omega$ is quasiequivalent to any Hadamard vacuum by the relative compactness of ${\cal O}$, ${\cal R}_{\omega}({\cal O}) = \pi_{\omega}({\cal A}[K,\kappa])''$ is a type ${\rm I}_{\infty}$-factor. $\Box$ \\[10pt] We end this section, and therefore this work, with a few concluding remarks. First we note that the split-property signifies a strong notion of statistical independence. It can be deduced from constraints on the phase-space behaviour (``nuclearity'') of the considered quantum field theory. We refer to [9,31] for further information and also to [62] for a review, as a discussion of these issues lies beyond the scope of this article. The same applies to a discussion of the property of the local von Neumann algebras ${\cal R}_{\omega}({\cal O})$ to be hyperfinite and of type ${\rm III}_1$. We only mention that for quantum field theories on Minkowski spacetime it can be established under very general (model-independent) conditions that the local (von Neumann) observable algebras are hyperfinite and of type ${\rm III}_1$, and refer the reader to [7] and references cited therein.
However, the property of the local von Neumann algebras to be of type ${\rm III}_1$, together with the separability of the GNS-Hilbertspace ${\cal H}_{\omega}$, has an important consequence which we would like to point out (we have used it implicitly already in the proof of Thm.\ 3.6(b)): ${\cal H}_{\omega}$ contains a dense subset ${\sf ts}({\cal H}_{\omega})$ of vectors which are cyclic and separating for all ${\cal R}_{\omega}({\cal O})$ whenever ${\cal O}$ is a diamond with ${\cal O}^{\perp} \neq \emptyset$. But so far it has only been established in special cases that $\Omega_{\omega} \in {\sf ts}({\cal H}_{\omega})$, see [64]. At any rate, when $\Omega := \Omega_{\omega} \in {\sf ts}({\cal H}_{\omega})$ one may consider for a pair of regular diamonds ${\cal O}_1,{\cal O}_2$ with $\overline{{\cal O}_1} \subset {\cal O}_2$ and ${\cal O}_2^{\perp}$ nonvoid the modular operator $\Delta_2$ of the pair $({\cal R}_{\omega}({\cal O}_2),\Omega)$ (cf.\ [39]). The split property and the factoriality of ${\cal R}_{\omega}({\cal O}_1)$ and ${\cal R}_{\omega} ({\cal O}_2)$ imply that the map \begin{equation} \Xi_{1,2} : A \mapsto \Delta^{1/4}_2 A \Omega\,, \quad A \in {\cal R}_{\omega}({\cal O}_1)\,, \end{equation} is compact [8]. As explained in [8], ``modular compactness'' or ``modular nuclearity'' may be viewed as suitable generalizations of ``energy compactness'' or ``energy nuclearity'' to curved spacetimes as notions to measure the phase-space behaviour of a quantum field theory (see also [65]). Thus an interesting question would be whether the maps (3.31) are even nuclear. Summarizing, it can be said that Thm.\ 3.6 shows that the nets of von Neumann observable algebras of the KG-field over a globally hyperbolic spacetime in the representations of quasifree Hadamard states have all the properties one would expect for physically reasonable representations. This supports the point of view that quasifree Hadamard states appear to be a good choice for physical states of the KG-field over a globally hyperbolic spacetime. Similar results are expected to hold also for other linear fields. Finally, the reader will have noticed that we have been considering exclusively the quantum theory of a KG-field on a {\it globally hyperbolic} spacetime. For recent developments concerning quantum fields in the background of non-globally hyperbolic spacetimes, we refer to [44] and references cited there. \\[24pt] {\bf Acknowledgements.} I would like to thank D.\ Buchholz for valuable comments on a very early draft of Chapter 2. Moreover, I would like to thank C.\ D'Antoni, R.\ Longo, J.\ Roberts and L.\ Zsido for their hospitality and their interest in quantum field theory in curved spacetimes. I also appreciated conversations with R.\ Conti, D.\ Guido and L.\ Tuset on various parts of the material of the present work. \\[28pt] \noindent {\Large {\bf Appendix}} \\[24pt] {\bf Appendix A} \\[18pt] For the sake of completeness, we include here the interpolation argument in the form we use it in the proof of Theorem 2.2 and in Appendix B below. It is a standard argument based on Hadamard's three-line-theorem, cf.\ Chapter IX in [57]. \\[10pt] {\bf Lemma A.1} {\it Let ${\cal F},{\cal H}$ be complex Hilbertspaces, $X$ and $Y$ two non-negative, injective, selfadjoint operators in ${\cal F}$ and ${\cal H}$, respectively, and $Q$ a bounded linear operator ${\cal H} \to {\cal F}$ such that $Q{\rm Ran}(Y) \subset {\rm dom}(X)$. Suppose that the operator $XQY$ admits a bounded extension $T :{\cal H} \to {\cal F}$.
Then for all $0 \leq \tau \leq 1$, it holds that $Q{\rm Ran}(Y^{\tau}) \subset {\rm dom}(X^{\tau})$, and the operators $X^{\tau}QY^{\tau}$ are bounded by $||\,T\,||^{\tau} ||\,Q\,||^{1 - \tau}$. } \\[10pt] {\it Proof.} The operators $\ln(X)$ and $\ln(Y)$ are (densely defined) selfadjoint operators. Let the vectors $x$ and $y$ belong to the spectral subspaces of $\ln(X)$ and $\ln(Y)$, respectively, corresponding to an arbitrary finite interval. Then the functions ${\bf C} \owns z \mapsto {\rm e}^{z\ln(X)}x$ and ${\bf C} \owns z \mapsto {\rm e}^{z\ln(Y)}y$ are holomorphic. Moreover, ${\rm e}^{\tau \ln(X)}x = X^{\tau}x$ and ${\rm e}^{\tau \ln(Y)}y = Y^{\tau}y$ for all real $\tau$. Consider the function $$ F(z) := \langle {\rm e}^{\overline{z}\ln(X)}x,Q{\rm e}^{z\ln(Y)}y \rangle_{{\cal F}} \,.$$ It is easy to see that this function is holomorphic on ${\bf C}$, and also that the function is uniformly bounded for $z$ in the strip $\{z : 0 \leq {\sf Re}\,z \leq 1 \}$. For $z = 1 + it$, $t \in {\bf R}$, one has $$ |F(z)| = |\langle {\rm e}^{-it\ln(X)}x,XQY{\rm e}^{it\ln(Y)}y \rangle_{{\cal F}} | \leq ||\,T\,||\,||\,x\,||_{{\cal F}}||\,y\,||_{{\cal H}} \,,$$ and for $z = it$, $t \in {\bf R}$, $$ |F(z)| = |\langle {\rm e}^{-it\ln(X)}x,Q{\rm e}^{it\ln(Y)}y \rangle_{{\cal F}} | \leq ||\,Q\,||\,||\,x\,||_{{\cal F}}||\,y\,||_{{\cal H}} \,.$$ By Hadamard's three-line-theorem, it follows that for all $z = \tau + it$ in the said strip there holds the bound $$ |F(\tau + it)| \leq ||\,T\,||^{\tau}||\,Q\,||^{1 - \tau}||\,x\,||_{{\cal F}} ||\,y\,||_{{\cal H}}\,.$$ As $x$ and $y$ were arbitrary members of the finite spectral interval subspaces, the last estimate extends to all $x$ and $y$ lying in cores for the operators $X^{\tau}$ and $Y^{\tau}$, from which the claimed statement follows. $\Box$ \\[24pt] {\bf Appendix B} \\[18pt] For the convenience of the reader we collect here two well-known results about Sobolev norms on manifolds which are used in the proof of Proposition 3.5. The notation is as follows. $\Sigma$ and $\Sigma'$ will denote smooth, finite dimensional manifolds (connected, paracompact, Hausdorff); $\gamma$ and $\gamma'$ are complete Riemannian metrics on $\Sigma$ and $\Sigma'$, respectively. Their induced volume measures are denoted by $d\eta$ and $d\eta'$. We abbreviate by $A_{\gamma}$ the selfadjoint extension in $L^2(\Sigma,d\eta)$ of the operator $-\Delta_{\gamma} +1$ on $C_0^{\infty}(\Sigma)$, where $\Delta_{\gamma}$ is the Laplace-Beltrami operator on $(\Sigma,\gamma)$; note that [10] contains a proof that $(-\Delta_{\gamma} + 1)^k$ is essentially selfadjoint on $C_0^{\infty}(\Sigma)$ for all $k \in {\bf N}$. $A'$ will be defined similarly with respect to the corresponding objects of $(\Sigma',\gamma')$. As in the main text, the $m$-th Sobolev scalar product is $\langle u,v \rangle_{\gamma,m} = \langle u,A_{\gamma}^{m}v \rangle$ for $u,v \in C_0^{\infty}(\Sigma)$ and $m \in {\bf R}$, where $\langle\,.\,,\,.\, \rangle$ is the scalar product of $L^2(\Sigma,d\eta)$. Analogously we define $\langle\,.\,,\,.\,\rangle_{ \gamma',m}$. For the corresponding norms we write $||\,.\,||_{\gamma,m}$, resp., $||\,.\,||_{\gamma',m}$. \\[10pt] {\bf Lemma B.1} {\it (a) Let $\chi \in C_0^{\infty}(\Sigma)$. Then there is for each $m \in {\bf R}$ a constant $c_m$ so that $$ ||\,\chi u \,||_{\gamma,m} \leq c_m ||\, u \,||_{\gamma,m}\,, \quad u \in C_0^{\infty}(\Sigma) \,.$$ \\[6pt] (b) Let $\phi \in C^{\infty}(\Sigma)$ be strictly positive and $G \subset \Sigma$ open and relatively compact.
Then there are for each $m \in {\bf R}$ two positive constants $\beta_1,\beta_2$ so that $$ \beta_1||\,\phi u\,||_{\gamma,m} \leq ||\,u\,||_{\gamma,m} \leq \beta_2||\,\phi u\,||_{\gamma,m}\,, \quad u \in C_0^{\infty}(G)\,.$$ } {\it Proof.} (a) We may suppose that $\chi$ is real-valued (otherwise we treat real and imaginary parts separately). A tedious but straightforward calculation shows that the claimed estimate is fulfilled for all $m =2k$, $k \in {\bf N}_0$. Hence $A^k \chi A^{-k}$ extends to a bounded operator on $L^2(\Sigma,d\eta)$, and the same is true of the adjoint $A^{-k}\chi A^k$. Thus by the interpolation argument, cf.\ Lemma A.1, $A^{\tau k} \chi A^{-\tau k}$ is bounded for all $-1 \leq \tau \leq 1$. This yields the stated estimate. \\[6pt] (b) This is a simple corollary of (a). For the first estimate, note that we may replace $\phi$ by a smooth function with compact support. Then note that the second estimate is equivalent to $||\,\phi^{-1}v\,||_{\gamma,m} \leq \beta_2||\,v\,||_{\gamma,m}$, $v \in C_0^{\infty}(G)$, and again we use that instead of $\phi^{-1}$ we may take a smooth function of compact support. $\Box$ \\[10pt] {\bf Lemma B.2} {\it Let $(\Sigma,\gamma)$ and $(\Sigma',\gamma')$ be two complete Riemannian manifolds, $N$ and $N'$ two open subsets of $\Sigma$ and $\Sigma'$, respectively, and $\Psi : N \to N'$ a diffeomorphism. Given $m \in {\bf R}$ and some open, relatively compact subset $G$ of $\Sigma$ with $\overline{G} \subset N$, there are two positive constants $b_1,b_2$ such that $$ b_1||\,u\,||_{\gamma,m} \leq ||\,\Psi^*u\,||_{\gamma',m} \leq b_2||\,u\,||_{\gamma,m} \,, \quad u \in C_0^{\infty}(G)\,,$$ where $\Psi^*u := u {\mbox{\footnotesize $\circ$}} \Psi^{-1}$. } \\[10pt] {\it Proof.} Again it is elementary to check that such a result is true for $m = 2k$ with $k \in {\bf N}_0$. One infers that, choosing $\chi \in C_0^{\infty}(N)$ with $\chi|G \equiv 1$ and setting $\chi' := \Psi^*\chi$, there is for each $k \in {\bf N}_0$ a positive constant $b$ fulfilling $$ ||\,A^k\chi\Psi_*\chi'v\,||_{\gamma,0} \leq b\,||\,(A')^kv\,||_{\gamma',0}\,, \quad v \in C_0^{\infty}(\Sigma')\,;$$ here $\Psi_*v := v {\mbox{\footnotesize $\circ$}} \Psi$. Therefore, $$ A^k{\mbox{\footnotesize $\circ$}} \chi{\mbox{\footnotesize $\circ$}} \Psi_*{\mbox{\footnotesize $\circ$}} \chi'{\mbox{\footnotesize $\circ$}} (A')^{-k} $$ extends to a bounded operator $L^2(\Sigma',d\eta') \to L^2(\Sigma,d\eta)$ for each $k \in {\bf N}_0$. Interchanging the roles of $A$ and $A'$, one obtains that also $$ (A')^k{\mbox{\footnotesize $\circ$}} \chi'{\mbox{\footnotesize $\circ$}} \Psi^*{\mbox{\footnotesize $\circ$}} \chi{\mbox{\footnotesize $\circ$}} A^{-k} $$ extends, for each $k \in {\bf N}_0$, to a bounded operator $L^2(\Sigma,d\eta) \to L^2(\Sigma',d\eta')$. The boundedness transfers to the adjoints of these two operators. Observe then that for $(\Psi_*)^{\dagger}$, the adjoint of $\Psi_*$, we have $(\Psi_*)^{\dagger} = \rho^2{\mbox{\footnotesize $\circ$}} (\Psi^*)$ on $C_0^{\infty}(N)$, and similarly, for the adjoint $(\Psi^*)^{\dagger}$ of $\Psi^*$ we have $(\Psi^*)^{\dagger} = \Psi_* {\mbox{\footnotesize $\circ$}} \rho^{-2}$ on $C_0^{\infty}(N')$, where $\rho^2 = \Psi^*d\eta/d\eta'$ is a smooth density function on $N'$, cf.\ eqn.\ (3.14). It can now easily be worked out that the interpolation argument of Lemma A.1 yields again the claimed result. \begin{flushright} $\Box$ \end{flushright}
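The bound of Lemma A.1 can also be checked numerically in finite dimensions, where all domain questions trivialize. A minimal sketch in Python (assuming NumPy; the matrix size, seed and the regularization of the positive matrices are arbitrary choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_positive(n):
    # random positive definite (hence injective, selfadjoint) matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + 0.1 * np.eye(n)

def frac_power(A, t):
    # A^t by the spectral calculus; A is symmetric positive definite
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

X, Y = random_positive(n), random_positive(n)
Q = rng.standard_normal((n, n))
T = X @ Q @ Y
for tau in np.linspace(0.0, 1.0, 11):
    lhs = np.linalg.norm(frac_power(X, tau) @ Q @ frac_power(Y, tau), 2)
    rhs = np.linalg.norm(T, 2)**tau * np.linalg.norm(Q, 2)**(1.0 - tau)
    assert lhs <= rhs + 1e-9   # ||X^tau Q Y^tau|| <= ||T||^tau ||Q||^(1-tau)
\end{verbatim}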
\section{Introduction} \noindent Has the quark gluon plasma been discovered at the CERN SPS? Experiment NA50 has reported an abrupt decrease in $\psi$ production in Pb+Pb collisions at 158 GeV per nucleon \cite{na50}. Specifically, the collaboration presented a striking `threshold effect' in the $\psi$--to--continuum ratio by plotting it as a function of a calculated quantity, the mean path length of the $\psi$ through the nuclear medium, $L$, as shown in fig.~1a. This apparent threshold has sparked considerable excitement as it may signal deconfinement in the heavy Pb+Pb system \cite{bo}. \begin{figure} \vskip -1.0in \epsfxsize=4.5in \leftline{\epsffile{fig1.ps}} \vskip -2.4in \caption[]{(a) The NA50 \cite{na50} comparison of $\psi$ production in Pb+Pb and S+U collisions as a function of the average path length $L$, see eq.\ (3). $B$ is the $\psi\rightarrow \mu^+\mu^-$ branching ratio. (b) Transverse energy dependence of Pb+Pb data. Curves in (a) and (b) are computed using eqs.\ (4--6).} \end{figure} In this talk I report on work with Ramona Vogt in ref.~\cite{gv2} comparing Pb results to predictions \cite{gv,gstv} using a hadronic model of charmonium suppression. We first demonstrate that the behavior in the NA50 plot, fig.~1a, is not a threshold effect but, rather, reflects the approach to the geometrical limit of $L$ as the collisions become increasingly central. When plotted as a function of the {\it measured} neutral transverse energy $E_{T}$ as in fig.~1b, the data varies smoothly as in S+U measurements in fig.~3b below \cite{na50,na38,na38c,na38d,na38e}. The difference between S+U and Pb+Pb data lies strictly in the relative magnitude. To assess this magnitude, we compare $\psi$ and $\psi^\prime$ data to expectations based on the hadronic comover model \cite{gv,gstv}. The curves in fig.~1 represent our calculations using parameters fixed earlier in Ref.\ \cite{gstv}. Our result is essentially the same as the Pb+Pb prediction in \cite{gv}. Our primary intention is to demonstrate that there is no evidence for a strong discontinuity between $p$A, S+U and Pb+Pb data. However, to quote Maurice Goldhaber, ``$\ldots$ absence of evidence is {\it not} evidence of absence.'' Our secondary goal is to show that our model predictions agree with the new Pb+Pb data. The consistency of these predictions is evident from the agreement of our old $p$A and S+U calculations with more recent NA38 and NA51 data. Nevertheless, the significance of this result must be weighed by the fact that all $p$A and AB data are preliminary and at different beam energies. In this work, we do not attempt to show that our comover interpretation of the data is unambiguous -- this is certainly impossible at present. \section{Nucleons and Comovers} The hadronic contribution to charmonium suppression arises from scattering of the nascent $\psi$ with produced particles -- the comovers -- and nucleons \cite{gv,gstv}. To determine the suppression from nucleon absorption of the $\psi$, we calculate the probability that a $c{\overline c}$ pair produced at a point $(b, z)$ in a nucleus survives scattering with nucleons to form a $\psi$. The standard \cite{gstv,gh} result is \begin{equation} S_{A} = {\rm exp}\{-\int_z^\infty\! dz'\, \rho_{A}(b, z') \sigma_{\psi N}\} \end{equation} where $\rho_{A}$ is the nuclear density, $b$ the impact parameter and $\sigma_{\psi N}$ the absorption cross section for $\psi$--nucleon interactions.
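To get a feeling for the size of this suppression, eq.\ (1) can be evaluated directly. A minimal sketch in Python (assuming NumPy and a standard Woods--Saxon profile for Pb with $\rho_0 = 0.17$~fm$^{-3}$, $R = 6.6$~fm, $a = 0.55$~fm; $\sigma_{\psi N} = 4.8$~mb is the value adopted below):
\begin{verbatim}
import numpy as np

rho0, R, a = 0.17, 6.6, 0.55      # fm^-3, fm, fm (assumed Pb profile)
sigma_psiN = 0.48                 # 4.8 mb expressed in fm^2

def rho(b, z):
    # Woods-Saxon nuclear density at impact parameter b, depth z
    r = np.hypot(b, z)
    return rho0 / (1.0 + np.exp((r - R) / a))

def S_A(b, z0):
    # eq. (1): survival of a c-cbar pair created at (b, z0)
    z = np.linspace(z0, z0 + 30.0, 3000)
    dz = z[1] - z[0]
    return np.exp(-np.sum(rho(b, z)) * dz * sigma_psiN)

print(S_A(0.0, 0.0))   # central production: roughly 0.6
\end{verbatim}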
One can estimate $S_{A}\sim \exp\{- \sigma_{\psi N} \rho_0 L_{A}\}$, where $L_{A}$ is the path length traversed by the $c\overline{c}$ pair. Suppression can also be caused by scattering with mesons that happen to travel along with the $c\overline{c}$ pair (see refs.\ in \cite{gv}). The density of such comovers scales roughly as $E_{T}$. The corresponding survival probability is \begin{equation} S_{\rm co} = {\rm exp}\{- \int\! d\tau n\, \sigma_{\rm co} v_{\rm rel}\}, \end{equation} where $n$ is the comover density and $\tau$ is the time in the $\psi$ rest frame. We write $S_{\rm co}\sim {\rm exp}\{-\beta E_{T}\}$, where $\beta$ depends on the scattering frequency, the formation time of the comovers and the transverse size of the central region, $R_{T}$, {\it cf.} eq.\ (8). To understand the saturation of the Pb data with $L$ in fig.~1a, we apply the schematic approximation of Ref.~\cite{gh} for the moment to write \begin{equation} {{\sigma^{AB}_\psi(E_{T})}\over{\sigma^{AB}_{\mu^+\mu^-}(E_{T})}} \propto \langle S_{A}S_{B}S_{\rm co}\rangle \sim {\rm e}^{-\sigma_{\psi N}\rho_{0}L}{\rm e}^{-\beta E_{T}}, \end{equation} where the brackets imply an average over the collision geometry for fixed $E_{T}$ and $\sigma(E_T) \equiv d\sigma/dE_T$. The path length $L\equiv \langle L_{A}+L_{B}\rangle$ and transverse size $R_T$ depend on the collision geometry. The path length grows with $E_{T}$, asymptotically approaching the geometric limit $R_A + R_B$. Explicit calculations show that nucleon absorption begins to {\it saturate} for $b < R_A$, where $R_A$ is the radius of the smaller of the two nuclei, see fig.~4 below. On the other hand, $E_{T}$ continues to grow for $b < R_A$ due, {\it e.g.}, to fluctuations in the number of $NN$ collisions. Equation (2) falls exponentially in this regime because $\beta$, like $L$, saturates. In fig.~1b, we compare the Pb data to calculations of the $\psi$--to--continuum ratio that incorporate nucleon and comover scattering. The contribution due to nucleon absorption indeed levels off for small values of $b$, as expected from eq.\ (3). Comover scattering accounts for the remaining suppression. These results are {\it predictions} obtained using the computer code of Ref.~\cite{gv} with parameters determined in Ref.~\cite{gstv}. However, to confront the present NA50 analysis \cite{na50}, we account for changes in the experimental coverage as follows: \begin{itemize} \item Calculate the continuum dimuon yield in the new mass range $2.9 < M < 4.5$~GeV. \item Adjust the $E_T$ scale to the pseudorapidity acceptance of the NA50 calorimeter, $1.1 < \eta < 2.3$. \end{itemize} The agreement in fig.~1 depends on these updates. \section{$J/\psi$ Suppression} We now review the details of our calculations, highlighting the adjustments as we go. For collisions at a fixed $b$, the $\psi$--production cross section is \begin{equation} \sigma_\psi^{AB}(b) = \sigma^{NN}_{\psi}\!\int\! d^2s dz dz^\prime\,\rho_A(s,z) \rho_B(b-s,z^\prime)\, S, \end{equation} where $S\equiv S_AS_BS_{\rm co}$ is the product of the survival probabilities in the projectile $A$, target $B$ and comover matter. The continuum cross section is \begin{equation} \sigma_{\mu^{+}\mu^{-}}^{AB}(b) = \sigma^{NN}_{\mu^+\mu^-}\!\int\! d^2s dz dz^\prime\,\rho_A(s,z) \rho_B(b-s,z^\prime). \end{equation} The magnitude of (4,5) and their ratio are fixed by the elementary cross sections $\sigma^{NN}_{\psi}$ and $\sigma^{NN}_{\mu^{+}\mu^{-}}$.
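The geometric content of eqs.\ (4,5) can be illustrated with the nucleon-absorption factor alone. Integrating eq.\ (1) along the beam axis gives $\int dz\,\rho_A S_A = [1 - {\rm e}^{-\sigma_{\psi N}T_A}]/\sigma_{\psi N}$ in terms of the thickness function $T_A(s) = \int dz\,\rho_A(s,z)$, so that $\langle S_A S_B\rangle$ at fixed $b$ reduces to a transverse integral. A sketch in Python for Pb+Pb (same assumed Woods--Saxon profile as above), which also prints the corresponding effective path length:
\begin{verbatim}
import numpy as np

rho0, R, a, sig = 0.17, 6.6, 0.55, 0.48   # assumed Pb profile; sigma in fm^2
z = np.linspace(-15.0, 15.0, 601)
dz = z[1] - z[0]

def T(b):
    # thickness function T(b) = int dz rho(b, z)
    r = np.sqrt(b**2 + z**2)
    return np.sum(rho0 / (1.0 + np.exp((r - R) / a))) * dz

x = np.linspace(-12.0, 12.0, 121)
X, Y = np.meshgrid(x, x)
TA = np.vectorize(T)(np.hypot(X, Y))           # projectile thickness

for b in [0.0, 3.0, 6.0, 9.0, 12.0]:
    TB = np.vectorize(T)(np.hypot(X - b, Y))   # target, shifted by b
    # int dz rho S = (1 - exp(-sig*T))/sig, the closed form of eq. (1)
    num = np.sum((1 - np.exp(-sig * TA)) * (1 - np.exp(-sig * TB))) / sig**2
    den = np.sum(TA * TB)
    L = -np.log(num / den) / (sig * rho0)      # effective path length
    print(f"b = {b:4.1f} fm   <S_A S_B> = {num/den:.3f}   L = {L:5.2f} fm")
\end{verbatim}
The printed $L$ varies only weakly once $b$ drops below $R_A$, which is the saturation referred to above.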
We calculate $\sigma^{NN}_{\psi}$ using the phenomenologically successful color evaporation model \cite{hpc-psi}. The continuum in the mass range used by NA50, $2.9 < M < 4.5$~GeV, is described by the Drell--Yan process. To confront NA50 and NA38 data in the appropriate kinematic regime, we compute these cross sections at leading order following \cite{hpc-psi,hpc-dy} using GRV LO parton distributions with a charm $K$--factor $K_c= 2.7$, a color evaporation coefficient $F_\psi =2.54\%$, and a Drell--Yan $K$--factor $K_{DY}=2.4$. Observe that these choices were fixed by fitting $pp$ data at all available energies \cite{hpc-psi}. Computing $\sigma^{NN}_{\mu^{+}\mu^{-}}$ for $2.9<M<4.5$~GeV corresponds to the first update. To obtain $E_T$ dependent cross sections from eqs.\ (4) and (5), we write \begin{equation} \sigma^{AB}(E_{T}) = \int\! d^2b\, P(E_T,b) \sigma^{AB}(b). \end{equation} The probability $P(E_T,b)$ that a collision at impact parameter $b$ produces transverse energy $E_T$ is related to the minimum--bias distribution by \begin{equation} \sigma_{\rm min}(E_{T}) = \int\! d^{2}b\; P(E_{T}, b). \end{equation} We parametrize $P(E_{T}, b) = C\exp\{- (E_{T}- {\overline E}_{T})^2/2\Delta\}$, where ${\overline E}_{T}(b) = \epsilon {\cal N}(b)$, $\Delta(b) = \omega \epsilon {\overline E}_{T}(b)$, $C(b)=(2\pi\Delta(b))^{-1}$ and ${\cal N}(b)$ is the number of participants (see, {\it e.g.}, Ref.~\cite{gv}). We take $\epsilon$ and $\omega$ to be phenomenological calorimeter--dependent constants. We compare the minimum bias distributions for total hadronic $E_T$ calculated using eq.\ (7) for $\epsilon = 1.3$~GeV and $\omega = 2.0$ to NA35 S+S and NA49 Pb+Pb data \cite{na49}. The agreement in fig.~2a builds our confidence that eq.\ (7) applies to the heavy Pb+Pb system. \begin{figure} \vskip -1.5in \epsfxsize=4.0in \centerline{\epsffile{fig2.ps}} \vskip -1.0in \caption{Transverse energy distributions from eq.\ (7). The S--Pb comparison (a) employs the same parameters.} \end{figure} Figure 2b shows the distribution of neutral transverse energy calculated using eqs.\ (5) and (6) to simulate the NA50 dimuon trigger. We take $\epsilon = 0.35$~GeV, $\omega = 3.2$, and $\sigma^{NN}_{\mu^+\mu^-}\approx 37.2$~pb as appropriate for the dimuon--mass range $2.9 < M < 4.5$~GeV. The $E_T$ distribution for S+U~$\rightarrow \mu^+\mu^- + X$ from NA38 was described \cite{gstv} using $\epsilon = 0.64$~GeV and $\omega = 3.2$ -- the change in $\epsilon$ corresponds roughly to the shift in particle production when the pseudorapidity coverage is changed from $1.7 < \eta < 4.1$ (NA38) to $1.1 < \eta < 2.3$ (NA50). Taking $\epsilon = 0.35$~GeV for the NA50 acceptance is the second update listed earlier. We now apply eqs.\ (1,2,4) and (5) to charmonium suppression in Pb+Pb collisions. To determine nucleon absorption, we used $p$A data to fix $\sigma_{\psi N}\approx 4.8$~mb in Ref.~\cite{gstv}. This choice is in accord with the latest NA38 and NA51 $pA$ data, see fig.~3a. To specify comover scattering \cite{gstv}, we assumed that the dominant contribution to $\psi$ dissociation comes from exothermic hadronic reactions such as $\rho + \psi \rightarrow D+ \overline{D}$. We further took the comovers to evolve from a formation time $\tau_{0}\sim 2$~fm to a freezeout time $\tau_{F}\sim R_{T}/v_{\rm rel}$ following Bjorken scaling, where $v_{\rm rel}\sim 0.6$ is roughly the average $\psi-\rho$ relative velocity.
The survival probability, eq.\ (2), is then \begin{equation} S_{\rm co} = \exp\{ - \sigma_{\rm co}v_{\rm rel}n_{0}\tau_{0} \ln(R_{T}/v_{\rm rel}\tau_{0})\} \end{equation} where $\sigma_{\rm co} \approx 2\sigma_{\psi N}/3$, $R_{T}\approx R_{A}$ and $n_{0}$ is the initial density of sufficiently massive $\rho, \omega$ and $\eta$ mesons. To account for the variation of density with $E_{T}$, we take $n_{0} = {\overline n}_{0}E_{T}/{\overline E}_{T}(0)$ \cite{gv}. A value $\overline{n}_{0} = 0.8$~fm$^{-3}$ was chosen to fit the central S+U datum. Since we fix the density in central collisions, this simple {\it ansatz} for $S_{\rm co}$ may be inaccurate for peripheral collisions. [Densities $\sim 1$~fm$^{-3}$ typically arise in hadronic models of ion collisions, e.g., refs.~\cite{cascade}. The internal consistency of hadronic models at such densities demands further study.] We expect the comover contribution to the suppression to increase in Pb+Pb relative to S+U for central collisions because both the initial density and lifetime of the system can increase. To be conservative, we assumed that Pb and S beams achieve the same mean initial density. Even so, the lifetime of the system essentially doubles in Pb+Pb because $R_T \sim R_{A}$ increases to 6.6~fm from 3.6~fm in S+U. The increase in the comover contribution evident in comparing figs.~1b and 3b is described by the seemingly innocuous logarithm in eq.\ (8), which increases by $\approx 60\%$ in the larger Pb system. \begin{figure} \vskip -2.8in \epsfxsize=4.5in \rightline{\epsffile{fig3.ps}} \vskip -0.5in \caption[]{(a) $p$A cross sections \cite{na50} in the NA50 acceptance and (b) S+U ratios from '91 \cite{na38c} and '92 \cite{na50} runs. The '92 data are scaled to the '91 continuum. The dashed line indicates the suppression from nucleons alone. The $pp$ cross section in (a) is constrained by the global fit to $pp$ data in ref.~\cite{hpc-psi}.} \end{figure} In Ref.~\cite{gstv}, we pointed out that comovers were necessary to explain S+U data from the NA38 1991 run \cite{na38}. Data just released \cite{na50} from their 1992 run support this conclusion. The '91 $\psi$ data were presented as a ratio to the dimuon continuum in the low mass range $1.7 < M < 2.7$~GeV, where charm decays are an important source of dileptons. On the other hand, the '92 $\psi$ data \cite{na50,na38e} are given as ratios to the Drell--Yan cross section in the range $1.5< M < 5.0$~GeV. That cross section is extracted from the continuum by fixing the $K$--factor in the high mass region \cite{na38f}. To compare our result from Ref.~\cite{gstv} to these data, we scale the '92 data by an empirical factor. This factor is $\approx 10\%$ larger than our calculated factor $\sigma^{NN}_{DY}(92)/\sigma^{NN}_{\rm cont.}(91) \approx 0.4$; these values agree within the NA38 systematic errors. [NA50 similarly scaled the '92 data to the high--mass continuum to produce fig.~1a.] Because our fit is driven by the highest $E_T$ datum, we see from fig.~3b that a fit to the '92 data would not appreciably change our result. Note that a uniform decrease of the ratio would increase the comover contribution needed to explain S+U collisions. NA50 and NA38 have also measured the total $\psi$--production cross section in Pb+Pb \cite{na50} and S+U reactions \cite{na38c}. 
To compare to that data, we integrate eqs.\ (4, 6) to obtain the total $(\sigma/AB)_{\psi} = 0.95$~nb in S+U at 200~GeV and 0.54~nb for Pb+Pb at 158~GeV in the NA50 spectrometer acceptance, $0.4 > x_{F}> 0$ and $-0.5 < \cos\theta < 0.5$ (to correct to the full angular range and $1 > x_{F} > 0$, multiply these cross sections by $\approx 2.07$). The experimental results in this range are $1.03 \pm 0.04 \pm 0.10$~nb for S+U collisions \cite{na38} and $0.44 \pm 0.005 \pm 0.032$ nb for Pb+Pb reactions \cite{na50}. Interestingly, in the Pb system we find a Drell--Yan cross section $(\sigma/AB)_{{}_{DY}} = 37.2$~pb while NA50 finds $(\sigma/AB)_{{}_{DY}} = 32.8\pm 0.9\pm 2.3$~pb. Both the $\psi$ and Drell--Yan cross sections in Pb+Pb collisions are somewhat above the data, suggesting that the calculated rates at the $NN$ level may be $\sim 20-30\%$ too large at 158~GeV. This discrepancy is within ambiguities in current $pp$ data near that low energy \cite{hpc-psi}. Moreover, nuclear effects on the parton densities omitted in eqs.\ (4,5) can affect the total S and Pb cross sections at this level. We remark that if one were to neglect comovers and take $\sigma_{\psi N} = 6.2$~mb, one would find $(\sigma/AB)_{\psi} = 1.03$~nb in S+U at 200~GeV and 0.62~nb for Pb+Pb at 158~GeV. The agreement with S+U data is possible because comovers only contribute to the total cross section at the $\sim 18\%$ level in the light system. This is expected, since the impact--parameter integrated cross section is dominated by large $b$ and the distinction between central and peripheral interactions is more striking for the asymmetric S+U system. As in Ref.~\cite{gstv}, the need for comovers is evident for the $E_{T}$--dependent ratios, where central collisions are singled out. \section{Saturation and the Definition of $L$} To see why saturation occurs in Pb+Pb collisions but not in S+U, we compare the NA50 $L(E_T)$ \cite{na50} to the average impact parameter $\langle b\rangle (E_T)$ in fig.~4. To best understand fig.~1a, we show the values of $L(E_T)$ computed by NA50 for this figure. We use our model to compute $\langle b\rangle = \langle b T_{AB}\rangle/\langle T_{AB}\rangle$, where $\langle f(b)\rangle \equiv \int\!d^2b\; P(E_T,b)f(b)$ and $T_{AB} = \int\!d^{2}sdzdz^\prime \rho_{A}(s,z)\rho_{B}(b-s,z^\prime)$. [Note that NA50 reports similar values of $\langle b\rangle (E_T)$ \cite{na50}.] In the $E_T$ range covered by the S experiments, we see that $\langle b\rangle$ is near $\sim R_{\rm S} = 3.6$~fm or larger. In this range, increasing $b$ dramatically reduces the collision volume and, consequently, $L$. In contrast, in Pb+Pb collisions $\langle b\rangle \ll R_{\rm Pb} =$~6.6~fm for all but the lowest $E_T$ bin, so that $L$ does not vary appreciably. \begin{figure} \vskip -2.8in \epsfxsize=4.5in \rightline{\epsffile{fig4.ps}} \vskip -0.5in \caption[]{$E_T$ dependence of $L$ (solid) used by NA50 \cite{na50} (see fig.~1a) and the average impact parameter $\langle b\rangle$ (dot--dashed). 
The solid line covers the measured $E_T$ range.} \end{figure} \begin{figure} \vskip -2.8in \epsfxsize=4.5in \rightline{\epsffile{l_et_all.ps}} \vskip -0.5in \caption[]{NA50 $L(E_T)$ [1] (points) compared to calculations for realistic nuclear densities (solid), as used here, and for a sharp--surface approximation (dot-dashed).} \end{figure} \begin{figure} \vskip -2.0in \epsfxsize=4.5in \centerline{\epsffile{fig1_L.ps}} \vskip -2.0in \caption{NA50 data replotted with a realistic $L(E_T)$ from (9).} \end{figure} To understand the sensitivity of fig.~1a to the definition of the path length, we now estimate $L(E_T)$ \cite{gv3}. We identify (3) with the exact expression formed from the ratio of (4) and (5). Expanding in $\sigma_{\psi N}$ and neglecting comovers, we find: \begin{equation} L(E_T) = \{2\rho_0\langle T_{AB}\rangle\}^{-1} \left\langle\int\! d^2s\; [T_A(s)]^2T_B(b-s) + [T_B(b-s)]^2T_A(s)\right\rangle, \end{equation} where $T_A(s) = \int \rho_A(s,z) dz$. In fig.~5 we compare the NA50 $L(E_T)$ to the path length calculated using two assumptions for the nuclear density profile: our realistic three--parameter Fermi distribution and the sharp--surface approximation $\rho = \rho_0\Theta(R_A -r)$. NA38~\cite{borhani} obtained $L$ for S+U using the empirical prescription of ref.~\cite{gh}, while NA50 calculated $L$ assuming the sharp-surface approximation~\cite{claudie}. Indeed, we see that the NA50 Pb+Pb values agree with our sharp--surface result, while the NA38 S+U values are nearer to the realistic distribution. To see how the value of the path length can affect the appearance of fig.~1a, we replot in fig.~6 the NA50 data using $L(E_T)$ from (9) with the realistic density. We learn that the appearance of fig.~1a is very sensitive to the definition of $L$. Furthermore, with a realistic $L$, one no longer gets the impression given by the NA50 figure \cite{na50} of Pb+Pb data ``departing from a universal curve.'' Nevertheless, the saturation phenomenon evident in fig.~1a does not vanish. Saturation is a real effect of geometry. \section{$\psi^\prime$ Suppression} \begin{figure} \vskip -3.2in \epsfxsize=4.5in \rightline{\epsffile{fig5.ps}} \vskip -0.3in \caption[]{Comover suppression of $\psi^\prime$ compared to (a) NA38 and NA51 $p$A data \cite{na50,na38e} and (b) NA38 S+U data \cite{na38d} (filled points) and preliminary data \cite{na50}.} \end{figure} \begin{figure} \vskip -2.0in \epsfxsize=4.5in \centerline{\epsffile{fig6.ps}} \vskip -2.0in \caption{Comover suppression in Pb+Pb~$\rightarrow \psi^\prime +X$.} \end{figure} To apply eqs.\ (4-6) to calculate the $\psi^{\prime}$--to--$\psi$ ratio as a function of $E_{T}$, we must specify $\sigma_{\psi^{\prime}}^{NN}$, $\sigma_{\psi^{\prime} N}$, and $\sigma_{\psi^{\prime} {\rm co}}$. Following Ref.~\cite{hpc-psi}, we use $pp$ data to fix $B\sigma_{\psi^{\prime}}^{NN}/B\sigma_{\psi}^{NN} = 0.02$ (this determines $F_{\psi^\prime}$). The value of $\sigma_{\psi^{\prime} N}$ depends on whether the nascent $\psi^{\prime}$ is a color singlet hadron or color octet $c\overline{c}$ as it traverses the nucleus. In the singlet case, one expects the absorption cross sections to scale with the square of the charmonium radius. Taking this {\it ansatz} and assuming that the $\psi^\prime$ forms directly while radiative $\chi$ decays account for 40\% of $\psi$ production, one expects $\sigma_{\psi'}\sim 2.1\sigma_{\psi}$ for interactions with either nucleons or comovers \cite{gstv}.
For the octet case, we take $\sigma_{\psi^{\prime} N} \approx \sigma_{\psi N}$ and fix $\sigma_{\psi^{\prime} {\rm co}}\approx 12$~mb to fit the S+U data. In fig.~7a, we show that the singlet and octet extrapolations describe $p$A data equally well. Our predictions for Pb+Pb collisions are shown in fig.~8. In the octet model, the entire suppression of the $\psi^{\prime}$--to--$\psi$ ratio is due to comover interactions. In view of the schematic nature of our approximation to $S_{\rm co}$ in eq.\ (8), we regard the singlet and octet extrapolations as equally acceptable descriptions of the data. \section{Summary} In summary, the Pb data \cite{na50} cannot be described by nucleon absorption alone. This is seen in the NA50 plot, fig.~1a, and confirmed by our results. The saturation with $L$ but not $E_T$ suggests an additional density--dependent suppression mechanism. Earlier studies pointed out that additional suppression was already needed to describe the S+U results \cite{gstv}; recent data \cite{na50} support that conclusion (see, however, \cite{bo}). Comover scattering explains the additional suppression. Nevertheless, it is unlikely that this explanation is unique. SPS inverse--kinematics experiments ($B < A$) and AGS $p$A studies near the $\psi$ threshold can help pin down model uncertainties. After the completion of \cite{gv2}, several cascade calculations \cite{cascade} have essentially confirmed our conclusions. This confirmation is important, because such calculations do not employ the simplifications ({\it e.g.\ } $n_0\propto E_T$) needed to derive (8). In particular, these models calculate $E_T$ and the comover density consistently. Some of these authors took $\sigma_{\psi N} \sim 6$~mb (instead of $\sim 5$~mb) to fit the NA51 data in fig.~3a somewhat better. I am grateful to Ramona Vogt for her collaboration in this work. I also thank C.~Gerschel and M.~Gonin for discussions of the NA50 data, and M.~Gyulassy, R.~Pisarski and M.~Tytgat for insightful comments. \nonumsection{References}
\section{INTRODUCTION} The underlying assumptions of the dual superconductivity\cite{tmp} of gauge theories, and its appropriateness for describing quark confinement, are not rigorously founded, and it is necessary to perform precise numerical or analytic tests of this conjecture whenever possible. The internal structure of the color flux tube joining a quark pair provides an important test of these ideas, because it should show, as the dual of an Abrikosov vortex, a very peculiar property: it is expected to have a core of normal, hot vacuum as contrasted with the surrounding medium, which is in the dual superconducting phase. The location of the core would be given by the vanishing of the disorder parameter $\langle\Phi_M(x)\rangle=0$, where $\Phi_M$ is some effective magnetic Higgs field. In a pure gauge theory, the formulation of this property from first principles poses some problems, because no local, gauge invariant, disorder field $\Phi_M(x)$ is known. As a consequence, one cannot define in a meaningful, precise way the notion of core of the dual vortex. A possible way out is suggested by the fact that in a medium in which $\langle\Phi_M\rangle=0$ the quarks should be deconfined; then it is expected that the interquark potential inside the flux tube gets modified. As a consequence, one may try to define a gauge-invariant notion of normal core of the flux tube as the region where the interquark interaction mimics a deconfined behavior. Of course one cannot speak of a true deconfinement, as it would require pulling infinitely apart the quarks, while the alleged core has a finite size. A simple, practical way to study in a lattice gauge theory the influence of the flux tube on the quark interaction is based on the study of the system of four coplanar Polyakov loops $P_1,P_2,P_3$ and $P_4$ following two steps \begin{description} \item{~~} Modify the ordinary vacuum by inserting in the action the pair $P_3,P_4^{\dagger}$ acting as sources at a fixed distance $R$. \item{~~} Evaluate in this modified vacuum the correlator of the other pair $P_1,P_2^{\dagger}$ of Polyakov loops, which are used as probes. \end{description} The correlators in the two vacua are related by \begin{equation} \langle P_1 P_2^{\dagger}\rangle_{q\bar{q}}=\frac{\langle P_1 P_2^{\dagger}P_3 P_4^{\dagger}\rangle} {\langle P_3 P_4^{\dagger}\rangle}~~. \end{equation} In this note we study some general properties of these correlators for $T\leq T_c$~. In particular, we point out that at $T=T_c$ the functional form of these correlators is universal and in some $3D$ gauge theories can be written explicitly, even in finite volumes. \section{FOUR POLYAKOV LOOPS} Consider the system of four parallel, coplanar Polyakov loops, symmetrically disposed with respect to the origin of a cubic lattice with periodic boundary conditions in the direction of the imaginary time (which coincides with the common direction of the loops). We study their correlator \begin{equation} \langle P_1 P_2^{\dagger}P_3 P_4^{\dagger}\rangle= \langle P(\scriptsize{{-\frac{r}2}})P^{\dagger}(\scriptsize{{\frac{r}2}})P(\scriptsize{{-\frac{R}2}})P^{\dagger}(\scriptsize{{\frac{R}2}})\rangle \label{four} \end{equation} as a function of $r\le R$. For large $R$ and $r\sim R$ it obeys the asymptotic factorization condition \begin{equation} \langle P_1 P_2^{\dagger} P_3 P_4^{\dagger}\rangle\sim \langle P_1 P_3^{\dagger}\rangle\langle P_2 P_4^{\dagger}\rangle~.
\label{fact} \end{equation} When $T < T_c$, the usual area law $\langle P_1P_2^{\dagger}\rangle\propto\exp(-\sigma r/T)$, where $\sigma$ is the string tension, yields \begin{equation} \langle P_1 P_2^{\dagger}\rangle_{q\bar{q}}\sim\exp(\sigma r/T) \sim1/\langle P_1 P_2^{\dagger}\rangle~~, \end{equation} which gives an apparent repulsion between the two probes due to the attraction of the two sources. The other limit $r\ll R$ is more interesting, because the kinematics does not force any factorization and different confinement models suggest different behaviors. In particular in the naive string picture one is tempted to assume the factorization (\ref{fact}) even in this limit, because within this assumption the total area of the surfaces connecting the Polyakov loops is minimal. On the contrary, in the dual superconductivity picture it is expected that the test particles probe the short distance properties of the hot core of the flux tube, thus the correlator in the modified vacuum would approach a constant ($\sim \langle P\rangle^2_{T>T_c}$) from above and \begin{equation} \langle P_1 P_2^{\dagger}\rangle_{q\bar{q}}>\langle P_1 P_2^{\dagger}\rangle~~~(r\ll R\,, T<T_c)~. \end{equation} In the range $T\ge T_c$ the interior of the flux tube is in the same phase as the surrounding region and the mutual interaction between the two nearby probes should not depend on the presence of very far sources, hence \begin{equation} \langle P_1 P_2^{\dagger}\rangle_{q\bar{q}}\sim\langle P_1 P_2^{\dagger}\rangle~~~ (r\ll R\,, T\ge T_c)~. \label{faq} \end{equation} \subsection{ Critical Behavior} According to the widely tested Svetitsky-Yaffe conjecture, any gauge theory in $d+1$ dimensions with a continuous deconfining transition belongs to the same universality class as a $d$-dimensional $C(G)$-symmetric spin model, where $C(G)$ is the center of the gauge group. It follows that at the critical point all the critical indices describing the two transitions and all the dimensionless ratios of correlation functions of corresponding observables in the two theories should coincide. In particular, since the order parameter of the gauge theory is obviously mapped into the corresponding one of the spin model, the correlation functions among Polyakov loops should be proportional to the corresponding correlators of spin operators: \begin{equation} \langle P_1\dots P_{2n}\rangle_{T=T_c}\propto \langle s_1\dots s_{2n}\rangle~~. \end{equation} Conformal field theory has been very successful in determining the exact form of these universal functions for $d=2$ even in a finite box, which is precious information for a correct comparison with numerical simulations. In particular, using the known results of the $2D$ critical Ising model in a rectangle $L_1\times L_2$ with periodic boundary conditions \cite{fsz} we can write explicitly the correlator of any (even) number $2n$ of Polyakov loops of any $2+1$ gauge theory with $C(G)=\hbox{{\rm Z{\hbox to 3pt{\hss\rm Z}}}}_2$. Let $x_j,y_j$ be the spatial coordinates of $P_j$ and define the complex variables $z_j=\frac{x_j}{L_1}+i\frac{y_j}{L_2}$ and $\tau=iL_2/L_1$. Then \begin{equation} \langle P_1\dots P_{2n}\rangle^2=c_n\sum_{\nu=1}^{4} \sum_{\varepsilon_i=\pm1}^{~}{\,}' A_\nu(\varepsilon \cdot z)\prod_{i<j}B_{ij} \label{crt} \end{equation} with $\varepsilon\cdot z=\sum_i \varepsilon_iz_i$ and the primed sum is constrained by $\sum_i\varepsilon_i=0$~; $c_n$ is an overall constant that can be expressed by factorization in terms of $c_1$.
The universal functions $A_\nu$ and $B_{ij}$ can be written in terms of the four Jacobi theta functions $\vartheta_\nu(z,\tau)$ as follows \begin{equation} B_{ij}=\left\vert\frac{\vartheta_1(z_i-z_j,\tau)}{\vartheta_1'(0,\tau)} \right\vert^{\varepsilon_i\varepsilon_j/2}, \end{equation} \begin{equation} A_\nu(z)=\left\vert\frac{\vartheta_\nu(z,\tau)}{\vartheta_\nu(0,\tau)} \right\vert^2. \end{equation} In the infinite box limit $L_1,L_2\to\infty$, using the Taylor expansion \begin{equation} \vartheta_\nu(z,\tau)=a_\nu(1-\delta_{1,\nu})+b_\nu\,z+O(z^2)~, \end{equation} the correlator (\ref{four}) becomes \begin{equation} \langle P_1P_2P_3P_4\rangle=\frac{4c_1^2}{(Rr)^\frac14} \sqrt{\frac{R+r}{R-r}}~~, \end{equation} which satisfies both factorizations (\ref{fact},\ref{faq}). \section {CLUSTER ALGORITHM} In order to test the above formulae at criticality it is convenient to perform the numerical simulations in the simplest model belonging to the above-mentioned universality class, which is the $3D$ $\hbox{{\rm Z{\hbox to 3pt{\hss\rm Z}}}}_2$ gauge model. Using the duality transformation it is possible to build up a one-to-one mapping of physical observables of the gauge system into the corresponding spin quantities. A great advantage of this method is that a non-local cluster updating algorithm \cite{sw} can be used, which has been proven very successful in fighting critical slowing down. In this framework it is easily shown that the vacuum expectation value of any set $\{C_1\dots C_n\}$ of Polyakov or Wilson loops of arbitrary shapes is simply encoded in the topology of Fortuin-Kasteleyn (FK) clusters: to each Monte Carlo configuration we assign a weight 1 whenever there is no FK cluster topologically linked to any $C_i\in\{C_1\dots C_n\}$, otherwise we assign a weight 0. Let $N_0$ and $N_1$ be the number of configurations of weight 0 and 1, respectively; then we have simply \begin{equation} \langle C_1\dots C_n\rangle=\frac{N_1}{N_0+N_1}~~. \end{equation} This method provides us with a handy, very powerful tool to estimate the correlator of any set of Wilson or Polyakov loops even at criticality. \section{RESULTS} In order to test the critical behavior of the multiloop correlator one has to know with high precision the location of the critical temperature as a function of the coupling $\beta$. We took advantage of ref.\cite{ch}, where these critical values have been obtained with an extremely high statistical accuracy. We report in Fig.1 some results at $\beta=0.746035$ corresponding to $1/aT_c=N_{tc}=6 $ and to a string tension $\sigma a^2=0.0189(2)$. The open circles are the data for the correlator $\langle P(\scriptsize{{-\frac{r}2}})P(\scriptsize{{\frac{r}2}})\rangle$ in a $N_t\times N_x\times N_y$ lattice with $N_t=3N_{tc},N_x=N_y=64$. They are well fitted by the one-parameter formula $c\exp(-\sigma N_t r)/\eta(i\frac{N_t}{2r})$, where the Dedekind $\eta$ function takes into account the quantum contribution of the flux tube vibrations \cite{pv}. The square symbols correspond to the correlator $\langle P(\scriptsize{{-\frac{r}2}})P(\scriptsize{{\frac{r}2}})\rangle_{q\bar{q}}$ at the same temperature, in the presence of a pair of sources at a distance $R=24a$. The data in the central region are well fitted by a two-parameter formula $c_{q\bar{q}}\exp(-\sigma N_t r)/\eta(i\frac{N_t}{2r})+b\to c'_{q\bar{q}}\frac{e^{-mr}}{\sqrt{r}}+b$ which simulates a high temperature behavior with a screening mass $m=\sigma N_t$ and an order parameter $\langle P\rangle=\sqrt{b}$.
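Both fit formulas are elementary to evaluate numerically. A minimal sketch in Python of the first, string-corrected form (the overall constant $c$ is set to $1$ here; $\sigma a^2=0.0189$ and $N_t=3N_{tc}=18$ are the values quoted above, and the Dedekind function is computed from its product representation $\eta(\tau)={\rm e}^{i\pi\tau/12}\prod_{n\geq 1}(1-q^n)$ with $q={\rm e}^{2\pi i\tau}$):
\begin{verbatim}
import cmath, math

def dedekind_eta(tau, nmax=200):
    # eta(tau) = exp(i*pi*tau/12) * prod_{n>=1} (1 - q^n), q = exp(2*pi*i*tau)
    q = cmath.exp(2j * math.pi * tau)
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1.0 - q**n
    return cmath.exp(1j * math.pi * tau / 12.0) * prod

sigma, Nt, c = 0.0189, 18, 1.0    # string tension (lattice units); c arbitrary
for r in [6, 10, 14, 18, 22, 26]:
    fit = c * math.exp(-sigma * Nt * r) / abs(dedekind_eta(0.5j * Nt / r))
    print(r, fit)
\end{verbatim}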
The black circles correspond to $\langle P(\scriptsize{{-\frac{r}2}})P(\scriptsize{{\frac{r}2}})P(\scriptsize{{-\frac{R}2}})P(\scriptsize{{\frac{R}2}})\rangle$ evaluated at $T=T_c$ with $R=16a$. They are nicely fitted by eq.(\ref{crt}) (continuous line). Note that such a curve does not contain any free parameters, since $c_2=\sqrt{2}c_1^2$ with $c_1 N_x^{\frac14}=0.199(4)~$ as estimated by measuring $\langle P_1P_2\rangle$ on lattices of different sizes at $T=T_c$ and $N_{tc}=6$. \vskip0.3cm \vskip -2.3 cm \hskip-2.cm\epsfig{file=figlat96.ps,height=10.5cm} \vskip -1.9cm Figure 1. { Correlator of two Polyakov loops inside and outside the flux tube} \vskip0.2cm
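As for the cluster estimator of Sect.\ 3, once the linking test on the FK clusters has been carried out, the measurement reduces to simple counting. A toy sketch in Python (assuming NumPy; the linking flags below are random stand-ins for the output of an actual cluster search):
\begin{verbatim}
import numpy as np

def loop_correlator(linked_flags):
    # FK-cluster estimator: weight 0 if some cluster links a loop, else 1;
    # <C_1 ... C_n> = N_1 / (N_0 + N_1)
    w = 1 - np.asarray(linked_flags, dtype=int)
    mean = w.mean()
    err = np.sqrt(mean * (1.0 - mean) / len(w))   # binomial error
    return mean, err

# toy data: 10000 hypothetical configurations, 37% of them linked
rng = np.random.default_rng(1)
print(loop_correlator(rng.random(10000) < 0.37))
\end{verbatim}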
\section{Introduction} Quantum groups, or q-deformed Lie algebras, are specific deformations of classical Lie algebras. From a mathematical point of view, they are non-commutative associative Hopf algebras. The structure and representation theory of quantum groups have been developed extensively by Jimbo [1] and Drinfeld [2]. The q-deformation of the Heisenberg algebra was given by Arik and Coon [3], Macfarlane [4] and Biedenharn [5]. Recently there has been some interest in more general deformations involving arbitrary real functions of weight generators and including q-deformed algebras as a special case [6-10]. Recently Greenberg [11] has studied the following q-deformation of the multimode boson algebra: \begin{displaymath} a_i a^{\dagger}_j -q a^{\dagger}_j a_i=\delta_{ij}, \end{displaymath} where the deformation parameter $q$ has to be real. The main problem of Greenberg's approach is that we cannot derive the relations among the $a_i$ operators at all. In order to resolve this problem, Mishra and Rajasekaran [12] generalized the algebra to a complex parameter $q$ with $|q|=1$ and another real deformation parameter $p$. In this paper we use the result of ref [12] to construct two types of coherent states and q-symmetric states. \section{Two Parameter Deformed Multimode Oscillators} \subsection{ Representation and Coherent States} In this subsection we discuss the algebra given in ref [12] and develop its representation. Mishra and Rajasekaran's algebra for multimode oscillators is given by \begin{displaymath} a_i a^{\dagger}_j =q a^{\dagger}_j a_i~~~(i<j) \end{displaymath} \begin{displaymath} a_ia^{\dagger}_i -pa^{\dagger}_i a_i=1 \end{displaymath} \begin{equation} a_ia_j=q^{-1} a_j a_i ~~~~(i<j), \end{equation} where $i,j=1,2,\cdots,n$. In this case we can say that $a^{\dagger}_i$ is the hermitian adjoint of $a_i$. The Fock space representation of the algebra (1) can be easily constructed by introducing the hermitian number operators $\{ N_1, N_2,\cdots, N_n \}$ obeying \begin{equation} [N_i,a_j]=-\delta_{ij}a_j,~~~[N_i,a^{\dagger}_j]=\delta_{ij}a^{\dagger}_j,~~~(i,j=1,2,\cdots,n). \end{equation} From the second relation of eq.(1) and eq.(2), the relation between the number operator and the creation and annihilation operators is given by \begin{equation} a^{\dagger}_ia_i =[N_i]=\frac{p^{N_i}-1}{p-1} \end{equation} or \begin{equation} N_i=\sum_{k=1}^{\infty}\frac{(1-p)^k}{1-p^k}(a^{\dagger}_i)^ka_i^k. \end{equation} Let $|0,0,\cdots,0>$ be the unique ground state of this system satisfying \begin{equation} N_i|0,0,\cdots,0>=0,~~~a_i|0,0,\cdots,0>=0,~~~(i=1,2,\cdots,n) \end{equation} and $\{|n_1,n_2,\cdots,n_n>| n_i=0,1,2,\cdots \}$ be the complete set of the orthonormal number eigenstates obeying \begin{equation} N_i|n_1,n_2,\cdots,n_n>=n_i|n_1,n_2,\cdots,n_n> \end{equation} and \begin{equation} <n_1,\cdots, n_n|n^{\prime}_1,\cdots,n^{\prime}_n>=\delta_{n_1 n_1^{\prime}}\cdots\delta_{n_n n_n^{\prime}}.
\end{equation} If we set \begin{equation} a_i|n_1,n_2,\cdots,n_n>=f_i(n_1,\cdots,n_n) |n_1,\cdots,n_i-1,\cdots ,n_n>, \end{equation} we have, from the fact that $a^{\dagger}_i$ is the hermitian adjoint of $a_i$, \begin{equation} a^{\dagger}_i|n_1,n_2,\cdots,n_n>=f_i^*(n_1,\cdots, n_i+1, \cdots, n_n) |n_1,\cdots,n_i+1,\cdots ,n_n>. \end{equation} Making use of the relation $ a_i a_{i+1} = q^{-1} a_{i+1} a_i $ we find the following relations for the $f_i$'s: \begin{displaymath} q\frac{f_{i+1}(n_1,\cdots, n_n)}{f_{i+1}(n_1, \cdots, n_i-1, \cdots, n_n)} =\frac{f_i(n_1,\cdots, n_n)}{f_i(n_1,\cdots,n_{i+1}-1, \cdots, n_n)} \end{displaymath} \begin{equation} |f_i( n_1, \cdots, n_i+1, \cdots, n_n)|^2 -p |f_i(n_1, \cdots, n_n)|^2=1. \end{equation} Solving the above equations we find \begin{equation} f_i(n_1,\cdots, n_n)=q^{\Sigma_{k=i+1}^n n_k}\sqrt{[n_i]}, \end{equation} where $[x]$ is defined as \begin{displaymath} [x]=\frac{p^x-1}{p-1}. \end{displaymath} Thus the representation of this algebra becomes \begin{displaymath} a_i|n_1,\cdots, n_n>=q^{\Sigma_{k=i+1}^n n_k}\sqrt{[n_i]}|n_1,\cdots, n_i-1,\cdots, n_n>~~~ \end{displaymath} \begin{equation} a^{\dagger}_i|n_1,\cdots, n_n>=q^{-\Sigma_{k=i+1}^n n_k}\sqrt{[n_i+1]}|n_1,\cdots, n_i+1,\cdots, n_n>.~~~ \end{equation} The general eigenstates $|n_1,n_2,\cdots,n_n>$ are obtained by applying the $a^{\dagger}_i$ operators to the ground state $|0,0,\cdots,0>$: \begin{equation} |n_1,n_2,\cdots,n_n> =\frac{(a^{\dagger}_n)^{n_n}\cdots (a^{\dagger}_1)^{n_1} }{\sqrt{[n_n]!\cdots[n_1]!}}|0,0,\cdots,0>, \end{equation} where \begin{displaymath} [n]!=[n][n-1]\cdots[2][1],~~~[0]!=1. \end{displaymath} The coherent states for the $gl_q(n)$ algebra are usually defined as \begin{equation} a_i|z_1,\cdots,z_i,\cdots,z_n>_-=z_i|z_1,\cdots,z_{i},\cdots,z_n>_-. \end{equation} From the $gl_q(n)$-covariant oscillator algebra we obtain the following commutation relations between the $z_i$'s and $z^*_i$'s, where $z^*_i$ is the complex conjugate of $z_i$: \begin{displaymath} z_iz_j=q z_j z_i,~~~~(i<j), \end{displaymath} \begin{displaymath} z^*_iz^*_j=\frac{1}{q}z^*_jz^*_i,~~~~(i<j), \end{displaymath} \begin{displaymath} z^*_iz_j=q z_j z^*_i,~~~~(i \neq j) \end{displaymath} \begin{equation} z^*_iz_i=z_iz^*_i. \end{equation} Using these relations the coherent state becomes \begin{equation} |z_1,\cdots,z_n>_-=c(z_1,\cdots,z_n)\Sigma_{n_1,\cdots,n_n=0}^{\infty} \frac{z_n^{n_n}\cdots z_1^{n_1}}{\sqrt{[n_1]!\cdots[n_n]!}}|n_1,n_2,\cdots,n_n>. \end{equation} Using eq.(13) we can rewrite eq.(16) as \begin{equation} |z_1,\cdots,z_n>_-=c(z_1,\cdots,z_n)e_p(z_na^{\dagger}_n)\cdots e_p(z_1 a^{\dagger}_1)|0,0,\cdots,0>, \end{equation} where \begin{displaymath} e_p(x)=\Sigma_{n=0}^{\infty}\frac{x^n}{[n]!} \end{displaymath} is a deformed exponential function. In order to obtain the normalized coherent states, we should impose the condition ${}_-<z_1,\cdots,z_n|z_1,\cdots,z_n>_-=1$. Then the normalized coherent states are given by \begin{equation} |z_1,\cdots,z_n>_-=\frac{1}{\sqrt{e_p(|z_1|^2)\cdots e_p(|z_n|^2)}} e_p(z_na^{\dagger}_n)\cdots e_p(z_1 a^{\dagger}_1)|0,0,\cdots,0>, \end{equation} where $|z_i|^2=z_iz^*_i=z^*_iz_i$. \subsection{Positive Energy Coherent States} The purpose of this subsection is to obtain another type of coherent states for the algebra (1). In order to do so, it is convenient to introduce n subhamiltonians as follows \begin{displaymath} H_i=a^{\dagger}_ia_i-\nu, \end{displaymath} where \begin{displaymath} \nu=\frac{1}{1-p}.
\end{displaymath} Then the commutation relations between the subhamiltonians and the mode operators are given by \begin{equation} H_ia^{\dagger}_j=(\delta_{ij}(p-1)+1)a^{\dagger}_jH_i,~~~~[H_i,H_j]=0. \end{equation} Acting with the subhamiltonians on the number eigenstates gives \begin{equation} H_i|n_1,n_2,\cdots,n_n> =-\frac{p^{n_i}}{1-p}|n_1,n_2,\cdots,n_n> \end{equation} Thus the energy becomes negative when $0<p<1$. As was noticed in ref [13], for the positive energy states it is not $a_i$ but $a^{\dagger}_i$ that plays the role of the lowering operator: \begin{displaymath} H_i|\lambda_1p^{n_1},\cdots,\lambda_n p^{n_n}> =\lambda_i p^{n_i} |\lambda_1p^{n_1},\cdots,\lambda_n p^{n_n}> \end{displaymath} \begin{displaymath} a^{\dagger}_i|\lambda_1p^{n_1},\cdots,\lambda_n p^{n_n}> =q^{-\Sigma_{k=i+1}^n n_k}\sqrt{\lambda_i p^{n_i+1}+\nu}|\lambda_1p^{n_1},\cdots,\lambda_ip^{n_i+1},\cdots,\lambda_n p^{n_n}> \end{displaymath} \begin{equation} a_i|\lambda_1p^{n_1},\cdots,\lambda_n p^{n_n}> =q^{\Sigma_{k=i+1}^n n_k}\sqrt{\lambda_i p^{n_i}+\nu}|\lambda_1p^{n_1},\cdots,\lambda_ip^{n_i-1},\cdots,\lambda_n p^{n_n}>, \end{equation} where $ \lambda_1, \cdots,\lambda_n >0$. Due to this fact, it is natural to define coherent states corresponding to the representation (21) as the eigenstates of the $a^{\dagger}_i$'s: \begin{equation} a^{\dagger}_i|z_1,\cdots,z_n>_+ =z_i |z_1,\cdots,z_n>_+ \end{equation} Because the representation (21) depends on n free parameters $\lambda_i$, the coherent states $|z_1,\cdots,z_n>_+$ can take different forms. If we assume that the positive energy states are normalizable, i.e.\ $<\lambda_1 p^{n_1},\cdots ,\lambda_n p^{n_n}|\lambda_1 p^{n_1^{\prime}},\cdots,\lambda_n p^{n_n^{\prime}}>=\delta_{n_1 n_1^{\prime}}\cdots\delta_{n_nn_n^{\prime}}$, and form exactly one series for some fixed $\lambda_i$'s, then we can obtain \begin{equation} |z_1,\cdots,z_n>_+ =C \Sigma_{n_1,\cdots,n_n=-\infty}^{\infty} \left[\Pi_{k=1}^n \frac{p^{\frac{n_k(n_k-1)}{4}}} {\sqrt{(-\frac{\nu}{\lambda_k};p)_{n_k}}} \left( \frac{1}{\sqrt{\lambda_k}}\right)^{n_k}\right]z_n^{n_n}\cdots z_1^{n_1}|\lambda_1p^{-n_1},\cdots,\lambda_n p^{-n_n}>. \end{equation} If we demand that ${}_+<z_1,\cdots,z_n|z_1,\cdots,z_n>_+=1$, we have \begin{equation} C^{-2} = \Pi_{k=1}^n{}_0\psi_1(-\frac{\nu}{\lambda_k};p,-\frac{|z_k|^2}{\lambda_k}) \end{equation} where the bilateral p-hypergeometric series ${}_0\psi_1(a ;p ,x)$ is defined by [14] \begin{equation} {}_0 \psi_1(a ;p ,x) =\Sigma_{n=-\infty}^{\infty} \frac{(-)^n p^{n(n-1)/2}}{(a;p)_{n}}x^n. \end{equation} \subsection{Two Parameter Deformed $gl(n)$ Algebra} The purpose of this subsection is to derive the deformed $gl(n)$ algebra from the deformed multimode oscillator algebra. The multimode oscillators given in eq.(1) can be arrayed in bilinears to construct the generators \begin{equation} E_{ij}=a^{\dagger}_i a_j.
\end{equation} From the fact that $a^{\dagger}_i$ is the hermitian adjoint of $a_i$, we know that \begin{equation} E^{\dagger}_{ij}=E_{ji}. \end{equation} Then the deformed $gl(n)$ algebra is obtained from the algebra (1): \begin{displaymath} [E_{ii},E_{jj}]=0, \end{displaymath} \begin{displaymath} [E_{ii},E_{jk}]=0,~~~(i\neq j \neq k ) \end{displaymath} \begin{displaymath} [E_{ij},E_{ji}]=E_{ii}-E_{jj},~~~(i \neq j ) \end{displaymath} \begin{displaymath} E_{ii}E_{ij}-p E_{ij} E_{ii}=E_{ij},~~~(i \neq j) \end{displaymath} \begin{displaymath} E_{ij}E_{ik}= \cases{ q^{-1}E_{ik}E_{ij} & if $ j<k$ \cr qE_{ik}E_{ij} & if $ j>k$ \cr} \end{displaymath} \begin{equation} E_{ij}E_{kl}=q^{2(R(i,k)+R(j,l)-R(j,k)-R(i,l))}E_{kl}E_{ij},~~~(i \neq j \neq k \neq l), \end{equation} where the symbol $ R(i,j)$ is defined by \begin{displaymath} R(i,j)=\cases{ 1& if $i>j$\cr 0& if $i \leq j $ \cr } \end{displaymath} This algebra goes over to the ordinary $gl(n)$ algebra when the deformation parameters $q$ and $p$ go to 1. \section{q-symmetric states} In this section we study the statistics of many-particle states. Let $N$ be the number of particles. Then the N-particle state can be obtained from the tensor product of single-particle states: \begin{equation} |i_1,\cdots,i_N>=|i_1>\otimes |i_2>\otimes \cdots \otimes |i_N>, \end{equation} where $i_1,\cdots, i_N$ take one value among $\{ 1,2,\cdots,n \}$ and the single-particle state is defined by $|i_k>=a^{\dagger}_{i_k}|0>$. Consider the case that $k$ appears $n_k$ times in the set $\{ i_1,\cdots,i_N\}$. Then we have \begin{equation} n_1 + n_2 +\cdots + n_n =\sum_{k=1}^n n_k =N. \end{equation} Using these facts we can define the q-symmetric states as follows: \begin{equation} |i_1,\cdots, i_N>_q =\sqrt{\frac{[n_1]_{p^2}!\cdots [n_n]_{p^2}!}{[N]_{p^2}!}} \sum_{\sigma \in Perm} \mbox{sgn}_q(\sigma)|i_{\sigma(1)}\cdots i_{\sigma(N)}>, \end{equation} where \begin{displaymath} \mbox{sgn}_q(\sigma)= q^{R(i_1\cdots i_N)}p^{R(\sigma(1)\cdots \sigma(N))}, \end{displaymath} \begin{equation} R(i_1,\cdots,i_N)=\sum_{k=1}^N\sum_{l=k+1}^N R(i_k,i_l) \end{equation} and $[x]_{p^2}=\frac{p^{2x}-1}{p^2-1}$. Then the q-symmetric states obey \begin{equation} |\cdots, i_k,i_{k+1},\cdots>_q= \cases{ q^{-1} |\cdots,i_{k+1},i_k,\cdots>_q & if $i_k<i_{k+1}$\cr |\cdots,i_{k+1},i_k,\cdots>_q & if $i_k=i_{k+1}$\cr q |\cdots,i_{k+1},i_k,\cdots>_q & if $i_k>i_{k+1}$\cr } \end{equation} The above property can be rewritten by introducing the deformed transition operator $P_{k,k+1}$ obeying \begin{equation} P_{k,k+1} |\cdots, i_k , i_{k+1},\cdots>_q =|\cdots, i_{k+1},i_k,\cdots>_q \end{equation} This operator satisfies \begin{equation} P_{k+1,k}P_{k,k+1}=Id,~~~\mbox{so}~~P_{k+1,k}=P^{-1}_{k,k+1} \end{equation} Then eq.\ (33) can be written as \begin{equation} P_{k,k+1} |\cdots, i_k , i_{k+1},\cdots>_q =q^{-\epsilon(i_k,i_{k+1})} |\cdots, i_{k+1},i_k,\cdots>_q \end{equation} where $\epsilon(i,j)$ is defined as \begin{displaymath} \epsilon(i,j)= \cases{ 1 & if $ i>j$\cr 0 & if $ i=j$ \cr -1 & if $ i<j$ \cr } \end{displaymath} It is worth noting that the relation (36) does not contain the deformation parameter $p$, and that it reduces to the symmetric relation for ordinary bosons when the deformation parameter $q$ goes to $1$. If we define the fundamental q-symmetric state $|q>$ as \begin{displaymath} |q>=|i_1,i_2,\cdots,i_N>_q \end{displaymath} with $i_1 \leq i_2 \leq \cdots \leq i_N$, we have for any $k$ \begin{displaymath} |P_{k,k+1}|q>|^2 =||q>|^2 =1.
\end{displaymath} In deriving the above relation we used the following identity \begin{displaymath} \sum_{\sigma \in Perm } p^{R(\sigma(1),\cdots, \sigma(N))}= \frac{[N]_{p^2}!}{[n_1]_{p^2}!\cdots [n_n]_{p^2}!}. \end{displaymath} \section{Concluding Remark} To conclude, I used the two-parameter deformed multimode oscillator system given in ref [12] to construct its representation, coherent states and deformed $gl_q(n)$ algebra. Multimode oscillators are important when we investigate many-body quantum mechanics and statistical mechanics. In order to construct the new statistical behavior of deformed particles obeying the algebra (1), I investigated the deformed symmetric property of two-parameter deformed multimode states. \section*{Acknowledgement} This paper was supported by the KOSEF (961-0201-004-2) and the present studies were supported by Basic Science Research Program, Ministry of Education, 1995 (BSRI-95-2413). \vfill\eject
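The single-mode content of the representation (12) is easy to verify on a truncated Fock space, since $[n+1]-p[n]=1$. A minimal sketch in Python (assuming NumPy; the value of $p$ and the cutoff are arbitrary):
\begin{verbatim}
import numpy as np

p, nmax = 0.7, 40                  # deformation parameter, Fock cutoff

def box(n):
    # [n] = (p^n - 1)/(p - 1)
    return (p**n - 1.0) / (p - 1.0)

# a|n> = sqrt([n]) |n-1>, and adag is its transpose
a = np.zeros((nmax, nmax))
for n in range(1, nmax):
    a[n - 1, n] = np.sqrt(box(n))
adag = a.T

# check a adag - p adag a = 1 away from the truncation edge,
# where the cutoff spoils the relation in the last level
err = a @ adag - p * adag @ a - np.eye(nmax)
print(np.max(np.abs(err[:nmax - 1, :nmax - 1])))   # ~ 1e-16
\end{verbatim}
The multimode relations of eq.\ (12) can be checked in the same way by dressing these matrices with the $q^{\pm\Sigma_{k>i} N_k}$ factors.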
\section{Introduction} If supersymmetry (SUSY) exists at the electroweak scale, it should be easy at the LHC to observe deviations from the Standard Model (SM) such as an excess of events with multiple jets plus missing energy $\slashchar{E}_T$ or with like-sign dileptons $\ell^\pm\ell^\pm$ plus $\slashchar{E}_T$~\badcite{ATLAS,CMS,BCPT}. Determining SUSY masses is more difficult because each SUSY event contains two missing lightest SUSY particles $\tilde\chi_1^0$, and there are not enough kinematic constraints to determine the momenta of these. This note describes two possible approaches to determining SUSY masses, one based on a generic global variable and the other based on constructing particular decay chains. The ATLAS and CMS Collaborations at the LHC are considering five points in the minimal supergravity (SUGRA) model listed in Table~I below~\badcite{LHC}. Point~4 is the comparison point extensively discussed elsewhere in these Proceedings. For this point a good strategy at the LHC is to use the decays $\tilde\chi_2^0 \rightarrow \tilde\chi_1^0 \ell^+\ell^-$ to determine the mass difference $M(\tilde\chi_2^0) - M(\tilde\chi_1^0)$~\badcite{LHC}. For higher masses, e.g.{} Points~1--3, this decay is small, but $\tilde\chi_2^0 \rightarrow \tilde\chi_1^0 h \rightarrow \tilde\chi_1^0 b \bar b$, $\tilde\chi_2^\pm \rightarrow \tilde\chi_1^0 W^\pm \rightarrow \tilde\chi_1^0 q \bar q$, and $\tilde\chi_2^0 \rightarrow \tilde\ell \ell \rightarrow \tilde\chi_1^0 \ell\ell$ provide alternative starting points for detailed analysis. \begin{table}[ht] \caption{SUGRA parameters for the five LHC points.} \begin{center} \begin{tabular}{cccccc} \hline\hline Point & $m_0$ & $m_{1/2}$ & $A_0$ & $\tan\beta$ & $\mathop{\rm sgn}{\mu}$ \\ & (GeV) & (GeV) & (GeV) & & \\ \hline 1 & 100 & 300 & 300 & \phantom{0}2.1 & $+$\\ 2 & 400 & 400 & 0 & \phantom{0}2.0 & $+$\\ 3 & 400 & 400 & 0 & 10.0 & $+$\\ 4 & 200 & 100 & 0 & \phantom{0}2.0 & $-$\\ 5 & 800 & 200 & 0 & 10.0 & $+$ \\ \hline\hline \end{tabular} \end{center} \vskip-4pt \end{table} \begin{figure}[ht] \dofig{2.95in}{point1_147.epsi} \caption{Point~1 signal and backgrounds. Open circles: signal. Solid circles: $t\bar t$. Triangles: $W\rightarrow\ell\nu$, $\tau\nu$. Downward triangles: $Z\rightarrow\nu\bar\nu$, $\tau\tau$. Squares: QCD jets. Histogram: all backgrounds.} \end{figure} \begin{figure}[ht] \dofig{2.95in}{point2_147.epsi} \caption{Signal and SM backgrounds for Point~2. See Fig.~1 for symbols.} \end{figure} \begin{figure}[t] \dofig{2.95in}{point3_147.epsi} \caption{Signal and SM backgrounds for Point~3. See Fig.~1 for symbols.} \end{figure} \begin{figure}[ht] \dofig{2.95in}{point4_147.epsi} \caption{Signal and SM backgrounds for Point~4. See Fig.~1 for symbols.} \end{figure} \begin{figure}[ht] \dofig{2.95in}{point5_147.epsi} \caption{Signal and SM backgrounds for Point~5. See Fig.~1 for symbols.} \end{figure} \section{Effective Mass Analysis} The first step after discovering a deviation from the SM is to estimate the mass scale. SUSY production at the LHC is dominated by gluinos and squarks, which decay into jets plus missing energy. The mass scale can be estimated using the effective mass, defined as the scalar sum of the $p_T$'s of the four hardest jets and the missing transverse energy $\slashchar{E}_T$, $$ M_{\rm eff} = p_{T,1} + p_{T,2} + p_{T,3} + p_{T,4} + \slashchar{E}_T\,. 
$$ ISAJET~7.20~\badcite{ISAJET} was used to generate samples of 10K events for each signal point, 50K events for each of $t \bar t$, $Wj$ with $W \to e\nu,\mu\nu,\tau\nu$, and $Zj$ with $Z \to \nu\bar\nu,\tau\tau$ in five bins covering $50 < p_T < 1600\,{\rm GeV}$, and 2500K QCD events, i.e., primary $g$, $u$, $d$, $s$, $c$, or $b$ jets, in five bins covering $50 < p_T < 2400\,{\rm GeV}$. The detector response was simulated using a toy calorimeter with \begin{eqnarray} {\rm EMCAL} &\quad& 10\%/\sqrt{E} + 1\% \nonumber\\ {\rm HCAL} &\quad& 50\%/\sqrt{E} + 3\% \nonumber\\ {\rm FCAL} &\quad& 100\%/\sqrt{E} + 7\%,\ |\eta| > 3\,.\nonumber \end{eqnarray} Jets were found using a simple fixed-cone algorithm (GETJET) with $R=[(\Delta\eta)^2+(\Delta\phi)^2]^{1/2}=0.7$. To suppress the SM background, the following cuts were made: \begin{itemize} \item $\slashchar{E}_T > 100\,{\rm GeV}$ \item $\ge4$ jets with $p_T > 50\,{\rm GeV}$ and $p_{T,1} > 100\,{\rm GeV}$ \item Transverse sphericity $S_T > 0.2$ \item Lepton veto \item $\slashchar{E}_T > 0.2 M_{\rm eff}$ \end{itemize} With these cuts and the idealized detector assumed here, the signal is much larger than the SM backgrounds for large $M_{\rm eff}$, as is illustrated in Figs.~1--5. \begin{table}[b] \caption{The value of $M_{\rm eff}$ for which $S = B$ compared to $M_{\rm SUSY}$, the lighter of the gluino and squark masses. Note that Point~4 is strongly influenced by the $\slashchar{E}_T$ and jet $p_T$ cuts.} \begin{center} \begin{tabular}{cccc} \hline\hline Point& $M_{\rm eff}\,({\rm GeV})$& $M_{\rm SUSY}\,({\rm GeV})$& Ratio\\ \hline 1 & \phantom{0}980 & 663 & 1.48 \\ 2 & 1360 & 926 & 1.47 \\ 3 & 1420 & 928 & 1.53 \\ 4 & \phantom{0}470 & 300 & 1.58 \\ 5 & \phantom{0}980 & 586 & 1.67 \\ \hline\hline \end{tabular} \vskip-10pt \end{center} \end{table} The peak of the $M_{\rm eff}$ mass distribution, or alternatively the point at which the signal and background are equal, provides a good first estimate of the SUSY mass scale, which is defined to be $$ M_{\rm SUSY} = \min(M_{\tilde g}, M_{\tilde u_R})\,. $$ (The choice of $M_{\tilde u_R}$ as the typical squark mass is arbitrary.) The ratio of the value of $M_{\rm eff}$ for which $S = B$ to $M_{\rm SUSY}$ was calculated by fitting smooth curves to the signal and background and is given in Table~II. To see whether the approximate constancy of this ratio might be an accident, 100 SUGRA models were chosen at random with $100 < m_0 < 500\,{\rm GeV}$, $100 < m_{1/2} < 500\,{\rm GeV}$, $-500 < A_0 < 500\,{\rm GeV}$, $1.8 < \tan\beta < 12$, and $\mathop{\rm sgn}\mu=\pm1$ and compared to the assumed signal, Point~1. The light Higgs was assumed to be known, and all the comparison models were required to have $M_h = 100.4 \pm 3\,{\rm GeV}$. A sample of 1K events was generated for each point, and the peak of the $M_{\rm eff}$ distribution was found by fitting a Gaussian near the peak. Figure~6 shows the resulting scatter plot of $M_{\rm SUSY}$ vs.{} $M_{\rm eff}$. The ratio is constant within about $\pm10\%$, as can be seen from Fig.~7. This error is conservative, since there is a considerable contribution to the scatter from the limited statistics and the rather crude manner in which the peak was found.
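To make the selection concrete, the following minimal sketch (Python; the event record and the numerical values are hypothetical, not taken from the ISAJET samples) computes $M_{\rm eff}$ and applies the cuts listed above.
\begin{verbatim}
# Sketch of the effective-mass selection.  An event is assumed to be
# summarized by its jet pT's (GeV, sorted in decreasing order), the
# missing ET (GeV), the transverse sphericity, and a lepton flag.

def m_eff(jet_pts, met):
    # scalar sum of the four hardest jet pT's and the missing ET
    return sum(jet_pts[:4]) + met

def passes_cuts(jet_pts, met, s_t, has_lepton):
    if met <= 100.0:                                    # missing ET cut
        return False
    if len(jet_pts) < 4 or jet_pts[0] <= 100.0 or jet_pts[3] <= 50.0:
        return False                                    # >= 4 hard jets
    if s_t <= 0.2:                                      # sphericity cut
        return False
    if has_lepton:                                      # lepton veto
        return False
    return met > 0.2 * m_eff(jet_pts, met)

# illustrative event
jets = [310.0, 180.0, 95.0, 60.0]
print(m_eff(jets, 250.0), passes_cuts(jets, 250.0, 0.35, False))
\end{verbatim}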
\begin{figure}[t] \dofig{2.95in}{scan1.epsi} \caption{Scatter plot of $M_{\rm SUSY} = \min(M_{\tilde g}, M_{\tilde u})$ vs.{} $M_{\rm eff}$ for randomly chosen SUGRA models having the same light Higgs mass within $\pm3\,{\rm GeV}$ as Point~1.} \end{figure} \begin{figure}[t] \dofig{2.95in}{scan3.epsi} \caption{Ratio $M_{\rm eff}/M_{\rm SUSY}$ from Fig.~6} \end{figure} \def$\tilde\chi_2^0 \ra \tilde\ell \ell \ra \lsp \ell\ell${$h \rightarrow b \bar b$} \section{Selection of $\tilde\chi_2^0 \ra \tilde\ell \ell \ra \lsp \ell\ell$} \begin{figure}[ht] \dofig{2.95in}{x1_mbb.epsi} \caption{$M(b \bar b)$ for pairs of $b$ jets for the Point~1 signal (open histogram) and for the sum of all backgrounds (shaded histogram) after cuts described in the text. The smooth curve is a Gaussian plus quadratic fit to the signal. The light Higgs mass is $100.4\,{\rm GeV}$.} \end{figure} For Point~1 the decay chain $\tilde\chi_2^0 \rightarrow \tilde\chi_1^0 h$, $h \rightarrow b \bar b$ has a large branching ratio, as is typical if this decay is kinematically allowed. The decay $h \rightarrow b \bar b$ thus provides a handle for identifying events containing $\tilde\chi_2^0$'s~\badcite{ERW}. Furthermore, the gluino is heavier than the squarks and so decays into them. The strategy for this analysis is to select events in which one squark decays via $$ \tilde q \rightarrow \tilde\chi_2^0 q,\ \tilde\chi_2^0 \rightarrow \tilde\chi_1^0 h,\ h \rightarrow b \bar b\,, $$ and the other via $$ \tilde q \rightarrow \tilde\chi_1^0 q\,, $$ giving two $b$ jets and exactly two additional hard jets. ISAJET~7.22~\badcite{ISAJET} was used to generate a sample of 100K events for Point~1, corresponding to about $5.6\,{\rm fb}^{-1}$. Background samples of 250K each for $t \bar t$, $Wj$, and $Zj$, and 5000K for QCD jets were also generated, equally divided among five $p_T$ bins. The background samples generally represent a small fraction of an LHC year. The detector response was simulated using the toy calorimeter described above. Jets were found using a fixed cone algorithm with $R = 0.4$. The following cuts were imposed: \begin{itemize} \item $\slashchar{E}_T > 100\,{\rm GeV}$ \item $\ge4$ jets with $p_T > 50\,{\rm GeV}$ and $p_{T,1} > 100\,{\rm GeV}$ \item Transverse sphericity $S_T > 0.2$ \item $M_{\rm eff} > 800\,{\rm GeV}$ \item $\slashchar{E}_T > 0.2 M_{\rm eff}$ \end{itemize} Jets were tagged as $b$'s if they contained a $B$ hadron with $p_T > 5\,{\rm GeV}$ and $\eta < 2$; no other tagging inefficiency or $b$ mistagging was included. Figure~8 shows the resulting $b \bar b$ mass distributions for the signal and the sum of all SM backgrounds with $p_{T,b} > 25\,{\rm GeV}$ together with a Gaussian plus quadratic fit to the signal. At a luminosity of $10^{33}\,{\rm cm}^{-2} {\rm s}^{-1}$, \hbox{ATLAS} will have a $b$-jet tagging efficiency of 70\% for a rejection of 100~\badcite{ATLAS}. Hence, the number of events should be reduced by a factor of about two, but the mistagging background is probably small compared to the real background shown. The Higgs mass peak is shifted downward somewhat; using a larger cone, $R = 0.7$, gives a peak which is closer to the true mass but wider. \begin{figure}[t] \dofig{2.95in}{x1_mbbj.epsi} \caption{The smaller of the two $b \bar b j$ masses for signal and background events with $73 < M(b \bar b) < 111\,{\rm GeV}$ in Fig.~8 and with exactly two additional jets $j$ with $p_T > 75\,{\rm GeV}$. 
The endpoint of this distribution should be approximately the mass difference between the squark and the $\tilde \chi_1^0$, about $542\,{\rm GeV}$.} \end{figure} Events were then required to have exactly one $b \bar b$ pair with $73 < M(b \bar b) < 111\,{\rm GeV}$ and exactly two additional jets with $p_T > 75\,{\rm GeV}$. The invariant mass of each jet with the $b \bar b$ pair was calculated. For the desired decay chain, one of these two must come from the decay of a single squark, so the smaller of them must be less than the kinematic limit for single squark decay, $M(\tilde u_R) -M(\tilde\chi_1^0) = 542\,{\rm GeV}$. The smaller of the two $b \bar b j$ masses is plotted in Fig.~9 for the signal and for the sum of all backgrounds and shows the expected edge. The SM background shows fluctuations from the limited Monte Carlo statistics but seems to be small near the edge, at least for the idealized detector considered here. There is some background from the SUSY events above the edge, presumably from other decay modes and/or initial state radiation. \def$\tilde\chi_2^0 \ra \tilde\ell \ell \ra \lsp \ell\ell${$W \rightarrow q \bar q$} \section{Selection of $\tilde\chi_2^0 \ra \tilde\ell \ell \ra \lsp \ell\ell$} \begin{figure}[b] \dofig{2.95in}{x1_mjj.epsi} \caption{$M_{34}$ for non-$b$ jets in events with two $200\,{\rm GeV}$ jets and two $50\,{\rm GeV}$ jets for the Point~1 signal (open histogram) and the sum of all backgrounds (shaded histogram).} \end{figure} Point~1 also has a large combined branching ratio for one gluino to decay via $$ \tilde g \rightarrow \tilde q_L \bar q,\ \tilde q_L \rightarrow \tilde\chi_1^\pm q,\ \tilde\chi_1^\pm \rightarrow \tilde\chi_1^0 W^\pm,\ W^\pm \rightarrow q \bar q\,, $$ and the other via $$ \tilde g \rightarrow \tilde q_R q,\ \tilde q_R \rightarrow \tilde\chi_1^0 q\,, $$ giving two hard jets and two softer jets from the $W$. The branching ratio for $\tilde q_L \rightarrow \tilde\chi_1^0 q$ is small for Point~1, so the contributions from $\tilde g \rightarrow \tilde q_L \bar q$ and from $\tilde q_L \tilde q_L$ pair production are suppressed. The same signal sample was used as in Section~III, and jets were again found using a fixed cone algorithm with $R = 0.4$. The combinatorial background for this decay chain is much larger than for the previous one, so harder cuts are needed: \begin{itemize} \item $\slashchar{E}_T > 100\,{\rm GeV}$ \item $\ge4$ jets with $p_{T1,2} > 200\,{\rm GeV}$, $p_{T3,4} > 50\,{\rm GeV}$, and \hfil $\eta_{3,4} < 2$ \item Transverse sphericity $S_T > 0.2$ \item $M_{\rm eff} > 800\,{\rm GeV}$ \item $\slashchar{E}_T > 0.2 M_{\rm eff}$ \end{itemize} The same $b$-tagging algorithm was applied to tag the third and fourth jets as not being $b$ jets. Of course, this is not really feasible; instead one should measure the $b$-jet distributions and subtract them. The mass distribution $M_{34}$ of the third and fourth highest $p_T$ jets with these cuts is shown in Fig.~10 for the signal and the sum of all backgrounds. A peak is seen a bit below the $W$ mass with a fitted width surprisingly smaller than that for the $h$ in Fig.~8; note that the $W$ natural width has been neglected in the simulation of the decays. The SM background is more significant here than for $h \to b \bar b$. Events from this peak can be combined with another jet as was done for $h \rightarrow b \bar b$ in Fig.~9, providing another determination of the squark mass.
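The ``smaller invariant mass'' construction used here and in Fig.~9 is easy to state in code. The sketch below (Python; the four-vectors are illustrative, not taken from the simulation) combines a candidate $b\bar b$ (or dijet) system with each of the two additional hard jets and keeps the smaller of the two invariant masses, which for the desired decay chain must lie below the single-squark kinematic limit.
\begin{verbatim}
import math

def add4(*vs):
    # componentwise sum of (E, px, py, pz) four-vectors
    return tuple(sum(c) for c in zip(*vs))

def inv_mass(v):
    e, px, py, pz = v
    return math.sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))

def smaller_bbj_mass(bb, j1, j2):
    # smaller of the two b-bbar + jet masses; it should fall below
    # M(squark) - M(chi_1^0) ~ 542 GeV for the desired chain
    return min(inv_mass(add4(bb, j1)), inv_mass(add4(bb, j2)))

# illustrative four-vectors (E, px, py, pz) in GeV
bb = (210.0, 120.0, -90.0, 140.0)
j1 = (260.0, -200.0, 130.0, 90.0)
j2 = (180.0, 60.0, 150.0, -70.0)
print(smaller_bbj_mass(bb, j1, j2))
\end{verbatim}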
Figure~10 also provides a starting point for measuring $W$ decays separately from other sources of leptons such as gaugino decays into sleptons. \vfill \def$\tilde\chi_2^0 \ra \tilde\ell \ell \ra \lsp \ell\ell${$\tilde\chi_2^0 \rightarrow \tilde\ell \ell \rightarrow \tilde\chi_1^0 \ell\ell$} \section{Selection of $\tilde\chi_2^0 \ra \tilde\ell \ell \ra \lsp \ell\ell$} \begin{figure}[b] \dofig{2.95in}{x1_mll.epsi} \caption{$M_{\ell\ell}$ for the Point~1 signal (open histogram) and the sum of all backgrounds (shaded histogram).} \end{figure} \begin{figure}[ht] \dofig{2.95in}{x1_mlljh.epsi} \caption{$M_{\ell\ell j\tilde\chi_1^0}$ for events with $86 < M_{\ell\ell} < 109\,{\rm GeV}$ in Fig.~11, using $\vec p_{\tilde\chi_1^0} = (M_{\tilde\chi_1^0} / M_{\ell\ell})\,\vec p_{\ell\ell}$ for the Point~1 signal (open histogram) and the SM background (shaded histogram).} \end{figure} Point~1 has relatively light sleptons, which is generically necessary if the $\tilde\chi_1^0$ is to provide acceptable cold dark matter~\badcite{BB}. Hence the two-body decay $$ \tilde\chi_2^0 \rightarrow \tilde\ell_R \ell \rightarrow \tilde\chi_1^0 \ell^+\ell^- $$ is kinematically allowed and competes with the $\tilde\chi_2^0 \rightarrow \tilde\chi_1^0 h$ decay, producing opposite-sign, like-flavor dileptons. The largest SM background is $t \bar t$. To suppress this and other SM backgrounds the following cuts were made on the same signal and SM background samples used in the two previous sections: \begin{itemize} \item $M_{\rm eff} > 800\,{\rm GeV}$ \item $\slashchar{E}_T > 0.2M_{\rm eff}$ \item $\ge 1$ $R=0.4$ jet with $p_{T,1} > 100\,{\rm GeV}$ \item $\ell^+\ell^-$ pair with $p_{T,\ell}> 10\,{\rm GeV}$, $\eta_\ell < 2.5$ \item $\ell$ isolation cut: $E_T < 10\,{\rm GeV}$ in $R=0.2$ \item Transverse sphericity $S_T > 0.2$ \end{itemize} With these cuts very little SM background survives, and the $M_{\ell\ell}$ mass distribution shown in Fig.~11 has an edge near $$ M_{\ell\ell}^{\rm max} = M_{\tilde\chi_2^0} \sqrt{1-{M_{\tilde\ell}^2 \over M_{\tilde\chi_2^0}^2}} \sqrt{1-{M_{\tilde\chi_1^0}^2 \over M_{\tilde\ell}^2}} \approx 112\,{\rm GeV}\,. $$ If $M_{\ell\ell}$ is near its kinematic limit, then the velocity difference of the $\ell^+\ell^-$ pair and the $\tilde\chi_1^0$ is minimized. Having both leptons hard requires $M_{\tilde\ell}/ M_{\tilde\chi_2^0} \sim M_{\tilde\chi_1^0} / M_{\tilde\ell}$. Assuming this and $M_{\tilde\chi_2^0} = 2 M_{\tilde\chi_1^0}$ implies that the endpoint in Fig.~11 is equal to the $\tilde\chi_1^0$ mass. An improved estimate could be made by detailed fitting of all the kinematic distributions. Events were selected with $M_{\ell\ell}^{\rm max} -10\,{\rm GeV} < M_{\ell\ell} < M_{\ell\ell}^{\rm max}$, and the $\tilde\chi_1^0$ momentum was calculated using this crude $\tilde\chi_1^0$ mass and $$ \vec p_{\tilde\chi_1^0} = (M_{\tilde\chi_1^0} / M_{\ell\ell})\,\vec p_{\ell\ell}\,. $$ The invariant mass $M_{\ell\ell j\tilde\chi_1^0}$ of the $\ell^+\ell^-$, the highest $p_T$ jet, and the $\tilde\chi_1^0$ was then calculated and is shown in Fig.~12. A peak is seen near the light squark masses, 660--$688\,{\rm GeV}$. More study is needed, but this approach looks promising. \vfil This work would have been impossible without the contributions of my collaborators on ISAJET, H. Baer, S. Protopopescu, and X. Tata. It was supported in part by the United States Department of Energy under contract DE-AC02-76CH00016.
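As a closing numerical illustration of the dilepton analysis, the sketch below (Python) evaluates the edge formula and the crude $\tilde\chi_1^0$ momentum estimate. The mass values are chosen only to realize the stated assumptions $M_{\tilde\ell} \sim (M_{\tilde\chi_2^0} M_{\tilde\chi_1^0})^{1/2}$ and $M_{\tilde\chi_2^0} = 2 M_{\tilde\chi_1^0}$ together with the quoted $112\,{\rm GeV}$ endpoint; they are not the actual Point~1 spectrum.
\begin{verbatim}
import math

def m_ll_max(m_chi2, m_slep, m_chi1):
    # dilepton edge for chi_2^0 -> slepton + l -> chi_1^0 l+ l-
    return (m_chi2 * math.sqrt(1.0 - (m_slep / m_chi2)**2)
                   * math.sqrt(1.0 - (m_chi1 / m_slep)**2))

# under the stated assumptions the edge equals M(chi_1^0)
m_chi1 = 112.0                        # illustrative value
m_chi2 = 2.0 * m_chi1
m_slep = math.sqrt(m_chi2 * m_chi1)
print(m_ll_max(m_chi2, m_slep, m_chi1))       # -> 112.0

def chi1_momentum(p_ll, m_ll, m_chi1):
    # crude chi_1^0 momentum: collinear with the lepton pair,
    # scaled by M(chi_1^0) / M(ll)
    return tuple((m_chi1 / m_ll) * p for p in p_ll)

print(chi1_momentum((80.0, -30.0, 55.0), 108.0, m_chi1))
\end{verbatim}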
\section{Introduction} It is well known that the main problem for a quantum formulation of gauge theories is the fact that the space ${\cal A}$ of gauge potentials is `too large'; there are infinitely many gauge equivalent configurations corresponding to one and the same physical situation. This huge redundancy should be eliminated by constructing the physical configuration space, the space of orbits, ${\cal A}_{phys} = {\cal A} / {\cal G}$, consisting of gauge fields modulo gauge transformations, the elements of the gauge group ${\cal G}$. The aim of this contribution is to find the physical configuration space ${\cal A}_{phys}$ by identifying a subset of ${\cal A}$ with ${\cal A}_{phys}$. The method for doing that is the familiar procedure of {\em gauge fixing}, which amounts to choosing a representative gauge potential $A$ on each of the orbits. In doing that, two criteria have to be met. (i) {\em existence:} A representative should be selected on {\em any} orbit; no orbit should be omitted. (ii) {\em uniqueness:} There should be only {\em one} representative obeying the gauge condition on each orbit. If, on the other hand, there are (at least) two gauge equivalent fields, $A_1$, $A_2$, satisfying the gauge condition, the gauge is not completely fixed. There is a residual gauge freedom given by a gauge transformation, say, $U$, between the `Gribov copies' \cite{Gri78}, $U$: $A_1 \to A_2$. In view of that, it is clear that the physical configuration space ${\cal A}_{phys}$ is the maximal subset of ${\cal A}$ containing no Gribov copies. It is called a `fundamental modular domain' \cite{vBa92}. In order to find the physical configuration space we follow the pioneering work of Feynman on (2+1)-dimensional Yang-Mills theory \cite{Fey81} and make use of the intuitive Hamiltonian formalism within a functional Schr\"odinger picture. As reemphasized recently in \cite{KN96}, there are reasons to believe that the fundamental domain (at least in 2+1 dimensions) has a finite volume leading to a purely discrete spectrum. This would provide a natural explanation of a mass gap in the theory. The quantum mechanical analogue for this is the infinite square well of extension $d$. The gap between the ground state and the first excited state is of the order $1/d^2$. The non-abelian case corresponds to finite $d$ and thus to a finite gap; the abelian case corresponds to infinite size, $d \to \infty$, and thus to vanishing gap (the analogue of the massless photon). \section{An Example from Quantum Mechanics} To gain some more intuition let us stay a little bit further within the context of quantum mechanics. Consider a point particle moving in a plane described by (cartesian) coordinates $q_1$, $q_2$, and let the particle have vanishing angular momentum, $G \equiv q_1 p_2 - q_2 p_1 = 0$. This latter identity we interpret as Gauss's law, so that $G$ is the generator of gauge transformations which are just rotations around the origin. If we introduce polar coordinates, the radius $r$, and the angle $\phi$, it is obvious that the angle changes under rotations and is thus gauge variant, whereas the radius $r$ is the gauge invariant variable. Accordingly, the physical configuration space is the positive real line. The associated physical Hamiltonian is \begin{equation} H_{phys} = - \frac{1}{2} r^{-1} \frac{\partial}{\partial r} r \frac{\partial}{\partial r} \; , \label{HPHYS} \end{equation} \noindent and depends only on the gauge invariant variable $r$, as it should. 
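As a quick symbolic check of (\ref{HPHYS}), the following sketch (Python/SymPy; illustrative, not part of the original argument) verifies that the free two-dimensional Hamiltonian acting on a rotation-invariant wave function $f(r)$ reduces to $H_{phys}$ when evaluated at $q_2 = 0$, where $q_1$ plays the role of the radius $r$.
\begin{verbatim}
import sympy as sp

q1, q2 = sp.symbols('q1 q2', positive=True)
f = sp.Function('f')

# rotation-invariant wave function psi = f(|q|)
psi = f(sp.sqrt(q1**2 + q2**2))

# free Hamiltonian H = (p1^2 + p2^2)/2 acting on psi, with p = -i d/dq
H_psi = -sp.Rational(1, 2) * (sp.diff(psi, q1, 2) + sp.diff(psi, q2, 2))

# evaluate on the slice q2 = 0
H_slice = sp.simplify(H_psi.subs(q2, 0).doit())

# physical Hamiltonian -(1/2) r^{-1} d/dr (r d/dr), with r -> q1
H_phys = sp.expand(-sp.Rational(1, 2) / q1
                   * sp.diff(q1 * sp.diff(f(q1), q1), q1))

print(sp.simplify(H_slice - H_phys))   # -> 0
\end{verbatim}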
Let us assume now that we are not as smart as to guess the gauge invariant variables and proceed in a pedestrian manner via gauge fixing. We gauge away $q_2$, $\chi(q) \equiv q_2 = 0$, and immediately realize that this gauge choice selects {\em two} representatives on each orbit at $\pm q_1$ (see Fig.~1). There is a (discrete) residual gauge freedom between the copies, $q_1 \to - q_1$. So we have a Gribov problem, which is best analysed in terms of the Faddeev-Popov (FP) operator \cite{FP67} given by the commutator of the gauge fixing with Gauss's law, \begin{equation} \mbox{FP} \equiv -i \, [\chi, G] = q_1 \; . \end{equation} \noindent The latter turns out to be coordinate dependent and vanishes at the `Gribov horizon', $q_1 = 0$, which is just the point separating the two gauge equivalent regions $q_1 > 0$ and $q_1 < 0$. So we can fix the gauge completely by demanding that $q_1$ be positive (an inequality, thus a non-holonomic constraint), and thus again we find that the physical configuration space is the positive real line where we can identify $q_1$ with the radius $r$. Due to the simplicity of the example the decomposition of the configuration space into its redundant and physical parts via gauge fixing can easily be visualized (Fig.~1). \vspace{.5cm} \beginpicture \setcoordinatesystem units <1cm,1cm> \setplotarea x from -3.5 to 3.5, y from -3.5 to 3.5 \arrow <5pt> [.15,.3] from -3.5 0 to 3.5 0 \arrow <5pt> [.15,.3] from 0 -3.5 to 0 3.5 \put {$q_1$} at 3.4 -0.3 \put {$q_2$} at 0.3 3.4 \circulararc 360 degrees from 1 0 center at 0 0 \circulararc 360 degrees from 2 0 center at 0 0 \circulararc 360 degrees from 3 0 center at 0 0 \put {$\bullet$} at -3 0 \put {$\bullet$} at -2 0 \put {$\bullet$} at -1 0 \put {$\bullet$} at 1 0 \put {$\bullet$} at 2 0 \put {$\bullet$} at 3 0 \arrow <5pt> [.15,.3] from 1.5 0.35 to 1.5 0 \put {\footnotesize FMD} at 1.4 0.55 \arrow <5pt> [.15,.3] from -2.5 0.35 to -2.5 0 \put{\footnotesize GF} at -2.5 .55 \arrow <5pt> [.15,.3] from 2.83 -2.83 to 2.12 -2.12 \put {orbits} at 2.8 -3.2 \put{ \begin{picture}(2,2) \setlength{\unitlength}{1cm} \put(0,0){\circle*{0.2}} \end{picture}} [Bl] at -.11 0 \linethickness=1pt \putrule from 0 0 to 3.2 0 \endpicture \vspace{.5cm} {\sl Figure 1: The configuration space of the quantum mechanical example. The orbits are circles around the origin. The gauge fixing (GF), $q_2 = 0$, cuts every orbit twice at the Gribov copies $\pm q_1$ ($\bullet$). The Gribov horizon is the origin, the fundamental modular domain (FMD) the positive real line, $q_1 > 0$.} \vspace{.5cm} Using Gauss's law to eliminate the unphysical momentum, $p_2$, from the original Hamiltonian, $H = (p_1^2 + p_2^2)/2$, one ends up with the physical Hamiltonian (\ref{HPHYS}). Note the additional factors of $r = |\det \mbox{FP}|$ in the kinetic term. They guarantee that the Hamiltonian is hermitean on ${\cal L}^2 (r dr, {\cal R}^+)$. As a result we can state that gauge fixing amounts to mapping the original cartesian coordinates onto curvilinear ones and setting the gauge variant part of these (the `angles') to zero. The Jacobian of the transformation is the FP determinant and enters the physical Hamiltonian as well as the scalar product \cite{CL80}. \section{Yang-Mills Theory in Palumbo Gauge} Let us now consider the case of field theory. According to Dirac, there is still another, more physical, interpretation of gauge fixing as a `dressing' of the fermions \cite{Dir55}.
In the abelian case, one has the dressed electron \begin{equation} \psi_\chi ({\bf x}) = \psi ({\bf x}) \exp ie \! \int \! d^3 y \, {\bf E}_\chi ({\bf x} - {\bf y}) \cdot {\bf A} ({\bf y}) \; , \end{equation} \noindent where ${\bf E}_\chi$ is the dressing electric field corresponding to the gauge fixing $\chi = 0$. For the Coulomb gauge it is of course the familiar Coulomb field \cite{Dir55}; for the axial gauge, $\chi \equiv A_3 = 0$, it is the singular configuration \begin{equation} E_1 = E_2 = 0\; , \quad E_3 = e \theta (x_3) \delta (x_1) \delta (x_2) \, , \label{ESTRING} \end{equation} \noindent corresponding to a string of electric flux in the 3-direction \cite{Dir55}. It is easy to check that (\ref{ESTRING}) is a singular solution of Gauss's law for a point charge at the origin, \begin{equation} G \equiv \partial_i E_i = e \delta^3 ({\bf x}) = e \rho_\perp (x_1 , x_2 ) \rho_L (x_3) \; , \end{equation} \noindent where the last step stands for a smearing of the delta functions in the transverse and longitudinal directions with respect to the string. It has been known for a long time that the singular configuration (\ref{ESTRING}) leads to an infrared divergence in the energy \cite{Dir55} \cite{Sch63} of the form \begin{equation} H \sim e L \left[ \int dx_3 \, \rho_L (x_3) \right]^2 + {\rm finite} \; , \end{equation} \noindent where we have introduced the length $L$ of the string. The remedy of this infrared problem is to introduce a homogeneous background charge density in the 3-direction, $\rho_L \equiv \delta(x_3) - 1/2L$. Calculating the integral of the Gauss operator, $G$, \begin{equation} \int\limits_{-L}^{L} dx_3 \, G \sim \int\limits_{-L}^{L} dx_3 \, \rho_L = 0 \; , \end{equation} \noindent one finds that the zero mode of $G$ is the charge of the string, which vanishes, and that the energy of the string is finite. Thus, only neutral strings have finite energy, a property somewhat reminiscent of confinement. Due to the manifest appearance of (chromo-)electric strings, the axial gauge has been suggested as an appropriate gauge choice for studying the confinement problem \cite{Man79}. There are several gauges which correspond to the above modification of the axial gauge (differing in the way the residual gauge freedom is fixed \cite{Hei96}). However, all of them have in common that $A_3$ is not completely set to zero, but only those modes with momentum $k_3 \ne 0$. A zero mode, having $k_3 = 0$, is retained, \begin{equation} A_3 (x_1 , x_2 , x_3 ) \to a_3 (x_1 , x_2 ) \; . \label{GAUGEFIX} \end{equation} \noindent To fix the residual gauge freedom still left one can do a (self-) similar construction in the 1- and 2-directions. This is the Palumbo gauge \cite{Pal86}. Details are unnecessary for what follows. The gauge fixing (\ref{GAUGEFIX}) can be shown to exist by explicitly constructing the transformation $U$ that takes an arbitrary configuration to this gauge. One finds that $U$ is of the form $U = W \times V$, with \begin{eqnarray} W ({\bf x}) &=& P \exp i \int_{-L}^{x_3} dy_3 \, A_3 (x_1 , x_2 , y_3 ) \; , \\ V ({\bf x}) &=& \exp [i (x_3 + L) a_3 (x_1 , x_2 )] \; . \end{eqnarray} \noindent The first exponential, $W$, takes one to the pure axial gauge ($A_3 \to 0$); the second exponential, $V$, restores the zero mode ($0 \to a_3$), which is determined via $2L \, a_3 (x_1 , x_2 ) = \log w (x_1 , x_2 )$, where $w (x_1 , x_2 ) \equiv W (x_1 , x_2, L)$. Note that $a_3$ is in the Lie algebra $su(2)$ and thus obtained from the group element $W \in SU(2)$ by taking a logarithm.
As the latter is a multivalued function we are right at the question of uniqueness. The main virtue of the Palumbo gauge (and related gauges) is the fact that the FP operator, its inverse and its determinant can be calculated exactly \cite{Hei96}. The result for the determinant is \begin{equation} \det \mbox{FP} = \frac{\sin^2 |a_3| L}{(|a_3| L)^2} \; , \label{DETFP} \end{equation} \noindent with $|a_3| = (a_3^a a_3^a)^{1/2}$ being the modulus of $a_3$. The FP determinant (\ref{DETFP}) is just the Haar measure of $SU(2)$, thus the Jacobian of the exponential map, $\exp: su(2) \to SU(2)$, from the algebra to the group. At the Gribov horizons, where $\det \mbox{FP}$ vanishes, one has $|a_3| = \pi k/L$, $k$ integer, (in the algebra) and $w=\pm 1$ (in the group). At these points, the exponential map becomes singular, the inverse map, the logarithm, becomes multi-valued and thus the `angle' variable $a_3$ ill-defined. If one writes the zero mode with the help of a unit vector $n^a$ in color space as $a_3^a = |a_3| n^a$, the residual gauge freedom can be described in the following way: one has transformations $u_1(k)$ which do not change the direction of the color vector \begin{equation} u_1 (k) : |a_3| n^a \to |a_3 + 2\pi k / L | n^a \; . \end{equation} The second type of residual gauge transformations, $u_2$, transforms along the Gribov horizons ($w = \pm 1$) without changing the length of the color vector, \begin{equation} u_2 : |a_3| \, n^a \to |a_3| \, \tilde n^a \; , \quad |a_3| = \pi k /L \; . \end{equation} The fundamental modular domain is therefore given in terms of the inequality $| a_3 | < \pi/L$, which describes the interior of a sphere in color space~(see Fig.~2). \beginpicture \setcoordinatesystem units <1cm,1cm> \setplotarea x from -3.5 to 3.5, y from -3.5 to 3.5 \arrow <5pt> [.15,.3] from -3.5 0 to 3.5 0 \arrow <5pt> [.15,.3] from 0 -3.5 to 0 3.5 \put {$a_3^1$} at 3.4 -0.5 \put {$a_3^2$} at 0.5 3.4 \circulararc 360 degrees from 1 0 center at 0 0 \circulararc 360 degrees from 2 0 center at 0 0 \circulararc 360 degrees from 3 0 center at 0 0 \put {$\bullet$} at 0 0 \put {$\bullet$} at -1.4142 1.4142 \arrow <5pt> [.15,.3] from 0 0 to -1.4 1.4 \put {$u_1$} at -1.2 0.7 \put {$\bullet$} at -2.598 -1.5 \put {$\bullet$} at -1.026 -2.819 \circulararc 40 degrees from -2.8 -1.7 center at 0 0 \arrow <5pt> [.15,.3] from -1.15 -3.065 to -1.05 -3.1 \put {$u_2$} at -2.5 -3 \arrow <5pt> [.15,.3] from 2.83 -2.83 to 2.12 -2.12 \put {det FP = 0} at 2.8 -3.2 \setshadegrid span <2pt> \setquadratic \hshade -.707 -.707 -.707 <,z,,> 0 -1 -.707 .707 -.707 -.707 / \vshade -.707 -.707 .707 <z,z,,> 0 -1 1 .707 -.707 .707 / \hshade -.707 .707 .707 <z,,,> 0 .707 1 .707 .707 .707 / \endpicture \vspace{.5cm} {\sl Figure 2: A two-dimensional plot of the configuration space of $a_3^a$, $a=1,2$. The Gribov horizons where {\rm det FP} vanishes are (apart from the origin) circles of radius $\pi/L$, $2\pi/L$ etc. The shaded interior of the inner circle constitutes the fundamental modular domain. The arrows denote the action of the residual gauge transformations $u_1 (k=1)$ and $u_2$ between particular Gribov copies represented by black dots ($\bullet$).} \vspace{.6cm} The physical Hamiltonian containing all the Jacobian factors is a somewhat lengthy expression and can be found in \cite{Hei96}. One crucial question remains to be answered: what is the physics associated with the Gribov horizons being discontinuities in field space \cite{JMR78}? 
To find the answer one would need some geometrical space-time picture of the configurations at the horizon. In related gauges, they seem to correspond to monopoles \cite{FNP81} or magnetic vortices \cite{Gri95}. An explicit but formal construction of a horizon configuration has been given in \cite{PS93}. The center of the group seems to play a peculiar role as the horizon configurations correspond exactly to group elements valued in the center, $w = \pm1$. Finally, a $\theta$-angle might emerge when the wave functional $\Psi$ `bites its tail' at the horizon with a quasi-periodicity like $\Psi (\pi / L) = e^{i\theta} \Psi (-\pi / L)$. It might be worthwhile to study these issues on a lattice where one has a finite number of degrees of freedom and a better control of infinities. In the context of maximally abelian gauge fixing on a lattice, the Gribov problem and its relation to the monopole condensation scenario of confinement have recently attracted a lot of attention \cite{HT96}. A lattice gauge fixing similar to ours has just appeared in the literature \cite{BDH96}. It looks promising to investigate these issues further. \bigskip \noindent {\bf Acknowledgements} \smallskip The author gratefully acknowledges instructive discussions with A.~Jaramillo, S.~Shabanov and D.~Zwanziger. It is a pleasure to thank the organizer S.~Narison and his team for all their efforts.
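As a small numerical addendum, the identification of the determinant (\ref{DETFP}) with the Haar measure of $SU(2)$ can be checked directly. The sketch below (Python/NumPy; the conventions are chosen for illustration only) computes the volume Jacobian of the map $a \mapsto \exp(iL\,a\cdot\sigma)$, realized through unit quaternions, by finite differences and compares it with $\sin^2(L|a|)/(L|a|)^2$ up to an overall constant.
\begin{verbatim}
import numpy as np

L = 1.0

def group_point(a):
    # exp(i L a.sigma) = cos(L|a|) + i sin(L|a|) (a/|a|).sigma,
    # embedded in R^4 as the unit quaternion (cos, sin * a_hat)
    r = np.linalg.norm(a)
    return np.concatenate(([np.cos(L * r)], np.sin(L * r) * a / r))

def volume_jacobian(a, eps=1e-6):
    # sqrt(det(J^T J)) of the embedding R^3 -> S^3
    cols = [(group_point(a + eps * e) - group_point(a - eps * e))
            / (2 * eps) for e in np.eye(3)]
    J = np.column_stack(cols)
    return np.sqrt(np.linalg.det(J.T @ J))

a = np.array([0.7, -0.3, 0.5])
r = np.linalg.norm(a)
print(volume_jacobian(a))                    # numerical Jacobian
print(L**3 * np.sin(L * r)**2 / (L * r)**2)  # proportional to det FP
\end{verbatim}
The Jacobian vanishes at $L|a| = \pi k$, reproducing the Gribov horizons found above.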
\section{Introduction}\label{intro} Spiral waves are spatio-temporal patterns typically found in distributed media with active elements. They have been studied extensively for excitable and oscillatory media. \cite{gen,act} For both types of media, it is conventional to consider systems with two dynamical variables. Activator-inhibitor or propagator-controller systems are often used to analyse spiral dynamics in excitable media \cite{act,fife}, while the complex Ginzburg--Landau equation is the prototypical model describing spatially-distributed oscillatory media near the Hopf bifurcation point \cite{kur}. Spiral waves may also exist in media where the local dynamics supports complex periodic or even chaotic motion that cannot be represented in a two-dimensional phase plane. Various patterns involving rotating spiral waves have been observed in coupled map lattices or reaction-diffusion dynamics based on the R\"{o}ssler chaotic attractor \cite{ma}. The three-variable reaction-diffusion system with chaotic local reaction kinetics given by the Willamowski-R\"{o}ssler rate law \cite{wr} has been studied in \cite{prl}. Stable spiral waves exist in this system and the nucleation and annihilation of spiral pairs leading to spiral turbulence have been observed. The change of dimensionality of phase space from two to three significantly complicates the description of the dynamics. Descriptions in terms of phase and amplitude, well established for two-variable models, cannot be directly generalized. Although several definitions have been proposed for the phase of chaotic oscillations, all of them suffer from some degree of ambiguity (see \cite{pik} for a discussion). Similar difficulties arise in the consideration of nonchaotic oscillatory dynamics which is nevertheless more complex than a single loop in phase space; for example, in the oscillations that appear in the period doubling cascade to chaos or in the mixed-mode oscillations observed in experiments in chemical systems. \cite{mix} In this paper we study the spatiotemporal organization of a reacting medium which supports a single spiral wave and where the local rate law exhibits period-$2^n$ or chaotic oscillations. Through an analysis of the dynamics at different spatial points in the medium we show that a number of phenomena arise for $n>0$ which are nonexistent in period-1 oscillatory media. Section~\ref{spiral} introduces the model and presents some features of the spiral wave behavior in a chaotic medium. The local dynamics in the medium is considered in detail in Sec.~\ref{local}. The analysis allows one to identify the loop exchange process for local trajectories and the complicated pattern of the distribution of different types of local dynamics in the medium. A characteristic feature of this distribution is the existence of a curve where the local dynamics is effectively period-1. Section~\ref{topo} introduces a coarse-grained description of $2^n$-periodic local orbits which allows one to characterize the local dynamics that is observed in the medium. The topological conflict between the phase space structure of local trajectories and the constraints imposed on the medium by the existence of a spiral wave is considered in Sec.~\ref{glob}. We show that the observed changes of the local orbits are necessary to maintain the global coherence of the medium. The conclusions of the study are presented in Sec.~\ref{conc}. 
\section{Spiral Waves in Periodic and Chaotic Media}\label{spiral} While many aspects of the phenomena we describe in this paper are general and apply to systems in which complex periodic or chaotic orbits exist, we consider situations where a chaotic attractor arises by a period-doubling cascade and confine our simulations to the Willamowski-R\"{o}ssler (WR) model \cite{wr}, \begin{eqnarray} A_1 +X &\mathrel{ \mathop{\kern0pt {\rightleftharpoons}}\limits^{{k_1}}_{k_{-1}}}& 2X,\;\; X+Y \mathrel{\mathop{\kern0pt {\rightleftharpoons}}\limits^{{k_2}}_{k_{-2}}} 2Y,\nonumber \\ A_5 +Y & \mathrel{\mathop{\kern0pt {\rightleftharpoons}}\limits^{{k_3}}_{k_{-3}}}& A_2,\;\; X+Z \mathrel{\mathop{\kern0pt {\rightleftharpoons}}\limits^{{k_4}}_{k_{-4}}} A_3, \label{eq_mechanism} \\ A_4 +Z & \mathrel{\mathop{\kern0pt {\rightleftharpoons}}\limits^{{k_5}}_{k_{-5}}}& 2Z\;. \nonumber \end{eqnarray} Only the $X$, $Y$ and $Z$ species vary with time; all others are assumed fixed by flows of reagents. Study of this model allows us to illustrate most features of the structure of a spatially distributed medium supporting spiral waves. In addition, it is useful to deal with a specific example since certain aspects of the analysis of periodic and chaotic orbits in high-dimensional concentration phase spaces rely on geometrical constructions that pertain to a specific class of attractors. The rate law that follows from the mechanism (\ref{eq_mechanism}) is \begin{eqnarray} \label{mass} { d c_x(t) \over d t} &=&\kappa_1 c_x -\kappa_{-1} c_x^2 -\kappa_2 c_x c_y +\kappa_{-2} c_y^2 -\kappa_4 c_x c_z \nonumber \\ & & +\kappa_{-4} =R_x({\bf c}(t))\;, \nonumber \\ { d c_y(t) \over d t} &=&\kappa_2c_x c_y -\kappa_{-2} c_y^2 -\kappa_{3} c_y + \kappa_{-3} \\ & & =R_y({\bf c}(t))\;, \nonumber \\ { d c_z(t) \over d t} &=&-\kappa_4 c_x c_z + \kappa_{-4} +\kappa_5 c_z -\kappa_{-5} c_z^2 =R_z({\bf c}(t))\;, \nonumber \end{eqnarray} where the rate coefficients $\kappa_i$ include the concentrations of any species held fixed by constraints. We take $\kappa_2$ to be the bifurcation parameter while all other coefficients are fixed: ($\kappa_1=31.2, \kappa_{-1}=0.2, \kappa_{-2}=0.1, \kappa_{3}=10.8, \kappa_{-3}=0.12, \kappa_{4}=1.02, \kappa_{-4}=0.01, \kappa_{5}=16.5, \kappa_{-5}=0.5$). In this parameter region the WR model has been shown \cite{ka} to possess a chaotic attractor arising from a period-doubling cascade as $\kappa_2$ is varied in the interval [1.251,1.699]. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f1.eps} \end{center} \caption{Chaotic attractor for the Willamowski-R\"{o}ssler model at $\kappa_2=1.567$. } \label{cha} \end{figure} Figure~\ref{cha} shows the four-banded chaotic attractor at $\kappa_2 = 1.567$. Throughout the entire parameter domain $\kappa_2\in [1.251,1.699]$ the system's attractor is oriented so that its projection onto the $(c_x,c_y)$ plane exhibits a folded phase space flow circulating around the unstable focus ${\bf c^*}$. This allows one to introduce a coordinate system in the Cartesian $(c_x,c_y,c_z)$ phase space which is appropriate for the description of the attractor. We take the origin of a cylindrical coordinate system $(\rho,\phi,z)$ at ${\bf c^*}$ so that the $z$ and zero-phase-angle ($\phi=0$) axes are directed along the $c_z$ and $c_y$ axes, respectively. The phase angle $\phi$ increases along the direction of flow as shown in Fig.~\ref{frame}.
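The rate law (\ref{mass}) is straightforward to integrate numerically. The sketch below (Python/SciPy; tolerances, initial condition, integration times and the section value are illustrative) reproduces the local dynamics at the quoted parameter values and also extracts upward crossings of a plane $c_y = {\rm const}$, the construction underlying the first-return maps of Sec.~\ref{local}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# rate coefficients from the text; kappa_2 = 1.567 is in the
# chaotic regime of the period-doubling cascade
k1, km1 = 31.2, 0.2
k2, km2 = 1.567, 0.1
k3, km3 = 10.8, 0.12
k4, km4 = 1.02, 0.01
k5, km5 = 16.5, 0.5

def wr_rate(t, c):
    cx, cy, cz = c
    dcx = k1*cx - km1*cx**2 - k2*cx*cy + km2*cy**2 - k4*cx*cz + km4
    dcy = k2*cx*cy - km2*cy**2 - k3*cy + km3
    dcz = -k4*cx*cz + km4 + k5*cz - km5*cz**2
    return [dcx, dcy, dcz]

sol = solve_ivp(wr_rate, (0.0, 200.0), [1.0, 2.0, 1.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

# discard transients, then record upward crossings of c_y = const
t = np.linspace(100.0, 200.0, 200000)
cx, cy, cz = sol.sol(t)
cy_section = 2.0          # illustrative stand-in for c_y^*
up = (cy[:-1] < cy_section) & (cy[1:] >= cy_section)
xi = cx[1:][up]           # c_x values at the crossings

# successive pairs (xi_n, xi_{n+1}) sample the first-return map
print(list(zip(xi[:5], xi[1:6])))
\end{verbatim}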
\begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f2.eps} \end{center} \caption{Cylindrical coordinate frame $(\rho,\phi,z)$ with origin at ${\bf c}^*$ in the $(c_x,c_y,c_z)$ phase space. A period-2 orbit is shown in this coordinate frame.} \label{frame} \end{figure} For a period-1 oscillation $\phi$ coincides with the usual definition of the phase and uniquely parametrizes the attractor $\rho_a=\rho_a(\phi), z_a=z_a(\phi), \phi\in [0,2\pi)$. After the first period-doubling this parametrization is no longer unique since the periodic orbit does not close on itself after $\phi$ changes by $2\pi$. For a period-$2^n$ orbit $2^n$ of its points lie in any semi-plane $\phi=\phi_0$. The angle variable $\Phi\in [0,2^n\cdot 2\pi)$ may be used to parametrize the period-$2^n$ attractor if one acknowledges that all $\Phi$ from the interval $[0,2^n\cdot 2\pi)$ are different but any two values of $\Phi$, $\Phi_1$ and $\Phi_2$, with $\Phi_2=\Phi_1+ 2^n\cdot 2\pi$, are equivalent. For a chaotic orbit $(n \rightarrow \infty)$ all angles $\Phi \in [0,\infty)$ are non-degenerate. When $\Phi$ is defined in this way it is no longer an observable. Indeed, any $\Phi\in [0,2^n\cdot 2\pi)$ can be represented as $\Phi=\phi +m\cdot 2\pi$ where $\phi\in [0,2\pi)$ and $m\in {\bf N}$. While $\phi$ is just the angle coordinate in the $(\rho,\phi,z)$ system and is a single-valued function of the instantaneous concentrations $\phi=\phi(c_x(t),c_y(t),c_z(t))$, the integer number of turns $m$ can be calculated only if the entire attractor is known. The spatially-distributed system is described by the reaction-diffusion equation, \begin{equation} \label{rds} {\partial {\bf c}({\bf x},t) \over \partial t} = {\bf R}({\bf c}({\bf x},t)) + D \nabla^2 {\bf c}({\bf x},t)\;, \end{equation} where we have assumed the diffusion coefficients of all species are equal. If the rate law parameters correspond to a period-1 limit cycle, we may initiate a spiral wave in the medium and describe its dynamics and structure using well-developed methods. The core of such a spiral wave is a topological defect which is characterized by the topological charge \cite{me} \begin{equation} \label{charge} {1 \over 2 \pi} \oint \nabla \phi({\bf r})\cdot d{\bf l}=n_t\;, \end{equation} where $\phi({\bf r})$ is the local phase and the integral is taken along a closed curve surrounding the defect. To obtain additional insight into the organization of the medium around the defect the local dynamics may be considered. For this purpose we introduce a polar coordinate system ${\bf r}={\bf x}-{\bf r}_d(t)=(r,\theta)$ centered at the defect whose (possibly time-dependent) position is ${\bf r}_d(t)$. Let ${\bf c}({\bf r},t)$ be a vector of local concentrations at space point ${\bf r}=(r,\theta)$. A local trajectory in the concentration phase space from $t=t_0$ to $t=t_0+\tau$ at point ${\bf r}$ in the medium will be denoted by \begin{equation} C({\bf r}|t_0,\tau) = \{ {\bf c}({\bf r},t)| t\in [t_0,t_0+\tau]\}\;. \end{equation} Figure~\ref{p-1tr} shows a number of local trajectories $C(r,\theta| t_0,\tau)$ at points with increasing separation $r$ from the defect for a period-1 oscillation at $\kappa_2 =1.420$. One sees that as $r \rightarrow 0$ the oscillation amplitude decreases and the limit cycle shrinks to the phase space point ${\bf c}^*_d$ corresponding to the spiral core. The results of our simulations show that the value of ${\bf c}^*_d$ differs only slightly from ${\bf c^*}$ which is chosen as the origin of the coordinate frame $(\rho,\phi,z)$.
Thus, the angle $\phi$ can serve as a phase that characterizes all points in the period-1 oscillatory medium except for a small neighborhood of the defect with radius $r\approx 1$. \cite{amend} The concentration field ${\bf c}({\bf r},t)$ is organized so that the instantaneous $(c_x,c_y,c_z)$ phase space representation of the local concentration on any closed path in the medium surrounding the defect is a simple closed curve encircling ${\bf c^*}$. For large $r$, $r\geq r_{max}$ (in Fig.~\ref{p-1tr} $r_{max}\approx 40$), one finds that $C(r,\theta| t_0,\tau)$ ceases to change shape and is indistinguishable from the period-1 attractor of (\ref{mass}) on the scale of the figure. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f3.eps} \end{center} \caption{Local trajectories calculated for the period-1 oscillatory medium ($\kappa_2=1.420$) at radii 5, 10, 20, 30, 40, 56, and fixed $\theta$. The periodic orbits grow monotonically in size with $r$; the difference between trajectories corresponding to $r=40$ and $r=56$ is not resolved on the scale of the figure. Local orbits appear to be independent of the angle $\theta$. The location of ${\bf c}^*$ is designated by a diamond.} \label{p-1tr} \end{figure} One may initiate the analog of a defect in $2^n$-periodic and chaotic media. The defect serves as the core of a spiral wave which may exist even if the oscillation is not simply period-1. A defect was introduced in the center of the medium by fixing $c_z({\bf r})=c^{*}_{z}$ and choosing initial concentrations $ (c_x({\bf r}),c_y({\bf r}))$ to produce orthogonal spatial gradients. The influence of the symmetry of the spatial domain on the dynamics was investigated by performing simulations on square $(L \times L)$ arrays as well as on disk-shaped domains with radius $R$. No-flux boundary conditions were used to prevent the formation of defects with opposite topological charge within the medium and to minimize effects arising from the self-interaction of spiral waves. The implementation of these initial and boundary conditions does not guarantee the formation of a solitary stable spiral wave; new spiral pairs and other patterns (e.g. pacemakers) may appear as a result of instabilities of the spiral arm and lead to spiral turbulence. The ability to maintain a stationary spiral wave in the center of the medium is sensitive to the parameters. For various values of the system size and rate constants the defect can move along expanding or contracting spiral trajectories or trajectories with complex ``daisy''-like forms \cite{daisy}. Simulations show that the stability of a spiral wave with a stationary core located at the center of the medium increases with the system size and for rate constants lying close to the chaotic regime within the period-doubling cascade. In the following we restrict our considerations to parameters that lead to the formation of a single spiral wave whose core is stationary and lies in the center of the domain. Long transient times ($\approx10^2$ spiral revolutions) are often necessary to reach this attracting state. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f4.eps} \end{center} \caption{Frames showing a rotating spiral wave in the chaotic ($\kappa_2=1.567$) disk-shaped medium with $R=80$. The local angle variable $\phi(r,\theta,t)$ is shown as grey shades. Time increases from left to right and from top to bottom. The frames are separated by one period of spiral revolution $T_r$.
The integration time step is $\Delta t= 10^{-4}$ and the scaled diffusion coefficient is $D \Delta t/ (\Delta x)^2 = 10^{-2}$.} \label{4sp} \end{figure} Figure~\ref{4sp} shows four consecutive states of the disk-shaped medium with $R=80$, separated by one period of the spiral rotation, $T_r$, for $\kappa_2 =1.567$ where the rate law supports a chaotic attractor. Only within a sufficiently small region with radius $r \approx 20$ centered on the defect does the medium return to the same state after one period of spiral rotation. At points farther from the defect the system appears to return to the same state only after two spiral rotation periods. The transition from period-1 to period-2 behavior occurs smoothly along any ray emanating from the defect. \section{Analysis of Local Dynamics} \label{local} More detailed information may be obtained from an investigation of the local dynamics of the medium supporting a spiral wave. Local trajectories $C({\bf r}|t_0,\tau)$ were computed along rays emanating from the defect at various angles $\theta$. Figure~\ref{6traj} (left column) shows short-time trajectories ($\tau \approx 10T_r$) at different radii $r$ and arbitrary but large $t_0$. These trajectories clearly demonstrate that the local dynamics undergoes a transformation from small-amplitude period-1 oscillations in the neighborhood of the defect to period-4 oscillations near the boundary.\cite{f3} \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f5.eps} \end{center} \caption{Local trajectories $C(r|\,t_0,\tau)$ for the disk-shaped medium ($\kappa_2=1.567,\;R=80$): (a, d) $r=10$; (b, e) $r=35$; (c, f) $r=76$. The observation times are $\tau\approx10T_r$ for the left column and $\tau =80T_r$ for the right column. All the trajectories are shown on the same scale.} \label{6traj} \end{figure} The well-resolved period-doubling structure of $C({\bf r}|t_0,\tau)$ is destroyed if the time of observation $\tau$ becomes sufficiently large. The right column of Fig.~\ref{6traj} shows trajectories sampled at the same spatial locations but with the time of observation $\tau = 80 T_r$. These long-time trajectories appear to be ``noisy'' period-1 and period-2 orbits: the trajectory in panel (d) is a thickened period-1 orbit while both the period-2 (panel (b)) and period-4 (panel (c)) orbits now appear as thickened period-2 orbits in panels (e) and (f) with trajectory segments lying between the period-2 bands. As $\tau$ tends to infinity the resulting local attractor, $C(r)$, is independent of $t_0$ and the angle $\theta$. \subsection{First-return maps} An analysis of the local trajectories shows that the period-doubling phenomenon is not a monotonic function of $r$. Consider the first return map constructed from a Poincar\'{e} section of a local trajectory $C({\bf r}|t_0,\tau)$ in the following way: choose the plane $c_y = c_y^{*}$ with normal ${\bf n}$ along the $c_y$ axis as the surface of section and select those intersection points where ${\bf n}$ forms a positive angle with the flow. This yields a set $\{(c_x({\bf r},t_n),c_z({\bf r},t_n))| n\in [1,N]\}$ where $t_0 <t_1<t_2<\ldots <t_N<t_0+\tau$ is a sequence of times at which the trajectory crosses the surface of section. For the WR model the points $(c_x({\bf r},t_n),c_z({\bf r},t_n))$ lie on a curve which deviates only slightly from a straight line. Consequently, we may choose either $c_x$ or $c_z$ to construct the first return map. Let $\xi_n({\bf r})=c_x({\bf r},t_n)$ denote a point in the Poincar\'{e} section.
The relation $\xi_{n+1}({\bf r})=f(\xi_{n}({\bf r}))$ between the successive intersections of the Poincar\'{e} surface defines the local first return map, \begin{eqnarray} g({\bf r}|t_0,\tau) & = &\{ (\xi_n({\bf r}),\xi_{n+1}({\bf r})) \nonumber \\ & & |\,t_n\in [t_0,t_0+\tau], n\in [1,N] \}\;. \end{eqnarray} Combining such maps for all $r$ along some ray emanating from the defect at an angle $\theta$, we obtain the cumulative first return map, \begin{equation} G(\theta|t_0,\tau)= \bigcup_{r\in (0,R)} g(r,\theta |t_0,\tau)\;. \end{equation} \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f6.eps} \end{center} \caption{Cumulative first return map $G$ constructed for the disk-shaped array ($\kappa_2=1.567,\;R=80$). The letters indicate the $r$ values discussed in the text: (a) 20, (b) 31 and (c) 43.} \label{r-frm} \end{figure} For sufficiently long times $\tau$, $g$ is independent of $\theta$ and $t_0$. Letting $\lim_{\tau \to \infty} g(r,\theta |t_0,\tau)= g(r)$, we may write the corresponding cumulative first return map as $G=\lim_{\tau \to \infty} G(\theta|t_0,\tau)$. Figure~\ref{r-frm} shows $G$ for the disk-shaped medium under consideration. The first return map is comprised of several branches which can be identified as thread-like maxima of the first return map point density. These branches are parametrized by the spatial coordinate with $r$ increasing from the bottom left corner to the ends of the wide-spread arms of $G$ (cf. Fig.~\ref{r-frm}). Generally for $r\leq 40$ points lying on lines $\xi_n(r) + \xi_{n+1}(r) = {\rm const}$ belong to the same $g(r)$ though overlaps of neighboring $g$-map points are common. Thus, measuring the separation between branches of $G$ in the direction perpendicular to the bisectrix one can determine the character of $C(r)$. In spite of some evidence of fine structure, from the fact that map points are located along the bisectrix in Fig.~\ref{r-frm} one can infer that up to $r=20$ the local dynamics is predominantly period-1. Starting from $r=21$ (labeled by $a$ in Fig.~\ref{r-frm}), $G$ splits into two branches which diverge from the bisectrix indicating a period-2 structure of $C(r)$. As $r$ increases these branches bend and cross the bisectrix at $r=31$ (labeled by $b$ in Fig.~\ref{r-frm}), indicating a return of the local dynamics to the period-1-like pattern. After this crossing the separation between the branches grows rapidly reflecting the development of period-2 structure. An examination of the main branches of $G$ reveals period-4 fine structure. This period-4 structure is visible for $r>28$ and beyond $r\approx 43$ (labeled by $c$ in Fig.~\ref{r-frm}) it becomes prominent and can be easily seen in the structure of $C(r)$ (cf. Fig.~\ref{6traj}). \subsection{Loop exchange and $\Omega$ curve} ~From the analysis of the time series of the local concentration one may determine the processes responsible for the differences between the local trajectories $C({\bf r}|t_0,\tau)$ for short and long time intervals $\tau$ (cf. Fig.~\ref{6traj}). Figure~\ref{p2ex} shows the signature of this phenomenon for $c_x(r,t)$ at $r=50$ in a disk-shaped array with $R=80$ and $\kappa_2 = 1.544$, a parameter value corresponding to period-4 dynamics in the rate law. Every second maximum of $c_x(r,t)$ is indicated by diamond or cross symbols. The envelope curves obtained by joining like symbols cross at $t=t_{ex}$, thus the curve which connected large-amplitude maxima at $t<t_{ex}$ joins low-amplitude maxima at $t>t_{ex}$ and vice-versa. 
This implies that if at some $t_0<t_{ex}$ the representative point ${\bf c}(r,t_0)$ was found on the small-amplitude band of period-2, then at $t =t_0+nT_2>t_{ex}$, where $T_2$ is the period of the period-2 oscillation, it will be found on the larger-amplitude band.\cite{f4} This phenomenon can be interpreted as an exchange of the local attractor's bands. Indeed, approaching $t_{ex}$ from the left one finds that with each period of oscillation the small-amplitude band grows while the large-amplitude band shrinks. At $t=t_{ex}$ both bands reach and pass each other. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f7.eps} \end{center} \caption{Concentration time series $c_x(r,t)$ at $r=50$ for the disk-shaped array ($\kappa_2=1.544,\;R=80$) showing the loop exchange process. Time unit equals $10^5$ $\Delta t$.} \label{p2ex} \end{figure} For a short period of time near $t_{ex}$ the bands are indistinguishable in phase space and the oscillation is effectively period-1. It is this exchange phenomenon that produces loops that fill the gap between the period-2 bands in the long-time local trajectories (cf. Fig.~\ref{6traj}) and contribute to a sparsely scattered ``gas''-like density in $G$ (cf. Fig.~\ref{r-frm}). An examination of the loop exchanges at different locations in the medium revealed the existence of the following spatio-temporal pattern. At any fixed location the exchange occurs periodically, with period $T_{ex}\approx 55T_r$, independent of the position $(r,\theta)$ in the medium. For sufficiently large radii $(r\geq 35)$ this periodicity takes an even stronger form: the entire oscillation pattern, however complex, returns with period $T_{ex}$ to the same configuration. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f8.eps} \end{center} \caption{Sketch of the $\Omega$ curve for the disk-shaped array ($\kappa_2=1.567,\;R=80$). Points where the period-2 band exchange was observed are indicated by diamonds. } \label{r-om} \end{figure} This property smoothly disappears as the defect is approached. For two locations ${\bf r}_1=(r_0,\theta_1)$ and ${\bf r}_2=(r_0,\theta_2)$ at the same radius $r_0$ from the defect but at different angles, the oscillation pattern at one of them, say ${\bf r}_2$, can be obtained from the corresponding pattern at ${\bf r}_1$ through translation in time by $T_{ex}(\theta_2-\theta_1)/2\pi$, the sign of the translation being defined by ${\rm sign}(\theta_2-\theta_1)$. In view of this observation it is convenient to introduce a coordinate system $(r',\theta')$ rotating with angular velocity $2\pi/T_{ex}$ relative to the laboratory-fixed coordinate system $(r,\theta)$. In this rotating frame the local dynamics is described by a time-homogeneous pattern, unique for every spatial point ${\bf r'}$, and the locations in the medium where loop exchange occurs correspond to points where the local dynamics always has a period-1-like character. The set of loop exchange points constitutes a curve $\Omega$ with spiral symmetry which winds twice around the defect (see Fig.~\ref{r-om}). The two convolutions of $\Omega$ lie close to circular arcs with radii 19 and 32. This result may be compared with the data obtained from an examination of $G$ (cf. Fig.~\ref{r-frm}). The crossings of the bands of $G$ occur at loci lying on $\Omega$. Close to the defect the resolution of the loop exchange event is difficult.
At $r<18$ the difference between the period-2 bands is comparable to the band thickness and the determination of $\Omega$ for smaller radii becomes impractical. Variation of the system parameters results in a change of the characteristics of $\Omega$; for example, the radius of the domain $R$ does not affect the shape of $\Omega$ but does change the angular velocity with which the coordinate frame $(r',\theta')$ in which $\Omega$ is immobile rotates relative to the laboratory-fixed frame $(r,\theta)$. The angular velocity is higher for smaller system sizes: a decrease in $R$ from 80 to 60 reduces the period $T_{ex}$ by a factor of 0.42. A change in the rate constants $\kappa_i$ leads to a deformation of $\Omega$, although the identification of $\Omega$ as a set of exchange points remains and it retains the topology of a curve passing from the defect to the boundary. In Sec.~\ref{glob} we shall show that the existence of $\Omega$ is essential for the maintenance of spatial continuity in media composed of $2^n$-periodic oscillators. Simulations on a square array with dimension $80\times 80$ (all parameters were the same as for the disk) show that a rotating frame is not necessary to observe the time-homogeneous local dynamics of $C({\bf r}|t_0,\tau)=C({\bf r}|\tau)$. For this system geometry the $\Omega$ curve is fixed in the medium, \end{multicols} \widetext \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f9.eps} \end{center} \caption{Local trajectories calculated on a circle with $r=55$ for the square array. All the trajectories are shown on the same scale.} \label{circ} \end{figure} \begin{multicols}{2} \narrowtext
Letters on the bottom panel denote radii for which corresponding portions of $G(\theta)$ are constructed: (a) 9; (b) 19; (c) 25, and (d) 31.} \label{s-frm} \end{figure} \noindent in Fig.~\ref{s-frm} (a,b) which shows the cumulative first return map $G(\theta)$ and a magnification of a portion of its structure (compare with Fig.~\ref{r-frm}). The results show that $G$ is comprised of four branches with the fine structure of period-4 resolved even in the vicinity of the defect ($r=5$ is the closest distance to the defect for which $g(r)$ is shown). Any perturbation of the self-organized pattern of local oscillator synchronization due to irregular motion of the defect, influence of the boundary or the presence of another defect may obliterate the subtle fine structure of the local trajectories. In such a circumstance one is able to observe only two gross branches of $G$ and their split nature is not resolvable except for very large $r$. These observations allow one to suppose that the local trajectories may have the same number of fine structure levels everywhere in the medium but the degree to which different levels are resolved in their phase portraits strongly depends on the position in the medium relative to the defect. In view of this hypothesis the phenomenon of spatial period-doubling should not be understood in the literal sense but rather as an enhanced ability to resolve the fine structure with the increase of separation from the defect. The stationary rotating spiral wave arises from the complex defect-organized cooperation of local oscillators. Each location in the medium develops some site-specific pattern of oscillation which often differs significantly from that of the corresponding rate law attractor and varies substantially from one space point to another. There exists a (possibly rotating) reference frame $(r',\theta')$, centered on the moving defect, in which local dynamics takes a simple, time-homogeneous form. Each point of the medium in this frame can be assigned a unique oscillatory pattern, different for different spatial points. This allows one to introduce the notion of a defect-organized field associated with $(r',\theta')$ which specifies the pattern of dynamics in every spatial point of the medium. This field exhibits a complicated architecture lacking any simple symmetries (which can be easily seen from the shape of the $\Omega$ curve). The slow rotation of this field in disk-shaped arrays restores the circular symmetry of the solution. Although the manner in which different types of local dynamics are distributed in the medium is complex, it is not disordered. Due to the continuity of the medium maintained by the diffusion, it obeys certain topological principles studied in the subsequent sections. \section{Coarse-grained description of local trajectories} \label{topo} In the previous section the phase space shapes of the local trajectories were shown to vary considerably but smoothly from one point in the medium to another. To describe the transformations of these orbits into each other, it is useful to introduce a description which captures only topologically significant changes of phase portraits and disregards unimportant details. To understand the topological principles which determine the global organization of the defect-organized field one also needs a means to compare the time dependence of local trajectories.
In this section we present a scheme that allows one to partition the continuum of all the observed local trajectories into a finite number of discrete classes according to their phase space shape and time dependence. \subsection{Representation of attractors by closed braids} Consider a period-$2^n$ attractor, $P_{2^n}$, consisting of $2^n$ loops in the concentration phase space ${\cal P} = (c_x,c_y,c_z)$. Using the cylindrical coordinate system introduced earlier, we may project $P_{2^n}$ onto the $(\rho,\phi)$ plane, preserving its original orientation and 3D character by explicitly indicating whether self-intersections correspond to over- or under-crossings. Such a projection shows a span of $\phi$ free from crossings where the loops are essentially parallel to each other. This span can be used to number the loops, say, in order of their separation from the origin. This procedure maps $P_{2^n}$ onto a closed braid ${\bar B}_{2^n}$ \cite{bir}. Figure~\ref{p-4toB} illustrates the construction of the braid representation for the $P_4$ attractor of the WR model. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f11.eps} \end{center} \caption{Projection of the $P_4$ attractor on the $(c_x,c_y)$ plane (top panel) and the corresponding closed braid ${\bar B}_4$ (bottom panel). } \label{p-4toB} \end{figure} It is convenient to subdivide the closed braid ${\bar B}_{2^n}$ into the open braid $B_{2^n}$ (separated by dashed lines in Fig.~\ref{p-4toB}) and its closure, where the threads run parallel to each other. The direction of the flow on the attractor is indicated by the arrows. Each crossing on the projection of $P_4$ corresponds to an elementary braid $\sigma_i$, which refers to the fact that thread $i$ overcrosses thread $i+1$ (cf. Fig.~\ref{brth} in the Appendix for the notation convention). An under-crossing will be designated by ${\sigma}_i^{-1}$. A braid may be described by a braid word that gives the order and types of crossings of the braid threads. For example, for the closed braid corresponding to $P_4$ (cf. Fig.~\ref{p-4toB}) $P_4 \mapsto {\bar B}_4 = \overline{\sigma_3\sigma_2\sigma_1\sigma_3\sigma_2}$. The closed braid ${\bar B}_{2^n}$ corresponding to $P_{2^n}$ can be represented by several braid words, which can be transformed into one another by a set of allowed moves (see Appendix). Any braid word representing $P_{2^n}$ induces a permutation $\pi^{(n)}_i$ describing the order in which the loops of $P_{2^n}$ are visited during one oscillation period $T_{2^n}$. In general, each $P_{2^n}$ attractor is represented by several possible $\pi^{(n)}_i$, their number growing with $n$; for example, for $P_2$ there is only one permutation $\pi^{(1)}_1= (^{12}_{21})$ while two permutations $\pi^{(2)}_1 = (^{1234}_{3421})$ (which corresponds to the braid shown in Fig.~\ref{p-4toB}) and $\pi^{(2)}_2 = (^{1234}_{4312})$ exist for $P_4$. With a given loop numbering convention each braid word represents a unique permutation, while one permutation can be induced by many braid words. \subsection{Symbolic representation of periodic orbits} Take two period-$2^n$ oscillators whose trajectories ${\bf c}_1(t),{\bf c}_2(t)$ lie on the same attractor, but which are nevertheless non-identical since at any given time $t$ their dynamical variables differ: ${\bf c}_1(t) \neq {\bf c}_2(t)$. Since the orbits are periodic, there is a time $\delta t$ such that ${\bf c}_1(t + \delta t) = {\bf c}_2(t)$ for any $t$.
This operation can be formally considered as the action of a translation operator ${\cal T}_{\delta t}$ on the trajectory of the first oscillator: \begin{equation} {\cal T}_{\delta t}\; {\bf c}_1(t) = {\bf c}_1(t + \delta t) = {\bf c}_2(t). \end{equation} The concentration time series ${\bf c}(t)$ of the first oscillator then appears to be shifted backward by $\delta t$ relative to that of the second oscillator if $\delta t > 0$, and forward otherwise. Of course, trajectories corresponding to different attractors cannot be made to correspond by such time translations, e.g. $P_{2^n}$ attractors described by different permutations $\pi^{(n)}_i$ have different patterns of oscillation, but even if two $P_{2^n}$ lie in the same $\pi^{(n)}_i$ class their actual shapes in ${\cal P}$ may differ significantly. To compare the local dynamics at different points in the medium one needs to single out the most important characteristic features of the oscillation pattern while discarding unnecessary details. A coarse-grained symbolic description of trajectories appears to be useful for this purpose. We assume that the times $t_1,t_2,\ldots,t_{2^n}$ at which the trajectory crosses a surface of section $\phi=\phi_0$ (see Sec.~2) are approximately equally spaced, independent of the choice of $\phi_0$. Thus, the phase point ${\bf c}(t)$ moving along $P_{2^n}$ takes approximately the same time $T_{2^n}/2^n$ to traverse each loop of the attractor.\cite{f5} At $t=t_0$ let the phase point of the period-$2^n$ orbit be on the $j_0$-th loop of $P_{2^n}$, at $t=t_0+T_{2^n}/2^n$ on the $j_1$-th loop, and so on (where $j_l\in[1,2^n]$, $l\in[0,2^n-1]$, and $j_l \neq j_m$ for $l \neq m$) until at $t=t_0+T_{2^n}$ the phase point returns to the $j_0$-th loop and the pattern $(j_0,j_1,\ldots,j_{2^n-1})$ repeats. The symbolic string $s_j=(j_0,j_1,\ldots,j_{2^n-1})$ constructed in this way captures the most significant gross features of the oscillation pattern it describes. In this coarse-grained representation the number of possible non-identical trajectories corresponding to a particular $\pi^{(n)}_i$ of $P_{2^n}$ is finite and the different trajectories are simply given by the $2^n$ cyclic permutations of $s_j$. Likewise the time translation operators constitute a finite group ${\cal T}_{l}, \; l\in[-2^{n-1},2^{n-1})$. They act on the symbolic string representing the orbit to give one of its cyclic permutations. From its definition it can easily be seen that $\pi^{(n)}_i$ serves as a symbolic permutation representation of ${\cal T}_{+1}$ for the corresponding $i$-th permutation class of $P_{2^n}$. Indeed, consider as an example a period-4 oscillation whose representative point lies on loop 3 at the reference moment of time $t=t_0$. Then for the pattern of oscillation determined by $\pi^{(2)}_1$ the state reads $s_1=(3241)$. To obtain the new state translated by $T_4/4$ backward, one acts on $s_1$ by the permutation representation $\pi^{(2)}_1$ of the ${\cal T}_{+1}$ operator to get \begin{equation} {\cal T}_{+1}\: s_1= \left(^{{\displaystyle 1234}}_{{\displaystyle 3421}}\right) (3241)=(2413)=s_2, \end{equation} which correctly describes the result of the shift of the initial state $s_1$. \section{ Global organization of medium}\label{glob} \subsection{Period-1 regime} We now return to the spatially distributed medium and begin by reviewing some properties of the local dynamics in the vicinity of a stable defect with topological charge $n_t=\pm 1$ in a period-1 oscillatory medium.
Consider a cyclic path $\Gamma=\{r=r_0>r_{max},\theta\in [0,2\pi)\}$ surrounding the defect. Here $r_{max}$ is a radius such that for all $(r,\theta), r>r_{max},\: \theta\in [0,2\pi)$ the shape of the local orbit in phase space ${\cal P}$ is independent of $(r,\theta)$ and closely approximates that of the period-1 attractor of the mass action rate law (see Sec.~\ref{spiral}). If one starts at an arbitrary point $(r_0,\theta_0)\in \Gamma$ one finds that the instantaneous local phase $\phi({\bf r},t)$ changes by $2\pi$ or $-2\pi$ (depending on the sign of the topological charge) along $\Gamma$. Let us now fix a particular time instant $t=t^*$ and construct the set of points ${\cal S}=\{{\bf c}({\bf r},t^*),r\in\Gamma\}$ as a phase space image of the instantaneous concentrations at points lying on $\Gamma$. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f12.eps} \end{center} \caption{${\cal S}$-curves (shown by diamonds) constructed for $\Gamma$ with $r_0=55$ in period-1 oscillatory ($\kappa_2=1.420$) (a) and chaotic ($\kappa_2=1.567$) (b) media. Solid curves represent short time local trajectories on $\Gamma$.} \label{s-curve} \end{figure} The property of a defect (\ref{charge}) and the continuity of the medium ensure that ${\cal S}$ is a simple closed curve winding once around ${\bf c}^*$. Figure~\ref{s-curve}(a) shows the ${\cal S}$-curve constructed for the contour $\Gamma$ with radius $r_0=55,\, r_0>r_{max}$ in a period-1 oscillatory medium with $\kappa_2=1.420$. Since all the points on the ${\cal S}$-curve lie at the same time on the local trajectories $C({\bf r}|t_0,\tau),\, {\bf r}\in\Gamma$ with $t^*\in [t_0,t_0+\tau]$, and since for $\Gamma$ with $r_0>r_{max}$ all the local trajectories are the same and approximated by the period-1 attractor of the system (\ref{mass}), the ${\cal S}$-curve simply coincides with this attractor for any $t^*$ (cf. Fig.~\ref{s-curve}(a)). The ${\cal S}$-curve constructed for an arbitrary simple closed path encircling the defect in the medium possesses the same property as long as the path lies in the open region $r>r_{max}$. This result can be reformulated in terms of time translations of local trajectories as follows. Let the local trajectory $C(r_0,\theta_0 |t_0,\tau)$ at the point $(r_0,\theta_0)\in\Gamma$ be taken as a reference; then all of the local trajectories on $\Gamma$ can be obtained through the translation of $C(r_0,\theta_0 |t_0,\tau)$ by some time $\delta t(\theta-\theta_0)$ (see Sec.~\ref{topo}). The condition (\ref{charge}) implies that $\delta t(\theta-\theta_0)$ is a monotonically increasing (decreasing) function such that $\delta t(2\pi)=\pm T_1$, where $T_1$ is the period of oscillation and the sign is that of $n_t$. Thus, the oscillation pattern is continuously time-shifted along $\Gamma$ such that upon return to the initial point it has experienced a translation by the full period. \subsection{Period-$2^n$ regime} For $2^n$-periodic and chaotic media, property (\ref{charge}) holds, where $\phi({\bf r},t)$ should be understood as the angle variable introduced in Sec.~\ref{spiral}. This can be seen from the following argument. Take a period-2 medium with rate constants chosen in the vicinity of the bifurcation from period-1 to period-2, such that the attractor $P_2$ of (\ref{mass}) lies infinitesimally close to $P_1$ from which it bifurcated.
Due to the continuity of the solutions of the reaction-diffusion equation (\ref{rds}), the value of $\oint \nabla\phi({\bf r},t)\cdot d{\bf l}$ cannot change abruptly when the bifurcation parameter is changed through the period-doubling bifurcation. This implies that the ${\cal S}$-curve constructed for a contour $\Gamma$ in a period-$2^n$ medium, as in the case of a simple period-1 medium, is a closed curve which loops once around ${\bf c}^*$ in phase space. This is illustrated in panel (b) of Fig.~\ref{s-curve}, which shows ${\cal S}$ for the contour $\Gamma$ with radius $r_0=55$ in a medium with $\kappa_2=1.567$ at time $t=t^*$. Recall again that the points of the ${\cal S}$ curve have to lie on the local trajectories $C({\bf r}|t_0,\tau),\, {\bf r}\in\Gamma$ (cf. Fig.~\ref{circ}, where the points designated by diamonds lie on ${\cal S}$ for the chosen time instant and contour shown in the figure). Since the local trajectories in a period-$2^n$ medium loop several times around ${\bf c}^*$, the curve ${\cal S}$, which winds only once $(n_t=\pm1)$ around ${\bf c}^*$, cannot span the entire local trajectory as is the case for a period-1 medium. As one sees from Fig.~\ref{s-curve}(b), ${\cal S}$ follows the larger loop of the local trajectory, which for $\Gamma$ with $r_0=55$ is typically a period-2 orbit (cf. Fig.~\ref{circ}), and instead of making the second turn on the smaller loop, it crosses the gap between the loops and closes on itself. Although the shape of ${\cal S}$ changes with time (see \cite{prl} for details), for any $t^*$ there exist segments of ${\cal S}$ which connect different loops of the local trajectories. This behavior of the ${\cal S}$ curves would be impossible if loop exchanges did not occur. The analysis shows that the segments of ${\cal S}$ covering the gaps between the loops of the local trajectories are images of points on $\Gamma$ which lie close to the intersection with the $\Omega$ curve. Thus, the loop exchanges observed in period-$2^n$ media are necessary to resolve the contradiction between the one-loop topology of the ${\cal S}$ curves, determined by the presence of a defect, and the multi-loop topology of the local trajectories, determined by the local rate law. The change of the local trajectories along the contour $\Gamma$ in period-$2^n$ media can be considered in terms of time translations if one adopts a generalization of the translation operation in the following way. In a period-2 medium let the contour $\Gamma$ and the reference point $(r_0,\theta_0)\in\Gamma$ be chosen so that $\Gamma$ intersects the $\Omega$ curve at the single point $(r_0,\theta_{\Omega})$, and suppose that these points are sufficiently separated from each other. Since the shapes of the local orbits change significantly along any closed path surrounding a defect (cf. Fig.~\ref{circ}), these trajectories cannot be made to coincide by time translation as this operation is defined in Sec.~\ref{topo}. Nevertheless, the general features of the temporal pattern of the trajectories are preserved (e.g. sharp maxima in the $c_i(t)$ time series), and for two \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f13.eps} \end{center} \caption{Period-2 local concentration time series $c_x({\bf r},t)$ calculated on a cyclic path $\Gamma$ surrounding the defect: (a) series sampled at four consecutive locations separated by $\delta\theta=30^o$; (b) two series sampled at locations chosen symmetrically on either side of the intersection with the $\Omega$ curve.
} \label{gamma} \end{figure} \noindent locations $(r_0,\theta_1)$ and $(r_0,\theta_2)$ one is able to find a time shift $\Delta t(\theta_1,\theta_2)$ such that some measure of the deviation between the trajectories, say, \begin{equation} \label{match} M(\Delta t(\theta_1,\theta_2))= \int_{t_0}^{t_0+\tau} |{\bf c}^{(1)}(t+\Delta t) - {\bf c}^{(2)}(t)|\: dt , \end{equation} is minimized. Choosing the local trajectory $C(r_0,\theta_0|t_0,\tau)$ as a reference and comparing it to all the other local orbits on $\Gamma$, one is able to define the time shift function $\delta t(\theta-\theta_0)\equiv\Delta t(\theta,\theta_0)$. The shift function $\delta t(\theta-\theta_0)$ increases (or decreases) monotonically and almost linearly (see Fig.~\ref{gamma}(a)), with $d(\delta t)/d\theta \approx T_2/(2\cdot 2\pi)$, everywhere on $\Gamma$ except in a small neighborhood of $\theta=\theta_{\Omega}$, where it exhibits a break. Indeed, the loop exchange at $\theta=\theta_{\Omega}$ causes the discontinuity of $\delta t(\theta-\theta_0)$. At $\theta=\theta_{\Omega}$ both loops of the local orbit become equivalent and the oscillation is effectively period-1 with period $T_1=T_2/2$. Since the loops exchange at $\theta=\theta_{\Omega}$, to find the best match (\ref{match}) between local trajectories sampled at points $(r_0,\theta_{\Omega} -\varepsilon)$ and $(r_0,\theta_{\Omega}+\varepsilon)$, one needs to translate one of the trajectories by $\delta t = T_1 + O(\varepsilon)$. This can easily be seen in Fig.~\ref{gamma}(b), which displays two $c_x(t)$ series calculated at spatial points lying at $\theta-\theta_{\Omega}=\pm 10^o$ on either side of $\theta_{\Omega}$ on $\Gamma$. \subsection{ Trajectory transformations along $\Gamma$} The transformation of local trajectories along $\Gamma$ can be imagined to occur as a result of two separate processes. Suppose that everywhere on $\Gamma$ except $\theta=\theta_{\Omega}$ the shape of the local trajectories in ${\cal P}$ is the same and is equivalent to that of $C(r_0,\theta_0|t_0,\tau)$. Then all the other local trajectories $C(r_0,\theta|t_0,\tau), \; \theta\in[0,2\pi), \theta\neq \theta_{\Omega}$ can be found by time translation of $C(r_0,\theta_0|t_0,\tau)$ by $\delta t(\theta-\theta_0)=T_2(\theta-\theta_0)/(2\cdot 2\pi)$. Assume that all the deformations of the phase space portrait of the local trajectory which take place along $\Gamma$, including the exchange of loops, occur at the point $\theta=\theta_{\Omega}$, so that the passage through $\theta_{\Omega}$ shifts the oscillation by $\delta t_p=T_1=T_2/2$. Then the result of the continuous time translation that occurs during $2\pi$ circulation along $\Gamma$ may be described by the action of the ${\cal T}_{n_t}$ operator $(n_t=\pm 1)$, while the result of the loop exchange is described by the operator ${\cal T}_{-n_t}$.~\cite{f9} The total transformation of the local oscillation after a complete cycle over $\Gamma$ is equivalent to the identity transformation, and thus the result is in accord with the continuity of the medium. If one makes the assumption that loop exchange does not occur on some contour $\Gamma$ encircling a defect with $|n_t|=1$, the time shift function $\delta t(\theta-\theta_0)$ becomes monotonic and continuous everywhere on $\Gamma$.
As a result one arrives at the incorrect conclusion that starting from the point $(r_0,\theta_0)$ with the oscillation pattern symbolically represented by the string $s_1$, say $s_1=(12)$, and moving along $\Gamma$ in the clockwise direction one returns to the same point $(r_0,\theta_0+ 2\pi)\equiv (r_0,\theta_0)$ but with the oscillation pattern shifted by $T_2/2$ and given by $s_2=(21)\neq s_1$. Note that this contradiction does not arise in the period-1 oscillatory medium, where circulation over any closed path encircling a defect results in a translation by the entire period, which automatically satisfies the continuity principle. Thus the necessity of loop exchanges in period-$2^n$ ($n>0$) media with a topological defect, demonstrated earlier in this section in terms of ${\cal S}$-curves, is now explained in terms of time translations. The results for the period-2 medium can be generalized to any $n>1$ using the following hypothesis. From the main property of a topological defect (\ref{charge}) it follows that integration of an infinitesimal continuous shift $d(\delta t)$ over any closed path surrounding a defect results in a total shift by $\pm T_{2^n}/2^n$ and can be symbolically described by the ${\cal T}_{n_t}$ operator. Numerical simulations demonstrate the existence of time translation discontinuity points such that the sum of the $\delta t$ jumps across these points amounts to a shift of $\mp T_{2^n}/2^n$, described by the ${\cal T}_{-n_t}$ operator. The locations of these points in the medium can be identified with the $\Omega$ curve, and the origin of the time translation discontinuities with the loop exchange phenomenon. The relation (\ref{prod}) of the Appendix connects translations and loop exchanges and allows one to predict the number and the kind of loop exchanges necessary to perform the required ${\cal T}_{-n_t}$ translation. \subsection{Examples} Consider again the square $80\times80$ array with rate constants corresponding to the chaotic regime ($\kappa_2=1.567$). As period-4 fine structure is the highest level of local organization resolved in the medium, it is sufficient to use the formalism developed above for $P_4$ to describe the local dynamics. The analysis shows that in the bulk of the medium the oscillation is given by the $\pi^{(2)}_1=(^{1234}_{3421})$ pattern.~\cite{f10} Using these data and the results presented in the Appendix, one can easily enumerate all the sequences of exchanges resulting in the ${\cal T}_{+1}$ translation. Indeed, one should expect either the exchange of loops 3 and 4 followed by the exchange of the period-2 bands $(^{1234}_{3412})$, or the period-2 band exchange first, followed by the exchange of loops 1 and 2. Figure~\ref{s-om} is a schematic representation of the medium with a negatively charged $(n_t=-1)$ defect in the center and the $\Omega$ curve displayed. Consider the change of the oscillation pattern along the ray $ABC$ emanating from the defect as the value of $r$ increases (see Fig.~\ref{s-frm} for the cumulative first-return map constructed for this ray). The pattern of oscillation $s_A=(4132)$ corresponding to the permutation $\pi^{(2)}_1=(^{1234}_{3421})$ can be followed from $r=5$ to $r=19$, where the period-2 bands undergo exchange. This results in the switch to the oscillation pattern described by $\pi^{(2)}_2=(^{1234}_{4312})$ seen at $r=21$. The pattern $\pi^{(2)}_1$ is restored after loops 1 and 2 exchange at $r=22$, and this pattern persists until another exchange occurs at $r=28$.
\begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f14.eps} \end{center} \caption{Sketch of the $\Omega$ curve for the square array ($\kappa_2=1.567,\;L=80$). The points were obtained from simulations. The ray $ABC$ intersects $\Omega$ at locations with radii 20 and 31.} \label{s-om} \end{figure} Using the translation operator ${\cal T}_{+1}$, one can express the transition of the state $s_A$ ($r<20$) through the sequence of loop exchanges described above to the state $s_B=(1324)$ (for $22<r<28$) as $s_B={\cal T}_{+1} s_A$. The same shift can be achieved by continuous translation along the path $ADEFB$ which does not intersect $\Omega$ but winds once counter-clockwise around the defect. The $c_x(t)$ time series at points $A,D,E,F$ and $B$ are displayed in Fig.~\ref{abc} and demonstrate that this is the case. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f15.eps} \end{center} \caption{Concentration time series $c_x({\bf r},t)$ calculated at points $A,D,E,F,B$ of the square array shown in Fig.~\ref{s-om}. } \label{abc} \end{figure} Continuing to advance along the ray $ABC$, one finds that at $r=28$ loops 3 and 4 exchange and the oscillation switches once more to the state corresponding to $\pi^{(2)}_2$. After the period-2 band exchange at $r=32$ the pattern corresponding to $\pi^{(2)}_1$ is reinstated and remains unchanged for all $r>32$. Again the oscillation at $r>32$, described symbolically by $s_C=(3241)$, appears to be shifted by $T_4/4$ relative to $s_B$ and by $T_4/2$ relative to $s_A$. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f16.eps} \end{center} \caption{ Segment of the concentration time series $c_x(r,t)$ calculated for the disk-shaped array ($\kappa_2=1.567,\;R=80$) at $r=76$ showing the $T_4/4$ time shift of the oscillation pattern (see explanation in the text).} \label{p4ex} \end{figure} The existence of a $T_4/4$ shift after crossing $\Omega$ can also be seen from the results for the disk-shaped array with $R=80$. Figure~\ref{p4ex} shows a segment of the $c_x({\bf r},t)$ time series sampled in a fixed frame $(r,\theta)$ at $r=76$. In this coordinate system $\Omega$ slowly rotates clockwise (again $n_t=-1$) with period $T_{ex}$. Two time windows, each of length $T_4$, marked by dotted lines and separated by $\Delta t = 8T_4$, allow one to see how the oscillation state (4132) is replaced by its forward $T_4/4$ translation (2413) after the $\Omega$ curve passes the observation point at $t = t_{ex}$. \section{Conclusions} \label{conc} General principles underlie the organization of $2^n$-periodic or chaotic media supporting spiral waves. As in simple oscillatory media, the core of a spiral is a topological defect which acts as an organizing center determining the dynamics in its vicinity; however, the structural organization of the medium that arises from the existence of the defect is far more complicated. Due to the absence of a conventional definition of phase for oscillations more complex than period-1, the identification of a defect in terms of the relation (\ref{charge}) is not obvious and requires the introduction of (often model-dependent) phase substitutes, which for some systems may be provided by angle variables. Despite the complications with the definition of phase, one can identify a defect in terms of local trajectories.
Indeed, as one moves away from the defect the local dynamics takes the form of a progression of period-doubled orbits, from near-harmonic, small-amplitude period-1 orbits to ``noisy'' period-$2^l$ orbits, where $l$ depends on variables such as the diffusive coupling and the system size and shape. The presence of a defect imposes topological constraints on the global organization of the medium as well. As was shown above, when $2^n>n_t$ the $2^n$-loop structure of the local trajectories conflicts with the period-$n_t$ structure of the ${\cal S}$-curve, and a complex, asymmetric spatial pattern of local dynamics, the defect-organized field, arises as a result of the necessity to maintain the continuity of the medium. The most prominent characteristic feature of this field is the $\Omega$ curve, defined as the set of points where the local dynamics most closely resembles period-1; this signals the exchange of the period-2 bands. If the local trajectories possess structure finer than period-2, other loop exchanges leading to more subtle changes in the local orbits can be found in the vicinity of $\Omega$. The net result of these exchanges is to produce a time shift of the trajectories which compensates for the smooth time translation accumulated on continuous paths. Since topological continuity must be observed on any arbitrarily large closed path encircling a defect, and such a contour therefore has a point of intersection with $\Omega$, the influence of a single defect in a period-$2^n$ $(n>0)$ medium cannot be localized. We point out again that many of the phenomena we have discussed above are not dependent on the existence of a period-doubling cascade or chaotic local dynamics, although this is the case we have analysed in detail. Reaction-diffusion systems with local complex periodic orbits in phase space dimensions higher than two should exhibit similar features when they support spiral waves. It should be possible to experimentally probe the phenomena described in this paper. The appropriate parameter regime can be determined from investigations of well-stirred systems. For example, period-doubling and chaotic attractors have been observed in the Belousov-Zhabotinsky reaction. \cite{BZchaos} If the spiral wave dynamics is then studied in a continuously-fed-unstirred reactor \cite{cfur}, one should be able to observe the characteristics of the spiral dynamics and the loop exchange process that serve as signatures of the phenomena described above. \section{Acknowledgements} We thank Peter Strizhak for his interest in this work and helpful comments. This work was supported in part by a grant from the Natural Sciences and Engineering Research Council of Canada and by a Killam Research Fellowship (R.K.). \begin{appendix} \section{Braid moves and loop exchange operators} In this appendix we make use of the projection of the period-doubled attractors $P_{2^n}$ onto closed braids ${\bar B}_{2^n}$ (see Sec.~\ref{topo}) to show how loop exchanges affect the pattern of oscillation. We demonstrate that those combinations of loop exchanges that produce identity transformations of $P_{2^n}$ result in nontrivial time translations of trajectories. Each closed braid ${\bar B}_{2^n}$ is represented by a set of non-identical braid words, whose number grows rapidly with $n$. Without violating the topology of ${\bar B}_{2^n}$, they can be transformed into one another by the following set of moves (see, e.g.
\cite{bir}) : \begin{enumerate} \item commutation relation, $\sigma_i\sigma_j = \sigma_j\sigma_i, \;\; |i-j| \geq 2$; \item type 2 Reidemeister move, $\sigma_i\sigma_i^{-1} = \sigma_i^{-1}\sigma_i = {\bf 1}$; \item type 3 Reidemeister move, $\sigma_i\sigma_{i+1}\sigma_i = \sigma_{i+1}\sigma_i\sigma_{i+1}$; \item first Markov move, $\sigma_i\Sigma\sigma_i^{-1} = \sigma_i^{-1}\Sigma\sigma_i = \Sigma, \; \Sigma \in B$; \end{enumerate} where $B$ is a set of open braids. While the first three rules are common to all braids, rule 4 is specific to closed braids. Indeed, it can be written in the form $\sigma_i\Sigma = \Sigma\sigma_i$ which, for elementary braids $\sigma_i$, corresponds to moving $\sigma_i$ around ${\bar B}_{2^n}$, resulting in the exchange of the closed-braid loops (cf. Fig.~\ref{brth}(d)). Type 1 Reidemeister (or second Markov) moves are not allowed since they do not preserve the number of loops, an essential feature of $P_{2^n}$ \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f17.eps} \end{center} \caption{Conventional designations and basic braid moves: (a) definition of elementary braids $\sigma_i$ and $\sigma_i^{-1}$; (b) type 2 Reidemeister move; (c) type 3 Reidemeister move, and (d) first Markov move ($\Sigma$ represents an arbitrary braid). } \label{brth} \end{figure} \noindent attractors. While rules 1, 2 and 3 do not affect $\pi^{(n)}_i$, the first Markov move does (except for the degenerate case of $P_2$, which is represented by the single permutation $\pi^{(1)}_1$). Thus any number of rearrangements affecting only the braid $B_{2^n}$ of the closed braid ${\bar B}_{2^n}$ leaves the braid word in the same permutation class $\pi^{(n)}_i$, while each application of the first Markov move yields a new permutation class. \subsection{Loop exchanges for $P_2$ and $P_4$} We now examine how the loop exchanges influence the patterns of oscillation for the period-2 and period-4 attractors. For $P_2$ one has only the single braid word $\sigma_1$ (or $\sigma_1^{-1}$) and the single permutation $\pi^{(1)}_1 = (^{12}_{21})$ induced by $\sigma_1$. Two different symbolic states $s_1 = (12)$ and $s_2 = (21)$ are possible for the period-2 oscillation with respect to some fixed time frame. We introduce an operator $A^{(1)}_1$ whose action on the closed braid representing $P_2$ is to move $\sigma_1$ by $2\pi$ in a direction opposite to the flow. The result of the action of this operator, which is the first Markov move for $\Sigma = {\bf 1}$, is to leave the attractor $P_2$ unchanged; however, one finds that loops 1 and 2 have exchanged their locations in phase space. In the time series for the dynamical variable $c_i(t)$ the exchange can be seen as a substitution of taller maxima (2) by shorter maxima (1) and vice-versa. If this process is followed in time it produces the characteristic pattern shown in Fig.~\ref{p2ex}. Thus, application of $A^{(1)}_1$ to $P_2$ induces a transformation of the oscillation state $s_1$ into $s_2$ and vice-versa. This can be symbolically described as the action of an exchange operator ${\cal A}^{(1)}_1$ represented by the permutation $(^{12}_{21})$ : \begin{eqnarray} {\cal A}^{(1)}_1\: s_1 & = & \left(^{{\displaystyle 12}}_{{\displaystyle 21}}\right) (12) = (21) = s_2, \nonumber \\ {\cal A}^{(1)}_1\: s_2 & = & \left(^{{\displaystyle 12}}_{{\displaystyle 21}}\right)(21) = (12) = s_1. \end{eqnarray} One sees that the action of ${\cal A}^{(1)}_1$ is equivalent to that of ${\cal T}_{+1}$, which translates the oscillation pattern by half a period.
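The action of ${\cal A}^{(1)}_1$ is simple enough to check mechanically. The following minimal sketch (our illustration, assuming Python; states are tuples of loop labels and operators are permutations applied symbol-wise) verifies the relations above:
\begin{verbatim}
# Sketch (illustrative only): coarse-grained states of P_2 and the
# exchange operator A^(1)_1 represented as a permutation.

def apply_perm(p, s):
    # apply permutation p (tuple of 1-based images) symbol-wise to state s
    return tuple(p[c - 1] for c in s)

A11 = (2, 1)                      # exchange operator A^(1)_1 : (12 -> 21)
s1, s2 = (1, 2), (2, 1)           # the two coarse-grained period-2 states

assert apply_perm(A11, s1) == s2  # A^(1)_1 s_1 = s_2 (= T_{+1} s_1)
assert apply_perm(A11, s2) == s1  # A^(1)_1 s_2 = s_1
assert apply_perm(A11, apply_perm(A11, s1)) == s1   # (A^(1)_1)^2 = 1
\end{verbatim}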
The inverse of the braid operator $A^{(1)}_1$ can be introduced in an analogous way as an operator moving $\sigma_1$ {\it along} the direction of the flow. It corresponds to an exchange operator $({\cal A}^{(1)}_1)^{-1} = {\cal T}_{-1}$ acting on strings. Since for $P_2$ application of ${\cal T}_{+1}$ or ${\cal T}_{-1}$ results in essentially the same states, the sign of the shift is chosen to maintain consistency with the corresponding operators for $P_{2^n}$, $n>1$. Double application of ${\cal A}^{(1)}_1$ results in a translation by a full period and thus in the identity operator \begin{equation} ({\cal A}^{(1)}_1)^2 = ({\cal A}^{(1)}_1)^{-2} = {\bf 1}. \end{equation} The $P_4$ attractor possesses a richer set of transformations. Since under coarse-graining braid words which induce the same $\pi^{(n)}_i$ are indistinguishable, we can single out two essential representatives $\Sigma_1 = \sigma_3\Sigma$ for $\pi^{(2)}_1=(^{1234}_{3421})$ and $\Sigma_2 = \sigma_1\Sigma$ for $\pi^{(2)}_2= (^{1234}_{4312})$, where $\Sigma$ stands for $\sigma_2\sigma_1\sigma_3\sigma_2$. Let $A^{(2)}_1$ be an operator on ${\bar B_4}$ which moves the double-thread crossing (cf. large dashed box in Fig.~\ref{p-4toB}) $\Sigma$ by $2\pi$ in the direction opposite to the flow, analogous to the action of $A^{(1)}_1$ on $\sigma_1$ for ${\bar B_2}$. In fact, the action of $A^{(2)}_1$ can be seen as an exchange of the period-2 bands of $P_4$, each consisting of two period-4 loops. Unlike $A^{(1)}_1$, the operator $A^{(2)}_1$ alternates braid words and the corresponding pattern-defining permutations $\Sigma_1,\pi^{(2)}_1 \stackrel{A^{(2)}_1}{\longleftrightarrow} \Sigma_2,\pi^{(2)}_2$. The application of $A^{(2)}_1$ induces transformations of the symbolic strings $s_j$ which again can be described by the action of an exchange operator ${\cal A}^{(2)}_1$ represented by the permutation $(^{1234}_{3412})$. Due to the apparent similarity of the action of $A^{(1)}_1$ and $A^{(2)}_1$ on the corresponding attractors, ${\cal A}^{(2)}_1$ inherits the algebraic properties of ${\cal A}^{(1)}_1$. Indeed, ${\cal A}^{(2)}_1$ produces the identity operator when applied twice and, thus, is equal to its inverse. Finer rearrangement of the $P_4$ loop structure is provided by the action of $A^{(2)}_2$, defined as an operator which moves the single crossing (enclosed in the smaller box in Fig.~\ref{p-4toB}) by $2\pi$ in the direction opposite to the flow. From the structure of ${\bar B_4}$ one sees that after application of $A^{(2)}_2$ the single crossing does not return to the same location in ${\cal P}$ but appears on the other period-2 band; thus, braid words and $\pi^{(n)}_i$ permutations alternate. Depending on the initial state, the application of $A^{(2)}_2$ results in different loop exchanges. In the case of $\Sigma_1 = \sigma_3\Sigma$, the action of $A^{(2)}_2$ leads to the exchange of loops $3$ and $4$ and results in the string transformation described by the exchange operator ${\cal A}^{(2)}_2$ with symbolic representation $(^{1234}_{1243})$. When it acts on $\Sigma_2 = \sigma_1\Sigma$ it exchanges loops 1 and 2, and the permutation representation of ${\cal A}^{(2)}_2$ changes to $(^{1234}_{2134})$. The inverse of $A^{(2)}_2$ moves the single crossing along the flow and produces opposite results; i.e., acting on $\Sigma_1$ it leads to the $1 \leftrightarrow 2$ exchange, and when applied to $\Sigma_2$ it results in the exchange $3 \leftrightarrow 4$.
Note the difference between the action of $A^{(n)}_i$ on braids and the action of the exchange operators ${\cal A}^{(n)}_i$ on symbolic strings. While several operations $A^{(n)}_i$ applied to the same initial braid word lead to equivalent final words (e.g. $A^{(2)}_1$ and $A^{(2)}_2$), the resulting loop exchanges and, thus, their permutation descriptions can be quite different ($(^{1234}_{3412})$ and $(^{1234}_{1243})$ for the example chosen). Consequently, compositions of braid operations returning the braid ${\bar B}_{2^n}$ to its initial state (e.g. $A^{(2)}_2\circ A^{(2)}_2$) may induce nontrivial translations of $s_j$. To demonstrate this, let $({\cal A}^{(2)}_2)^2$ act on the trial state $s_1 = (3241)$. Applying the rules, one obtains \begin{equation} ({\cal A}^{(2)}_2)^2 \: s_1= \left(^{{\displaystyle 1234}}_{{\displaystyle 2134}}\right) \left(^{{\displaystyle 1234}}_{{\displaystyle 1243}}\right) (3241)=(4132)={\cal T}_{+2}\:s_1, \end{equation} thus relating the simultaneous loop exchange $(13)\leftrightarrow (24)$ with a translation of the period-4 oscillation by half a period. This implies as well that application of ${\cal A}^{(2)}_2$ four times results in the identity string transformation $ ({\cal A}^{(2)}_2)^4={\cal T}_{+4}={\bf 1}$ and, therefore, $({\cal A}^{(2)}_2)^{-1}=({\cal A}^{(2)}_2)^3$. Compositions of the braid operators $A^{(2)}_1$ and $A^{(2)}_2$ provide another example of how identity braid operators induce nontrivial string transformations. Since both operators and their inverses alternate the braid words $\Sigma_1 \leftrightarrow \Sigma_2$, the application of the composition of any two of them returns the braid to the same $\pi^{(2)}_i$ permutation class and, thus, the resulting string transformation is equivalent to some translation. The relations for the compositions of the ${\cal A}^{(2)}_1$ and ${\cal A}^{(2)}_2$ operators can be obtained directly from their symbolic representations: \begin{eqnarray} \label{exch} {\cal A}^{(2)}_1 &\circ& {\cal A}^{(2)}_2 = \left(^{{\displaystyle 1234}}_{{\displaystyle 1243}}\right) \left(^{{\displaystyle 1234}}_{{\displaystyle 3412}}\right) = {\cal A}^{(2)}_2\circ {\cal A}^{(2)}_1 \nonumber \\ & & = \left(^{{\displaystyle 1234}}_{{\displaystyle 3412}}\right) \left(^{{\displaystyle 1234}}_{{\displaystyle 2134}}\right) = \left(^{{\displaystyle 1234}}_{{\displaystyle 3421}}\right) = \pi^{(2)}_1 = {\cal T}_{+1} \; , \\ {\cal A}^{(2)}_1 &\circ& ({\cal A}^{(2)}_2)^{-1} = \left(^{{\displaystyle 1234}}_{{\displaystyle 2134}}\right) \left(^{{\displaystyle 1234}}_{{\displaystyle 3412}}\right)= ({\cal A}^{(2)}_2)^{-1}\circ {\cal A}^{(2)}_1 \nonumber \\ & & = \left(^{{\displaystyle 1234}}_{{\displaystyle 3412}}\right) \left(^{{\displaystyle 1234}}_{{\displaystyle 1243}}\right)= \left(^{{\displaystyle 1234}}_{{\displaystyle 4312}}\right)= \pi^{(2)}_2 = {\cal T}_{-1} \; . \nonumber \end{eqnarray} These relations are constructed using the assumption that the initial state of the braid is $\Sigma_1$. Although application to an alternative initial condition changes the actual permutation representations of the exchange operators, it yields algebraically equivalent results. From (\ref{exch}) one sees that all the exchange operators commute, and their compositions provide operators which translate the oscillation by all the allowed multiples of $T_4/4$.
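The same bookkeeping verifies Eq.~(\ref{exch}) mechanically. The sketch below (our illustration in Python, not part of the original analysis; the two state-dependent representations of ${\cal A}^{(2)}_2$ are written out explicitly, and ${\cal T}_{+k}$ is a cyclic shift of the coarse-grained state) confirms that $({\cal A}^{(2)}_2)^2={\cal T}_{+2}$ on the trial state and that both orderings of ${\cal A}^{(2)}_1$ and ${\cal A}^{(2)}_2$ compose to ${\cal T}_{+1}=\pi^{(2)}_1$:
\begin{verbatim}
def apply_perm(p, s):
    # apply permutation p (tuple of 1-based images) symbol-wise to state s
    return tuple(p[c - 1] for c in s)

def compose(p, q):
    # composite permutation "p then q" (the leftmost factor acts first)
    return tuple(q[p[i] - 1] for i in range(len(p)))

def T(k, s):
    # translation by k*T_4/4: cyclic shift of the coarse-grained state
    k %= len(s)
    return s[k:] + s[:k]

pi1      = (3, 4, 2, 1)   # pi^(2)_1 : (1234 -> 3421)
A1       = (3, 4, 1, 2)   # A^(2)_1, period-2 band exchange: (1234 -> 3412)
A2_on_S1 = (1, 2, 4, 3)   # A^(2)_2 acting on braid word Sigma_1: 3 <-> 4
A2_on_S2 = (2, 1, 3, 4)   # A^(2)_2 acting on braid word Sigma_2: 1 <-> 2

s1 = (3, 2, 4, 1)         # trial state s_1 = (3241)

# (A^(2)_2)^2: exchange (13) <-> (24), a shift by half the period
assert apply_perm(A2_on_S2, apply_perm(A2_on_S1, s1)) == T(2, s1)

# both orderings of A^(2)_1 and A^(2)_2 compose to T_{+1} = pi^(2)_1
assert compose(A2_on_S1, A1) == pi1   # A2 first (on Sigma_1), then A1
assert compose(A1, A2_on_S2) == pi1   # A1 first, then A2 (on Sigma_2)
assert apply_perm(pi1, s1) == T(1, s1) == (2, 4, 1, 3)
\end{verbatim}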
\subsection{Loop exchanges for $P_{2^n}$ attractors} A generalization of the phenomena discussed above to arbitrary $n$ may be inferred from the structural organization of the closed braids ${\bar B_{2^n}}$ corresponding to the period-doubled attractors $P_{2^n}$. Indeed, ${\bar B_{2^{n+1}}}$ can be obtained from ${\bar B_{2^n}}$ by doubling each thread of ${\bar B_{2^n}}$ and adding a single crossing on top to preserve the simple connectivity of the construction. The braid ${\bar B_{2^n}}$ arising as a result of $n$ successive iterations of this procedure can be subdivided into $n$ non-overlapping, structurally similar blocks of braids $\Sigma^{(n)}_m, \; m=\overline{1,n}$. This principle of structural organization is illustrated in Fig.~\ref{p-8}, which represents $B_8$ and its three crossing blocks, shown in a series of boxes of decreasing size. The analysis shows that these blocks can be moved as whole entities along ${\bar B_{2^n}}$ without interference from each other, resulting in the exchange of the loops along which they move. The essential parts of these moves can be represented by a set $A^{(n)}_m$ of $2\pi$ movements of the structural blocks $\Sigma^{(n)}_m$, so that $A^{(n)}_1$ corresponds to the largest block and results in an exchange involving all the $2^n$ loops, $A^{(n)}_2$ corresponds to the movement of the next-smaller braid block and results in the exchange of $2^{n-1}$ loops, and so on. The transformations of time trajectories resulting from exchanges of loops can again be described by the action of permutation operators ${\cal A}^{(n)}_m$ on the symbolic strings $s_j$. The fact that the crossing blocks move independently results in the commutativity of the operators ${\cal A}^{(n)}_m$ with each other. The geometry of ${\bar B_{2^n}}$ also defines the basic algebraic property of ${\cal A}^{(n)}_m$ demonstrated above for the $n=1,2$ examples \begin{equation} ({\cal A}^{(n)}_m)^{2^m} = {\bf 1}, \; m\in[1,n] . \end{equation} Some compositions of the exchange operators yield translation operators ${\cal T}_{l}$, where $l\in[-2^{n-1},2^{n-1})$. For the discussion of the phenomena described in Sec.~\ref{local}, only the operator ${\cal T}_{+1}$ and its inverse are of particular interest. \begin{figure}[htbp] \begin{center} \leavevmode \epsffile{f18.eps} \end{center} \caption{Braid $B_8$ constructed for the $P_8$ attractor. } \label{p-8} \end{figure} Using induction from the analysis of cases with small $n$, one may infer the general expression for the ${\cal T}_{+1}$ translation operator: \begin{equation} \label{prod} {\cal T}_{+1} = \prod_{m=1}^n {\cal A}^{(n)}_m \;. \end{equation} \end{appendix}
\section{Introduction} Globular cluster spectra provide an excellent means to study the kinematics and metallicities of old stellar populations, and to probe the mass distributions of their parent galaxies (see Brodie 1993 and Zepf 1995 for recent reviews). Globular clusters and planetary nebulae (PNe) bridge the gap between the inner regions of ellipticals and early-type spirals, where masses and M/L ratios can be determined from rotation curve work or integrated light techniques, and the outermost regions, which can be studied via X-ray emission from hot gas. In our Galaxy, it is well-known that two globular cluster populations exist--a rapidly rotating, metal-rich disk, and a slowly rotating, metal-poor halo (Zinn 1985; Armandroff \& Zinn 1988; Armandroff 1989). The same situation appears to hold in M31, though this is more controversial (Huchra 1993; Ashman \& Bird 1993). In M33, there is no significant rotation in the old, metal-poor clusters (Schommer et al. 1991). To date, M81 is the only late-type galaxy outside the Local Group with measured cluster velocities. Perelmuter, Brodie \& Huchra (1995) obtained velocities for 25 clusters out to 20 kpc from the galaxy center, and found a mass of $\sim$ 3 $\times$ 10$^{11}$ M$_\odot$ for M81 within the same radius using the Projected Mass Estimator (PME). Their data ``cannot be used to demonstrate rotation'' in the cluster system, though the cluster velocities are consistent with the HI rotation curve. There exists a handful of ellipticals with measured cluster velocities: NGC 5128 (87: Harris, Harris \& Hesser 1988), NGC 1399 (47: Grillmair et al. 1994), M87 (44), and M49 (26) (Mould et al. 1987, 1990). For M87 and M49, the cluster system has a rotation of $\sim$ 200 km/sec (for {\it all} clusters combined--no separation has been done by colour/metallicity). There is {\it no} rotation seen in the metal-poor clusters in NGC 5128 and NGC 1399, while there is significant rotation seen in their PNe at a level of 100--300 km/sec (Hui et al. 1995, Arnaboldi et al. 1994). In NGC 5128, the {\it metal-rich} cluster population identified by Zepf \& Ashman (1993) is observed to rotate like the PNe. Such kinematical differences between clusters and PNe are extremely provocative, and we will return to this point later. M104 (NGC 4594; the Sombrero) has been well-studied spectroscopically in its inner regions (see Table 1 for basic properties of M104). Faber et al. (1977) and Schweizer (1978) derived major axis rotation curves from ionized gas, stellar absorption lines and 21cm emission. Schweizer found a rotation velocity of $\sim$ 300--350 km/sec between 5--15 kpc, and a total mass of 3.3 $\times$ 10$^{11}$ M$_\odot$ and an M/L$_B$ of 4.9 $\pm$ 1.2 within 15 kpc. These values are all in reasonable agreement with those found by Faber et al. Kormendy \& Illingworth (1982) obtained the velocity and velocity dispersion profiles along the M104 major axis from optical absorption lines, finding that V flattens out at $\sim$ 250 km/sec between 40--120$^{\prime \prime}$ from the galaxy center, while $\sigma$ flattens out to $\sim$ 100 km/sec at the same distance. These values are in good agreement with those found by Kormendy (1988), Jarvis \& Dubath (1988), and Hes \& Peletier (1993), although these authors were more interested in the behavior very near the galaxy center. Most authors have not found any rotation around the minor axis, with the exception of Hes \& Peletier who did find some minor axis rotation.
Kormendy \& Westpfahl (1989) showed that 2 $\leq$ M/L$_V$ $\leq$ 4 between 0.5 $\leq$ r $\leq$ 180$^{\prime \prime}$. Hes \& Peletier found absorption-line strength gradients along the major axis, and concluded on the basis of these gradients and the central kinematics that the M104 bulge is similar in many respects to a giant elliptical. Jarvis \& Freeman (1985) constructed models which showed that the surface brightness profiles and kinematics are consistent with the M104 bulge being an isotropic oblate spheroid, flattened mostly by rotation, {\it unlike} the case for ellipticals of the same luminosity. Jarvis \& Freeman found an M/L$_V$ = 3.6 within $\sim$ 80$^{\prime \prime}$, and a disk/bulge ratio of $\sim$ 0.25 for M104. Finally, Burkhead (1986) has carried out a major photometric study and decomposition of M104. Previous photometric studies of the M104 globular clusters include Wakamatsu (1977), Harris et al. (1984), and Bridges \& Hanes (1992). The last two studies found a cluster specific frequency S$_N$ of 2--3, while Bridges \& Hanes estimated the clusters to have [Fe/H] $\simeq$ $-$0.8 from B$-$V colours, a metallicity higher than that found for the clusters in the Milky Way and other nearby spirals. Globular cluster candidates for spectroscopic analysis were selected in two ways. First, the COSMOS team carried out astrometry and crude photometry on `V' (AAT 1853, IIaD + GG485) and `B' (AAT 1859, IIIaJ + GG385) AAT plates, giving internal positions for thousands of objects. The HST Guide Star Catalogue was then used to identify a set of several dozen stars which were picked by COSMOS, allowing a transformation to absolute coordinates to better than 0.2$^{\prime \prime}$ rms. COSMOS had trouble detecting and measuring objects close to the galaxy bulge, but we also had B,V CCD photometry covering three small patches of the M104 bulge (Bridges \& Hanes 1992). These photometrically calibrated data then allowed us to crudely calibrate the COSMOS data by magnitude and colour, and to establish the astrometric scales in the CCD data so that we could add a few near-central objects to our target lists. Final object selection was made on the basis of stellar appearance, and by cuts in magnitude and colour. In all, 103 candidates in 3 fields were obtained, extending in radius between 0.8 and 6.2 arcmin from the galaxy center. The LEXT package was used to produce input x,y files, which were then used to produce the punched masks. A slit width of 1.5$^{\prime \prime}$ was used, and the slit length varied between 10--60 arcsec. \section{Data} \subsection{Observations and Data Reduction} Spectra for 76 cluster candidates in two of these fields were obtained with the Low Dispersion Survey Spectrograph (LDSS-2) in April 1994; see Table 2 for further details of the observing. Dome and twilight flats were taken at the beginning and end of each night, and CuAr arcs were taken throughout each night. Finally, long-slit spectra of M13, M92, NGC 6356, and radial velocity standard stars were taken for velocity and metallicity calibration. Except where indicated, the LEXT package was used for all data reduction. First, the MAKEFF task was used to produce a domeflat of mean unity, which was then divided into all of the program frames.
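(As an aside, the flat-fielding operation just described is straightforward; the following numpy sketch is our illustration of it, not the LEXT MAKEFF code itself.)
\begin{verbatim}
import numpy as np

def flatfield(frames, domeflat):
    # normalize the dome flat to unit mean, then divide it into each frame
    flat = domeflat / domeflat.mean()
    return [frame / flat for frame in frames]

# toy usage with synthetic 2-D frames
rng = np.random.default_rng(0)
dome = 1000.0 + 50.0 * rng.standard_normal((64, 64))
science = [500.0 + 20.0 * rng.standard_normal((64, 64)) for _ in range(3)]
corrected = flatfield(science, dome)
\end{verbatim}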
After checking to ensure that there were no spatial or spectral shifts between the program frames, the LCCDSTACK program (kindly supplied by Karl Glazebrook) was used to combine the program frames for each mask in an optimal way; all cosmic rays were removed during the stacking. Wavelength calibration was then done using the ARC task on the CuAr exposures; a third-order polynomial was found to give a satisfactory fit with residuals typically $\leq$ 0.1 $\AA$. Finally, the spectra were optimally extracted and sky subtracted, generally with linear fits for the background sky. In total, 71 spectra were extracted, though many are of low S/N. Figure 1 shows representative spectra ranging from low to high S/N for three confirmed globular clusters. \begin{table} \caption{Basic and Derived Information about M104.} \begin{tabular}{ccc} \hline\hline Quantity & Value & Reference \\ \hline RA (1950) & 12 37 22.80 & 1 \\ Dec (1950) & $-$11 21 00.0 & 1 \\ V$_{hel}$ & 1091 km/sec & 1 \\ Hubble Type & Sa$^+$/Sb$^-$ & 2 \\ M$_B$ & $-$20.4 & 3 \\ Adopted Distance & 8.55 Mpc & 4 \\ Specific Frequency & 2 $\pm$ 1 & 5 \\ Cluster [Fe/H] & $-$0.7 $\pm$ 0.3 & 6 \\ M/L$_{B_T}$ & 22$^{+7.5}_{-6.5}$ & 6 \\ \hline \end{tabular} \bigskip \noindent References \bigskip (1)~~NASA/IPAC Extragalactic Database (NED) \\ (2)~~Sandage \& Tammann (1981) \\ (3)~~Burkhead (1986), taking D=8.55 Mpc \\ (4)~~Ciardullo, Jacoby, \& Tonry (1993) \\ (5)~~Bridges \& Hanes (1992) \\ (6)~~This work \end{table} \begin{table} \caption{Observing Log} \begin{tabular}{cc} \hline\hline Dates & April 11-14 1994 \\ Telescope/Instrument & 4.2m WHT/LDSS-2 \\ Dispersion (Resolution) & 2.4 $\AA$/pixel (6 $\AA$ FWHM) \\ Detector & 1024$^2$ TEK CCD \\ Wavelength Coverage & 3800--5000 $\AA$ \\ Seeing & 1--2$^{\prime \prime}$ \\ Exposure time (Mask \#1/\#2) & 4.5/2.5 hr \\ Mean Airmass (Mask \#1/\#2) & 1.4/1.5 \\ Number Objects (Mask \#1/\#2) & 44/32 \\ \hline \end{tabular} \end{table} \subsection{Radial Velocities and Confirmed Globular Clusters} The extracted spectra were first scrunched onto a log wavelength scale and then cross-correlated with template spectra of M13, M92, NGC 6356 and HD172, using the IRAF FXCOR task. By experimentation, we found that cross-correlations with peak heights less than 0.1 were not reliable, and hence were not used. We further demanded that bona-fide globular clusters have two or more reliable cross-correlations. Table 3 shows our final velocities for the 71 extracted spectra in the two fields. The velocity shown in Table 3 is the mean (weighted by the cross-correlation peak height) for the four templates; `??' means that no spectrum was extracted and `**' signifies that no reliable cross-correlation could be obtained. \begin{table} \caption{Velocities of Globular Cluster Candidates in M104. Successive columns give Id \#, Ra, Dec, Velocity, and Velocity Error, for 44 cluster candidates in Field \#1. A `??' means that no spectrum could be extracted, and a `**' means that no reliable cross-correlation could be obtained. There are 34 confirmed globular clusters (see last column).} \begin{tabular}{lllllc} \hline\hline Id & Ra & Dec & V$_{hel}$ & Error \\ & (1950) & (1950) & (km/s) & (km/s) & Cluster?
\\ \hline 1$-$1 & 12 37 40.460 & -11 21 33.55 & 212 & 18 & N \\ 1$-$2 & 12 37 34.548 & -11 20 18.29 & 776 & 11 & Y \\ 1$-$3 & 12 37 35.516 & -11 21 47.40 & 1369 & 129 & Y \\ 1$-$4 & 12 37 33.716 & -11 18 53.53 & 1152 & 40 & Y \\ 1$-$5 & 12 37 34.864 & -11 19 27.65 & 755 & 61 & Y\\ 1$-$6 & 12 37 26.784 & -11 18 10.02 & 109 & 20 & N \\ 1$-$7 & 12 37 28.576 & -11 19 8.93 & ** & ** & N \\ 1$-$8 & 12 37 27.544 & -11 16 14.52 & ** & ** & N \\ 1$-$9 & 12 37 28.896 & -11 17 36.80 & 194 & 29 & N \\ 1$-$10 & 12 37 36.140 & -11 24 20.96 & -51 & 21 & N \\ 1$-$11 & 12 37 28.848 & -11 19 54.47 & 1457 & 12 & Y \\ 1$-$12 & 12 37 37.988 & -11 22 11.38 & 1370 & 32 & Y \\ 1$-$13 & 12 37 36.624 & -11 18 30.11 & 219 & 78 & N \\ 1$-$14 & 12 37 33.112 & -11 17 0.81 & ** & ** & N \\ 1$-$15 & 12 37 35.600 & -11 25 29.50 & 1035 & 24 & Y \\ 1$-$16 & 12 37 39.288 & -11 21 7.67 & 1256 & 18 & Y \\ 1$-$17 & 12 37 35.336 & -11 22 26.42 & ** & ** & N \\ 1$-$18 & 12 37 12.224 & -11 24 59.35 & ** & ** & N \\ 1$-$19 & 12 37 20.336 & -11 21 58.89 & 573 & 21 & Y \\ 1$-$20 & 12 37 5.104 & -11 19 40.84 & 1186 & 51 & Y \\ 1$-$21 & 12 37 35.580 & -11 16 37.59 & ** & ** & N \\ 1$-$22 & 12 37 31.244 & -11 22 56.33 & 131 & 43 & N \\ 1$-$23 & 12 37 41.120 & -11 25 3.76 & ** & ** & N \\ 1$-$24 & 12 37 24.920 & -11 23 12.60 & ?? & ?? & N \\ 1$-$25 & 12 37 7.988 & -11 21 2.23 & 1231 & 26 & Y \\ 1$-$26 & 12 37 28.316 & -11 23 51.23 & 1199 & 14 & Y \\ 1$-$27 & 12 37 3.460 & -11 24 23.50 & -233 & 26 & N \\ 1$-$28 & 12 37 18.652 & -11 24 8.86 & 932 & 25 & Y \\ 1$-$29 & 12 37 37.460 & -11 22 45.05 & 853 & 31 & Y \\ 1$-$30 & 12 37 17.232 & -11 23 25.00 & ** & ** & N \\ 1$-$31 & 12 37 6.112 & -11 16 49.60 & ** & ** & N \\ 1$-$32 & 12 37 14.744 & -11 22 27.14 & 1025 & 203 & Y \\ 1$-$33 & 12 37 25.420 & -11 20 5.94 & 448 & 141 & N \\ 1$-$34 & 12 37 11.292 & -11 18 24.53 & -31 & 6 & N \\ 1$-$35 & 12 37 16.768 & -11 22 43.71 & 1300 & 40 & Y \\ 1$-$36 & 12 37 35.436 & -11 26 11.98 & ** & ** & N \\ 1$-$37 & 12 37 14.296 & -11 16 37.64 & 2456 & 24 & N \\ 1$-$38 & 12 37 18.800 & -11 24 42.64 & ** & ** & N \\ 1$-$39 & 12 37 6.072 & -11 18 55.96 & 832 & 61 & Y \\ 1$-$40 & 12 37 26.668 & -11 25 16.04 & ** & ** & N \\ 1$-$41 & 12 37 7.676 & -11 20 24.56 & ** & ** & N \\ 1$-$42 & 12 37 3.752 & -11 23 11.61 & ** & ** & N \\ 1$-$43 & 12 37 16.964 & -11 22 14.78 & ** & ** & N \\ 1$-$44 & 12 37 41.244 & -11 20 52.17 & ** & ** & N \\ \hline \end{tabular} \end{table} \begin{table} \contcaption{Id \#, Ra, Dec, Velocity, and Velocity Errors are given for 32 cluster candidates in Field \#2.} \begin{tabular}{lllllc} \hline\hline Id & Ra & Dec & V$_{hel}$ & Error \\ & (1950) & (1950) & (km/s) & (km/s) & Cluster? 
\\ 2$-$1 & 12 37 40.460 & -11 21 33.55 & 306 & 45 & N \\ 2$-$2 & 12 37 30.192 & -11 20 20.00 & 808 & 39 & Y \\ 2$-$3 & 12 37 36.332 & -11 21 45.12 & 1524 & 14 & Y \\ 2$-$4 & 12 37 16.272 & -11 19 13.20 & 968 & 78 & Y \\ 2$-$5 & 12 37 31.388 & -11 19 24.32 & 1220 & 89 & Y \\ 2$-$6 & 12 37 24.600 & -11 18 52.32 & 1283 & 45 & Y \\ 2$-$7 & 12 37 17.976 & -11 19 37.71 & 976 & 24 & Y \\ 2$-$8 & 12 37 23.200 & -11 22 0.79 & 979 & 14 & Y \\ 2$-$9 & 12 37 39.308 & -11 25 25.85 & -83 & 25 & N \\ 2$-$10 & 12 37 23.312 & -11 20 4.55 & ** & ** & N \\ 2$-$11 & 12 37 32.396 & -11 17 30.21 & 1411 & 56 & Y \\ 2$-$12 & 12 37 13.260 & -11 18 7.37 & 857 & 42 & Y \\ 2$-$13 & 12 37 23.720 & -11 18 27.38 & 616 & 34 & Y \\ 2$-$14 & 12 37 25.464 & -11 22 36.75 & 1045 & 7 & Y \\ 2$-$15 & 12 37 26.900 & -11 22 18.79 & 828 & 71 & Y \\ 2$-$16 & 12 37 16.784 & -11 19 50.53 & 875 & 22 & Y \\ 2$-$17 & 12 37 35.248 & -11 23 16.06 & 1275 & 100 & Y \\ 2$-$18 & 12 37 7.460 & -11 23 3.04 & ** & ** & N \\ 2$-$19 & 12 37 34.280 & -11 16 18.73 & ?? & ?? & N \\ 2$-$20 & 12 37 37.480 & -11 22 51.50 & ** & ** & N \\ 2$-$21 & 12 37 34.952 & -11 24 25.56 & ** & ** & N \\ 2$-$22 & 12 37 5.504 & -11 23 17.64 & ?? & ?? & N \\ 2$-$23 & 12 37 6.072 & -11 24 50.98 & ?? & ?? & N \\ 2$-$24 & 12 37 30.260 & -11 23 46.25 & 1505 & 119 & Y \\ 2$-$25 & 12 37 7.380 & -11 21 3.77 & ** & ** & N \\ 2$-$26 & 12 37 40.516 & -11 16 50.78 & ?? & ?? & N \\ 2$-$27 & 12 37 13.332 & -11 16 53.25 & ** & ** & N \\ 2$-$28 & 12 37 43.736 & -11 23 31.64 & ** & ** & N \\ 2$-$29 & 12 37 21.284 & -11 16 34.38 & ** & ** & N \\ 2$-$30 & 12 37 5.916 & -11 24 31.71 & 1104 & 104 & Y \\ 2$-$31 & 12 37 15.416 & -11 21 48.77 & 939 & 21 & Y \\ 2$-$32 & 12 37 12.236 & -11 24 9.03 & ** & ** & N \\ \hline \end{tabular} \end{table} A word on velocity {\it uncertainties} is in order. The errors given in Table 3 are merely the rms amongst the four templates. We do not have a good idea of the {\it external} uncertainties. There is one object in common between the two masks (\# 1--1~=~2--1), with a velocity difference of 95 km/sec--however, it is a foreground object (we were not able to observe our third field in M104, which had several objects in common with the other two, because of the higher priority placed on NGC 4472, our principal target). For lack of information, we assume velocity errors of 50--100 km/sec, values found from other data of comparable S/N (e.g. M81: Perelmuter, Brodie, \& Huchra 1995; M87 \& M49: Mould et al. 1990). In any event, even velocity errors of 100 km/sec have little effect on our mass and M/L determinations, given that the observed velocity dispersion of our confirmed cluster sample is $\sim$ 250 km/sec (Section 3.1 below). In order to isolate a sample of true globular clusters, we subjected the velocity distribution of the 46 objects in Table 3 with reliable velocities (see Figure 2) to the KMM mixture modelling analysis (McLachlan \& Basford 1988; Ashman, Bird, \& Zepf 1994). This analysis found a best fit of three groups--one high-velocity point (\#1--37), a foreground group of 11 objects, and 34 objects with velocities 500 $\leq$ V $\leq$ 1600 km/sec. Inspection of Figure 2 suggests that this is a sensible split. The last column of Table 3 shows which objects are confirmed globular clusters; there are 34 such objects in total. \section{Results} \subsection{Mass and M/L Ratio} We have used the ROSTAT code (Beers, Flynn, \& Gebhardt 1990; Bird \& Beers 1993) to obtain robust values for the mean cluster velocity and velocity dispersion.
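For reference, the biweight location and scale estimators computed by ROSTAT have the following commonly used (non-iterative) form, after Beers, Flynn \& Gebhardt (1990); the Python sketch below is our illustration, not the ROSTAT code itself:
\begin{verbatim}
import numpy as np

def biweight_location(x, c=6.0):
    # Tukey biweight estimate of the mean ("location")
    x = np.asarray(x, float)
    M = np.median(x)
    u = (x - M) / (c * np.median(np.abs(x - M)))
    w = np.abs(u) < 1.0
    return M + (np.sum((x[w] - M) * (1 - u[w]**2)**2)
                / np.sum((1 - u[w]**2)**2))

def biweight_scale(x, c=9.0):
    # biweight estimate of the dispersion ("scale")
    x = np.asarray(x, float)
    M = np.median(x)
    u = (x - M) / (c * np.median(np.abs(x - M)))
    w = np.abs(u) < 1.0
    num = np.sum((x[w] - M)**2 * (1 - u[w]**2)**4)
    den = np.abs(np.sum((1 - u[w]**2) * (1 - 5 * u[w]**2)))
    return np.sqrt(len(x)) * np.sqrt(num) / den
\end{verbatim}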
We find that the robust mean velocity (location) is 1074 ($-$79,$+$78) km/sec, where we quote bootstrapped 90\% confidence intervals. Similarly, the biweight velocity dispersion (scale) is 255 (210, 295) km/sec, where this value has been corrected for a possible rotation of $\sim$ 70 km/sec (Section 3.2.1), and velocity errors of 100 km/sec per point are assumed; we again quote 90\% bootstrapped confidence intervals. The correction for possible rotation has been done assuming a step function (cf. Mould et al. 1990, and see Section 3.2.1 and Figure 3); this correction changes $\sigma$ by only $\sim$ 10 km/sec. The mean velocity is reassuringly close to the recession velocity of M104 itself, which is 1091 $\pm$ 5 km/sec (RC3). The M104 cluster velocity dispersion is much larger than that of old clusters in other spiral galaxies (e.g. Milky Way: $\sigma$ $\simeq$ 100 km/sec--Armandroff 1989, Da Costa \& Armandroff 1995; M31: $\sigma$ $\simeq$ 150 km/sec--Huchra 1993; M33: $\sigma$ $\simeq$ 70 km/sec--Schommer et al. 1991; M81: $\sigma$ $\simeq$ 150 km/sec--Perelmuter, Brodie \& Huchra 1995), but smaller than that of gE galaxies (e.g. M87: $\sigma$ $\simeq$ 385 km/sec; M49: $\sigma$ $\simeq$ 330 km/sec--Mould et al. 1990; NGC 1399: $\sigma$ $\simeq$ 390 km/sec--Grillmair et al. 1994). While a direct comparison of these numbers is not very meaningful, since they are measured to different radii (or, more physically, scale-lengths) in each galaxy, they do indicate the presence of dark matter halos in these galaxies. In an attempt to see if the cluster velocity dispersion varies with galactocentric radius, we divided the data into 4 radial bins, with roughly equal numbers of clusters in each bin. Unfortunately, the small number of data points and resulting large confidence intervals mean that we cannot place useful constraints on any such variation. We have used the Projected Mass Estimator (PME) to determine the mass of M104: \[M_p={f_p\over NG} \sum_{i=1}^{N} r_{i} v_{i}^{2} \] \noindent where r$_i$ is the projected galactocentric radius of the {\it i}th cluster, and v$_i$ is the velocity corrected for the mean cluster velocity (1074 km/sec). The f$_p$ factor depends on the assumed cluster velocity distribution. We have adopted a value of f$_p$ = 16/$\pi$, which assumes isotropic orbits and a central point mass (Bahcall \& Tremaine 1981; Heisler, Tremaine, \& Bahcall 1985); for radial and tangential orbits f$_p$ = 32/$\pi$ and 32/3$\pi$ respectively. Finally, we have assumed a distance D = 8.55 Mpc for M104, based on the SBF distance of Ciardullo, Jacoby, \& Tonry (1993; Ford et al. 1996 give a slightly higher distance of 8.9 $\pm$ 0.6 Mpc, based on the PNLF)--the mass scales with D. The robust implementation of the PME gives M$_p$= 5.2 (3.9, 6.7) $\times$ 10$^{11}$ M$_\odot$, where we quote 90\% bootstrapped confidence intervals, and we have taken the galaxy centroid as the dynamical center. Correcting for velocity errors and possible rotation, and using the clusters themselves to define the dynamical center, reduces M$_p$ by $<$ 10\%. Thus, we quote M$_p$= 5.0 (3.5, 6.7) $\times$ 10$^{11}$ M$_\odot$. The main uncertainties in M$_p$ are the assumptions about the cluster orbital distribution, the M104 distance, and whether or not we use a central point mass or extended mass distribution for M104, all of which are uncertain by roughly a factor of two.
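The PME itself is a one-line computation; the following schematic implementation (our illustration, with placeholder numbers rather than the Table 3 sample) makes the scalings explicit:
\begin{verbatim}
import numpy as np

G = 4.301e-6   # gravitational constant in kpc (km/s)^2 / M_sun

def projected_mass(r_kpc, v_kms, v_sys, f_p=16.0 / np.pi):
    # M_p = f_p/(N G) * sum_i r_i v_i^2, v_i measured relative to v_sys;
    # the default f_p assumes isotropic orbits and a central point mass
    r = np.asarray(r_kpc, float)
    dv = np.asarray(v_kms, float) - v_sys
    return f_p / (len(r) * G) * np.sum(r * dv**2)

# placeholder projected radii (kpc) and heliocentric velocities (km/s)
r_kpc = [2.0, 3.5, 5.0, 8.0, 11.0, 14.0]
v_kms = [1250.0, 900.0, 1310.0, 840.0, 1190.0, 1020.0]
print("M_p = %.2e M_sun" % projected_mass(r_kpc, v_kms, v_sys=1074.0))
\end{verbatim}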
We take M$_p$ to be the mass within the projected radius of the furthermost cluster in our sample, which lies at 5.5 arcmin (14 kpc for D=8.55 Mpc). From Burkhead (1986), the total integrated magnitude of M104 is B$_{tot}$ = 9.24 within 5.5 arcmin, where we have corrected for A$_B$= 0.12 mag (Burstein \& Heiles 1984). Thus, L$_B$ ($<$~5.5$^{\prime}$) = 2.3 $\times$ 10$^{10}$ L$_\odot$ for D=8.55 Mpc, and M/L$_{B_T}$ = 22$^{+7.5}_{-6.5}$, where the uncertainties reflect only the {\it formal} confidence intervals for M$_p$ above. Taking B$-$V = 1 (Burkhead 1986), the corresponding M/L$_V$ = 16$^{+5.5}_{-5.0}$. These M/L ratios scale as $(8.55~{\rm Mpc})/D$. Given the uncertainties mentioned above, our quoted M/L ratios are probably only believable to within a factor of two. Do our data imply the existence of dark matter in the M104 halo? Kormendy \& Westpfahl (1989) showed that spectroscopic data available at that time yielded 2 $\leq$ M/L$_V$ $\leq$ 4 between 0.5--180$^{\prime \prime}$, assuming a distance of 18 Mpc, and they concluded that there {\it ``is no evidence for halo dark matter between 11 $\leq$ r $\leq$ 215$^{\prime \prime}$''}. From our cluster data, we can obtain a lower bound on the M/L ratio by assuming tangential orbits, a central point mass, and a distance of 18 Mpc: we find that M/L$_V$ $\geq$ 5.3 within 5.5 arcmin. Thus, we find that the M/L ratio must increase with radius (by a factor of $\sim$ 4 between 180--330$^{\prime \prime}$ for our best estimate of M/L$_V$ = 16), and we conclude that {\it there is indeed dark matter in the M104 halo}. Our M/L agrees well with those found from globular clusters in other spirals (e.g. M31: M/L$_B$ = 16--Huchra 1993; M81: M/L$_B$ = 19--Perelmuter, Brodie \& Huchra 1995, both within 20 kpc). For some gE/cD galaxies, M/L is larger (cf. M87: M/L$_V$ = 31 inside $\sim$ 40 kpc; NGC 1399: M/L$_B$ $\simeq$ 70--80 between 20--40 kpc). For other ellipticals, however, the M/L is lower (e.g. M49: M/L$_B$ $\leq$ 10 inside 20 kpc; NGC 5128: M/L$_B$ $\simeq$ 10). \subsection{Kinematics and Comparison with PNe} \subsubsection{Rotation in the Cluster System?} In Figure 3, we show the cluster velocities as a function of distance along the major axis of M104; as discussed in Section 2.2, the error bars are only internal, and the true uncertainties are likely to be 50--100 km/sec. While there is considerable scatter at all radii, there is a {\it hint} of rotation in the cluster system, with clusters west of the galaxy center having lower velocities on average than those on the eastern side. In Figure 3, the solid line is a linear least squares fit, with a slope of 0.43 km/sec/arcsec. Over the 8 arcmin along the major axis covered by the studied clusters, this amounts to $\sim$ 210 km/sec or a V$_{rot}$ of $\sim$ 105 km/sec. Another way to estimate the possible rotation is to compare the mean cluster velocity east and west of the minor axis. We have done this in a robust way with ROSTAT, since this should be more reliable than a classical Gaussian estimator with the small number of datapoints. We find that the mean velocities for the two datasets are: V$_{east}$ = 1137 ($-$116, $+$124); V$_{west}$ = 991 ($-$66, $+$107) km/sec, where the values in brackets are the 90\% bootstrapped uncertainties. Therefore, 2V$_{rot}$ = 146$^{+164}_{-133}$ km/sec, or V$_{rot}$ = 73$^{+82}_{-66}$ km/sec. We have also carried out a Mann-Whitney U test on our data.
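(Such a test is straightforward to reproduce; the fragment below uses placeholder velocities in place of our east/west samples, and assumes the scipy implementation of the test:) \begin{verbatim}
import numpy as np
from scipy.stats import mannwhitneyu

# placeholder velocities (km/s); the real samples are the Table 3 clusters
v_east = np.array([1310.0, 1150.0, 1275.0, 980.0, 1220.0, 1045.0])
v_west = np.array([940.0, 1010.0, 875.0, 830.0, 970.0, 1100.0])

stat, p = mannwhitneyu(v_east, v_west, alternative="two-sided")
print(stat, p)   # reject "same population" at confidence ~ (1 - p)
\end{verbatim}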
The hypothesis that the velocities of clusters with RA less than that of M104 are drawn from the same population as the velocities of clusters with RA greater than that of M104 can be rejected at the 92.5\% confidence level. In addition, the hypothesis that the velocities of clusters with Dec less than that of M104 are drawn from the same population as the velocities of clusters with Dec greater than that of M104 cannot be rejected. In other words, there is a correlation between velocity and major axis position (since the M104 major axis lies almost exactly East-West), and no correlation between velocity and minor axis position. We interpret this as a marginal detection of rotation in the cluster system, at the 92.5\% confidence level. However, it is clear from Figure 3 that we have large error bars, and there may be considerable real scatter; many more velocities will be needed to resolve this issue. Rotation at a similar amplitude is observed in the outer, metal-rich M31 clusters (V$_{rot}$ $\sim$ 70 km/sec: Huchra 1993). Rotation at lower levels of 40--50 km/sec is found for the metal-rich clusters in M33 (Schommer et al. 1991) and NGC 5128 (Hui et al. 1995), and for the metal-poor Galactic halo and M31 clusters (Da Costa \& Armandroff 1995; Huchra 1993), while no significant rotation is detected in the metal-poor M33 clusters (Schommer et al. 1991). It would be extremely interesting to obtain a large sample of high S/N spectra of M104 clusters, so that a similar comparison could be made between the kinematics of metal-poor and metal-rich clusters in that galaxy. \subsubsection{Comparison with M104 PNe} Ken Freeman has very kindly shared preliminary results for 100 PNe velocities in M104 (of $\sim$ 250 total). For those objects within 74$^{\prime \prime}$ of the galactic plane, the PNe show a rotation of 50--100 km/sec out to $\sim$ 250$^{\prime \prime}$ along the major axis. The PNe velocity dispersion is $\sim$ 220 km/sec at 50$^{\prime \prime}$ radius, dropping off to $\sim$ 180 km/sec between 120--150$^{\prime \prime}$ radius. Thus, the PNe and cluster kinematics are roughly consistent in M104, out to $\sim$ 100$^{\prime \prime}$. The PNe velocity dispersion is slightly lower than that of the globular clusters beyond $\sim$ 120$^{\prime \prime}$, but this difference is not significant given the uncertainties in the cluster velocity dispersion. Interestingly, in the two other galaxies for which we can make a direct comparison of cluster and PNe kinematics, NGC 1399 and NGC 5128, there is PNe rotation ($\sim$ 300 km/sec and 100 km/sec respectively), yet {\it no} rotation is seen in the metal-poor clusters. The metal-rich clusters in NGC 5128 have V $\sim$ 40 km/sec inside 6 kpc. Arnaboldi et al. (1994) attribute the difference in kinematics in NGC 1399 to a tidal interaction between it and the nearby NGC 1404. N-body work by Barnes (1996) shows that stellar populations preserve some kinematic memory after major merger events, and Barnes speculates that ``... if NGC 5128 is the result of a major merger, the planetary nebulae and globular clusters may trace different populations from the original galaxies, with the former exhibiting a kinematic memory of the disks from whence they came.'' \subsection{The Mean Cluster Metallicity} Although our cluster spectra are individually too noisy to obtain useful metallicity estimates, the combination of all 34 spectra into one higher S/N spectrum does give a good determination of the mean cluster metallicity.
Bridges \& Hanes (1992) showed that there is no dependence of mean B$-$V colour on B magnitude or galactocentric radius; thus, the mean metallicity of our 34 clusters should be representative of the M104 cluster system as a whole. Our individual spectra were shifted using the cross-correlation results, and then added to give the final spectrum shown in Figure 4. Ca H\&K, H$\delta$, G-band and H$\gamma$ are all clearly seen. For quantitative analysis, we have used the G-band equivalent width (EW) to determine [Fe/H]; the G-band is one of Brodie \& Huchra's (1990; BH hereafter) 6 primary metallicity indicators. Following their prescription, we fit the continuum on two segments between 4284--4300 $\AA$ and 4336--4351 $\AA$, and calculate the feature EW between 4300--4333 $\AA$. Using the FIGARO ABLINE routine, we find a central wavelength of 4317 $\AA$ and an EW of 4.40 $\AA$. We then convert this EW into the BH feature index I via I=$-$2.5log(1~$-$~EW/$\delta\lambda$), yielding I=0.155. Finally, we use the BH calibration between G-band strength and [Fe/H], as determined from calibrating clusters in the Milky Way and M31, to estimate [Fe/H]. We find [Fe/H]=$-$0.70 $\pm$ 0.30, where the uncertainty is taken from the scatter in the BH calibration (their Table 7). There is another BH primary index that falls within our bandpass: $\Delta$, which measures the line-blanketing discontinuity at 4000 $\AA$. (Note that the CNB feature, between 3810--3910 $\AA$, is also within our bandpass, but we lack a continuum passband shortwards of this feature). Unfortunately, our lack of flux calibration (due to the difficulties of flux-calibrating multi-slit spectra) makes broad metallicity indices such as $\Delta$ unreliable. BH list H$+$K (the Calcium H \& K lines) as one of their ``poorer'' calibrators, but it is useful as a consistency check on our G-band metallicity. We measure an EW of 16.0 $\AA$ for H$+$K (feature between 3935--3995 $\AA$, and continuum fitted between 3920--3935 and 4000--4010 $\AA$), which translates into [Fe/H] = $-$0.55 $\pm$ 0.4, where again we have taken the uncertainty from the scatter in the BH calibration. Though the H$+$K determination has larger scatter, it is consistent with the G-band value. The mean cluster [Fe/H] agrees well with that estimated previously by Bridges \& Hanes (1992), who found [Fe/H]= $-$0.8 $\pm$ 0.25 from B$-$V colours of $\sim$ 130 cluster candidates with 0.3 $\leq$ B$-$V $\leq$ 1.3. It is encouraging to see the agreement between spectroscopic and photometric metallicities. As we show in Table 4, the M104 globular clusters are seen to be considerably more metal-rich than those of other spirals; Table 4 also shows that M104 is comparable to E/gE galaxies both in luminosity and mean cluster [Fe/H]. \begin{table} \caption{Mean [Fe/H] metallicity of globular cluster systems as a function of parent galaxy luminosity. This Table has been adapted from Figure 7 of Secker et al. (1995). Data for M104 taken from this paper, for M81 from Perelmuter, Brodie, \& Huchra (1995), and for other spirals from Harris (1991). Globular cluster data for NGC 1399 from Ostrov et al. (1993), for M87 from Lee \& Geisler (1993), for NGC 3923 from Zepf et al. (1995), for NGC 6166 from Bridges et al.
(1996), and for remaining E galaxies from Harris (1991).} \begin{tabular}{lccc} \hline\hline Galaxy & M$_{V_T}$ & Mean Cluster & uncertainty \\ & & [Fe/H] & \\ \hline & & & \\ & E/gE & & \\ & & & \\ NGC 1399 & -21.1 & -0.90 & 0.20 \\ NGC 3311 & -22.8 & -0.34 & 0.30 \\ NGC 4486 & -22.4 & -0.86 & 0.20 \\ NGC 3923 & -22.05 & -0.55 & 0.20 \\ NGC 4472 & -22.6 & -0.80 & 0.30 \\ NGC 4649 & -22.2 & -1.10 & 0.20 \\ NGC 5128 & -22.0 & -0.84 & 0.10 \\ NGC 6166 & -23.6 & -1.00 & 0.30 \\ & & & \\ & Spirals & & \\ & & & \\ M104 & -22.1 & -0.70 & 0.30 \\ M33 & -19.2 & -1.40 & 0.20 \\ MW & -21.3 & -1.35 & 0.05 \\ M31 & -21.7 & -1.21 & 0.05 \\ M81 & -21.0 & -1.50 & 0.20 \\ NGC 3031 & -21.2 & -1.46 & 0.31 \\ \hline \end{tabular} \end{table} Much of the recent discussion about a possible relationship between galaxy luminosity and mean cluster metallicity has focussed on elliptical galaxies. It is instructive to see if there is any such relationship for spirals alone. Figure 5 shows the data for the 6 spirals in Table 4. Over the small magnitude range $-$22 $\leq$ M$_V$ $\leq$ $-$21, there does seem to be a trend, though this is largely driven by our new data for M104. By contrast, there seems to be considerably more scatter about any possible relationship for E/gE galaxies (cf. Secker et al. 1995). This difference between spirals and ellipticals would be expected if most ellipticals are created by mergers. During a merger a new generation of clusters of higher metallicity may be created (AZ), and the galaxy luminosity will change, creating more scatter in both M$_V$ and cluster [Fe/H]. Spirals, however, have presumably not experienced major merger events, and will adhere more closely to any ``primordial'' M$_V$--[Fe/H] relation. However, M104 is not a typical spiral: it is very bright and its disk is a minor component; it is thus not clear if it should be included in Figure 5. M33 is also unusual at the other extreme, since it has very little stellar bulge yet has managed to produce a significant cluster population (e.g. Bothun 1992). More metallicities for globular clusters in spiral galaxies are urgently needed. \section{Conclusions} We have obtained spectra for 76 globular cluster candidates in M104. 34 of these objects have been confirmed as M104 globular clusters from their spectra and radial velocities; this sample extends out to $\sim$ 5.5 arcmin in projected radius (14 kpc for our adopted distance of D=8.55 Mpc). Our main conclusions are as follows: \noindent{\bf (1):~~}The cluster velocity dispersion is 255 (210, 295) km/sec, after correction for possible rotation of the cluster system. This result confirms that M104 is a very massive spiral, as would be inferred from its luminosity. \noindent{\bf (2):~~}The Projected Mass Estimator yields a mass of 5.0 (3.5, 6.7) $\times$ 10$^{11}$ M$_\odot$ for M104, within a projected radius of 5.5 arcmin (14 kpc). The corresponding mass-to-light ratio is M/L$_{B_T}$ = 22$^{+7.5}_{-6.5}$ (M/L$_{V_T}$= 16$^{+5.5}_{-5.0}$), assuming isotropic orbits and a central point mass for M104. Although there are considerable uncertainties in this M/L determination, we believe that our quoted value is at the low end of the allowed range. Comparing to the M/L$_V$ $\leq$ 4 found from stellar and HI rotation curves within 200$^{\prime \prime}$, our best estimate is that the M/L ratio of M104 increases with radius, as would be expected if the galaxy was surrounded by a dark matter halo.
\noindent{\bf (3):~~} There is a marginal detection of rotation in the M104 globular cluster system at the 92.5\% confidence level. However, many more cluster velocities are required to conclusively establish rotation. The kinematics of the clusters are roughly consistent with preliminary results of Freeman et al. for the M104 PNe. \noindent{\bf (4):~~} The mean globular cluster metallicity as determined from the G-band equivalent width of the composite cluster spectrum is [Fe/H]= $-$0.70 $\pm$ 0.3, using the calibration of Brodie \& Huchra (1990). This metallicity is higher than that of clusters in other spiral galaxies, but comparable to that of E/gE galaxies of similar luminosity to M104. \section*{acknowledgements} We would like to acknowledge Harvey MacGillivray for doing the COSMOS scans, and Dave Malin for expediting the delivery of the M104 plates to Edinburgh. We thank Ken Freeman and his collaborators for sharing preliminary results from their M104 PNe data. We are grateful to Nial Tanvir and Karl Glazebrook for all of their help with LEXT. Many thanks go to Tina Bird for her assistance with ROSTAT and all things robust. We appreciate the useful discussions with Joe Haller and Scott Tremaine regarding the use of the Projected Mass Estimator. We also acknowledge the assistance of Peter Bleackley in data reduction. SEZ acknowledges support from NASA through grant number HF-1055.01-93A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS5-26555. DAH and JJK acknowledge NSERC for support through an Operating Grant provided to DAH. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Introduction} Since its discovery in 1966 (Giacconi et al. 1966) Cyg X-3 has remained one of the most unusual and enigmatic objects in the sky. A neutron star--Wolf-Rayet binary system, Cyg X-3 exhibits the following unusual properties: major radio flares at intervals of approximately 18 months (Waltman et al. 1995); a 4.8 hour period, making it the shortest-period high-mass X-ray binary; and, following a major flare, radio jets which expand at 0.35$c$ (Spencer et al. 1986). Recently Newell, Garrett \& Spencer (1996, hereafter NGS; Newell 1996) published VLBA maps showing superluminal motion with apparent speeds significantly higher than previous superluminal velocities in the Galaxy. The system also displayed superluminal contraction. We consider models for this behaviour. \section{Superluminal motion} Superluminal expansion in one-sided quasars has been well documented and we use the same notation here. Consider a blob of material emitted at a velocity $v$ at an angle $\theta$ to the line of sight; the blob appears to travel normal to the line of sight with a velocity $v_{\rm app}$ given by \begin{equation} v_{\rm app} = \frac{v\sin{\theta}}{1-\beta\cos{\theta}} \end{equation} where $\beta = v/c$. For a given $\beta$ the angle at which the maximum superluminal effect occurs is given by $\cos{\theta_{\rm max}} = \beta$ and at this angle the apparent velocity has a maximum of $\beta_{\rm app}({\rm max}) = \gamma\beta$ where we have substituted $\gamma = (1-\beta^{2})^{-1/2}$. \section{Cygnus X-3 results} Observations by NGS of Cygnus X-3 showed that it was undergoing apparent superluminal expansion and contraction on both the major and minor axes of an ellipse. The observations include frames showing the object at intermediate size. Taking $\beta_{\rm app}({\rm max}) = \beta_{\rm app}$ and using the $\beta_{\rm app}$ values quoted by NGS, we find $\beta$ and $\gamma$ as reported in Table 1. Using $\beta_{\rm app}({\rm max})$ for $\beta_{\rm app}$ will tend to underestimate the actual velocities. The $\beta_{\rm app}$ values reported by NGS may also be underestimates of the superluminal velocities (see section \ref{offsetcentre}). We assume that the distance adopted by NGS for Cyg X-3 is not seriously in error. \begin{table} \begin{tabular}{lllccc}\hline Flare& Motion&Axis & $\beta_{\rm app}$ & $\beta$ & $\gamma$ \\ \hline 1&Expansion &Major & 2.45 $\pm$ 0.55 & 0.920 & 2.56 \\ & &Minor & 0.84 $\pm$ 0.09 & 0.579 & 1.23 \\ &Contraction&Major & 2.97 $\pm$ 0.33 & 0.947 & 3.12 \\ & &Minor & 2.53 $\pm$ 0.32 & 0.926 & 2.65 \\ 2&Expansion &Major & 4.75 $\pm$ 0.42 & 0.979 & 4.88 \\ & &Minor & 2.32 $\pm$ 0.32 & 0.914 & 2.46 \\ &Contraction&Major & 6.76 $\pm$ 0.72 & 0.989 & 6.86 \\ & &Minor & 2.53 $\pm$ 0.54 & 0.926 & 2.65 \\ \hline \end{tabular} \caption{Apparent and actual velocities along the axes of the ellipse} \end{table} Questions that arise are: \begin{itemize} \item{What is moving? Is it material moving out and radiating, or is it a pattern of radiation illuminating fixed material?} \item{Do any realistic models produce contraction?} \item{Why are the speeds of expansion and contraction different, with contraction being the faster? Why does the second flare give greater speeds?} \item{Why is the shape elliptical and not circular?} \end{itemize} \section{Cygnus X-3 models} We describe a variety of models which attempt to explain the superluminal expansions and contractions shown in Table 1 along with the observed elliptical shape.
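(For reference, the $\beta$ and $\gamma$ columns of Table 1 follow from inverting $\beta_{\rm app} = \gamma\beta$, which gives $\beta = \beta_{\rm app}/\sqrt{1+\beta_{\rm app}^{2}}$ and $\gamma = \sqrt{1+\beta_{\rm app}^{2}}$. A short Python sketch, using some of the NGS expansion values quoted above, reproduces the tabulated entries closely:) \begin{verbatim}
import numpy as np

def bulk_motion(beta_app):
    # invert beta_app(max) = gamma*beta = beta/sqrt(1 - beta^2)
    gamma = np.sqrt(1.0 + beta_app**2)
    return beta_app / gamma, gamma

for b in (2.45, 4.75, 6.76):   # expansion/contraction values from NGS
    print(b, bulk_motion(b))   # e.g. 4.75 -> beta ~ 0.979
\end{verbatim}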
A jet of radio-emitting plasma, an expanding radiating shell, a beam of highly energetic particles or radiation striking a stationary medium and exciting it, and a pattern of radiation are considered. We do not attempt to explain how such conditions arise, nor do we consider radiation or cooling mechanisms. \subsection{Bipolar jet model} Superluminal expansion has been observed in the galactic bipolar jet sources GRS 1915+105 (Mirabel \& Rodr\'{\i}guez 1994) and GRO J1655-40 (Tingay et al. 1995). Although the expansion velocities observed here are much larger, could such a model explain the superluminal expansion of Cyg X-3? To explain superluminal {\it contraction} however by a similar process the bulk motion of the material would have to be reversed, which is unrealistic. Nor does this model account readily for the elliptical shape of the emission. It is worth noting that while this model does not account for the motion in Cygnus X-3, it is consistent with the relativistic motions in GRS 1915+105 and GRO J1655-40 and cannot be discounted for these sources. Before moving to the next model, note that an apparently superluminal blue-shifted jet is accompanied by an apparently subluminal red-shifted jet. In obtaining their values for $\beta_{\rm app}$ NGS assumed that the apparent expansion (or contraction) was from (or to) a central point, and of equal magnitude in opposite directions. Thus the superluminal speeds have probably been underestimated. This is developed further in the next section. \subsection{Offset centre} \label{offsetcentre} A consequence of the hybrid mapping technique used by NGS is that absolute positional information is lost and so it is not possible accurately to locate the ellipses relative to each other or to the core of the system. NGS assume that the bright core of each map represents the same feature. If the superluminal expansion speeds along the major axis are caused by a bulk motion with a component towards the observer, then the speed in the opposite direction must be subluminal, and the point from which the expansion takes place must be offset from the observed centre of the major axis. The same argument applies to motion along the minor axis. The source of the expanding material is located at or near the rim of the ellipse, lying on neither its major nor minor axis, and the superluminal expansion speeds are thus approximately doubled (Table 2). An outburst might produce an elliptical lobe with the central source lying at one end of the major axis of the ellipse, but seems unlikely to do so with the central source offset from both the major and minor axes. In addition this model does not readily explain the superluminal contraction. We set this model aside. \begin{table} \begin{tabular}{lcccc}\hline Axis & $\beta_{\rm app}$ & $\beta^{\prime}_{\rm app}$ & $\beta^{\prime}$ & $\gamma^{\prime}$ \\ \hline Major & 4.8 & 9.6 & 0.995 & 9.64 \\ Minor & 2.3 & 4.6 & 0.977 & 4.68 \\ \hline \end{tabular} \caption{Projected and true velocities for a shifted expansion centre, based on flare 2.} \end{table} \subsection{Christmas tree model or propagating photon pattern} The velocities found in the previous models are at the limit of physical reality. If, however, the superluminal effect is caused by a pattern of photons propagating, then relativistic bulk motion, and the reversal of relativistic bulk motion, may not be necessary. Equation 1, with $v=c$, applies.
If an intense burst of radiation was emitted from the core of Cyg X-3 the observer might, in addition to seeing some of these photons directly, see either photons reflected/scattered off surrounding material or secondary photons generated in this material following excitation by the burst of radiation. The size of the emitting patch would be governed by the extent of the distribution of material around the core of Cyg X-3, by the distance from the core that the burst of radiation had travelled and possibly by the excitation time for the case of secondary photons. It is possible to envisage an expanding patch of emission, growing as the burst of radiation travels further out. The maximum size observed is determined by the duration of the burst or by the extent of the surrounding material or (less likely) by the optical depth of the surrounding material to the centrally emitted radiation. Note however that equation 1 is cylindrically symmetric about the line of sight, so that an isotropic burst of radiation into an isotropic distribution of surrounding material would give a circular patch of emission. The observed elliptical shape could be produced if either the burst of radiation or the surrounding material were confined to a disc inclined to the line of sight. Equation 1 requires that the total apparent expansion speed along any axis is $\geq$ 2$c$, and the observations satisfy this. Along the axis of the ellipse that lies in the plane of the sky the total apparent expansion speed takes the minimum value of 2$c$. With higher signal-to-noise observations it should be possible to identify this axis and determine the orientation of the disc in space. The superluminal contraction, in this model, is most likely explained by a steady reduction in the effective extent of the photon pattern. Either the central intensity drops continuously and insufficient radiation reaches the outer areas to make visible the material there, or (less likely) a steady change of the central wavelength makes the optical depth gradually greater. It is unlikely that the extent of the surrounding material shrinks superluminally. Another possibility is that we are seeing the cooling of an excited region after the central radiation has turned off. However if the radiation ceases totally, the central parts of the patch cool first. This is not what is observed. \subsection{Searchlight beam model} \label{radiation} If a conical jet of high energy photons or particles was ejected from the Cyg X-3 core, and there was some absorption by material in a surrounding stationary spherical shell, then that part of the shell intersected would become excited and radiate. If the conical beam is inclined to the line of sight then the observed emission patch has an elliptical shape. For flare 2 the required eccentricity is produced if the cone axis is at an angle of 61$\degr$ to the line of sight. If the inclination varies by $\sim 10\degr$ then the observed changes in eccentricity can be accommodated. The observed emission is centrally peaked suggesting that the beam also is more intense along its central axis. The apparent superluminal expansions and contractions could be produced by relatively small expansions and contractions of the opening angle of the cone, provided that the heating and cooling time constants are less than the timescales for the change of cone angle. 
The confinement of the beam might be by a magnetic throat, with the opening angle in part governed by the flux in the beam, producing the observed correlation between size and intensity of the emission. Alternatively, at least a part of the expansion and contraction could be produced by the weaker emission from the edge of the ellipse rising above and falling below a detection threshold. \subsection{Doppler boosted spot on an expanding shell} \label{spotprojection} Consider a spherical shell of material expanding relativistically at fixed velocity $V$. The emission from material travelling at small angles, $\phi$, to the line of sight will have its intensity Doppler boosted according to \begin{equation} I(\phi) = \frac{I^{\prime}(\phi)}{\gamma^{3}\left(1 - \beta\cos{\phi}\right)^{3}} \end{equation} where $I^{\prime}(\phi)$ is the intensity in the rest frame of the radiating material (Rybicki \& Lightman, 1979). Assuming $I^{\prime}(\phi)$ to be constant, a plot of observed intensity with angle is shown in Figure \ref{Dopplerboosting}. The shell appears to have a bright spot centred on our line of sight, which expands as the shell expands. \begin{figure} \begin{picture}(10,140) \put(-85,-510){\special{psfile=intensity.ps hscale=85 vscale=85}} \end{picture} \caption{Plot of Doppler boosted intensity against offset angle. For a dynamical range of 32, the maximum angle at which flux is detected is 17.6 degrees. The plot is for the major axis of the second flare; $\gamma = 4.88$.} \label{Dopplerboosting} \end{figure} Let $r$ be the radius of the shell at time $t$, and $\alpha$ the angular radius of the spot (Figure \ref{expandingshells}). \begin{figure*} \begin{picture}(60,120) \put(-200,0){\special{psfile=shell.eps}} \end{picture} \caption{Geometries for a spherically-symmetric shell expanding at a fixed velocity, $V$. The observed angular radius of a spot on the shell as it expands is shown by the values $\alpha$, $\alpha + \delta\alpha$.} \label{expandingshells} \end{figure*} The shell is expanding at a velocity $V = \delta r / \delta t$ and the observed expansion of the spot can be written as $v = \delta\alpha / \delta t$. Therefore $\sin{\phi} = \alpha / r = \delta\alpha / \delta r$, hence $V = \left(r / \alpha\right)v$ giving $v < V$. To find $(r/\alpha)$, we use the dynamical range of the {\it VLBA} images to give the limit at which lack of Doppler boosting is unable to bring the intensity up to detectable levels. The dynamical range for the VLBA images in Cyg X-3 is 32:1 which would imply from Figure \ref{Dopplerboosting} that any flux outside the angle of 17.6$\degr$ is undetectable. The profile used for this assumes $\gamma = 4.88$. If $\phi = 17.6\degr$ then $(r/\alpha) = 3.30$ which implies shell speeds for the second flare as shown in Table 3. \begin{table} \begin{tabular}{lcccc}\hline Axis & $\beta_{\rm app}({\rm spot})$ & $\beta_{\rm app}({\rm shell})$ & $\beta(\rm shell) $ & $\gamma({\rm shell})$ \\ \hline Major & 4.8 & 15.8 & 0.998 & 15.8 \\ Minor & 2.3 & 7.58 & 0.991 & 7.63 \\ \hline \end{tabular} \caption{Adjusted velocities for a shell of material with a Doppler boosted spot} \label{shellspeeds} \end{table} In this model a spherical shell will produce a circular spot with circular expansion velocities. To simulate the elliptical shape observed we postulate a shell distorted by expansion into an anisotropic dense medium. A dense medium, sufficient to decelerate the shell, is required to explain the contraction.
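(As an aside, the 17.6$\degr$ cutoff used above follows directly from equation 2; a short numerical check, assuming constant $I^{\prime}$ and the 32:1 dynamical range:) \begin{verbatim}
import numpy as np
from scipy.optimize import brentq

gamma = 4.88
beta = np.sqrt(1.0 - 1.0 / gamma**2)

def relative_boost(phi):
    # boosting of equation 2, normalised to the line of sight (phi = 0)
    return ((1.0 - beta) / (1.0 - beta * np.cos(phi)))**3

phi_max = brentq(lambda p: relative_boost(p) - 1.0 / 32.0, 1e-6, np.pi / 2)
print(np.degrees(phi_max))   # ~17.6 degrees
\end{verbatim}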
In this model the apparent contraction is due to a rapid reduction in intensity, lowering the emission from most of the spot area below the detection threshold. The rapid reduction in intensity is due to the collapse of the Doppler boosting when the expansion velocity drops. A reduction in Doppler boosting of the intensity of 1/32 would require a velocity change of $\delta\beta = - 0.819$. If this occurs in $\sim$ 60 minutes (the typical time between contractions in the NGS observations) then the retardation is $\sim$ 70 km s$^{-2}$ and the shell pushes back the retarding medium by $\sim$ 900$R_{\odot}$. Considerable energy would be transferred to the medium, presumably with detectable consequences. \subsection{Illuminated shells} \label{illuminatedshell} This hybrid model, which combines the useful features of the moving shell and the searchlight beam models ($\S\S$ 4.4 and 4.5), best addresses the difficult questions concerning the contraction and the elliptical shape of the emitting area. The central source produces a series of expanding spherical shells, and a beam of energetic particles or photons. The beam, which is inclined to the line of sight, illuminates a patch on an expanding shell which is seen as an elliptical area of emission. As the shell expands the area expands superluminally, as set out in $\S$ 4.5, with the minor axis expansion velocities apparently smaller than the major axis ones. Doppler boosting of the intensity is not significant here because of the large angle to the line of sight. However, as the shell expands the illuminating beam intensity per unit area decreases and so the emitted radiation falls. If the intensity of the emission across the area has a flat distribution and is close to the detection threshold of the observer's equipment, then as the source fades its detectable area will rapidly shrink, simulating superluminal contraction. Meanwhile another shell has been produced and is expanding in the wake of the first. As emission from the first shell fades, the expanding spot on this second shell becomes visible. The difference in apparent speeds for the two flares can be explained if we imagine the flares to be running into some ambient medium and imparting momentum. A graph of the distance travelled by the shell against time is shown in Figure \ref{shellvel}. If both shells are expanding at the same initial rate, we would observe flare 1 to travel unhindered, then run into an object that slows it down. After a time $t$ we would have observed it to have travelled a distance $A$ at an average velocity $v_{1} = A/t$. During this deceleration, if the shell pushes back the ambient medium it will have allowed flare 2 to expand a greater distance before being decelerated. For the second flare the initial speed is the same, but the shell travels further, to $B$, before it is slowed down. On the same time-scale, $t$, the average speed will have increased to $v_{2} = B/t$. The maximum shell expansion speed occurs when a previous shell has pushed the braking medium out far enough so we do not see deceleration within the time scale $t$. This is the true expansion velocity of the shell. \begin{figure*} \begin{picture}(60,250) \put(-150,0){\special{psfile=speedchange.eps}} \end{picture} \caption{Shell expansions based on a constant initial speed for two shells. Shell 1 travels to a distance A before it is halted by some braking medium. Flare 1 moves the braking medium back a bit so shell 2 travels out further before being decelerated and thus has a higher average speed.
The distance between A and B is approximately 900 $R_{\odot}$.} \label{shellvel} \end{figure*} In this model a shell apparently expanding at 16$c$ produces a superluminal expansion of an elliptical area which then fades and appears to contract superluminally. The model accounts for the elliptical shape and the lower minor axis velocities. Illumination of a subsequent shell produces the next expansion and contraction phase. This shell is expected to travel further before retardation and have a higher average speed. \section{Conclusions} We have considered a number of models which attempt to explain the superluminal expansions and contractions observed in Cygnus X-3, and the observed elliptical shape. The latter two features are the most difficult to model. The two most successful models are a) the propagating photon pattern model and b) the model in which expanding shells of material are illuminated by an off-axis jet or beam of radiation. \section*{Acknowledgements} We would like to thank Pete Taylor and Tim Ash for useful comments during the development of the models presented here. RNO and SJN acknowledge the support of PPARC studentships.
\section{Introduction} In recent years it has proved possible to solve the problem of back-reaction in the context of preequilibrium parton production in the quark--gluon plasma \cite{CEKMS,KES}. This addresses the scenario in which two ultrarelativistic nuclei collide and generate color charges on each other, which in turn create a chromoelectric field between the receding disk-like nuclei. Parton pairs then tunnel out of this chromoelectric field through the Schwinger mechanism \cite{Sau,HE,Sch} and may eventually reach thermal equilibrium if the plasma conditions pertain for a sufficient length of time. While the tunneling and the thermalizing collisions proceed, the chromoelectric field accelerates the partons, producing a current which in turn modifies the field. This back-reaction may eventually set up plasma oscillations. This picture for preequilibrium parton production has been studied in a transport formalism \cite{BC} in which the chromoelectric field is taken to be classical and abelian and collisions between the partons are completely ignored so that the only interaction of the partons is with the classical electric field, this interaction being the source of the back-reaction. Alternatively, the mutual scattering between partons has been considered in an approximation that assumes rapid thermalization and treats the collisions in a relaxation approximation about the thermal distribution \cite{KM}; in this study no back-reaction was allowed. Both interparton collisions and back-reaction were considered in a calculation \cite{GKM} done in the hydrodynamic limit, which thus took into account only electric conduction within the parton plasma. All of these studies focused on the region of central rapidity. The more recent calculations \cite{CEKMS,KES} carried out a comparison between the transport formalism for back-reaction and the results of a field-theory calculation for the equivalent situation (see also \cite{CM,KESCM1,KESCM2}) and found a remarkable similarity between the quantal, field-theory results and those of the classical transport equations using a Schwinger source term. This link has also been established formally to a certain degree \cite{BE}. (This close relationship tends to fail, in part, for a system confined to a finite volume as a dimension of this volume becomes comparable with the reciprocal parton effective mass \cite{Eis1}.) The studies relating field theory with transport formalism were all carried out under the assumption of a classical, abelian electric field and no parton--parton scattering. The removal of the assumption of a classical field, and thus the inclusion of interparticle scattering through the exchange of quanta, has been considered quite recently \cite{CHKMPA} for one spatial dimension. The study reported here is carried out within the framework of the transport formalism (the parallel field-theory case is also currently under study \cite{Eis2}) and incorporates both back-reaction and a collision term in the approximation of relaxation to thermal equilibrium. Thus it assumes that thermalization takes place fast enough so that it makes sense to speak of the ongoing tunneling of partons, with back-reaction, as the collisions produce conditions of thermal equilibrium. 
It may be seen as combining the features of the studies of Bia\l as and Czy\.z \cite{BC} with those of Kajantie and Matsui \cite{KM}, or of paralleling the calculation \cite{GKM} of Gatoff, Kerman, and Matsui, but at the level of the transport formalism without further appeal to hydrodynamics. Along with the other studies noted, it restricts itself to the region of central rapidity. The study provides a model for comparing the interplay between the thermalizing effects of particle collisions and the plasma oscillations produced by back-reaction. \section{Formalism} The transport formalism for back-reaction using boost-invariant variables has been presented previously in considerable detail \cite{CEKMS} and is modified here only by the appearance of the collision term in the approximate form appropriate to relaxation to thermal equilibrium \cite{KM}. The Boltzmann--Vlasov equation in $3 + 1$ dimensions then reads, in the notation of \cite{CEKMS}, \begin{equation} \label{BV} p^\mu\frac{\partial f}{\partial q^\mu} - ep^\mu F_{\mu\nu} \frac{\partial f}{\partial p_\nu} = S + C, \end{equation} where $f = f(q^\mu,p^\mu)$ is the distribution function, $S$ is the Schwinger source term, and $C$ is the relaxation-approximation collision term. The electromagnetic field is $F_{\mu\nu}$ and the electric charge $e.$ The variables we take are \begin{equation} \label{variables} q^\mu = (\tau,x,y,\eta),\quad\quad p_\mu = (p_\tau,p_x,p_y,p_\eta), \end{equation} where $\tau = \sqrt{t^2-z^2}$ is the proper time and $\eta = \frac{1}{2}\log[(t+z)/(t-z)]$ is the rapidity. Thus, as usual, the ordinary, laboratory-frame coordinates are given by \begin{equation} \label{labcoords} z = \tau\sinh\eta, \quad\quad t = \tau\cosh\eta. \end{equation} The momentum coordinates in eq.~(\ref{variables}) relate to the laboratory momenta through \begin{equation} \label{momenta} p_\tau = (Et - pz)/\tau, \quad\quad p_\eta = -Ez + tp, \end{equation} where $E$ is the energy and $p$ is the $z$-component of the momentum, the $z$-axis having been taken parallel to the initial nucleus--nucleus collision direction or initial electric field direction. Inserting expressions \cite{CEKMS,KM} for the source term $S$ and for the collision term $C$, and restricting to $1 + 1$ dimensions, the Boltzmann--Vlasov equation becomes \begin{eqnarray} \label{transport} \frac{\partial f}{\partial\tau} + e\tau{\cal E}(\tau)\frac{\partial f}{\partial p_\eta} & = & \pm(1\pm 2f)e\tau|{\cal E}(\tau)| \log\left\{1\pm\exp\left[-\frac{\pi m^2}{|e{\cal E}(\tau)|}\right]\right\} \delta(p_\eta) \nonumber \\ & - & \frac{f - f_{\rm eq}}{\tau_{\rm c}}. \end{eqnarray} Here the upper sign refers throughout to boson production and the lower sign to the fermion case, and we have incorporated the necessary \cite{CEKMS} boson enhancement and fermion blocking factor $(1\pm 2f).$ The electric field is given for these variables by \begin{equation} \label{E} {\cal E}(\tau) = \frac{F_{\eta\tau}}{\tau} = -\frac{1}{\tau}\frac{dA}{d\tau}, \end{equation} where $A = A_\eta(\tau)$ is the only nonvanishing component of the electromagnetic four-vector potential in these coordinates. In eq.~(\ref{transport}) we have assumed, as usual, that pairs emerge with vanishing $p_\eta,$ which is the boost-invariant equivalent of the conventional assumption that pairs are produced with zero momentum in the laboratory frame. 
The thermal equilibrium distribution is \begin{equation} \label{feq} f_{\rm eq}(p_\eta,\tau) = \frac{1}{\exp[p_\tau/T]\mp 1}, \end{equation} where $T$ is the system temperature, determined at each moment in proper time from the requirement \cite{KM} \begin{equation} \label{T} \int\frac{dp_\eta}{2\pi}\ f(p_\eta,\tau)\ p_\tau = \int\frac{dp_\eta}{2\pi}\ f_{\rm eq}[T(\tau);\, p_\eta,\tau]\ p_\tau; \end{equation} here and throughout $p_\tau = \sqrt{m^2 + p_\eta^2/\tau^2},$ where $m$ is the parton effective mass, and the independent variables in terms of which the transport equations are evolved are $p_\eta$ and $\tau.$ In eq.~(\ref{transport}), $\tau_{\rm c}$ is the collision time or time for relaxation to thermal equilibrium. Back-reaction generates variations in ${\cal E}(\tau)$ as a function of proper-time through the Maxwell equation \begin{eqnarray} \label{Maxwell} -\tau\frac{d{\cal E}}{d\tau} = j_\eta^{\rm cond} + j_\eta^{\rm pol} & = & 2e\int\frac{dp_\eta}{2\pi\tau p_\tau}\ f\ p_\eta \nonumber \\ & \pm & \left[1\pm 2f(p_\eta=0,\tau)\right] \frac{me\tau}{\pi}{\rm sign}[{\cal E}(\tau)] \nonumber \\ & \times & \log\left\{1\pm \exp\left[-\frac{\pi m^2}{|e{\cal E}(\tau)|}\right]\right\}; \end{eqnarray} here the two contributions on the right-hand side are for the conduction and polarization currents, respectively. Note that in $1 + 1$ dimensions the units of electric charge $e$ and of the electric field ${\cal E}$ are both energy. For numerical convenience \cite{CEKMS} a new variable is introduced, namely, \begin{equation} \label{u} u = \log(m\tau), \quad\quad \tau = (1/m)\exp(u). \end{equation} Equations (\ref{transport}) and (\ref{Maxwell}) are to be solved as a system of partial differential equations in the independent variables $p_\eta$ and $\tau$ for the dependent variables $f$ and ${\cal E},$ determining the temperature $T$ at each proper-time step from the consistency condition of eq.~(\ref{T}). \section{Numerical results and conclusions} The numerical procedures used here are patterned after those of ref.~\cite{Eis1}, and involve either the use of a Lax method or a method of characteristics. In practice the latter is considerably more efficient in this context and all results reported here are based on it. We note that these methods are completely different from those used in ref.~\cite{CEKMS}; as a check on numerical procedures we verified that full agreement was achieved with the results reported there. All quantities having dimensions of energy are scaled \cite{CEKMS} here to units of the parton effective mass $m,$ while quantities with dimensions of length are given in terms of the inverse of this quantity, $1/m.$ In order to present a relatively limited number of cases, we fix all our initial conditions at $u = -2$ in terms of the variable of eq.~(\ref{u}). At that point in proper time we take ${\cal E} = 4,$ with no partons present; the charge is set to $e = 1.$ This has been found \cite{CEKMS} to be a rather representative case; in particular, little is changed by applying the initial conditions at $u = 0$ rather than at $u = -2.$ We shall exhibit results for three values of $\tau_{\rm c},$ namely 0.2, 1, and 10. Our results are presented in fig.~1 for boson production and in fig.~2 for fermions. The uppermost graph in each case shows the temperature derived from the consistency condition of eq.~(\ref{T}) while the middle curves are for the electric field ${\cal E}$ and the lower graph gives the total currents. 
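For reference, solving the consistency condition of eq.~(\ref{T}) for $T$ at a given proper time is a one-dimensional root-finding problem. The fragment below is a minimal sketch for the boson case on a uniform $p_\eta$ grid, with a hypothetical distribution $f$ standing in for the evolved solution (the grid extent, tolerance and bracketing interval are illustrative choices, not those of our production code): \begin{verbatim}
import numpy as np
from scipy.optimize import brentq

m, tau = 1.0, 1.0                        # units of the parton mass
p_eta = np.linspace(-20.0, 20.0, 4001)   # uniform momentum grid
dp = p_eta[1] - p_eta[0]
p_tau = np.sqrt(m**2 + (p_eta / tau)**2)

def energy_moment(f):
    # discretized int dp_eta/(2 pi) f p_tau
    return np.sum(f * p_tau) * dp / (2.0 * np.pi)

def temperature(f):
    # boson case: f_eq = 1/(exp(p_tau/T) - 1); match the energy moments
    target = energy_moment(f)
    g = lambda T: energy_moment(1.0 / (np.exp(p_tau / T) - 1.0)) - target
    return brentq(g, 0.05, 50.0)

f = 0.4 * np.exp(-p_eta**2 / 8.0)        # hypothetical distribution
print(temperature(f))
\end{verbatim}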
Both for bosons and for fermions, the cases with $\tau_{\rm c} = 0.2$ and $\tau_{\rm c} = 1$ involve a collision term that damps the distributions very rapidly. Thus no signs of plasma oscillations, which would arise if back-reaction came into play unhindered, are seen. For these values, the electric field and total current damp rather quickly to zero, and a fixed value of $T$ is reached. The temperature peaks at around $1.5 m$ for the boson cases, and near $2m$ for fermions. The temperature ultimately achieved depends, of course, on $\tau_{\rm c}.$ For $\tau_{\rm c} = 10,$ the plasma oscillations of back-reaction are clearly visible in the electric field and in the total current, both for bosons and for fermions. In fact, these cases are rather similar to their counterparts without thermalizing collisions \cite{CEKMS}, except for greater damping, especially of the current, when thermalization is involved. The plasma frequency is changed only a little by this damping. The plasma oscillations are reflected very slightly in the temperature behavior in a ripple at the onset of the oscillations, where they naturally have their largest excursion. However, the oscillations have the effect of pushing off the region at which a constant temperature is reached. Extending the calculation further out in the variable $u,$ one finds that the temperature in that case levels off around $u \sim 5$ at a value of $T \sim 0.38$ for bosons and 0.39 for fermions. In conclusion, this calculation allows an exploration of the transition between a domain dominated by parton collisions that bring about rapid thermalization in the quark--gluon plasma and a domain governed in major degree by back-reaction. In the first situation, the electric field from which the parton pairs tunnel, and the current which is produced from these pairs by acceleration in the field, both decay smoothly to zero and a terminal temperature is reached. In the latter case, plasma oscillations set in which delay somewhat the achievement of a final constant temperature. This qualitative difference between the two situations occurs for collision times about an order of magnitude larger than the reciprocal effective parton mass. {\sl Note added in proof:} After this paper was completed and posted in the Los Alamos archive, I learned of a similar study carried out by B. Banerjee, R.S. Bhalerao, and V. Ravishankar [Phys. Lett. B 224 (1989) 16]. The present work has several features that are different from the previous one, notably, the application to bosons as well as to fermions and the inclusion of factors for Bose--Einstein enhancement or Fermi--Dirac blocking in the Schwinger source term. By comparing with the field theory results, these factors have been found to be of considerable importance \cite{CEKMS,KES,KESCM1,KESCM2}. The earlier work uses massless fermions, and, while the initial motion is taken to be one-dimensional as here, it includes a transverse momentum distribution, so that it is difficult to make a direct quantitative comparison between the two studies. There are also a number of technical differences between the calculations. Qualitatively, very similar behavior is found and the earlier study points out very clearly the necessity for treating the interplay between back-reaction and thermalization. I am very grateful to Professor R.S. Bhalerao for acquainting me with this earlier reference. 
It is a pleasure to acknowledge useful conversations with Fred Cooper, Salman Habib, Emil Mottola, Sebastian Schmidt, and Ben Svetitsky on the subject matter of this paper. I also wish to express my warm thanks to Professor Walter Greiner and the Institute for Theoretical Physics at the University of Frankfurt and to Fred Cooper and Emil Mottola at Los Alamos National Laboratory for their kind hospitality while this work was being carried out. This research was funded in part by the U.S.-Israel Binational Science Foundation, in part by the Deutsche Forschungsgemeinschaft, and in part by the Ne'eman Chair in Theoretical Nuclear Physics at Tel Aviv University.
\section{INTRODUCTION} All the relevant properties of the deconfinement transition in finite temperature pure Lattice Gauge Theories (LGT) can be described by a suitable effective action for the order parameter, the Polyakov loop~\cite{sy}. The construction of this effective action necessarily involves some approximations. It is of crucial importance to choose these approximations so as to obtain effective actions simple enough to be easily studied (either with exact solutions, or with some mean-field-like technique) and, at the same time, rich enough to keep track of the whole complexity of the original gauge theory. In past years this problem was addressed with several different approaches (see~\cite{bcdp} for references and discussion). However a common feature of all these approaches was that the effective actions were always constructed neglecting the spacelike part of the action. As a consequence it was impossible to reach a consistent continuum limit for the critical temperature. The aim of this contribution is to show that it is possible to avoid such a drastic approximation. We shall discuss a general framework which allows one to construct improved effective actions which take into account perturbatively (order by order in the spacelike coupling $\beta_s$) the spacelike part of the original gauge action and are {\sl exact to all orders in the timelike coupling}. Our approach is valid for any gauge group $G$ and for any choice of lattice regularization of the gauge action (Wilson, mixed, heat kernel actions\ldots). Moreover it can be extended, in principle, to all orders in $\beta_s$. We shall only outline here the general strategy and show, as an example, some results obtained by taking into account the first order contribution in $\beta_s$ in the case of the $SU(2)$ gauge model in (3+1) dimensions. Many more details and a complete survey of our approach can be found in~\cite{bcdp}. \section{CONSTRUCTION OF THE EFFECTIVE ACTION} Since we treat in a different way the spacelike and timelike parts of the action, we are compelled to use two different couplings $\beta_s$ and $\beta_t$. We shall denote with $\rho^2\equiv\beta_t/\beta_s$ the asymmetry parameter, with $N_t$ ($N_s$) the size of the lattice in the timelike (spacelike) direction and with $d$ the number of spacelike dimensions. The first step is to expand in characters both the timelike and the spacelike parts of the action. The expansion of the spacelike part is truncated at the chosen order in $\beta_s$; the timelike part is kept exact to all orders. For the contribution due to the trivial representation term in the spacelike expansion (the ``zeroth order'' approximation in $\beta_s$) the integration over the spacelike degrees of freedom is straightforward. The resulting effective action is that of a $d$ dimensional spin model (the spins being the Polyakov loops of the original model) with nearest-neighbour interactions only. The terms of higher order in $\beta_s$ give rise to non-trivial interactions among several Polyakov loops. For instance, at order $\beta_s^2$ the interaction involves the four Polyakov loops around a spacelike plaquette. The explicit form of these interactions can be written in terms of rather complicated group integrals, which can be solved by means of suitable Schwinger--Dyson (SD) equations.
The use of these SD equations is crucial for our whole construction since for these integrals (in which the Polyakov loops are kept as free variables) the usual techniques, developed for ordinary strong coupling expansions, are useless. We shall discuss in detail, in the next section, a simple example. \subsection{Schwinger-Dyson equations} As an example, let us study the integral which appears in the discussion of the term due to the adjoint representation in the character expansion of the spacelike action for the SU(2) model: \begin{equation} \label{new1} \int DU U_{\alpha\beta} U^\dagger_{\gamma\delta} \chi_j(UP_{{\vec x} + i}U^\dagger P^\dagger_{\vec x} ) \equiv\delta_{\alpha\delta} \delta_{\gamma\beta} {\cal C}^{(j)}_{\alpha\beta} . \end{equation} First of all, it follows from (\ref{new1}) that the non vanishing integrals in the l.h.s. depend only on $|U_{\alpha\beta}|^2$, and hence that ${\cal C}^{(j)}_{11} = {\cal C}^{(j)}_{22}$ and ${\cal C}^{(j)}_{12} ={\cal C}^{(j)}_{21}$. To compute the matrix elements ${\cal C}^{(j)}_{\alpha\beta}$, we note that the matrix ${\cal C}^{(j)}$ can be expressed in terms of the integral \begin{eqnarray} \label{new2} K^{(j)}(\theta_{\vec x},\theta_{{\vec x}+i})& =& \int DU \chi_j(U P_2 U^\dagger P^\dagger_1) \nonumber \\ &=&{1\over d_j} \chi_j(P_2)\chi_j(P^\dagger_1) \end{eqnarray} through a system of two linear Schwinger--Dyson-like equations. Indeed, considering the integral $\int DU U_{\alpha\beta}$ $U^\dagger_{\beta\alpha}$ $\chi_j(UP_{{\vec x} + i}U^\dagger P^\dagger_{\vec x})$ we easily find that \begin{equation} \label{newrel1} {\cal C}^{(j)}_{11} + {\cal C}^{(j)}_{12}=K^{(j)} . \end{equation} To construct a second independent equation, let us consider the integral \begin{equation} \label{newint} \int DU \chi_{1\over 2}(UP_{{\vec x} + i}U^\dagger P^\dagger_{\vec x}) \chi_j(UP_{{\vec x} + i}U^\dagger P^\dagger_{\vec x}). \end{equation} On one hand we can write the character $\chi_{1\over 2}$ explicitly as a trace and express the integral in terms of the ${\cal C}^{(j)}_{\alpha\beta}$ by using eq.(\ref{new1}). On the other hand, the integral (\ref{newint}) can be written in terms of $K^{(j)}$ functions by using the basic SU(2) Clebsch-Gordan relation: $\chi_{1\over 2} \chi_j$ $= \chi_{j+{1\over 2}} + \chi_{j-{1\over 2}}$. The resulting equation is: \begin{eqnarray} \label{newrel2} 2 \cos(\theta_{{\vec x} + i} - \theta_{\vec x}) {\cal C}^{(j)}_{11} &+& 2\cos(\theta_{{\vec x} + i} + \theta_{\vec x}) {\cal C}^{(j)}_{12} =\nonumber\\ K^{(j-{1\over 2})} &+& K^{(j+{1\over 2})} \end{eqnarray} where $\{e^{i\theta_{\vec x}}, e^{-i\theta_{\vec x}}\}$ are the eigenvalues of the Polyakov loop $P_{\vec x}$. Eq.s (\ref{newrel1}) and (\ref{newrel2}) form a set of two linear equations in the unknowns ${\cal C}^{(j)}_{11}$ and ${\cal C}^{(j)}_{12}$ whose solution is: \begin{eqnarray} \label{newsol} {\cal C}^{(j)}_{11} & = & {K^{(j-{1\over 2})} - 2 \cos(\theta_{{\vec x} + i} + \theta_{\vec x}) K^{(j)} + K^{(j+{1\over 2})} \over 4 \sin \theta_{{\vec x} + i} \sin \theta_{\vec x}} \nonumber\\ {\cal C}^{(j)}_{12} & = & -{K^{(j-{1\over 2})} - 2 \cos(\theta_{{\vec x} + i} - \theta_{\vec x}) K^{(j)} + K^{(j+{1\over 2})} \over 4 \sin \theta_{{\vec x} + i} \sin \theta_{\vec x}} \nonumber \end{eqnarray} This solves the problem. \section{DISCUSSION OF THE RESULTS} The effective action obtained in the previous section describes a $d$ dimensional spin model with complicated interactions and cannot be solved exactly. 
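(The group integrals entering this construction can also be checked numerically. The sketch below verifies eq.~(\ref{new2}) for the adjoint representation by direct Monte Carlo over the SU(2) Haar measure, with Haar-random test matrices standing in for the Polyakov loops; the sample size is an arbitrary choice:) \begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def haar_su2():
    # Haar-random SU(2) element from a uniform unit quaternion
    q = rng.normal(size=4)
    a, b, c, d = q / np.linalg.norm(q)
    return np.array([[a + 1j*b, c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

def chi(j, M):
    # SU(2) character: eigenvalues exp(+-i phi), chi_j = sum_m exp(2i m phi)
    phi = np.arccos(np.clip(M.trace().real / 2.0, -1.0, 1.0))
    return np.cos(2.0 * phi * np.arange(-j, j + 1)).sum()

j = 1.0                             # adjoint representation
P1, P2 = haar_su2(), haar_su2()     # stand-ins for the Polyakov loops
mc = np.mean([chi(j, U @ P2 @ U.conj().T @ P1.conj().T)
              for U in (haar_su2() for _ in range(100000))])
print(mc, chi(j, P2) * chi(j, P1.conj().T) / (2*j + 1))
\end{verbatim}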
However, several features of the model can be worked out rather easily. In particular, the deconfinement temperature can be estimated by using a mean field approximation. The results can then be used in two ways. \subsection{Asymmetric lattices} As $\rho$ varies we have different, but equivalent, lattice regularizations of the same model. At the quantum level, however, this equivalence implies in the (3+1) dimensional case a non-trivial relation between the couplings. This problem was studied in the weak coupling limit by F. Karsch~\cite{k81} who found, in the $\rho\to\infty$ limit, the following relation: \begin{equation} \beta_t=\rho\,(\beta+\alpha_t^0)+\alpha_t^1 \label{rel3} \end{equation} where $\beta$ is the coupling of the equivalent symmetric regularization. $\alpha_t^0$ and $\alpha_t^1$ are group-dependent constants whose values in the SU(2) case are $\alpha^{0}_{t}=-0.27192$ and $\alpha^{1}_{t}=1/2$. The $\rho$ dependence of our estimates for the critical temperature shows a remarkable agreement with the behaviour predicted by eq.(\ref{rel3}) (see Tab. I). As $N_t$ increases our estimates for $\alpha^0_t$ and $\alpha^1_t$ cluster around the theoretical values of~\cite{k81}. This agreement is highly non-trivial since $\alpha_t^0$ and $\alpha_t^1$ were obtained with a {\sl weak coupling} calculation, while our effective action is the result of a {\sl strong coupling} expansion. The reason for this success is very likely related to the fact that we have been able to sum to all orders in $\beta_t$ the timelike contribution of the effective action. \begin{table} \label{tab2} \begin{center} \begin{tabular}{c c c } \hline\hline $ N_t$ & $\alpha^0_t$ & $\alpha^1_t$ \\ \hline $2$ & $-0.184$ & $0.414$\\ $3$ & $-0.210$ & $0.375$\\ $4$ & $-0.221$ & $0.373$\\ $5$ & $-0.235$ & $0.372$\\ $6$ & $-0.249$ & $0.389$\\ $8$ & $-0.271$ & $0.413$\\ $16$ & $-0.327$ & $0.508$\\ \hline & $-0.27192$ & $0.50$\\ \hline\hline \end{tabular} \end{center} \vskip 0.3cm {\bf Tab. I}{\it~~ Values of $\alpha_t^0$ and $\alpha_t^1$ as functions of $N_t$. The theoretical values are reported, for comparison, in the last row of the table.} \end{table} \subsection{Scaling behaviour} The second important test is the scaling behaviour of the deconfinement temperature as a function of $N_t$. The $N_t$ dependence is predicted to be of logarithmic type in (3+1) dimensions and the Monte Carlo data confirm this analysis. On the contrary, all the effective actions obtained neglecting the spacelike plaquettes predict a linear scaling (see Fig. 1). This was in past years one of the major drawbacks of the standard effective action approach to the deconfinement transition. The inclusion of the first non-trivial corrections due to the spacelike plaquettes greatly improves the scaling behaviour. The values obtained with our effective action for the critical couplings are plotted in Fig. 1, where they are also compared with the Monte Carlo results (extracted from~\cite{fhk}). \begin{figure} \null\vskip -1cm\hskip 2cm \epsfxsize = 7truecm \epsffile{su2lat96.eps} \label{data} \end{figure} Indeed, if one compares our mean field estimates with those of the Monte Carlo simulations, it turns out that the discrepancy in critical couplings is within 10\% in the range $2\leq N_t \leq 5$.
While the logarithmic scaling predicted by the renormalization group is still beyond the present scheme, being related to nonperturbative effects in $\beta_s$, it is reasonable to expect that higher order approximations would lead to better and better numerical results, at least for not too high values of $N_t$.
\section{Introduction} The main goal of existing wireless networks is to provide the highest possible spectral efficiency and the best possible data rate for human users. Machine-type communications (MTC), however, will become a serious challenge for next generation wireless networks. Traffic patterns for MTC completely differ from human-generated traffic and can be characterized by the following features: (a) a huge number of autonomous devices connected to one access point, (b) low energy consumption is a vital requirement, (c) short data packets, and (d) low traffic intensity generated by a single device. 3GPP has proposed multiple candidate solutions for massive MTC (mMTC). The main candidates are multi-user shared access (MUSA, \cite{yuan2016non}), sparse coded multiple access (SCMA, \cite{nikopour2013sparse}) and resource shared multiple access (RSMA, \cite{3gpp.R1-164688, 3gpp.R1-164689}), but the lack of implementation details does not allow one to select the most preferable solution. At the same time, we note that none of the 3GPP solutions is based on polar codes \cite{Arikan}, despite the fact that these codes in combination with the Tal--Vardy list decoder \cite{TalVardyList2015} perform extremely well for short code lengths and low code rates. In this paper, we fill this gap. Polar codes \cite{Arikan} are the first class of error-correcting codes proved to achieve the capacity of any binary memoryless symmetric channel with low-complexity encoding and decoding procedures. However, constructing (optimizing) such codes in the finite blocklength regime turned out to be a challenging problem. This question was addressed in \cite{TalVardyConstr2013, MoriTanaka2009, Trifonov2012}. These methods, as well as the Tal--Vardy list decoder \cite{TalVardyList2015}, allowed a significant improvement of the practical performance of such codes. As a result, these codes were selected as a coding scheme for the control channel of the enhanced mobile broadband (eMBB) \cite{3gpp.finrep, 3gpp.R1-1611109}. In \cite{Telatar2user, Onay2013} polar codes were proved to achieve the full admissible capacity region of the two-user binary input MAC. In \cite{TelatarMuser} the results were generalized to the $K$-user case. At the same time, there are no efficient decoding and optimization methods for the case of finite blocklength. In this paper, we address this question and investigate the practical performance of polar codes in the $K$-user MAC. Our contribution is as follows. We compare two possible decoding techniques: a joint successive cancellation algorithm and a joint iterative algorithm. In order to optimize the codes (choose the frozen bits) we propose a special and efficient design algorithm. We investigate the performance of the resulting scheme in the Gaussian multiple access channel (GMAC) by means of simulations. The scheme is shown to outperform the LDPC based solution by approximately $1$~dB and to be close to the achievability bound for the GMAC. \section{Preliminaries} \subsection{Polar Codes} Let us consider Arikan's kernel \[ G_2 \triangleq \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \] then the \textit{polar transform} of size $N = 2^n$ is defined as follows \[ G_N \triangleq B_N G_2^{\otimes n}, \] where $\otimes$ is the Kronecker power and $B_N$ is called a \textit{shuffle reverse} operator (see \cite{Arikan}). In order to construct an $(N, k)$ polar coset code, let us denote the set of frozen positions by $\mathcal{F}$, $|\mathcal{F}| = N - k$. By $\mathbf{u}_\mathcal{F}$ we denote the projection of the vector $\mathbf{u}$ to the positions in $\mathcal{F}$. We can now define a \textit{polar coset code} $\mathcal{C}$ as follows \[ \mathcal{C}(N, k, \mathcal{F}, \mathbf{f}) = \left\{ \mathbf{c} = \mathbf{u} G_N \:\: | \:\: \mathbf{u} \in \{0,1\}^N, \:\: \mathbf{u}_\mathcal{F} = \mathbf{f} \right\}. \]
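For concreteness, the polar transform can be realized directly from this definition. The following minimal Python sketch (illustrative only; it uses \texttt{numpy}, implements $B_N$ as the bit-reversal permutation of the rows, and performs all arithmetic over GF(2)) builds $G_N$ and encodes a vector $\mathbf{u}$:
\begin{verbatim}
import numpy as np

def polar_transform(n):
    """G_N = B_N G_2^{kron n} over GF(2), with N = 2**n."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, G2) % 2
    N = 1 << n
    # B_N: bit-reversal ("shuffle reverse") permutation of the rows
    perm = [int(format(i, '0' + str(n) + 'b')[::-1], 2) for i in range(N)]
    return G[perm, :]

def encode(u, G):
    """c = u G_N over GF(2)."""
    return (u @ G) % 2
\end{verbatim}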
\subsection{System model} Let us describe the system model. There are $K$ active users in the system. Communication proceeds in a frame-synchronized fashion. The length of each frame is $N$ and coincides with the codeword length. Each user has $k$ bits to transmit during a frame. All users have equal powers and code rates. Let us describe the channel model \begin{equation*} \mathbf{y} = \sum_{i=1}^{K} \mathbf{x}_i + \mathbf{z}, \end{equation*} where $\mathbf{x}_i \in \mathbb{R}^N$ is a codeword transmitted by the $i$-th user and $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is an additive white Gaussian noise (AWGN). We note that non-asymptotic achievability and converse bounds for this channel were derived in \cite{polyanskiy2017perspective}. These bounds were proved for the case of a common codebook and decoding up to permutation, but can easily be adapted to the different-codebook case. In what follows we compare the performance of our codes to these bounds. In our system the users utilize \textit{different} polar coset codes $\mathcal{C}_i(N, k, \mathcal{F}_i, \mathbf{f}_i)$, $i=1,\ldots, K$. Let us consider the \mbox{$i$-th} user. In order to send the information word $\mathbf{u}_i$, the user first encodes it with the code $\mathcal{C}_i(N, k, \mathcal{F}_i, \mathbf{f}_i)$ and obtains a codeword $\mathbf{c}_i$. Then the user performs BPSK modulation, or equivalently \[ \mathbf{x}_i = \tau(\mathbf{c}_i), \quad \tau(\mathbf{c}_i) = (\tau(c_{i,1}), \ldots, \tau(c_{i,N})), \] where $\tau:\{0, 1\} \rightarrow \{\sqrt{P}, -\sqrt{P}\}$. The probability of error (per user) is defined as follows \begin{equation} \label{eq:p_e} P_e = \frac{1}{K} \sum\limits_{i=1}^{K} \Pr(\mathbf{u}_i \ne \hat{\mathbf{u}}_i), \end{equation} where $\hat{\mathbf{u}}_i$ is the estimate of $\mathbf{u}_i$ provided by the decoder. As energy efficiency is of critical importance for the mMTC scenario, we focus on the optimization of the required energy per bit ($E_b/N_0$). Recall that it is calculated as follows \[ E_b/N_0 = \frac{N P}{2 k}. \]
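The channel model and the energy-per-bit computation above are straightforward to simulate. A minimal Python sketch (illustrative only; it assumes the convention $\tau(0)=+\sqrt{P}$, $\tau(1)=-\sqrt{P}$) is given below:
\begin{verbatim}
import numpy as np

def gmac_output(codewords, P, rng):
    """y = sum_i tau(c_i) + z for a K x N array of codewords, z ~ N(0, I)."""
    x = np.sqrt(P) * (1 - 2 * codewords.astype(float))  # tau: 0 -> +sqrt(P), 1 -> -sqrt(P)
    return x.sum(axis=0) + rng.standard_normal(codewords.shape[1])

def eb_n0_db(N, k, P):
    """E_b/N_0 = N P / (2 k), in dB."""
    return 10 * np.log10(N * P / (2 * k))

rng = np.random.default_rng(0)
c = rng.integers(0, 2, size=(2, 512))   # K = 2 random codewords of length N = 512
y = gmac_output(c, P=1.0, rng=rng)
\end{verbatim}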
\section{Decoding algorithms} \subsection{Joint Successive Cancellation Decoding} Let us first explain the main idea for the toy example with $N=2$ (see \Fig{fig:PolarRepr}). We see that instead of working with bits of different users and several polar codes, we can work with a single polar code over $\mathbb{Z}_2^K$. In our example we will first decode the bit configuration (tuple) $(u_1, v_1)$ -- the first bits of the users -- and then the tuple $(u_2, v_2)$ -- the second bits of the users. We assume the decoder to work with tuple distributions rather than with probabilities of single bits. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{polar_jsc.eps} \caption{Representation as a polar code over $\mathbb{Z}_2^K$ for $K=2$.} \label{fig:PolarRepr} \end{figure} The input of the decoder is the vector $\mathbf{P} = ({\mu}_1,\ldots,{\mu}_N)$ of length $N$ consisting of a priori probability mass functions (pmf) ${\mu}_i \in [0, 1]^{2^K}$, $i = 1, \ldots, N$. Let us show how to initialize the $k$-th pmf. Recall that the channel output is a vector $\mathbf{y}$; consider its $k$-th component $y_k$ and let $g = (b_1, \ldots, b_K) \in \mathbb{Z}_2^K$. \begin{flalign} &{\mu}_k(g) = \Pr[g = (b_1, \ldots, b_K) | y_k] \nonumber \\ &\propto \exp \left\{-\frac{(y_k - \sum_{i=1}^K \tau(b_i))^2}{2\sigma^2}\right\}, \label{eq:initialization} \end{flalign} recall that the noise variance is $\sigma^2 = 1$ in our case. Let us first consider the decoding of the basic block shown in \Fig{fig:PolarRepr}. Let us assume that we are given two a priori pmfs $\mathbf{\mu}_1$ and $\mathbf{\mu}_2$. Let us describe the operations. We start with the decoding of the tuple corresponding to the first bits of the users ($(u_1, v_1)$ in our example). In order to do this, we need to calculate the distribution of the sum of two random variables over $\mathbb{Z}_2^K$. In what follows we refer to this operation as the \textit{check-node operation (cnop)}. Clearly, this can be done by means of convolution, i.e. $\hat{\mu}_1 = \mu_1 \ast \mu_2$. As we are working in the abelian group $\mathbb{Z}_2^K$, there exists a Fourier transform (FT) $\mathcal F$. In what follows, in order to perform a convolution, we use the FFT-based technique proposed in \cite{Declercq} for the case of LDPC codes over abelian groups. Thus, the final rule is as follows \begin{equation*} \hat{\mu}_1 \propto \mathcal F^{-1}\left(\mathcal F(\mu_1) \odot \mathcal F(\mu_2)\right), \end{equation*} where $\odot$ denotes the element-wise multiplication. Once we have calculated the pmf $\hat{\mu}_1$, we can make a hard decision $\hat{g}_1$ taking into account the values of the frozen bits in this position. After $\hat{g}_1$ is found, we proceed with the \textit{variable-node operation (vnop)}. The rule is as follows \begin{equation*} \hat{\mu}_2(g) \propto \mu_1(g + \hat{g}_1) \mu_2(g)\ \ \ \forall g\in \mathbb Z_2^K. \end{equation*}
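Over $\mathbb{Z}_2^K$ the FT $\mathcal F$ is the Walsh--Hadamard transform, so both operations admit a compact implementation. The following Python sketch (illustrative only; pmfs are length-$2^K$ arrays indexed by tuples in lexicographic order, so that the group operation becomes the bitwise XOR of indices) implements $\mathop{cnop}$ and $\mathop{vnop}$:
\begin{verbatim}
import numpy as np

def wht(f):
    """Fast Walsh-Hadamard transform over Z_2^K (self-inverse up to 1/len)."""
    f = f.astype(float).copy()
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

def cnop(mu1, mu2):
    """Pmf of the sum of two independent Z_2^K variables: convolution via WHT."""
    out = wht(wht(mu1) * wht(mu2)) / len(mu1)
    out = np.maximum(out, 0.0)      # guard against round-off
    return out / out.sum()

def vnop(mu1, mu2, g_hat):
    """mu(g) propto mu1(g + g_hat) * mu2(g); '+' in Z_2^K is XOR of indices."""
    idx = np.arange(len(mu1)) ^ g_hat
    out = mu1[idx] * mu2
    return out / out.sum()
\end{verbatim}
The transform is its own inverse up to the factor $1/2^K$, which is why a single \texttt{wht} routine suffices.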
The final joint successive cancellation (JSC) decoding algorithm utilizes the $cnop$ and $vnop$ functions in a recursive manner. See Algorithm~\ref{alg:jsc} for the full description. \begin{algorithm} \caption{Joint Successive Cancellation Decoding (JSC)} \label{alg:jsc} \begin{algorithmic}[1] \INPUT{$N$ -- code length, $K$ -- number of users, $\mathbf{F} \in \{0, 1, \text{inf}\}^{K \times N}$ -- matrix of frozen bits, $\mathbf{y} \in \mathbb{R}^N$ -- received signal.} \State Initialize $\mathbf{P} = ({\mu}_1,\ldots,{\mu}_N)$ according to \eqref{eq:initialization}. \Function{PolarDecode}{$\mathbf{P}$, $\mathbf{F}$} \If{$ \mathop{len}(P) = 1$} \State $u, x = \mathop{decision}(\mathbf{P}, \mathbf{F})$ \Comment{Make decision based on probabilities and the matrix of frozen bits} \Else \State $\mathbf{P}_o = ({\mu}_1, {\mu}_3, \ldots)$, $\mathbf{P}_e = ({\mu}_2, {\mu}_4, \ldots)$ \State $\mathbf{P}_{1} = \mathop{cnop}(\mathbf{P}_e, \mathbf{P}_o)$ \State $\mathbf{u}_1, \mathbf{x}_1 = \mathop{PolarDecode}(\mathbf{P}_{1}, \mathbf{F})$ \State $\mathbf{P}_{2} = \mathop{vnop}(\mathop{cnop}(\mathbf{x}_1, \mathbf{P}_o), \mathbf{P}_e)$ \State $\mathbf{u}_2, \mathbf{x}_2 = \mathop{PolarDecode}(\mathbf{P}_{2}, \mathbf{F})$ \State $\mathbf{u} = \mathop{concat}(\mathbf{u}_1, \mathbf{u}_2)$ \State $\mathbf{x} = \mathop{merge}(\mathbf{x}_1, \mathbf{x}_2)$ \EndIf \EndFunction \OUTPUT{$\mathbf{u}$, $\mathbf{x}$} \end{algorithmic} \end{algorithm} \begin{remark} It is worth noting that we can easily improve the decoding procedure by using the list decoding method \cite{TalVardyList2015}. This means we keep track not only of the most probable path (a path consists of tuples in our case) but of the $L$ paths with the highest metrics. \end{remark} \subsection{Iterative Decoding} Now let us describe the iterative decoding algorithm. The aim of this decoder is to update the log-likelihood ratios (LLRs) for every bit that has been transmitted by every user. During an iteration the algorithm selects the next user from the list in a round robin manner, fixes the remaining LLRs, and updates the LLR vector only for the user under consideration. Every iteration consists of a message passing algorithm on the graph shown in Figure~\ref{fig:IterScheme}. This graph has the following nodes: a) the polar list decoder node, which uses the LLR values as inputs and generates $L$ candidate codewords as the output, b) LLR evaluation nodes (circles), which perform the per-bit LLR evaluation given the input candidate codeword list~\citep{Pyndiah}, and c) the functional nodes (represented by triangles), which perform the per-user LLR update given the LLR vectors for every user and the received signal vector $\mathbf{y}$. Since polar list decoding is a well-known procedure~\citep{TalVardyList2015}, as is the method of constructing the LLR vector from the candidate codeword list~\citep{Pyndiah}, we only need to describe in detail the message passing procedure to and from the functional nodes. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{polar_iterative.eps} \caption{Iterative decoding scheme for $K=2$.} \label{fig:IterScheme} \end{figure} \begin{algorithm} \caption{Iterative Decoding} \label{alg:polar_iterative} \begin{algorithmic}[1] \INPUT{$N$ - code length, $K$ - number of users, $\mathbf{F}\in \{0, 1, \text{inf}\}^{K\times N}$ - matrix of frozen bits, $\mathbf{y}\in \mathbb{R}^N$ - received signal.} \State Initialize the LLR values of variable nodes for each user code with zero values, assuming equal probability for $\sqrt{P}$ and $-\sqrt{P}$ values \For {$i = 1, \ldots, K\times I$} \Comment perform $I$ iterations \State {$u = \mathop{mod}(i, K)$} \Comment round robin user selection \State Update the LLR vector for the given user assuming all other users have fixed LLRs, eq.~\eqref{eq:update_func_nodes} \Comment from functional nodes to polar decoder \State Perform single user list polar decoding~\citep{TalVardyList2015} given the input LLR vector \Comment corresponds to the orange arrow in Figure~\ref{fig:IterScheme} \State Derive the output LLR vector given the decoded candidate list \Comment corresponds to the magenta arrow in Figure~\ref{fig:IterScheme} \EndFor \State Make decisions given the output LLR vector for every user \OUTPUT{u, x} \end{algorithmic} \end{algorithm} Every functional node corresponds to a single channel use. As mentioned above, every user's LLR vector is updated under fixed LLR vectors for all other users. For convenience, let us consider some arbitrary functional node (its index is omitted for brevity) and the first user. The goal of the functional node is to marginalize out the uncertainty about the signals transmitted by users $j=2,\ldots,K$ \begin{equation} L(x_1) = \log \left(\frac{ \sum\limits_{x_1 = +\sqrt{P}, x_2, \ldots x_K} p\left(y \bigg| \sum\limits_{j=1}^{K} x_j\right)\prod\limits_{j=2}^{K} \Pr(x_j) } { \sum\limits_{x_1 = -\sqrt{P}, x_2, \ldots x_K} p\left(y \bigg| \sum\limits_{j=1}^{K} x_j\right)\prod\limits_{j=2}^{K} \Pr(x_j) } \right), \label{eq:update_func_nodes} \end{equation} where the numerator corresponds to the total probability that user $1$ has transmitted the signal $x_1=+\sqrt{P}$ and the denominator to the total probability that $x_1=-\sqrt{P}$ has been transmitted (the subscript corresponds to the user number), and $L(x_1)$ is the output LLR for the first user. The probability $p(y|a) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{(y-a)^2}{2}\right)$ corresponds to the AWGN channel assumption with unit noise variance. The full description is presented in Algorithm~\ref{alg:polar_iterative}.
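A direct (brute-force) evaluation of eq.~\eqref{eq:update_func_nodes} for one channel use is sketched below in Python (illustrative only; it assumes unit noise variance, the convention $\tau(0)=+\sqrt{P}$, and incoming LLRs defined as $L_j=\log[\Pr(x_j=+\sqrt{P})/\Pr(x_j=-\sqrt{P})]$):
\begin{verbatim}
import itertools
import numpy as np

def func_node_llr(y, llrs_others, P):
    """Eq. (update_func_nodes) for one channel use and one target user:
    marginalize over the BPSK symbols of the remaining K-1 users."""
    p_plus = 1.0 / (1.0 + np.exp(-np.asarray(llrs_others)))  # Pr(x_j = +sqrt(P))
    num = den = 0.0
    for signs in itertools.product([+1, -1], repeat=len(llrs_others)):
        prior = np.prod([p if s > 0 else 1 - p
                         for s, p in zip(signs, p_plus)])
        s_others = np.sqrt(P) * sum(signs)
        num += prior * np.exp(-(y - (+np.sqrt(P) + s_others))**2 / 2)
        den += prior * np.exp(-(y - (-np.sqrt(P) + s_others))**2 / 2)
    return np.log(num / den)
\end{verbatim}
The enumeration is over $2^{K-1}$ hypotheses per channel use, which is affordable for the small $K$ typical of the experiments below.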
\section{Design of Polar Codes for GMAC} In this section, we propose a method to optimize polar codes for use in the $K$-user GMAC. First of all, let us mention that the GMAC is not a symmetric channel. To see this, let us consider $K=2$ and the noiseless case. The tuples $(0,1)$ and $(1,0)$ both lead to the channel output $0$, and it is not possible to distinguish between these two hypotheses given $y = 0$. At the same time, $(0,0)$ and $(1,1)$ lead to $y = 2\sqrt{P}$ and $y = -2\sqrt{P}$, and the decoder can easily identify the transmitted tuple. Thus, the zero codeword assumption (so popular in the single user case) does not work in our case. In order to construct the codes we apply the approach of \cite{SGMAC} and ``symmetrize'' the channel (see \Fig{fig:SGMAC}). The main idea is to add and then subtract (during the demodulation process (\ref{eq:initialization})) a random element $\mathbf{h}$ distributed uniformly on $\mathbb{Z}_2^K$ (a different $\mathbf{h}$ for each channel use). \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{SGMAC.eps} \caption{Equivalent symmetric channel} \label{fig:SGMAC} \end{figure} It is easy to see that the resulting channel (see \Fig{fig:SGMAC}) is symmetric. In what follows we refer to it as the sGMAC and construct the codes for it. For this channel, we can use the zero codeword assumption. Initially, we intended to derive density evolution rules for our case. The idea is very similar to density evolution for non-binary LDPC codes \cite{NBDE_LDPC}. To be precise, we mean a Gaussian approximation, i.e., the pdfs of the messages are approximated by multidimensional (of dimension $2^K$) Gaussian mixtures. However, we found that this procedure requires much more computational resources than a simple Monte-Carlo simulation for determining the good subchannels. The problem lies in the $\mathop{cnop}$ operation, which is rather involved and requires sampling and fitting operations. Finally, we found that the major problem for the decoder is non-unique decoding rather than the noise, and we propose a construction method for the noiseless adder MAC, which works well for the sGMAC also. Let us briefly define the method. We suppose that zero tuples are being transmitted through the symmetric noiseless MAC (see \Fig{fig:SGMAC}). First of all, we need to calculate the initial pmf that is fed to the decoder. This can be done as follows \[ {\mu}_0(\mathbf{x}) = \frac{1}{2^K}\sum\limits_{\mathbf{h}: \:\: \weight{\mathbf{h} + \mathbf{x}} = \weight{\mathbf{h}}} \frac{1}{\binom{K}{\weight{\mathbf{h}}}}, \] where by $\weight{\cdot}$ we mean the Hamming weight, i.e., the number of non-zero elements of a vector. \begin{example} Let $K = 2$ and assume that the users send the tuple $(0, 0)$. Then consider the $4$ cases of $\mathbf{h}$ and calculate $\mu$ for each case: \begin{enumerate} \item $\mathbf{h} = (0,0), \quad \mu = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$; \item $\mathbf{h} = (0,1), \quad \mu = \begin{bmatrix} 1/2 & 0 & 0 & 1/2 \end{bmatrix}$; \item $\mathbf{h} = (1,0), \quad \mu = \begin{bmatrix} 1/2 & 0 & 0 & 1/2 \end{bmatrix}$; \item $\mathbf{h} = (1,1), \quad \mu = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$; \end{enumerate} Thus, the resulting initial distribution (averaged over $\mathbf{h}$) is $\mu_0 = \begin{bmatrix} 3/4 & 0 & 0 & 1/4 \end{bmatrix}$. The elements of $\mu_0$ are indexed by the tuples in lexicographic order. \end{example} At each step $\nu = 0, \ldots, n-1$, with $n = \log_2 N$, we construct $2$ new pmfs: $\mu_{\nu+1}^-$ and $\mu_{\nu+1}^+$ $$ \mu_{\nu+1}^- = \mathop{cnop} (\mu_{\nu}, \mu_{\nu}),\quad \mu_{\nu+1}^+ = \mathop{vnop} (\mu_{\nu}, \mu_{\nu}), $$ where $\mu_{\nu+1}^{(2i-1)} = \mu_{\nu+1}^{-,(i)}$, $\mu_{\nu+1}^{(2i)} = \mu_{\nu+1}^{+, (i)}$. To choose the subchannels, we compare the $\mu_{n}(\mathbf{0})$ values at the $n$-th step.
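The initial pmf $\mu_0$ can be computed exactly by enumerating $\mathbf{h}$. The following Python sketch (illustrative only; exact rational arithmetic, tuples indexed lexicographically) reproduces the example above for $K=2$:
\begin{verbatim}
import itertools
from fractions import Fraction
from math import comb

def mu0(K):
    """mu_0(x) = 2^{-K} sum over h with wt(h+x)=wt(h) of 1/binom(K, wt(h))."""
    mu = [Fraction(0)] * (1 << K)
    for h in itertools.product([0, 1], repeat=K):
        w = sum(h)
        for x in range(1 << K):
            xbits = [(x >> (K - 1 - i)) & 1 for i in range(K)]
            if sum(hi ^ xi for hi, xi in zip(h, xbits)) == w:
                mu[x] += Fraction(1, (1 << K) * comb(K, w))
    return mu

print(mu0(2))  # [Fraction(3, 4), Fraction(0, 1), Fraction(0, 1), Fraction(1, 4)]
\end{verbatim}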
\section{Numerical Results and Experiments} We conducted a series of experiments with the proposed algorithms and compared the results with the GMAC random coding bound \cite{polyanskiy2017perspective} and with the PEXIT optimized LDPC code ($15$ inner and $15$ outer iterations) proposed in \cite{10.1007/978-3-030-01168-0_15}. Let us describe how we constructed the polar codes for our experiments. In order to choose the frozen positions, we utilized the proposed design procedure. We selected a common set of frozen tuples for all users, while the values of the frozen bits were chosen at random for different users, because identical frozen values lead to poor performance. The first experiment was conducted for $K=2$ users. The probability of decoding error~\eqref{eq:p_e} is shown in~\Fig{fig:comparison_2users}. Both the JSC and iterative decoding schemes were tested with list sizes $L=8, 16, 32$. We used $15$ decoding iterations for the iterative scheme. One can observe that the JSC scheme outperforms the iterative one; moreover, the gain of JSC over the iterative scheme grows as the list size is increased, whereas the LLR estimation procedure of the iterative scheme benefits little from a larger list. We note that for the JSC algorithm we plotted the probability that the correct word belongs to the output list. The choice of the codeword from the list can be made by means of a cyclic redundancy check (CRC), and we expect a $3$--$5$ bit CRC to be enough. Another interesting approach is dynamically frozen or parity-check frozen bits. We also note that no CRC is needed for the iterative decoder, as the list is used only for the LLR calculation. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{comparison_2users.eps} \caption{Probability of decoding error for $K=2$ users with code parameters $N=512, k=128$ with different list sizes $L$.} \label{fig:comparison_2users} \end{figure} In the second experiment we ran the same schemes for $K=4$ users.
We found that iterative decoding performs quite poorly in this case, so we used only the JSC method with the same list sizes as in the $K=2$ case. The results of this experiment are presented in \Fig{fig:comparison_4users}. One can easily see that the JSC algorithm significantly improves the decoding efficiency in both setups. In both cases JSC achieves a $10^{-3}$ probability of error at an energy per bit at least $1$~dB lower than the PEXIT optimized LDPC code, and our best-performing solution is less than $0.8$~dB away from the random coding bound at the $10^{-3}$ probability of error level. The list size also affects the performance: in the case of JSC, we observe a significant performance gain when increasing the list size. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{comparison_4users.eps} \caption{Probability of decoding error for $K=4$ users with code parameters $N=512, k=128$ with different list sizes $L$.} \label{fig:comparison_4users} \end{figure} Another important practical advantage is that JSC has no tunable parameters, unlike the iterative decoding Algorithm~\ref{alg:polar_iterative} (see \cite{Pyndiah}), for which we also performed a search for the best parameters. \section{Conclusions and Future Work} \vskip -0.028cm In this paper, NOMA schemes based on polar codes are discussed. We proposed two different decoding algorithms. We have also derived a code design procedure that optimizes polar codes for the $K$-user GMAC. We then compared our schemes with an existing NOMA technique based on PEXIT optimized LDPC codes. As a result, we can conclude that the JSC decoding algorithm for the designed polar codes outperforms the iterative decoding procedures for both the considered LDPC and polar coding schemes and is less than $0.8$~dB away from the random coding bound~\citep{polyanskiy2017perspective}. We have considered a single-antenna AWGN model in this work and leave MIMO and fading channels for future research. \bibliographystyle{IEEEtran}
\section{Introduction} \subsection{Modified gravity: why?} General relativity is doubtlessly a very successful theory, serving as the standard model of gravity. Nonetheless, modified theories of gravity have been explored actively for several reasons. Probably the main reason in recent years arises from the discovery of the accelerated expansion of the present universe~\cite{Riess:1998cb,Perlmutter:1998np}. This may be caused by the (extremely fine-tuned) cosmological constant, but currently it would be better to have other possibilities at hand, and a long distance modification of general relativity is one such possible alternative. Turning to the accelerated expansion of the early universe, it is quite likely that some scalar field called the inflaton drove inflation~\cite{Starobinsky:1979ty,Guth:1980zm,Sato:1980yn}, and there are a number of models in which the inflaton field is coupled nonminimally to gravity. Such inflation models are studied within the context of modified gravity. In order to test gravity, we need to know the predictions of theories other than general relativity. This motivation is becoming increasingly important after the first detection of gravitational waves~\cite{Abbott:2016blz}. In view of this, modified gravity is worth studying even if general relativity should turn out to be {\em the} correct (low-energy effective) description of gravity in the end. Aside from phenomenology, pursuing consistent modifications of gravity helps us to learn more deeply about general relativity and gravity. For example, by trying to develop massive gravity one can gain a deeper understanding of general relativity and see how special a massless graviton is. Similarly, by studying gravity in higher (or lower) dimensions one can clarify how special gravity in four dimensions is. This motivation justifies the study of modified gravity even if we are driven by academic interest. Finally, one should bear in mind that general relativity is incomplete anyway as a quantum theory and hence needs to be modified in the UV, though this subject is beyond the scope of this review. \subsection{Modified gravity: how?} Having presented some motivations, let us move on to explain how one can modify general relativity. According to Lovelock's theorem~\cite{Lovelock:1971yv,Lovelock:1972vz}, the Einstein equations (with a cosmological constant) are the only possible second-order Euler-Lagrange equations derived from a Lagrangian scalar density in four dimensions that is constructed solely from the metric, ${\cal L}={\cal L}[g_{\mu\nu}]$. To extend Einstein's theory of gravity, one needs to relax the assumptions of Lovelock's theorem. The simplest way would be to just add a new degree of freedom other than the metric, such as a scalar field. Higher-dimensional gravity may be described by an effective scalar-tensor theory in four dimensions via a dimensional reduction. Incorporating higher derivatives may lead to a pathological theory (as will be argued shortly) or something that can be recast as a scalar-tensor theory (e.g., $R^2$ gravity). Abandoning diffeomorphism invariance is also equivalent to introducing new degrees of freedom. Thus, modifying gravity amounts to changing the degrees of freedom in any case. In particular, many different theories of modified gravity can be described at least effectively by some additional scalar degree(s) of freedom on top of the usual two tensor degrees corresponding to gravitational waves. We therefore focus on {\em scalar-tensor theories} in this review.
\subsection{Ostrogradsky instability} One of the guiding principles we follow when we seek a ``healthy'' extension of general relativity is to avoid what is called the Ostrogradsky instability~\cite{Ostrogradsky:1850fid,Woodard:2015zca}. The theorem states that a system described by a nondegenerate higher-derivative Lagrangian suffers from ghost-like instabilities. We will demonstrate this below by using a simple example in the context of mechanics. Let us consider the following Lagrangian involving a second derivative: \begin{align} L=\frac{a}{2}\ddot\phi^2-V(\phi),\label{Ost:eq1} \end{align} where $a\,(\neq 0)$ is a constant and $V(\phi)$ is an arbitrary potential. The Euler-Lagrange equation derived from~\eqref{Ost:eq1} is of fourth order: $a\ddddot\phi-{\rm d} V/{\rm d}\phi =0$. To solve this we need four initial conditions, which means that we have in fact two dynamical degrees of freedom. According to the Ostrogradsky theorem, one of them must be a ghost. This can be seen as follows. By introducing an auxiliary variable, the Lagrangian~\eqref{Ost:eq1} can be written equivalently as \begin{align} L&=a\psi\ddot\phi-\frac{a}{2}\psi^2-V(\phi) \notag \\ &=-a\dot\psi\dot\phi-\frac{a}{2}\psi^2-V(\phi)+a \frac{{\rm d}}{{\rm d} t}\left(\psi\dot\phi\right). \label{Ost:eq2} \end{align} It is easy to see that the first line reproduces the original Lagrangian~\eqref{Ost:eq1} after substituting the Euler-Lagrange equation for $\psi$, namely, $\psi=\ddot\phi$. The last term in the second line does not contribute to the Euler-Lagrange equation. In terms of the new variables defined as $q=(\phi+\psi)/\sqrt{2}$ and $Q=(\phi-\psi)/\sqrt{2}$, the Lagrangian~\eqref{Ost:eq2} can be rewritten (up to a total derivative) in the form \begin{align} L=-\frac{a}{2}\dot q^2+\frac{a}{2}\dot Q^2 - U(q,Q), \end{align} where $U(q,Q):=\frac{a}{4}(q-Q)^2+V\left((q+Q)/\sqrt{2}\right)$. This Lagrangian clearly shows that the system contains two dynamical degrees of freedom, one of which has a wrong-sign kinetic term, signaling ghost instabilities. This is true irrespective of the sign of $a$. Although we have seen the appearance of the Ostrogradsky instability in higher-derivative systems only through the above simple example, this is generically true in higher-derivative field theory. The theorem can be extended to systems with third-order equations of motion~\cite{Motohashi:2014opa}. In this review, we will therefore consider scalar-tensor modifications of general relativity that have second-order field equations. The most general form of the Lagrangian for the scalar-tensor theory having second-order field equations has been known as the Horndeski theory~\cite{Horndeski:1974wa}, and it has been widely used in cosmology and astrophysics beyond general relativity over recent years. An important postulate of the Ostrogradsky theorem is that the Lagrangian is nondegenerate. If this is not the case, one can reduce a set of higher-derivative field equations to a healthy second-order system. This point will also be discussed in the context of scalar-tensor theories.
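The fourth-order character of the Euler-Lagrange equation following from eq.~\eqref{Ost:eq1} can be verified with a few lines of computer algebra; the following Python/\texttt{sympy} sketch (illustrative only) does so:
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

t, a = sp.symbols('t a', nonzero=True)
phi = sp.Function('phi')(t)
V = sp.Function('V')

# L = (a/2) phi''^2 - V(phi), cf. eq. (Ost:eq1)
L = sp.Rational(1, 2) * a * sp.diff(phi, t, 2)**2 - V(phi)

eq = euler_equations(L, [phi], [t])[0]
print(eq)  # a*phi'''' - V'(phi) = 0: a fourth-order equation,
           # hence four initial data, i.e., two degrees of freedom
\end{verbatim}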
\subsection{Structure of the review} The outline of this article is as follows. In the next section, we review aspects of the Horndeski theory, the most general scalar-tensor theory with second-order equations of motion. It is shown that the original form of the Horndeski action is indeed equivalent to its modern form frequently used in the literature (i.e., the generalized Galileons). A short status report is also given on the attempt to extend the Horndeski theory to allow for multiple scalar fields. We then present some applications of the Horndeski theory to cosmology in Sec.~\ref{sec:cosmoH}. Scalar-tensor theories that are more general than Horndeski necessarily have higher-order equations of motion. Nevertheless, one can circumvent the Ostrogradsky instability if the system is degenerate, as argued above. This idea gives rise to new healthy scalar-tensor theories beyond Horndeski. We review the recent developments in this direction in Sec.~\ref{sec:beyondH}. The nearly simultaneous detection of the gravitational waves GW170817 and the $\gamma$-ray burst GRB 170817A places a very tight constraint on the speed of gravitational waves. We mention the status of the Horndeski theory and its extensions after this event. The Vainshtein screening mechanism is essential for modified gravity to evade solar-system constraints. In Sec.~\ref{sec:Vainshtein_mech}, we describe this mechanism based on the Horndeski theory and theories beyond Horndeski, emphasizing in particular the interesting phenomenology of partial breaking of Vainshtein screening inside matter in degenerate higher-order scalar-tensor theories. Black hole solutions in the Horndeski theory and beyond are summarized very briefly in Sec.~\ref{sec:BHs}. Finally, we draw our conclusions in Sec.~\ref{sec:final}. This review only covers scalar-tensor theories. For a more comprehensive review including other types of modified gravity, see~\cite{Clifton:2011jh,Heisenberg:2018vsk}. We will not describe much about cosmological tests of gravity, which are covered by excellent reviews such as~\cite{Jain:2010ka,Joyce:2014kja,Koyama:2015vza}. \section{Horndeski theory} \subsection{From Galileons to Horndeski theory}\label{subsec:GtoH} To introduce the Horndeski theory in a pedagogical manner, we start from the Galileon theory (see also~\cite{Deffayet:2013lga} for a review on the same subject). The Galileon~\cite{Nicolis:2008in} is a scalar field with the symmetry under the transformation $\phi\to \phi + b_\mu x^\mu + c$. This is called, by analogy with Galilei transformations in classical mechanics, the Galilean shift symmetry. In order to avoid ghost instabilities, we demand that $\phi$'s equation of motion be of second order. The most general Lagrangian (in four dimensions) having these properties is given by~\cite{Nicolis:2008in} \begin{align} {\cal L}&=c_1\phi +c_2X-c_3X\Box\phi \notag \\ & \quad +\frac{c_4}{2}\left\{ X\left[(\Box\phi)^2-\partial_\mu\partial_\nu\phi\partial^\mu\partial^\nu\phi\right] +\Box\phi\partial^\mu\phi\partial^\nu\phi\partial_{\mu}\partial_{\nu}\phi -\partial_\mu X\partial^\mu X \right\} \notag \\ & \quad +\frac{c_5}{15}\bigl\{ -2X\left[ (\Box\phi)^3-3\Box\phi\partial_\mu\partial_\nu\phi\partial^\mu\partial^\nu\phi +2\partial_\mu\partial_\nu\phi\partial^\nu\partial^\lambda\phi \partial_\lambda\partial^\mu\phi \right] \notag \\ & \quad\qquad +3\partial^\mu\phi\partial_\mu X \left[(\Box\phi)^2-\partial_\mu\partial_\nu\phi\partial^\mu\partial^\nu\phi\right] +6\Box\phi \partial_\mu X\partial^\mu X-6 \partial^\mu\partial^\nu\phi \partial_\mu X\partial_\nu X \bigr\} ,\label{eq:Lag_original_Galileon1} \end{align} where $X:=-\eta^{\mu\nu}\partial_\mu\phi\partial_\nu\phi/2$ and $c_1,\cdots,c_5$ are constants.
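The second-order property can be checked explicitly. The following Python/\texttt{sympy} sketch (illustrative only; it takes just the cubic term $-X\Box\phi$ with $c_3=1$ in two-dimensional Minkowski space, which suffices to exhibit the mechanism) verifies that all third and fourth derivatives cancel in the Euler-Lagrange equation:
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)

# eta = diag(-1, +1):  X = (phi_t^2 - phi_x^2)/2,  box(phi) = -phi_tt + phi_xx
X = (sp.diff(phi, t)**2 - sp.diff(phi, x)**2) / 2
box = -sp.diff(phi, t, 2) + sp.diff(phi, x, 2)
L = -X * box                 # cubic Galileon term, c_3 = 1

eq = euler_equations(L, [phi], [t, x])[0]
orders = {d.derivative_count for d in eq.lhs.atoms(sp.Derivative)}
print(max(orders))           # 2: all higher derivatives have canceled
\end{verbatim}
In this simple setting the surviving equation of motion is proportional to $\ddot\phi\phi''-\dot\phi'^2$, a Monge-Amp\`ere-type second-order equation.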
The Lagrangian~\eqref{eq:Lag_original_Galileon1} can be written in a more compact form by making use of integration by parts as \begin{align} {\cal L}&=c_1\phi +c_2X-c_3X\Box\phi+c_4 X \left[(\Box\phi)^2-\partial_\mu\partial_\nu\phi\partial^\mu\partial^\nu\phi\right] \notag \\ & \quad -\frac{c_5}{3}X\left[ (\Box\phi)^3-3\Box\phi\partial_\mu\partial_\nu\phi\partial^\mu\partial^\nu\phi +2\partial_\mu\partial_\nu\phi\partial^\nu\partial^\lambda\phi \partial_\lambda\partial^\mu\phi \right].\label{eq:Lag_original_Galileon2} \end{align} Note that the field equation is of second order even though the Lagrangian depends on the second derivatives of the field. The Lagrangian~\eqref{eq:Lag_original_Galileon2} describes a scalar-field theory on a fixed Minkowski background. One can introduce gravity and consider a covariant version of~\eqref{eq:Lag_original_Galileon2} by promoting $\eta_{\mu\nu}$ to $g_{\mu\nu}$ and $\partial_\mu$ to $\nabla_\mu$. However, since covariant derivatives do not commute, the naive covariantization leads to higher derivatives in the field equations, which would be dangerous. For example, one would have derivatives of the Ricci tensor $R_{\mu\nu}$ from the term having the coefficient $c_4$, \begin{align} c_4 X\nabla^\mu \left[ \nabla_\mu \nabla_\nu\nabla^\nu\phi - \nabla_\nu \nabla_\mu \nabla^\nu \phi \right] =- c_4 X\nabla^\mu\left(R_{\mu\nu}\nabla^\nu\phi\right), \end{align} in the scalar-field equation of motion. Such higher derivative terms can be canceled by adding curvature-dependent terms appropriately to Eq.~\eqref{eq:Lag_original_Galileon2}. The covariant version of~\eqref{eq:Lag_original_Galileon2} that leads to second-order field equations both for the scalar field and the metric is given by~\cite{Deffayet:2009wt} \begin{align} {\cal L}&=c_1\phi +c_2X-c_3X\Box\phi +\frac{c_4}{2}X^2R +c_4 X \left[(\Box\phi)^2-\phi^{\mu\nu}\phi_{\mu\nu}\right] \notag \\ & \quad +c_5X^2 G^{\mu\nu}\phi_{\mu\nu} -\frac{c_5}{3}X\left[ (\Box\phi)^3-3\Box\phi\phi^{\mu\nu}\phi_{\mu\nu} +2\phi_{\mu\nu}\phi^{\nu\lambda}\phi_\lambda^\mu \right],\label{eq:Lag_cov_Galileon} \end{align} where $R$ is the Ricci scalar, $G_{\mu\nu}$ is the Einstein tensor, $\phi_\mu:=\nabla_\mu\phi$, $\phi_{\mu\nu}:=\nabla_\mu\nabla_\nu\phi$, and now $X:=-g^{\mu\nu}\phi_\mu\phi_\nu/2$. Here, the fourth term in the first line and the first term in the second line are the ``counter terms'' introduced to remove higher derivatives in the field equations. The counter terms are unique. This theory is called the covariant Galileon. Since the field equations derived from the Lagrangian~\eqref{eq:Lag_cov_Galileon} involve first derivatives of $\phi$, the Galilean shift symmetry is broken in the covariant Galileon theory. Only the second property of the Galileon, i.e., the second-order nature of the field equations, is maintained in the course of covariantization. The covariant Galileon theory~\eqref{eq:Lag_cov_Galileon} is formulated in four spacetime dimensions, but it can be extended to arbitrary dimensions~\cite{Deffayet:2009mn}. The generalized Galileon~\cite{Deffayet:2011gz} is a further generalization of the covariant Galileon~\cite{Deffayet:2009wt,Deffayet:2009mn} retaining second-order field equations. More precisely, first one determines the most general scalar-field theory on a fixed Minkowski background which yields a second-order field equation, assuming that the Lagrangian contains at most second derivatives of $\phi$ and is polynomial in $\partial_\mu\partial_\nu\phi$.
One then promotes the theory to a covariant one in the same way as above by adding appropriate (unique) counter terms so that the field equations are of second order both for $\phi$ and the metric. The generalized Galileon can thus be obtained. It should be noted that this procedure can be done in arbitrary spacetime dimensions. In four dimensions, the Lagrangian for the generalized Galileon is given by~\cite{Deffayet:2011gz} \begin{align} {\cal L}&=G_2(\phi,X)-G_3(\phi,X)\Box\phi + G_4(\phi,X)R +G_{4X} \left[(\Box\phi)^2-\phi^{\mu\nu}\phi_{\mu\nu}\right] \notag \\ & \quad +G_5(\phi,X) G^{\mu\nu}\phi_{\mu\nu} -\frac{G_{5X}}{6}\left[ (\Box\phi)^3-3\Box\phi\phi^{\mu\nu}\phi_{\mu\nu} +2\phi_{\mu\nu}\phi^{\nu\lambda}\phi_\lambda^\mu \right],\label{eq:Lag_gen_Galileon} \end{align} where $G_2$, $G_3$, $G_4$, and $G_5$ are arbitrary functions of $\phi$ and $X$. Here and hereafter we use the notation $f_X:=\partial f/\partial X$ and $f_\phi:=\partial f/\partial \phi$ for a function $f$ of $\phi$ and $X$. The generalized Galileon~\eqref{eq:Lag_gen_Galileon} is now known as the Horndeski theory~\cite{Horndeski:1974wa}, i.e., {\em the most general scalar-tensor theory having second-order field equations in four dimensions}. However, Horndeski determined the theory starting from assumptions different from those made for deriving the generalized Galileon, and the original form of the Lagrangian~\cite{Horndeski:1974wa} looks very different from~\eqref{eq:Lag_gen_Galileon}: \begin{align} {\cal L}&=\delta_{\mu\nu\sigma}^{\alpha\beta\gamma} \left[ \kappa_1 \phi^\mu_\alpha R_{\beta \gamma}^{~~\;\nu\sigma}+\frac{2}{3}\kappa_{1X} \phi^\mu_\alpha\phi^\nu_\beta\phi^\sigma_\gamma +\kappa_3\phi_\alpha\phi^\mu R_{\beta\gamma}^{\;~~\nu\sigma} +2\kappa_{3 X}\phi_\alpha \phi^\mu \phi^\nu_\beta \phi^\sigma_\gamma \right] \notag \\ & \quad +\delta^{\alpha\beta}_{\mu\nu}\left[ (F+2W)R_{\alpha\beta}^{\;~~ \mu\nu}+2F_X\phi^\mu_\alpha \phi^\nu_\beta +2\kappa_8\phi_\alpha\phi^\mu\phi^\nu_\beta\right] \notag \\ &\quad -6\left(F_\phi+2W_\phi-X\kappa_8\right)\Box\phi+\kappa_9. \label{eq:Lag_original_Horndeski} \end{align} Here, $\delta^{\alpha_1\alpha_2...\alpha_n}_{\mu_1\mu_2...\mu_n}:=n!\delta_{\mu_1}^{[\alpha_1} \delta_{\mu_2}^{\alpha_2}...\delta_{\mu_n}^{\alpha_n]}$ is the generalized Kronecker delta, and $\kappa_1$, $\kappa_3$, $\kappa_8$, and $\kappa_9$ are arbitrary functions of $\phi$ and $X$. We have another function $F=F(\phi,X)$, but this must satisfy $F_X=2\left(\kappa_3+2X\kappa_{3X}-\kappa_{1\phi}\right)$ and hence is not independent. We also have a function of $\phi$, $W=W(\phi)$, which can be absorbed into the redefinition of $F$: $F_{\rm old}+2W\to F_{\rm new}$. Thus, we have the same number of free functions of $\phi$ and $X$ as in the generalized Galileon theory. Nevertheless, the equivalence between the two theories is apparently far from trivial. In~\cite{Kobayashi:2011nu} it was shown that the generalized Galileon can be mapped to the Horndeski theory by identifying $G_i(\phi,X)$ as \begin{align} G_2&=\kappa_9+4X\int^X{\rm d} X'\left(\kappa_{8\phi}-2\kappa_{3\phi\phi}\right), \\ G_3&= 6F_\phi-2X\kappa_8-8X\kappa_{3\phi}+2\int^X{\rm d} X'(\kappa_8-2\kappa_{3\phi}), \\ G_4&=2F-4X\kappa_3, \\ G_5&=-4\kappa_1, \end{align} and performing integration by parts. Having thus proven that the generalized Galileon is indeed equivalent to the Horndeski theory, we can now use~\eqref{eq:Lag_gen_Galileon} as the Lagrangian for the most general scalar-tensor theory with second-order field equations.
However, while the generalized Galileon is formulated in arbitrary dimensions, the higher-dimensional extension of the Horndeski theory has not been obtained so far, and it is unclear whether or not the generalized Galileon gives the most general second-order scalar-tensor theory in higher dimensions. Note in passing that the lower-dimensional version of the Horndeski theory can be obtained straightforwardly~\cite{Horndeski:1974wa}. The Horndeski theory was obtained already in 1974, but the paper~\cite{Horndeski:1974wa} had long been forgotten until 2011 when it was rediscovered by~\cite{Charmousis:2011bf}. Let us sketch (very briefly) the original derivation of the Horndeski theory. The starting point is a generic action of the form \begin{align} S=\int {\rm d}^4x\sqrt{-g}{\cal L}(g_{\mu\nu}, g_{\mu\nu,\lambda_1},\cdots, g_{\mu\nu,\lambda_1,\cdots,\lambda_p}, \phi, \phi_{,\lambda_1}, \cdots, \phi_{,\lambda_1,\cdots,\lambda_{q}} ), \end{align} with $p,q\ge 2$ {\em in four dimensions}. The assumptions here should be contrasted with those in the generalized Galileon: Horndeski's derivation starts from a more general Lagrangian, but it is restricted to four dimensions. Varying the action with respect to the metric and the scalar field, we obtain the field equations: ${\cal E}_{\mu\nu}\left(:=2(\sqrt{-g})^{-1}\delta S/\delta g^{\mu\nu}\right)=0$ and ${\cal E}_\phi\left(:=(\sqrt{-g})^{-1}\delta S/\delta \phi\right)=0$, where ${\cal E}_{\mu\nu}$ and ${\cal E}_\phi$ are assumed to involve at most second derivatives of $g_{\mu\nu}$ and $\phi$. As a consequence of the diffeomorphism invariance of the action, the following ``Bianchi identity'' holds: \begin{align} \nabla^\nu {\cal E}_{\mu\nu}=-\nabla_\mu\phi\, {\cal E}_\phi. \label{eq:generalized_Bianchi_id} \end{align} In general, $\nabla^\nu {\cal E}_{\mu\nu}$ would be of third order in derivatives of $g_{\mu\nu}$ and $\phi$. However, since the right-hand side contains at most second derivatives, $\nabla^\nu {\cal E}_{\mu\nu}$ must be of second order even though ${\cal E}_{\mu\nu}$ itself is of second order. This puts a tight restriction on the structure of ${\cal E}_{\mu\nu}$. Our next step is to construct the tensor ${\cal A}_{\mu\nu}$ satisfying this property. After a lengthy procedure one can determine the general form of ${\cal A}_{\mu\nu}$ in the end. (At this step the assumption on the number of spacetime dimensions is used.) Then, one further restricts the form of ${\cal A}_{\mu\nu}$ by requiring that $\nabla^\nu {\cal A}_{\mu\nu}$ is proportional to $\nabla_\mu\phi$ as implied by Eq.~\eqref{eq:generalized_Bianchi_id}. The tensor ${\cal A}_{\mu\nu}$ thus obtained will be ${\cal E}_{\mu\nu}$. The final step is to seek the Lagrangian ${\cal L}$ that yields the Euler-Lagrange equations ${\cal E}_{\mu\nu}=0$ and ${\cal E}_\phi=0$. Fortunately enough, it turns out that the Euler-Lagrange equations derived from ${\cal L}=g^{\mu\nu}{\cal E}_{\mu\nu}$ reproduce the structure of ${\cal E}_{\mu\nu}$ and ${\cal E}_\phi$. This is how we can arrive at the Lagrangian~\eqref{eq:Lag_original_Horndeski}. By taking the functions in Eq.~\eqref{eq:Lag_gen_Galileon} appropriately, one can reproduce any second-order scalar-tensor theory as a specific case. For example, nonminimal coupling of the form $f(\phi)R$ can be obtained by taking $G_4=f(\phi)$, and its limiting case $G_4={\rm const}=M_{\rm Pl}^2/2$ gives the Einstein-Hilbert term.
Clearly, $G_2$ is the familiar term used in k-inflation~\cite{ArmendarizPicon:1999rj}/k-essence~\cite{Chiba:1999ka,ArmendarizPicon:2000dh}, and the $G_3$ term was investigated more recently in the context of kinetic gravity braiding~\cite{Deffayet:2010qz}/G-inflation~\cite{Kobayashi:2010cm}. It is well known that $f(R)$ gravity (a theory whose Lagrangian is given by some function of the Ricci scalar) can be expressed equivalently as a second-order scalar-tensor theory and hence is a subclass of the Horndeski theory (see, e.g.,~\cite{Sotiriou:2008rp,DeFelice:2010aj}). Nonminimal coupling of the form $G^{\mu\nu}\phi_\mu\phi_\nu$ has been studied often in the literature (see, e.g.,~\cite{Germani:2010gm}), and this term can be expressed in two ways, $G_4=X$ or $G_5=-\phi$, with integration by parts. A nontrivial and interesting example is nonminimal coupling to the Gauss-Bonnet term, \begin{align} \xi(\phi)\left(R^2-4R_{\mu\nu}R^{\mu\nu} +R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\right).\label{eq:nonminimal_GB} \end{align} No similar terms can be found in~\eqref{eq:Lag_gen_Galileon} or~\eqref{eq:Lag_original_Horndeski}, but since the Horndeski theory is the most general scalar-tensor theory with second-order field equations and it is known that the term~\eqref{eq:nonminimal_GB} yields second-order field equations, this {\em must} be obtained somehow as a specific case of the Horndeski theory. In fact, one can reproduce~\eqref{eq:nonminimal_GB} by taking~\cite{Kobayashi:2011nu} \begin{align} &G_2=8\xi^{(4)}X^2\left(3-\ln X\right), \quad G_3=4\xi^{(3)}X\left(7-3\ln X\right), \notag \\ &G_4=4\xi^{(2)}X\left(2-\ln X\right), \quad\;\; G_5=-4\xi^{(1)} \ln X,\label{eq:GB_Galileon_expression} \end{align} where $\xi^{(n)}:=\partial^n\xi/\partial\phi^n$. To confirm that~\eqref{eq:GB_Galileon_expression} is indeed equivalent to~\eqref{eq:nonminimal_GB} at the level of the action is probably extremely difficult, but it is straightforward to see the equivalence if one works at the level of the equations of motion. Note in passing that a function of the Gauss-Bonnet term in a Lagrangian, $f(R^2-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma})$, can be recast into the form of~\eqref{eq:nonminimal_GB} by introducing an auxiliary field, and hence it is also included in the Horndeski theory. Another nontrivial example is the derivative coupling to the double dual Riemann tensor, \begin{align} \phi_\mu\phi_\nu\phi_{\alpha\beta}L^{\mu\alpha\nu\beta}, \end{align} where \begin{align} L^{\mu\alpha\nu\beta}:=R^{\mu\alpha\nu\beta}+ \left(R^{\mu\beta}g^{\nu\alpha}+R^{\nu\alpha}g^{\mu\beta} -R^{\mu\nu}g^{\alpha\beta}-R^{\alpha\beta}g^{\mu\nu}\right) +\frac{1}{2}R\left(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\beta}g^{\nu\alpha}\right). \end{align} This can be reproduced simply by taking $G_5=X$~\cite{Narikawa:2013pjr}. There are other well-motivated models or scenarios which have some links to the Galileon/Horndeski theory. The Dvali-Gabadadze-Porrati (DGP) model~\cite{Dvali:2000hr} based on a five-dimensional braneworld scenario gives rise to the cubic Galileon interaction $\sim (\partial\phi)^2\Box\phi$ in its four-dimensional effective theory~\cite{Luty:2003vm}. Actually, the Galileon was originally proposed as a generalization of the DGP effective theory.
The Dirac-Born-Infeld (DBI) action, which is described by a particular form of $G_2(\phi,X)$ and often studied in the context of inflation~\cite{Silverstein:2003hf,Alishahiha:2004eh}, can be obtained from a probe brane moving in a five-dimensional bulk spacetime. By extending the probe brane action, one can similarly derive the generalization of the Galileon whose action is of the particular Horndeski form involving $G_3$, $G_4$, and $G_5$~\cite{deRham:2010eu,Goon:2011qf,Goon:2011uw,Trodden:2011xh}. It is shown in~\cite{VanAcoleyen:2011mj} and revisited in~\cite{vandeBruck:2018jlz} that some particular cases of the Horndeski action can be obtained through a Kaluza-Klein compactification of higher-dimensional Lovelock gravity. The nonminimal couplings in the Horndeski theory capture the essential structure of (the decoupling limit of) massive gravity~\cite{deRham:2011by,Heisenberg:2014kea}. \subsection{ADM decomposition} For later purposes, it is convenient to perform a $3+1$ Arnowitt-Deser-Misner (ADM) decomposition~\cite{Arnowitt:1962hi} in the Horndeski theory. We take the unitary gauge in which $\phi$ is homogeneous on constant-time hypersurfaces, so that $\phi=\phi(t)$. (We assume that it is possible to take such a coordinate system.) The metric can be expressed as \begin{align} {\rm d} s^2=-N^2{\rm d} t^2+\gamma_{ij}\left({\rm d} x^i+N^i{\rm d} t\right)\left({\rm d} x^j+N^j{\rm d} t\right), \end{align} where $N$ is the lapse function, $N_i\;\left(=\gamma_{ij}N^j\right)$ is the shift vector, and $\gamma_{ij}$ is the three-dimensional spatial metric. Then, since $X=\dot\phi^2(t)/(2N^2)$, the functions of $\phi$ and $X$ can be regarded as those of $t$ and $N$. We will need the extrinsic curvature of the spatial hypersurfaces, \begin{align} K_{ij}:=\frac{1}{2N}\left(\dot\gamma_{ij}-D_iN_j-D_jN_i\right), \end{align} where a dot denotes differentiation with respect to $t$ and $D_i$ is the covariant derivative associated with $\gamma_{ij}$. The second derivatives of $\phi$ can be expressed using $K_{ij}$. For example, we have $\phi_{ij}=-(\dot\phi/N)K_{ij}$. The four-dimensional Ricci tensor can also be expressed using the extrinsic curvature and the three-dimensional Ricci tensor, $R_{ij}^{(3)}$. With some manipulation, we find that the Horndeski action in the ADM form is given by~\cite{Gleyzes:2014dya} \begin{align} S=\int {\rm d} t{\rm d}^3x \sqrt{\gamma} N& \biggl[ A_2(t,N)+A_3(t,N)K +B_4(t,N)R^{(3)} \notag \\ & -(B_4+NB_{4N})\left(K^2-K_{ij}K^{ij}\right) +B_5(t,N)G_{ij}^{(3)}K^{ij} \notag \\ & +\frac{NB_{5N}}{6}\left( K^3-3KK_{ij}K^{ij}+2K_{ij}K^{jk}K_k^i \right) \biggr].\label{action:ADM_form} \end{align} Now we have four free functions of $t$ and $N$, which are related to $G_2$, $G_3$, $G_4$, and $G_5$ as \begin{align} A_2&=G_2+\sqrt{X}\int^X\frac{G_{3\phi}}{\sqrt{X'}}{\rm d} X', \\ A_3&=\int^X G_{3X'}\sqrt{2X'}{\rm d} X'-2\sqrt{2X}G_{4\phi}, \\ B_4&=G_4-\frac{\sqrt{X}}{2}\int^X\frac{G_{5\phi}}{\sqrt{X'}}{\rm d} X', \\ B_5&=-\int^X G_{5X'}\sqrt{2X'}{\rm d} X'. \end{align} The above ADM form of the action is particularly useful for studying cosmology in the Horndeski theory. Since the scalar field is apparently gone in the ADM description, one might wonder how one can understand from the action~\eqref{action:ADM_form} that the theory has $(2+1)$ dynamical degrees of freedom.
The point is that $\delta S/\delta N=0$ gives the equation that determines $N$ in terms of $\gamma_{ij}$ and $\dot\gamma_{ij}$ rather than a constraint among $\gamma_{ij}$ and $\dot\gamma_{ij}$; the absence of such a constraint is what signals the extra (scalar) degree of freedom. Note that this remains true even if one generalizes the ADM action to \begin{align} S=\int{\rm d} t{\rm d}^3x \sqrt{\gamma}N&\bigl[\cdots + B_4(t,N)R^{(3)}+C_4(t,N)\left(K^2-K_{ij}K^{ij}\right) \notag \\ & +B_5(t,N)G_{ij}^{(3)}K^{ij}+C_5(t,N)\left(K^3+\cdots\right) \bigr], \end{align} where $B_4$, $B_5$, $C_4$, and $C_5$ are {\em independent} functions. This idea hints at the possibility of generalizing the Horndeski theory while retaining the number of dynamical degrees of freedom. Indeed, the Gleyzes-Langlois-Piazza-Vernizzi (GLPV) generalization of the Horndeski theory was noticed in this way~\cite{Gleyzes:2014dya,Gleyzes:2014qga}. We will come back to the GLPV theory in Sec.~\ref{sec:dhost}. See also Refs.~\cite{Gao:2014soa,Gao:2014fra,Saitou:2016lvb,Lin:2017utd,Gao:2018znj} for a further generalization of the ADM description of scalar-tensor theories. \subsection{Multi-scalar generalization} Having determined the most general {\em single}-scalar-tensor theory with second-order field equations, it is natural to explore its multi-scalar generalization. However, so far no complete multi-scalar version of the Horndeski theory has been obtained. Let us summarize the current status of attempts to generalize the Galileon/Horndeski theory to multiple scalar fields. The Galileon is generalized to mixed combinations of $p$-form fields in~\cite{Deffayet:2010zh}, a special case of which is the bi- and multi-Galileon theory~\cite{Padilla:2010de,Padilla:2010tj}. The multi-Galileon theory can also be derived from a probe brane embedded in a higher-dimensional bulk by extending the method of~\cite{deRham:2010eu} to the case with higher co-dimensions~\cite{Trodden:2011xh,Hinterbichler:2010xn}. The multi-Galileon theory can be promoted to involve arbitrary functions of the $N$ scalar fields $\phi^I$ ($I=1,\cdots, N$) and their kinetic terms $-\partial_\mu\phi^I\partial^\mu\phi^J/2$~\cite{Padilla:2012dx,Sivanesan:2013tba}. Similarly to the single-field Galileon, the multi-Galileon can be covariantized, while maintaining the second-order nature, to give~\cite{Padilla:2012dx} \begin{align} {\cal L}&=G_2-G_{3I}\Box\phi^I +G_4 R+G_{4,\langle IJ\rangle} \left(\Box\phi^I\Box\phi^J-\nabla_\mu\nabla_\nu\phi^I\nabla^\mu\nabla^\nu\phi^J\right) \notag \\ &\quad +G_{5I}G^{\mu\nu}\nabla_\mu\nabla_\nu\phi^I -\frac{1}{6}G_{I,\langle JK\rangle }\bigl[ \Box\phi^I\Box\phi^J\Box\phi^K- 3\Box\phi^{(I}\nabla_\mu\nabla_\nu\phi^J\nabla^\mu\nabla^\nu\phi^{K)} \notag \\ & \quad +2\nabla_\mu\nabla_\nu\phi^I\nabla^\nu\nabla^\lambda\phi^J\nabla_\lambda \nabla^\mu\phi^K \bigr],\label{eq:multi_generalized_Galileon} \end{align} where $G_2$, $G_{3I}$, $G_4$, and $G_{5I}$ are arbitrary functions of $\phi^I$ and $X^{IJ}:=-g^{\mu\nu}\partial_\mu\phi^I\partial_\nu\phi^J/2$, and we defined the symmetrized derivative for any function $f$ of $X^{IJ}$ as $f_{,\langle IJ\rangle}:=(\partial f/\partial X^{IJ}+\partial f/\partial X^{JI})/2$. For these functions we must require that \begin{align} G_{3I,\langle JK\rangle},\quad G_{4,\langle IJ\rangle,\langle KL\rangle}, \quad G_{5I,\langle JK\rangle},\quad G_{5I,\langle JK\rangle,\langle LM\rangle} \end{align} are symmetric in all of their indices $I,J, \cdots$, so that the field equations are of second order.
This may be regarded as the multi-scalar version of the Lagrangian~\eqref{eq:Lag_gen_Galileon}, obtained by generalizing the multi-Galileon theory on a fixed Minkowski background. Unlike the case of the single-field Galileon, the Lagrangian~\eqref{eq:multi_generalized_Galileon} does {\em not} give the most general multi-scalar-tensor theory with second-order field equations. Indeed, the probe-brane derivation of the multi-field DBI-type Galileon~\cite{RenauxPetel:2011dv,RenauxPetel:2011uk} yields terms that cannot be described by~\eqref{eq:multi_generalized_Galileon} but nevertheless have second-order field equations~\cite{Kobayashi:2013ina}. Thus, the ``Galileon route'' is unsuccessful. In contrast, Ohashi {\em et al.} followed closely the original derivation of the Horndeski theory and derived the most general second-order {\em equations of motion} for a {\em bi}-scalar-tensor theory~\cite{Ohashi:2015fma}. However, the corresponding action has not been obtained so far. More recently, new terms for the multi-Galileon that had been overlooked were proposed in~\cite{Allys:2016hfl}, and their covariant completion was obtained in~\cite{Akama:2017jsa}. These new terms can reproduce the multi-field DBI Galileon~\cite{Akama:2017jsa}. However, whether or not the most general second-order multi-scalar-tensor theory is described by the Lagrangian~\eqref{eq:multi_generalized_Galileon} plus these new terms remains an open question. \section{Horndeski theory and cosmology}\label{sec:cosmoH} A great variety of dark energy/modified gravity models have been proposed so far to account for the present accelerated expansion of the universe. Also in the context of the accelerated expansion of the early universe, namely, inflation, gravity modification is now a popular way of building models. (Actually, one of the earliest proposals of inflation already invoked higher curvature terms~\cite{Starobinsky:1979ty,Starobinsky:1980te}.) The Horndeski theory provides us with a useful tool to study such cosmologies in a unifying way. In this section, we review the applications of the Horndeski theory to cosmology. \subsection{Structure of the background equations} Let us review the derivation of the field equations for a spatially flat Friedmann-Lema\^itre-Robertson-Walker (FLRW) metric and a homogeneous scalar field, \begin{align} {\rm d} s^2=-N^2(t){\rm d} t^2+a^2(t)\delta_{ij}{\rm d} x^i{\rm d} x^j, \quad \phi = \phi(t). \end{align} Here, $N(t)$ can be set to $1$ by redefining the time coordinate, but we need to retain it for the moment in order to derive the background equation corresponding to the Friedmann equation (the $tt$ component of the gravitational field equations). First, let us consider the universe filled only with the scalar field. Substituting this metric and the scalar field into Eq.~\eqref{eq:Lag_gen_Galileon}, we get the action of the form \begin{align} S=\int {\rm d} t{\rm d}^3x\,L(N,\dot N; a, \dot a, \ddot a; \phi, \dot\phi,\ddot\phi), \label{eq:action_FLRW} \end{align} where a dot denotes differentiation with respect to $t$.
Varying this action with respect to $N$, $a$, and $\phi$, and then setting $N=1$, we obtain the following set of background equations, \begin{align} {\cal E}(H;\phi, \dot\phi)&:= -\frac{1}{a^3}\frac{\delta S}{\delta N}=0, \label{eq:00FLRW} \\ {\cal P}(H,\dot H; \phi,\dot\phi,\ddot\phi)&:= \frac{1}{3a^2}\frac{\delta S}{\delta a}=0, \label{eq:ijFLRW} \\ {\cal E}_\phi(H,\dot H; \phi,\dot\phi,\ddot\phi)&:= \frac{1}{Na^3}\frac{\delta S}{\delta \phi} =0, \label{eq:scalarFLRW} \end{align} with $H:=\dot a/a$ being the Hubble parameter. Equations~\eqref{eq:00FLRW} and~\eqref{eq:ijFLRW} correspond respectively to the $tt$ and $ij$ components of the gravitational field equations, and Eq.~\eqref{eq:scalarFLRW} is the equation of motion for $\phi$. Explicit expressions for ${\cal E}$, ${\cal P}$, and ${\cal E}_\phi$ are found in~\cite{Kobayashi:2011nu}. Although the action apparently depends on $\dot N$, $\ddot a$, and $\ddot\phi$, all the higher derivatives are canceled in the field equations, leading to the second-order system as expected. In particular, ${\cal E}(H;\phi,\dot\phi)=0$ is the constraint equation. It is interesting to see that ${\cal P}$ and ${\cal E}_\phi$ depend on both $\dot H$ and $\ddot\phi$ in general. This implies the kinetic mixing of gravity and the scalar field, which does not occur in Einstein gravity (plus $G_2$). (In Einstein gravity, ${\cal P}$ (respectively ${\cal E}_\phi$) is independent of $\ddot\phi$ (respectively $\dot H$).) In the traditional scalar-tensor theory whose Lagrangian is of the form ${\cal L}=G_2(\phi,X)+G_4(\phi)R$, this mixing can be undone by moving to the Einstein frame through the conformal transformation of the metric, $\tilde g_{\mu\nu}=C(\phi)g_{\mu\nu}$. However, once $G_3(\phi,X)$ is introduced, the mixing becomes essential and cannot be removed by such a field redefinition. This feature is called {\em kinetic gravity braiding}~\cite{Deffayet:2010qz,Pujolas:2011he}. Equations~\eqref{eq:00FLRW},~\eqref{eq:ijFLRW}, and~\eqref{eq:scalarFLRW} are useful for studying the background dynamics of inflation (and its alternatives), because the early universe can be described generically by a gravity-scalar system. However, if one considers the late-time universe based on the Horndeski theory, it is necessary to introduce other kinds of matter (dark matter, baryons, and radiation). In that case, assuming that the matter is minimally coupled to gravity, the background equations are given by \begin{align} {\cal E}=-\rho, \quad {\cal P}=-p,\quad {\cal E}_\phi = 0, \end{align} where $\rho$ and $p$ are respectively the energy density and pressure of the matter.
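As the simplest illustration of the structure of Eq.~\eqref{eq:00FLRW}, one can verify symbolically that for $G_2=X-V(\phi)$ and $G_4=M_{\rm Pl}^2/2$ (with $G_3=G_5=0$) the constraint ${\cal E}=0$ is just the Friedmann equation. The following Python/\texttt{sympy} sketch (illustrative only; the Einstein-Hilbert part of the minisuperspace Lagrangian has been integrated by parts) does this:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
M = sp.symbols('M', positive=True)      # M_Pl
N = sp.Function('N')(t)
a = sp.Function('a')(t)
phi = sp.Function('phi')(t)
V = sp.Function('V')

# minisuperspace Lagrangian for G2 = X - V, G4 = M^2/2
# (Einstein-Hilbert term reduced to -3 M^2 a adot^2 / N by parts)
L = -3*M**2*a*sp.diff(a, t)**2/N \
    + N*a**3*(sp.diff(phi, t)**2/(2*N**2) - V(phi))

calE = (-sp.diff(L, N)/a**3).subs(N, 1)  # E := -(1/a^3) dS/dN at N = 1
H = sp.diff(a, t)/a
print(sp.simplify(calE - (-3*M**2*H**2 + sp.diff(phi, t)**2/2 + V(phi))))  # 0
\end{verbatim}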
Thanks to general covariance, one may make use of the gauge transformation, $t\to t-T(t,\Vec{x})$, $\Vec{x}\to\Vec{x}-\Vec{\xi}(t,\Vec{x})$, to remove some of the perturbation variables. For example, fluctuations in the scalar field, $\delta\phi(t,\Vec{x})$, transform as \begin{align} \delta\phi\to \delta\phi+\dot\phi T, \end{align} and thus we are allowed to take $\delta\phi=0$ by choosing the time coordinate appropriately. This is called the {\em unitary gauge}, which is particularly useful and which we will use for the moment. Now all the fluctuations are in the metric, and in the ADM form we parametrize them as \begin{align} N=1+\delta N,\quad N_i=\partial_i\psi,\quad \gamma_{ij}=a^2e^{2\zeta}(e^h)_{ij}, \label{def:metric_pert} \end{align} where \begin{align} (e^h)_{ij}:=\delta_{ij}+h_{ij}+\frac{1}{2}h_{ik}h_{kj}+\cdots. \end{align} Here, $\delta N$, $\psi$, and $\zeta$ are scalar perturbations and $h_{ij}$ are tensor perturbations (gravitational waves) satisfying the transverse and traceless conditions, $\partial^i h_{ij}=0=h_i^i$. Writing the spatial metric as $\gamma_{ij}=a^2e^{2\zeta}(e^h)_{ij}$ rather than $\gamma_{ij}=a^2\left[(1+2\zeta)\delta_{ij}+h_{ij}\right]$ simplifies the computation of the action for the cosmological perturbations. Note that the spatial gauge transformation was used to put $\gamma_{ij}$ into the form given above. Substituting the metric~\eqref{def:metric_pert} into the action and expanding it to second order in perturbations, we obtain \begin{align} S^{(2)}=S^{(2)}_{\rm tensor} + S^{(2)}_{\rm scalar}, \end{align} with \begin{align} S^{(2)}_{\rm tensor}= \frac{1}{8}\int{\rm d} t{\rm d}^3x a^3\left[ {\cal G}_T\dot h_{ij}^2-\frac{{\cal F}_T}{a^2}(\partial_k h_{ij})^2 \right] \label{action2:tensor} \end{align} and \begin{align} S^{(2)}_{\rm scalar}=\int {\rm d} t{\rm d}^3x a^3 & \biggl[ -3{\cal G}_T\dot\zeta^2+\frac{{\cal F}_T}{a^2}(\partial\zeta)^2 +\Sigma\delta N^2 -2\Theta \delta N \frac{\partial^2\psi}{a^2} +2{\cal G}_T\dot\zeta\frac{\partial^2\psi}{a^2} \notag \\ & +6\Theta\delta N\dot\zeta -2{\cal G}_T\delta N\frac{\partial^2\zeta}{a^2} \biggr]. \label{action2:scalar1} \end{align} The coefficients are given explicitly by \begin{align} {\cal G}_T&:=2\left[G_4-2XG_{4X}-X\left(H\dot\phi G_{5X}-G_{5\phi}\right)\right], \label{stability:gt} \\ {\cal F}_T&:=2\left[G_4-X\left(\ddot\phi G_{5X}+G_{5\phi}\right)\right], \label{stability:ft} \\ \Sigma&:=XG_{2X}+2X^2G_{2XX}+12H\dot\phi XG_{3X} +6H\dot\phi X^2G_{3XX} -2XG_{3\phi}-2X^2G_{3\phi X} \notag \\ & \quad -6H^2G_4 +6\Bigl[H^2\left(7XG_{4X}+16X^2G_{4XX}+4X^3G_{4XXX}\right) \notag \\ & \quad -H\dot\phi\left(G_{4\phi}+5XG_{4\phi X}+2X^2G_{4\phi XX}\right) \Bigr] \notag \\ & \quad +30H^3\dot\phi XG_{5X}+26H^3\dot\phi X^2G_{5XX}+4H^3\dot\phi X^3G_{5XXX} \notag \\ & \quad -6H^2X\bigl(6G_{5\phi} +9XG_{5\phi X}+2 X^2G_{5\phi XX}\bigr), \\ \Theta&:=-\dot\phi XG_{3X}+ 2HG_4-8HXG_{4X} -8HX^2G_{4XX}+\dot\phi G_{4\phi}+2X\dot\phi G_{4\phi X} \notag \\ & \quad -H^2\dot\phi\left(5XG_{5X}+2X^2G_{5XX}\right) +2HX\left(3G_{5\phi}+2XG_{5\phi X}\right), \end{align} which depend on time in general. We see that time derivatives of $\delta N$ and $\psi$ do not appear in the quadratic action for the scalar perturbations~\eqref{action2:scalar1}. Therefore, variation with respect to $\delta N$ and $\psi$ yields the constraint equations, \begin{align} \Sigma\delta N -\Theta\frac{\partial^2\psi}{a^2}+3\Theta\dot\zeta -{\cal G}_T\frac{\partial^2\zeta}{a^2}&=0, \\ \Theta\delta N-{\cal G}_T\dot\zeta&=0.
\end{align} These equations allow us to express $\delta N$ and $\psi$ in terms of $\zeta$. Then, one can remove $\delta N$ and $\psi$ from Eq.~\eqref{action2:scalar1} and obtain the action written solely in terms of $\zeta$ (the curvature perturbation in the unitary gauge): \begin{align} S_\zeta^{(2)}= \int {\rm d} t{\rm d}^3x a^3 \left[ {\cal G}_S\dot\zeta^2-\frac{{\cal F}_S}{a^2}(\partial\zeta)^2 \right], \label{action2:scalar2} \end{align} where \begin{align} {\cal G}_S&:=\frac{\Sigma}{\Theta^2}{\cal G}_T^2+3{\cal G}_T, \label{stability:gs} \\ {\cal F}_S&:=\frac{1}{a}\frac{{\rm d}}{{\rm d} t}\left(\frac{a}{\Theta}{\cal G}_T^2\right)-{\cal F}_T. \label{stability:fs} \end{align} The general quadratic actions~\eqref{action2:tensor} and~\eqref{action2:scalar2} were derived in~\cite{Kobayashi:2011nu}. It is instructive to check here that the standard textbook result is reproduced in the case of general relativity $+$ a canonical scalar field, $G_2=X-V(\phi)$, $G_4=M_{\rm Pl}^2/2$, $G_3=G_5=0$. Obviously, we have ${\cal G}_T={\cal F}_T=M_{\rm Pl}^2$. Since $\Sigma = X-3M_{\rm Pl}^2H^2$ and $\Theta = M_{\rm Pl}^2H$, we have ${\cal G}_S=X/H^2$ and ${\cal F}_S=M_{\rm Pl}^2(-\dot H/H^2)=M_{\rm Pl}^2\epsilon$, where $\epsilon:=-\dot H/H^2$ is the slow-roll parameter. Using the background equation, $-M_{\rm Pl}^2\dot H=X$, it turns out that ${\cal G}_S={\cal F}_S=M_{\rm Pl}^2\epsilon$, and thus the standard result is obtained. The propagation speeds of the tensor and scalar modes are given respectively by \begin{align} c_{\rm GW}^2&:=\frac{{\cal F}_T}{{\cal G}_T},\label{speedofGW_H} \\ c_s^2&:=\frac{{\cal F}_S}{{\cal G}_S}. \end{align} These quantities must be positive, $c_{\rm GW}^2>0$, $c_s^2>0$, because otherwise each perturbation mode exhibits exponential growth. This is called the gradient instability. This instability is particularly dangerous for short-wavelength modes, because the time scale of the instability is proportional to the wavelength. In addition to the above stability conditions, we require that ${\cal G}_T>0$ and ${\cal G}_S>0$ in order to guarantee the positivity of the kinetic terms for $h_{ij}$ and $\zeta$, i.e., the absence of ghost instabilities. To sum up, for a given cosmological model to be stable, one must demand that \begin{align} {\cal G}_T>0,\quad {\cal F}_T>0, \quad {\cal G}_S>0,\quad {\cal F}_S>0.\label{stablity_conditions} \end{align} The equation of motion derived from~\eqref{action2:scalar2} is \begin{align} \frac{1}{a^3{\cal G}_S}\frac{{\rm d}}{{\rm d} t}\left( a^3{\cal G}_S\frac{{\rm d}\zeta}{{\rm d} t} \right)-\frac{c_s^2}{a^2}\partial^2\zeta=0. \end{align} On large (superhorizon\footnote{Since the sound speed $c_s$ is different from 1 in general, the horizon scale here should be understood as the sound horizon scale. The same remark applies to the tensor modes.}) scales, one may ignore the second term and obtain the solution \begin{align} \zeta(t,\Vec{x})\simeq C(\Vec{x})+D(\Vec{x})\int^t \frac{{\rm d} t'}{a^3(t'){\cal G}_S(t')}, \label{sol:superhorizon_zeta} \end{align} where $C$ and $D$ are integration functions. It is natural to assume that all the time-dependent functions (except, of course, for the scale factor, $a\sim e^{Ht}$) vary slowly during inflation, and hence ${\cal G}_S\simeq\,$const. If this is the case, the second term in Eq.~\eqref{sol:superhorizon_zeta} decays rapidly and thus can be neglected, leading to the conservation of $\zeta$ on superhorizon scales, $\dot\zeta\simeq 0$, in the Horndeski theory~\cite{Naruko:2011zk,Gao:2011mz}.
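This freezing can be exhibited numerically. The following minimal sketch (all numerical values, and the choice of constant ${\cal G}_S$ and $c_s$ on a de Sitter background, are ours and purely illustrative) integrates the mode equation above for a single Fourier mode and shows $\zeta$ oscillating inside the sound horizon and approaching a constant after exit:
\begin{verbatim}
# Minimal sketch (illustrative values): integrate, for a Fourier mode k,
#   d/dt( a^3 G_S dzeta/dt ) + a c_s^2 k^2 G_S zeta = 0
# on a de Sitter background a = exp(H t) with constant G_S and c_s.
import numpy as np
from scipy.integrate import solve_ivp

H, GS, cs, k = 1.0, 1.0, 0.5, 50.0   # units with H = 1

def rhs(t, y):
    zeta, pi = y                      # pi := a^3 G_S dzeta/dt
    a = np.exp(H*t)
    return [pi/(a**3*GS), -a*cs**2*k**2*GS*zeta]

# start well inside the sound horizon: cs*k/(aH) = 25 at t = 0
sol = solve_ivp(rhs, [0.0, 12.0], [1.0, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)
for t in [0.0, 3.0, 6.0, 9.0, 12.0]:
    print(t, cs*k/(np.exp(H*t)*H), sol.sol(t)[0])
# zeta settles to a constant once cs*k/(aH) drops below unity
\end{verbatim}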
This conservation of $\zeta$ generalizes the standard result. Note, however, that even in the case of general relativity + a canonical scalar field, the ultra slow-roll/nonattractor phase of inflation can appear, in which we have ${\cal G}_S\propto a^{-6}$ and the second term grows~\cite{Inoue:2001zt,Kinney:2005vj}. The nonattractor inflationary dynamics may be more complicated in the presence of the Galileon terms~\cite{Hirano:2016gmv}. Similarly, for tensor perturbations we have the superhorizon solution, \begin{align} h_{ij}(t,\Vec{x})\simeq C_{ij}(\Vec{x})+D_{ij}(\Vec{x})\int^t \frac{{\rm d} t'}{a^3(t'){\cal G}_T(t')}, \end{align} where $C_{ij}$ and $D_{ij}$ are integration functions, and we see that the second term corresponds to the decaying mode. However, ${\cal G}_T$ can vary rapidly in time in some ultra slow-roll models of inflation with nonminimal couplings between gravity and the scalar field (i.e., nonconstant $G_4$ and $G_5$). In such a model, the would-be decaying tensor mode can grow in a similar manner to the aforementioned growth of $\zeta$~\cite{Mylova:2018yap}. For the purpose of computing the power spectra of primordial perturbations from inflation, it is convenient to recast the quadratic actions~\eqref{action2:tensor} and~\eqref{action2:scalar2} into the canonically normalized form. For $\zeta$ we introduce the new time coordinate defined by ${\rm d} y:=(c_s/a){\rm d} t$ and the new variable \begin{align} u:=z\zeta,\quad z:=\sqrt{2}a({\cal F}_S{\cal G}_S)^{1/4}. \end{align} Then, we have \begin{align} S_\zeta^{(2)}=\frac{1}{2}\int {\rm d} y {\rm d}^3x\left[ (u')^2-(\partial u)^2+\frac{z''}{z}u^2 \right], \end{align} where a prime here denotes differentiation with respect to $y$. This is of the familiar ``Sasaki-Mukhanov'' form. Tensor perturbations can be analyzed in a similar way~\cite{Kobayashi:2011nu}. The power spectrum can be evaluated by following the standard procedure to quantize $u$~\cite{Mukhanov:1990me}. Let us assume for simplicity that the time-dependent coefficients in the quadratic action vary very slowly during inflation. In such a ``slow-varying'' limit, the power spectra for the curvature and tensor perturbations are given respectively by \begin{align} {\cal P}_\zeta &= \frac{{\cal G}_S^{1/2}}{2{\cal F}_S^{3/2}}\frac{H^2}{4\pi^2}, \\ {\cal P}_h &= \frac{8{\cal G}_T^{1/2}}{{\cal F}_T^{3/2}}\frac{H^2}{4\pi^2}, \end{align} evaluated at the (sound) horizon crossing time. The tensor-to-scalar ratio $r:={\cal P}_h/{\cal P}_\zeta$ is given by \begin{align} r=16\left(\frac{{\cal F}_S}{{\cal F}_T}\right)^{3/2} \left(\frac{{\cal G}_S}{{\cal G}_T}\right)^{-1/2}. \end{align} The standard expression $r=16\epsilon$ can be reproduced by substituting ${\cal G}_T={\cal F}_T=M_{\rm Pl}^2$ and ${\cal G}_S={\cal F}_S=M_{\rm Pl}^2\epsilon$, but in general the tensor-to-scalar ratio and the consistency relation can be nonstandard in the Horndeski theory. \subsubsection{Beyond linear order} With increasingly precise measurements of CMB anisotropy, it is important to study non-Gaussian signatures of primordial perturbations from inflation. For this purpose we need to compute the action to cubic (and higher) order in perturbations. Following the seminal work by Maldacena~\cite{Maldacena:2002vr}, this program can be carried out in the context of the Horndeski theory. The cubic action for the scalar perturbations in the Horndeski theory is given in~\cite{Gao:2011qe,DeFelice:2011uc}.
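For orientation, the amplitude of primordial non-Gaussianity is customarily quantified by the nonlinearity parameter $f_{\rm NL}$; in the local case it is defined through \begin{align} \zeta=\zeta_{\rm g}+\frac{3}{5}f_{\rm NL}\left(\zeta_{\rm g}^2-\langle\zeta_{\rm g}^2\rangle\right), \end{align} where $\zeta_{\rm g}$ is a Gaussian field, while more general types of non-Gaussianity are characterized by the amplitude of the bispectrum of $\zeta$ for a given momentum-space shape.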
It was pointed out in~\cite{RenauxPetel:2011sb} that no new operators appear compared to simpler k-inflation~\cite{Seery:2005wm,Chen:2006nt}, though the theory has four free functions and hence there is more freedom in adjusting the coefficient of each term in the cubic action. Shapes of non-Gaussianities have been investigated in more detail in~\cite{Ribeiro:2011ax,DeFelice:2013ar}. The cubic action for the tensor perturbations is presented in~\cite{Gao:2011vs}, where it was found that only two independent operators appear, including the one that is already present in general relativity. The non-Gaussian contribution from the new term in the Horndeski theory might in principle be detectable through CMB B-mode polarization if the corresponding coefficient is extremely large~\cite{Tahara:2017wud}. Cross-bispectra among tensor and scalar perturbations were computed in~\cite{Gao:2012ib}. \subsection{NEC-violating cosmologies and their stability}\label{sec:nogotheo} In this subsection, we discuss cosmological consequences of violation of the null energy condition (NEC) based on the Horndeski theory. See also Ref.~\cite{Rubakov:2014jja} for a mini-review on the same subject. The NEC demands the following bound on the energy-momentum tensor: \begin{align} T_{\mu\nu}k^\mu k^\nu \ge 0 \end{align} for any null vector $k^\mu$. In the context of cosmology, this condition is equivalent to \begin{align} \rho + p \ge 0, \end{align} and then in general relativity the NEC implies that \begin{align} \dot H\le 0\label{NEC-H} \end{align} through the Einstein equations. As there is no clear distinction between the energy-momentum tensor of the scalar field and the ``left-hand side'' (i.e., the geometrical part) of the gravitational field equations in scalar-tensor theories, in the following we mean Eq.~\eqref{NEC-H} by the NEC. In usual inflationary cosmology with a canonical scalar field, we have $-2M_{\rm Pl}^2\dot H = \rho+p=\dot\phi^2\ge 0$, and thus the NEC is automatically satisfied. This in particular implies that the spectrum of primordial tensor modes, ${\cal P}_h=2H^2/(\pi^2M_{\rm Pl}^2)$ evaluated at horizon crossing, must be red. Conversely, if the tensor spectrum turned out to have a blue tilt, the NEC would have to be violated during inflation by some nonstandard mechanism. Although inflation is a very attractive scenario, inflationary spacetime is past incomplete~\cite{Borde:1993xh,Borde:1996pt,Borde:2001nh} (see also~\cite{Yoshida:2018ndv}), which motivates nonsingular alternatives to inflation such as bouncing models (see, e.g.,~\cite{Lehners:2008vx,Novello:2008ra,Brandenberger:2009jq,Battefeld:2014uga,Cai:2014bea} for a review). In nonsingular cosmologies, there must be some interval during which the NEC is violated. A noncanonical scalar field or some other kind of matter is required to realize such nonsingular alternatives. It is therefore interesting to explore the possibilities of NEC-violating cosmology in scalar-tensor theories. Since we expect that the energy conditions are somehow related to the stability of spacetime, the key question is: can we construct {\em stable} NEC-violating cosmology? To answer this question, let us first consider Einstein gravity plus $G_2(\phi, X)$. The background equations in this case read \begin{align} 3M_{\rm Pl}^2H^2&=2XG_{2X}-G_2 , \\ -M_{\rm Pl}^2\left(3H^2+2\dot H\right) &= G_2 , \end{align} and hence $-M_{\rm Pl}^2 \dot H = XG_{2X}$.
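Meanwhile, the perturbation coefficients can be evaluated directly for this theory: one finds ${\cal G}_T={\cal F}_T=M_{\rm Pl}^2$ and $\Theta=M_{\rm Pl}^2H$, so that Eq.~\eqref{stability:fs} gives \begin{align} {\cal F}_S=\frac{1}{a}\frac{{\rm d}}{{\rm d} t}\left(\frac{aM_{\rm Pl}^2}{H}\right)-M_{\rm Pl}^2 =-M_{\rm Pl}^2\frac{\dot H}{H^2}, \end{align} independently of the form of $G_2$.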
The NEC can therefore be violated if the function $G_2(\phi,X)$ is chosen so that $G_{2X}<0$ occurs. However, the above expression shows that ${\cal F}_S=M_{\rm Pl}^2(-\dot H/H^2)<0$ whenever the NEC is violated, which implies that NEC-violating solutions are unstable. Thus, in this simplest case the NEC is closely related to stability. The situation drastically changes if one adds $G_3$ and the other more general terms that are included in the Horndeski theory, because the general expressions for the stability conditions, \eqref{stability:gt},~\eqref{stability:ft},~\eqref{stability:gs} and~\eqref{stability:fs}, are not correlated with the sign of $\dot H$. Therefore, one can construct NEC-violating stages that are nevertheless stable within the Galileon and Horndeski theories~\cite{Nicolis:2009qm} (see, however,~\cite{Sawicki:2012pz}). This opens up Pandora's box of nonsingular bouncing cosmology~\cite{Qiu:2011cy,Easson:2011zy,Osipov:2013ssa} as well as blue gravitational waves from inflation~\cite{Kobayashi:2010cm} (see, however,~\cite{Cai:2014uka}). A novel NEC-violating cosmological scenario called the {\it galilean genesis} was proposed based on the cubic Galileon theory~\cite{Creminelli:2010ba}. The Lagrangian for this scenario is given by \begin{align} {\cal L}=\frac{M_{\rm Pl}^2}{2}R-e^{2\phi/f}X-\frac{X}{\Lambda^3}\Box\phi+ \frac{X^2}{\Lambda^3f}, \end{align} where $f$ and $\Lambda$ are positive constants having the dimension of mass. This theory admits the following approximate solution valid for $M_{\rm Pl}(-t)\gg (f/\Lambda)^{3/2}$: \begin{align} a\simeq 1 + \frac{f^3}{8M_{\rm Pl}^2\Lambda^3}\frac{1}{(-t)^2}, \quad H\simeq \frac{f^3}{4M_{\rm Pl}^2\Lambda^3}\frac{1}{(-t)^3}, \quad e^{\phi/f}\simeq \sqrt{\frac{3f}{2\Lambda^3}}\frac{1}{(-t)} \quad (t<0).\label{genesis_solution} \end{align} As seen from~\eqref{genesis_solution}, the universe starts expanding from a low-energy, quasi-Minkowski state in the asymptotic past. Clearly, the NEC is violated. Nevertheless, we have \begin{align} {\cal G}_S\simeq {\cal F}_S\simeq 12M_{\rm Pl}^4(-t)^2\left(\frac{\Lambda}{f}\right)^3>0, \end{align} showing that this solution is stable. The galilean genesis thus has the potential to be an interesting alternative to inflation. This scenario has further been generalized and investigated in more detail in Refs.~\cite{LevasseurPerreault:2011mw,Liu:2011ns,Hinterbichler:2012mv,Wang:2012bq,Liu:2012ww,Creminelli:2012my,Hinterbichler:2012fr,Hinterbichler:2012yn,Liu:2013xt,Easson:2013bda,Nishi:2014bsa,Nishi:2015pta,Nishi:2016wty,Nishi:2016ljg,Ageeva:2018lko}. As an alternative to inflation, the genesis phase described by~\eqref{genesis_solution} is supposed to be matched onto a radiation-dominated universe across the reheating stage at $t\sim -(f/\Lambda)^{3/2}/M_{\rm Pl}$. Or, one may consider the initial genesis phase followed by inflation as an ``early-time completion'' of the inflationary scenario~\cite{Pirtskhalava:2014esa}. In any case, a problem arises in considering the whole history of such a singularity-free universe: gradient instabilities show up at some moment in that history. Not only the galilean genesis but also bouncing models suffer from the same problem. Several examples show that instabilities may occur at the transition from the NEC-violating phase to some subsequent phase or even in the far future after the transition~\cite{Easson:2011zy,Pirtskhalava:2014esa,Cai:2012va,Koehn:2013upa,Battarra:2014tga,Qiu:2015nha,Wan:2015hya,Kobayashi:2015gga}.
This implies that the instabilities may not be related directly to the violation of the energy condition. In fact, it can be proven that the appearance of gradient instabilities is generic to all nonsingular cosmological solutions in the Horndeski theory~\cite{Libanov:2016kfc,Kobayashi:2016xpl} (see also~\cite{Rubakov:2013kaa,Elder:2013gya}). As we have seen, one can construct an NEC-violating solution that is stable during a finite interval. What we will see below is that such a solution is nevertheless unstable once the whole history is taken into account. The key inequality follows from Eq.~\eqref{stability:fs} and the stability conditions: \begin{align} \frac{{\rm d}\xi}{{\rm d} t}>a{\cal F}_T>0 \quad (-\infty < t<\infty) ,\label{ineq:nons} \end{align} where $\xi:=a{\cal G}_T^2/\Theta$. For a stable, nonsingular cosmological solution we have $a\ge \,$const, ${\cal G}_T>0$, and $|\Theta|<\infty$. Therefore, $\xi$ must be a monotonically increasing function of time that never crosses zero, which means that $\xi\to\,$const as $t\to\infty$ or $-\infty$. (Note that $\Theta$ and hence $\xi$ can take either sign.) Integrating Eq.~\eqref{ineq:nons} from $-\infty$ to some $t$ and from some $t$ to $\infty$, one obtains \begin{align} \xi(t)-\xi(-\infty)>\int^t_{-\infty}a{\cal F}_T{\rm d} t, \quad \xi(\infty)-\xi(t)>\int^{\infty}_ta{\cal F}_T{\rm d} t. \label{ineq:geogw} \end{align} At least one of the two integrals must be convergent. Otherwise the stability conditions would be violated at some moment in the entire history of the universe. By designing the functions in the Horndeski action so that one of the integrals in~\eqref{ineq:geogw} is convergent, it is indeed possible to construct a stable, nonsingular cosmological solution~\cite{Kobayashi:2016xpl,Ijjas:2016vtq}. However, the convergent integral indicates that the spacetime is geodesically incomplete for the propagation of gravitons~\cite{Creminelli:2016zwa}. This can be understood by moving to the Einstein frame for gravitons via disformal transformation~\cite{Creminelli:2014wna}. Moreover, the normalization of vacuum quantum fluctuations tells us that they would grow and diverge if ${\cal F}_T$ approaches zero sufficiently fast either in the asymptotic past or the future, which implies that the tensor sector is pathological. If one requires geodesic completeness for gravitons and thereby avoids this subtle behavior, then stable, nonsingular cosmologies are prohibited within the Horndeski theory. Note that this no-go theorem cannot tell us when the gradient instability shows up. That moment may lie in the remote future, long after the early NEC-violating phase. Several comments are now in order. First, the no-go theorem for nonsingular cosmologies can be extended to include multiple components other than the Horndeski scalar~\cite{Creminelli:2016zwa,Kolevatov:2016ppi,Akama:2017jsa} and to the spatially open universe~\cite{Akama:2018cqv}. Second, there is some debate about zero-crossing of $\Theta$ and the validity of using the curvature perturbation in the unitary gauge, $\zeta$~\cite{Battarra:2014tga,Quintin:2015rta,Ijjas:2017pei,Dobre:2017pnt,Mironov:2018oec}. Third, from the effective field theory viewpoint, the strong coupling scale may cut off the instabilities~\cite{Koehn:2015vvy} (see also~\cite{deRham:2017aoj}).
Finally, the no-go theorem can be circumvented in scalar-tensor theories beyond Horndeski~\cite{Creminelli:2016zwa,Cai:2016thi,Cai:2017tku,Cai:2017dyi,Kolevatov:2017voe,Mironov:2018oec,Ye:2019frg}.\footnote{As will be argued in Sec.~\ref{sec:dhost}, only theories that can be generated from the Horndeski theory via disformal transformation~\eqref{def:disformalTr} are phenomenologically viable. Since the disformal transformation is just a field redefinition, one may wonder why the no-go theorem can be evaded in theories beyond (and disformally related to) Horndeski. The trick is that the disformal transformation that generates the theories admitting stable nonsingular cosmology is singular at some moment~\cite{Creminelli:2016zwa}.} \subsection{Inclusion of matter} So far we have considered cosmological perturbations in the universe dominated by $\phi$, bearing the application to the early universe in mind. Let us extend the previous results to include other kinds of matter, since in the late universe matter perturbations are also important. \subsubsection{Stability conditions} As additional matter, we are interested in an irrotational, barotropic perfect fluid minimally coupled to the metric. Such a fluid can be mimicked by a k-essence field $\chi$ whose action $S_m$ is of the form \begin{align} S_m=\int {\rm d}^4x\sqrt{-g}P(Y), \quad Y:=-\frac{1}{2}g^{\mu\nu}\partial_\mu\chi\partial_\nu\chi. \end{align} Introducing the k-essence field as a perfect fluid is a concise and useful technique to treat the fluid at the action level. The energy-momentum tensor of $\chi$ is given by $T_{\mu\nu}=2P_Y\partial_\mu\chi \partial_\nu\chi +Pg_{\mu\nu}$, from which we see that the energy density, pressure, and four-velocity of this fluid are expressed as $\rho=2YP_Y-P$, $p=P$, and $u_\mu=-\partial_\mu\chi/\sqrt{2Y}$. The background equation for $\chi$, which is equivalent to $\nabla_\nu T^\nu_{\mu}=0$, reads \begin{align} \frac{{\rm d}}{{\rm d} t}\left(a^3P_Y\dot\chi\right)=0\quad \Rightarrow\quad \ddot\chi+3c_m^2H\dot\chi = 0,\label{eom_background_chi} \end{align} where \begin{align} c_m^2:=\frac{\dot p}{\dot \rho}=\frac{P_Y}{P_Y+2YP_{YY}}, \end{align} is the sound speed squared of the matter. For $P\propto Y^n$, we have $w:=p/\rho=\,$const$\,=1/(2n-1)\, (=c_m^2)$~\cite{Matarrese:1984zw,Garriga:1999vw}; for example, $P\propto Y^2$ describes radiation, $w=c_m^2=1/3$. This implies that one must be careful when taking the limit of pressureless dust, $c_m^2,w\to 0$, which is singular (see~\cite{Boubekeur:2008kn}). We therefore assume that $c_m^2\neq 0$ for the moment. Expanding the action to second order in perturbations, we obtain $S^{(2)}=S_{\rm tensor}^{(2)}+S_{\rm scalar}^{(2)}+S_m^{(2)}$, where $S_{\rm tensor}^{(2)}$ and $S_{\rm scalar}^{(2)}$ are given by Eqs.~\eqref{action2:tensor} and~\eqref{action2:scalar1}, respectively. The contribution from the matter action, $S_m^{(2)}$, is given by \begin{align} S_m^{(2)}&=\int {\rm d} t{\rm d}^3x \frac{a^3P_Y}{c_m^2}\left[ Y\delta N^2-\dot\chi\left(\delta N-3c_m^2\zeta\right)\dot{\delta\chi} +c_m^2\dot\chi \frac{\partial^2\psi}{a^2}\delta\chi+\frac{1}{2} \dot{\delta\chi}^2-\frac{c_m^2}{2a^2}(\partial\delta\chi)^2 \right],\label{action:chi2} \end{align} where $\delta\chi=\delta\chi(t,\Vec{x})$ is a fluctuation of $\chi$. Since the tensor sector remains unaltered by the inclusion of the matter, we focus on the scalar sector.
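As a quick consistency check of the fluid description above, the relation $w=c_m^2=1/(2n-1)$ for $P\propto Y^n$ can be verified symbolically; a minimal sympy sketch (purely illustrative):
\begin{verbatim}
# Minimal symbolic check: for P(Y) = Y**n one has
#   w = P/rho = 1/(2n-1)  and  c_m^2 = P_Y/(P_Y + 2 Y P_YY) = w.
import sympy as sp

Y, n = sp.symbols('Y n', positive=True)
P = Y**n
P_Y, P_YY = sp.diff(P, Y), sp.diff(P, Y, 2)

rho = 2*Y*P_Y - P                         # energy density
print(sp.simplify(P/rho))                 # 1/(2*n - 1)
print(sp.simplify(P_Y/(P_Y + 2*Y*P_YY)))  # 1/(2*n - 1)
\end{verbatim}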
It follows from $\delta S^{(2)}/\delta(\delta N)=0$ and $\delta S^{(2)}/\delta\psi = 0$ that \begin{align} \Sigma\delta N -\Theta\frac{\partial^2\psi}{a^2}+3\Theta\dot\zeta -{\cal G}_T\frac{\partial^2\zeta}{a^2}+\frac{YP_Y}{c_m^2} \delta N -\frac{P_Y}{2c_m^2}\dot\chi\dot{\delta\chi}&=0, \\ \Theta\delta N-{\cal G}_T\dot\zeta -\frac{P_Y}{2}\dot\chi{\delta\chi} &=0.\label{momentum_const_matter} \end{align} Similarly to the previous analysis without $\chi$, one can eliminate $\delta N$ and $\psi$ from the quadratic action by using Eq.~\eqref{momentum_const_matter}. The reduced action written solely in terms of $\zeta$ and $\delta\chi$ takes the form~\cite{DeFelice:2011bh} \begin{align} S^{(2)} &=\int{\rm d} t{\rm d}^3x a^3 \left[ G_{AB}\dot q^A\dot q^B-\frac{1}{a^2}F_{AB}\partial q^A\cdot\partial q^B+\cdots \right], \end{align} where \begin{align} q^A:=\left(\zeta, \frac{{\cal G}_T}{\Theta}\frac{\delta\chi}{\dot\chi} \right), \end{align} and \begin{eqnarray} G_{AB} = \left( \begin{array}{cc} {\cal G}_S+Z & -Z \\ -Z & Z \end{array} \right), \quad F_{AB} = \left( \begin{array}{cc} {\cal F}_S & -c_m^2Z \\ -c_m^2Z & c_m^2Z \end{array} \right), \end{eqnarray} with \begin{align} Z:=\left(\frac{{\cal G}_T}{\Theta}\right)^2\frac{\rho+p}{2c_m^2}. \end{align} Here we only write the terms that are relevant to ghost and gradient instabilities. To avoid ghost instabilities we require that $G_{AB}$ is a positive definite matrix. This is equivalent to ${\cal G}_S>0$ and $Z>0$. The propagation speeds of the two scalar modes are determined by solving det$(v^2G_{AB}-F_{AB})=0$, yielding $v^2=({\cal F}_S-c_m^2Z)/{\cal G}_S$ and $v^2=c_m^2$. Thus, the stability conditions in the presence of an additional perfect fluid are summarized as \begin{align} {\cal G}_S>0,\quad \rho+p>0,\quad c_m^2>0, \quad {\cal F}_{S}> \frac{1}{2}\left(\frac{{\cal G}_T}{\Theta}\right)^2(\rho+p). \end{align} It can be seen that the conditions imposed on the fluid component are quite reasonable. \subsubsection{Matter density perturbations} In late-time cosmology, we are often interested in the evolution of the density perturbations of pressureless matter on subhorizon scales. The analysis is usually done in the Newtonian gauge, in which the metric takes the form \begin{align} {\rm d} s^2=-[1+2\Phi(t,\Vec{x})]{\rm d} t^2+a^2[1-2\Psi(t,\Vec{x})]\delta_{ij}{\rm d} x^i{\rm d} x^j, \label{metric:Newtonian} \end{align} with a nonvanishing scalar-field fluctuation, \begin{align} \phi=\phi(t)+\delta\phi(t,\Vec{x}). \end{align} One can move from the unitary gauge to the Newtonian gauge by performing the coordinate transformation $t_N=t-T$ such that \begin{align} \Phi=\delta N+\dot T,\quad \Psi=-\zeta-HT, \quad 0=\psi-T,\quad \delta\phi=0+\dot\phi T.\label{eq:uni-to-N1} \end{align} The fluctuation of $\chi$ in the Newtonian gauge is given by \begin{align} \delta \chi_N=\delta \chi+\dot\chi T.\label{eq:uni-to-N2} \end{align} Substituting Eqs.~\eqref{eq:uni-to-N1} and~\eqref{eq:uni-to-N2} into Eqs.~\eqref{action2:scalar1} and~\eqref{action:chi2}, we obtain the Newtonian gauge expression for the quadratic action. As we are interested in the quasi-static evolution of the perturbations inside the (sound) horizon, we assume that $\dot\varepsilon\sim H\varepsilon \ll \partial\varepsilon/a$ ($\varepsilon=\Phi,\Psi, H\delta\phi/\dot\phi$). We will take the pressureless limit $c_m^2\to0$, which is apparently singular. Therefore, we carefully retain the would-be singular terms in this limit.
The resultant action in the quasi-static approximation is given by \begin{align} S^{(2)}_{\rm QS}= \int{\rm d} t{\rm d}^3x&\biggl\{a \left[ {\cal F}_T(\partial\Psi)^2-2{\cal G}_T\partial\Phi\partial\Psi +b_0H^2(\partial T)^2 -2b_1H\partial T\partial\Psi -2b_2H\partial T\partial\Phi \right] \notag \\ & + \frac{a^3 P_Y}{2} \left[ -\frac{1}{a^2}(\partial\delta\chi_N)^2 + \frac{1}{c_m^2} \left(\dot{\delta\chi}_N-\dot\chi\Phi\right)^2 \right]\biggr\},\label{action:matter_density} \end{align} where $T=\delta\phi/\dot\phi$ and the coefficients are defined as \begin{align} b_0&:=\frac{1}{H^2}\left[ \dot\Theta+H\Theta +H^2({\cal F}_T-2{\cal G}_T)-2H\dot{\cal G}_T +YP_Y \right],\label{def:b0} \\ b_1&:=\frac{1}{H}\left[\dot{\cal G}_T+H({\cal G}_T-{\cal F}_T)\right], \\ b_2&:=\frac{1}{H}\left(H{\cal G}_T-\Theta \right). \end{align} In Eq.~\eqref{def:b0} one may replace $YP_Y$ with $\rho/2$ in the pressureless limit. Note that there could be terms of the form $m^2\varepsilon^2$ (without spatial derivatives) which are larger than ${\cal O}(H^2\varepsilon^2)$ and can be as large as ${\cal O}((\partial\varepsilon)^2)$, but for simplicity we have ignored such terms. The equations of motion derived from~\eqref{action:matter_density} are \begin{align} \delta\Psi:&\quad \partial^2\left({\cal F}_T\Psi-{\cal G}_T\Phi -b_1H T\right)=0, \label{eom1_density_pert} \\ \delta \Phi:&\quad \partial^2\left({\cal G}_T\Psi +b_2HT\right)= \frac{a^2P_Y}{2c_m^2}\left(\dot\chi\dot{\delta\chi}_N-\dot\chi^2\Phi\right) \;\left(=\frac{\delta\rho}{2}\right), \label{eom2_density_pert} \\ \delta T:&\quad \partial^2\left(b_0HT-b_1\Psi-b_2\Phi \right)=0, \label{eom3_density_pert} \end{align} and \begin{align} \delta \chi_N:\quad \frac{{\rm d}}{{\rm d} t}\left[\frac{a^3P_Y}{c_m^2}\left(\dot{\delta\chi}_N-\dot\chi\Phi\right) \right]=aP_Y\partial^2\delta\chi_N,\label{eom4_density_pert} \end{align} where the right-hand side of Eq.~\eqref{eom2_density_pert} may be replaced with the density perturbation by noting that \begin{align} \delta\rho =\frac{P_Y}{c_m^2}\left(\dot\chi\dot{\delta\chi}_N-\dot\chi^2\Phi\right). \end{align} Thus, the apparently singular behavior of this equation in the $c_m^2\to 0$ limit can be eliminated. Taking $b_0=b_1=b_2=0$ in Eqs.~\eqref{eom1_density_pert}--\eqref{eom3_density_pert}, one recovers the standard result in Einstein gravity. Solving the algebraic equations~\eqref{eom1_density_pert}--\eqref{eom3_density_pert}, one arrives at the modified Poisson equation~\cite{DeFelice:2011hq}, \begin{align} \frac{1}{a^2}\partial^2\Phi = 4\pi G_{\rm eff}(t)\delta\rho, \quad 8\pi G_{\rm eff}:=\frac{b_0{\cal F}_T-b_1^2}{b_0{\cal G}_T^2+2b_1b_2{\cal G}_T+b_2^2{\cal F}_T}. \label{eq:mod_Poiss} \end{align} The effective gravitational coupling $G_{\rm eff}$ can be different from the Newton constant, and, as seen below, it affects the evolution of the density perturbations. The ratio \begin{align} \eta(t):=\frac{\Psi}{\Phi } =\frac{b_0{\cal G}_T+b_1b_2}{b_0{\cal F}_T-b_1^2} \end{align} is also an important quantity because $\eta\neq 1$ if gravity is nonstandard. Since the bending of light depends on $\Phi+\Psi$, weak-lensing observations are useful to test $\eta\neq 1$. Note that the above expressions cannot be used for $f(R)$ and chameleon models of dark energy, because we dropped the mass term for simplicity.
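To illustrate how Eq.~\eqref{eq:mod_Poiss} and the slip parameter $\eta$ are evaluated in practice, here is a minimal numerical sketch; the helper function and all numbers assigned to $b_0$, $b_1$, $b_2$ are ours and purely illustrative (units with $M_{\rm Pl}=1$):
\begin{verbatim}
# Minimal sketch: evaluate 8 pi G_eff and eta = Psi/Phi from the
# coefficients b0, b1, b2 and F_T, G_T; all values are illustrative.
def geff_and_eta(b0, b1, b2, FT, GT):
    eight_pi_Geff = (b0*FT - b1**2)/(b0*GT**2 + 2*b1*b2*GT + b2**2*FT)
    eta = (b0*GT + b1*b2)/(b0*FT - b1**2)
    return eight_pi_Geff, eta

FT = GT = 1.0  # units with M_Pl = 1

# Einstein-gravity-like case: b1 = b2 = 0 (b0 nonzero but arbitrary)
print(geff_and_eta(1.0, 0.0, 0.0, FT, GT))   # (1.0, 1.0)

# A hypothetical O(1) modification of gravity
print(geff_and_eta(1.0, 0.3, -0.2, FT, GT))  # G_eff != G_N, eta != 1
\end{verbatim}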
See Refs.~\cite{Noller:2013wca,Sawicki:2015zya,Chiu:2015voa} for the limits of the quasi-static assumption and Refs.~\cite{DeFelice:2011hq,Gleyzes:2013ooa,Bellini:2014fua} for the complete expressions of the equations (without using the quasi-static approximation). The equation of motion~\eqref{eom4_density_pert} can be written as \begin{align} \frac{{\rm d}}{{\rm d} t}\left(a^3\delta\rho\right) -3c_m^2H\left(a^3\delta\rho\right) = aP_Y\dot\chi \partial^2\delta\chi_N, \end{align} where we used~\eqref{eom_background_chi}. Using~\eqref{eom_background_chi} again, one finds \begin{align} \frac{{\rm d}}{{\rm d} t}\left\{a^2\left[ \frac{{\rm d}}{{\rm d} t}\left(a^3\delta\rho\right) -3c_m^2H\left(a^3\delta\rho\right) \right]\right\}=a^3 \partial^2\left(c_m^2\delta\rho + 2YP_Y\Phi\right). \end{align} Now we can take the limit $c_m^2\to 0$ and $2YP_Y\to\rho$ safely to get \begin{align} \ddot\delta + 2H\dot \delta = \frac{1}{a^2}\partial^2\Phi, \label{eq:evol_dens} \end{align} where $\delta:=\delta\rho/\rho$. Thus, as expected, the familiar evolution equation for the density contrast $\delta$ is recovered from the equation of motion for $\delta\chi_N$ in the pressureless limit. Combining this with the modified Poisson equation~\eqref{eq:mod_Poiss}, one can derive the closed-form evolution equation for $\delta$. Instead of introducing the k-essence field $\chi$, one may replace the second line in Eq.~\eqref{action:matter_density} with \begin{align} -a^3\Phi\delta\rho, \end{align} and use the usual fluid equations for pressureless dust. This is a simpler procedure to arrive at the same result. \section{Beyond Horndeski}\label{sec:beyondH} So far we have considered the most general scalar-tensor theory having second-order equations of motion and its application to cosmology. Thanks to this second-order nature, the theory is obviously free of the Ostrogradsky instability. However, it should be emphasized that second-order equations of motion are {\em not} necessary for the absence of the Ostrogradsky instability in theories with multiple fields. To see this, let us consider a simple toy model in mechanics whose Lagrangian is given by~\cite{Langlois:2015cwa} \begin{align} L=\frac{a}{2}\ddot\phi^2+b\ddot\phi \dot q+\frac{c}{2}\dot q^2+\frac{1}{2}\dot\phi^2 -\frac{1}{2}\phi^2-\frac{1}{2}q^2. \end{align} Here, the coefficients $a$, $b$, and $c$ are assumed to be constants. The Euler-Lagrange equations are of higher order in general: \begin{align} a\ddddot\phi +b\dddot q - \ddot\phi - \phi &=0,\label{deg-eom1} \\ b\dddot \phi + c\ddot q + q&=0,\label{deg-eom2} \end{align} implying that the system contains an extra degree of freedom and hence suffers from the Ostrogradsky ghost. However, if the kinetic matrix constructed from the highest derivative terms, \begin{align} M= \begin{pmatrix} a & b \\ b & c \end{pmatrix}, \end{align} is degenerate, i.e., $ac-b^2=0$, then the system contains only 2 degrees of freedom. Indeed, if $ac-b^2=0$ is satisfied, we can combine the equations of motion~\eqref{deg-eom1} and~\eqref{deg-eom2} to reduce the number of derivatives. First, $c\times$\eqref{deg-eom1}$-b\times {\rm d}$\eqref{deg-eom2}$/{\rm d} t$ gives \begin{align} \ddot\phi +\frac{b}{c}\dot q + \phi=0.\label{deg-eom3} \end{align} Then, ${\rm d}$\eqref{deg-eom3}$/{\rm d} t$ is used to remove $\dddot\phi$ from Eq.~\eqref{deg-eom2}, yielding \begin{align} \left(1-\frac{b^2}{c^2}\right)\ddot q -\frac{b}{c}\dot\phi + \frac{1}{c}q=0.
\label{deg-eom4} \end{align} We thus arrive at the two second-order equations of motion~\eqref{deg-eom3} and~\eqref{deg-eom4} for $\phi$ and $q$. This shows that the degenerate system is free of the Ostrogradsky ghost and hence is healthy despite the higher-order Euler-Lagrange equations. In this section, we will briefly explore such healthy degenerate higher-order theories containing the metric and a scalar field and extend the Horndeski theory. The reader is referred to~\cite{Langlois:2017mdk,Langlois:2018dxi} for a more complete review on this topic. \subsection{Degenerate higher-order scalar-tensor theories}\label{sec:dhost} The first example of degenerate higher-order scalar-tensor (DHOST) theories beyond Horndeski~\cite{Zumalacarregui:2013pma} was obtained by performing a disformal transformation~\cite{Bekenstein:1992pj} \begin{align} g_{\mu\nu}\to \tilde g_{\mu\nu}=C(\phi,X)g_{\mu\nu}+D(\phi,X)\phi_\mu\phi_\nu. \label{def:disformalTr} \end{align} This is a generalization of the familiar conformal transformation, $\tilde g_{\mu\nu}=C(\phi)g_{\mu\nu}$. The disformal transformation~\eqref{def:disformalTr} is invertible if \begin{align} C(C-XC_X+2X^2D_X)\neq 0.\label{condition:invert} \end{align} Since the disformal transformation contains derivatives of $\phi$, the theory transformed from Horndeski has higher-order field equations.\footnote{If both $C$ and $D$ depend only on $\phi$, the transformed field equations remain of second order and so the Horndeski theory is mapped to Horndeski~\cite{Bettoni:2013diz}.} Nevertheless, it is a degenerate theory with $(2+1)$ degrees of freedom because an invertible field redefinition does not change the number of physical degrees of freedom~\cite{Arroja:2015wpa,Domenech:2015tca,Takahashi:2017zgr}. This example implies the existence of a wider class of healthy scalar-tensor theories than the Horndeski class. Degenerate higher-order scalar-tensor theories have been constructed and investigated systematically in~\cite{Langlois:2015cwa,Langlois:2015skt,Crisostomi:2016czh,Achour:2016rkg,BenAchour:2016fzp,Langlois:2017mxy}. Let us follow Ref.~\cite{Langlois:2015cwa} and consider the extension of Horndeski's $G_4$ Lagrangian given by \begin{align} {\cal L}=f(\phi,X)R+\sum_{I=1}^5A_I(\phi,X)L_I,\label{qDHOSTL1} \end{align} where \begin{align} &L_1=\phi_{\mu\nu}\phi^{\mu\nu},\quad L_2=(\Box\phi)^2, \quad L_3=\Box\phi \phi^\mu\phi^\nu\phi_{\mu\nu}, \notag \\ & L_4=\phi^\mu\phi_{\mu\alpha}\phi^{\alpha \nu}\phi_\nu, \quad L_5=(\phi^\mu\phi^\nu\phi_{\mu\nu})^2. \end{align} These five constituents exhaust all the possible quadratic terms in second derivatives of $\phi$, and the Horndeski theory is the special case with $A_2=-A_1=f_X$ and $A_3=A_4=A_5=0$. The scalar field (respectively, the metric) corresponds to $\phi$ (respectively, $q$) in the previous mechanical toy model. By inspecting the structure of the highest derivative terms in~\eqref{qDHOSTL1},\footnote{We require the degeneracy in any coordinate system. It is argued in~\cite{DeFelice:2018mkq} that one can relax this requirement and consider theories that are degenerate when restricted to the unitary gauge.} one finds that the degeneracy conditions are given by three equations relating the six functions in the Lagrangian, leaving three arbitrary functions (except for some special cases). The degenerate theories whose Lagrangian is of the form~\eqref{qDHOSTL1} are called quadratic DHOST theories.
Note that one is free to add to~\eqref{qDHOSTL1} the Horndeski terms $G_2(\phi,X)-G_3(\phi,X)\Box\phi$, because these two terms have nothing to do with the degeneracy conditions. Quadratic DHOST theories are classified into several subclasses~\cite{Langlois:2015cwa,Crisostomi:2016czh,Achour:2016rkg}. Of particular importance among them is the so-called class Ia, which is characterized by \begin{align} A_2&=-A_1, \\ A_4&=\frac{1}{2(f+2XA_1)^2}[ 8XA_1^3+(3f+16Xf_X)A_1^2-X^2fA_3^2+2X(4Xf_X-3f)A_1A_3 \notag \\ &\qquad\qquad\qquad\quad +2f_X(3f+4Xf_X)A_1+2f(Xf_X-f)A_3+3ff_X^2 ], \\ A_5&= -\frac{\left(f_X+A_1+XA_3\right) \left(2fA_3-f_XA_1-A_1^2+3XA_1A_3\right)}{2(f+2XA_1)^2} , \end{align} with $f+2XA_1\neq 0$. (Recall that we are using the notation $X:=-g^{\mu\nu}\phi_\mu\phi_\nu/2$.) The arbitrary functions are thus taken to be $f$, $A_1$, and $A_3$. Cosmology in this class of DHOST theories has been studied in Refs.~\cite{Crisostomi:2017pjs,Frusciante:2018tvu,Crisostomi:2018bsp}, where it is demonstrated that the apparently higher-order equations of motion can be reduced to a second-order system for the scale factor and the scalar field. Clearly, the Horndeski theory is included in class Ia. Another important particular case is (a subclass of) the GLPV theory~\cite{Gleyzes:2014dya,Gleyzes:2014qga} satisfying \begin{align} A_2=-A_1=f_X+XA_3\quad \Rightarrow\quad A_4=-A_3, \quad A_5=0. \end{align} In this case one has two arbitrary functions $f$ and $A_3$, and the Horndeski theory is reproduced by further taking $A_3=0$. The Lagrangian for the GLPV theory is written explicitly as \begin{align} {\cal L}_{\rm GLPV}&=fR+f_X\left[(\Box\phi)^2-\phi_{\mu\nu}\phi^{\mu\nu}\right] \notag \\ &\quad +A_3\left\{ X\left[(\Box\phi)^2-\phi_{\mu\nu}\phi^{\mu\nu}\right] +\Box\phi \phi^\mu\phi^\nu\phi_{\mu\nu}- \nabla_\mu X\nabla^\mu X \right\}.\label{GLPV4L} \end{align} Interestingly, the second line (with $A_3=\,$const) can be obtained from a naive, minimal covariantization of the Galileon theory (see the second line in Eq.~\eqref{eq:Lag_original_Galileon1}). In Sec.~\ref{subsec:GtoH} we introduced the counter term to cancel the higher derivatives which appear upon covariantization. However, this example shows that without the counter term we still have a healthy degenerate theory~\cite{Deffayet:2015qwa}. A notable property of DHOST theories in class Ia is that all the Lagrangians can be mapped into a Horndeski Lagrangian through a disformal transformation~\eqref{def:disformalTr}~\cite{Achour:2016rkg}. In other words, one can remove two of the three functions of $\phi$ and $X$ in the quadratic DHOST sector by using the two functions, $C(\phi,X)$ and $D(\phi,X)$, in the disformal transformation, to move into a ``Horndeski frame'' with a single function $G_4(\phi,X)$ at quadratic order. At this point it is worth emphasizing that class Ia DHOST theories in the presence of minimally coupled matter are equivalent to Horndeski with disformally coupled matter, but {\em not} to Horndeski with minimally coupled matter. This fact is crucial in particular to the screening mechanism discussed in the next section. Subclasses other than class Ia are phenomenologically unacceptable. In these subclasses, the gradient terms in the quadratic actions for scalar and tensor cosmological perturbations have opposite signs (i.e., one of the two modes is unstable), or tensor perturbations are nondynamical~\cite{Langlois:2017mxy,deRham:2016wji}. Therefore, only the DHOST theories disformally related to Horndeski can be viable.
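As a consistency check of these relations, one can verify symbolically that the GLPV choice $A_1=-(f_X+XA_3)$ indeed turns the class Ia expressions into $A_4=-A_3$ and $A_5=0$; a minimal sympy sketch (treating $f$ and $f_X$ as independent symbols, which suffices for this algebraic identity):
\begin{verbatim}
# Minimal symbolic check of the class Ia relations at the GLPV point:
#   A1 = -(f_X + X A3)  =>  A4 = -A3  and  A5 = 0.
import sympy as sp

X, f, fX, A3 = sp.symbols('X f f_X A_3')
A1 = -(fX + X*A3)

A4 = (8*X*A1**3 + (3*f + 16*X*fX)*A1**2 - X**2*f*A3**2
      + 2*X*(4*X*fX - 3*f)*A1*A3 + 2*fX*(3*f + 4*X*fX)*A1
      + 2*f*(X*fX - f)*A3 + 3*f*fX**2)/(2*(f + 2*X*A1)**2)
A5 = -(fX + A1 + X*A3)*(2*f*A3 - fX*A1 - A1**2 + 3*X*A1*A3) \
     /(2*(f + 2*X*A1)**2)

print(sp.simplify(A4 + A3))  # 0
print(sp.simplify(A5))       # 0
\end{verbatim}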
In this subsection we have focused for simplicity on DHOST theories whose Lagrangian is a quadratic polynomial in $\phi_{\mu\nu}$. One can perform a similar manipulation to construct cubic DHOST theories as an extension of Horndeski's $G_5$ Lagrangian (i.e., DHOST theories whose Lagrangian is a cubic polynomial in $\phi_{\mu\nu}$)~\cite{BenAchour:2016fzp}, though their classification is much more involved. Cubic DHOST theories disformally disconnected from Horndeski also exhibit gradient instabilities in tensor or scalar modes. One can go beyond the polynomial assumption and generate a novel family of DHOST theories from nondegenerate theories via a noninvertible disformal transformation with \begin{align} D(\phi,X)=\frac{C(\phi,X)}{2X}+F(\phi), \end{align} where $C(\phi,X)$ and $F(\phi)$ are arbitrary (see Eq.~\eqref{condition:invert})~\cite{Takahashi:2017pje,Langlois:2018jdg}. This is essentially the field redefinition used in the context of mimetic gravity~\cite{Chamseddine:2013kea,Chamseddine:2014vna} (see~\cite{Sebastiani:2016ras} for a review). The idea behind this is that a noninvertible field redefinition can change the number of dynamical degrees of freedom. New DHOST theories thus generated are not disformally connected to Horndeski in general. Such ``mimetic DHOST'' theories suffer from gradient instabilities of tensor or scalar modes~\cite{Takahashi:2017pje,Langlois:2018jdg} (see also~\cite{Ramazanov:2016xhp,Ganz:2018mqi} for more about this instability issue). The idea of degenerate theories can be extended to include more than one higher-derivative field~\cite{Motohashi:2016ftl,Crisostomi:2017aim}, though it seems challenging to construct concrete nontrivial examples of a multi-scalar version of DHOST theories. Degenerate theories involving only the metric were explored in~\cite{Crisostomi:2017ugk} under the name of ``beyond Lovelock gravity.'' \subsection{After GW170817} Measuring the speed of gravitational waves $c_{\rm GW}$ can serve as a test of modified gravity theories~\cite{Nishizawa:2014zna,Lombriser:2015sxa,Lombriser:2016yzn,Bettoni:2016mij}. Indeed, the nearly simultaneous detection of the gravitational waves GW170817 and the $\gamma$-ray burst GRB 170817A~\cite{TheLIGOScientific:2017qsa,Monitor:2017mdv,GBM:2017lvd} provides a tight constraint on $c_{\rm GW}$ and hence on scalar-tensor theories and other types of modified gravity. The limit on the difference between $c_{\rm GW}$ and the speed of light imposed by this recent event is\footnote{A lower bound on $c_{\rm GW}$ can also be obtained from arguments based on gravitational Cherenkov radiation, which can be even tighter than this~\cite{Moore:2001bv,Kimura:2011qn}. However, the frequencies concerned are much higher than those of LIGO observations.} \begin{align} -3\times 10^{-15} < c_{\rm GW}-1 < 7\times 10^{-16}. \end{align} This constraint motivates us to identify the viable subclass of the Horndeski and DHOST theories, regarded as alternatives to dark energy, satisfying $c_{\rm GW}=1$~\cite{Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz,Baker:2017hug,Bartolo:2017ibw,Kase:2018iwp,Ezquiaga:2018btd,Kase:2018aps} (see also~\cite{Amendola:2018ltt,Copeland:2018yuh} for scalar-tensor theories that achieve $c_{\rm GW}=1$ dynamically). We start with the Horndeski theory, in which the general form of the propagation speed of gravitational waves is given by Eq.~\eqref{speedofGW_H}: \begin{align} c_{\rm GW}^2 = \frac{G_4-X(\ddot\phi G_{5X}+G_{5\phi})}{G_4-2XG_{4X}-X(H\dot\phi G_{5X}-G_{5\phi})}.
\end{align} In order for this to be equal to the speed of light irrespective of the background cosmological evolution, we require that \begin{align} G_{4X}=0, \quad G_5=0. \end{align} Thus, the viable subclass within Horndeski is described by the Lagrangian \begin{align} {\cal L}=G_2(\phi,X)-G_{3}(\phi,X)\Box\phi + G_4(\phi)R. \end{align} This excludes, for instance, a scalar field coupled to the Gauss-Bonnet term. Let us then consider DHOST theories. It turns out that any term in the cubic DHOST Lagrangians leads to $c_{\rm GW}\neq 1$ (just as the $G_5$ term in the Horndeski theory does), and hence all cubic DHOST theories are ruled out. In quadratic DHOST theories, the action for the tensor perturbations is given by~\cite{Langlois:2017mxy,deRham:2016wji} \begin{align} S^{(2)}_{\rm tensor}= \frac{1}{4}\int{\rm d} t{\rm d}^3x a^3\left[ (f+2XA_1)\dot h_{ij}^2-\frac{f}{a^2}(\partial_k h_{ij})^2 \right].\label{tensor2_dhost} \end{align} From this we see that the propagation speed of gravitational waves is \begin{align} c_{\rm GW}^2=\frac{f}{f+2XA_1}, \end{align} and so we impose $A_1=0$. The viable subclass in DHOST theories thus reduces to \begin{align} A_2&=-A_1=0,\label{c1t1} \\ A_4&=\frac{1}{2f}[ -X^2A_3^2+2(Xf_X-f)A_3+3f_X^2 ],\label{c1t2} \\ A_5&= -\frac{A_3\left(f_X+XA_3\right)}{f}.\label{c1t3} \end{align} We have two free functions $f(\phi,X)$ and $A_3(\phi,X)$ in addition to the lower-order Horndeski terms $G_2(\phi,X)$ and $G_3(\phi,X)$. More recently, it was pointed out that gravitons can decay into $\phi$ in DHOST theories~\cite{Creminelli:2018xsv}. To avoid this graviton decay, it is further required that $A_3=0$. (Otherwise, gravitational waves would not be observed.) We thus finally have \begin{align} A_4=\frac{3f_X^2}{2f},\quad A_1=A_2=A_3=A_5=0. \end{align} Note that this subclass does not belong to the Horndeski or GLPV families (if $f_X\neq 0$). As argued in the previous subsection, mimetic gravity can be viewed as a kind of DHOST theory. The implications of GW170817 for the mimetic class of DHOST theories have been discussed in~\cite{Casalino:2018tcd,Ganz:2018vzg,Casalino:2018wnc}. It should be emphasized that in constraining scalar-tensor theories with gravitational waves we have assumed that the DHOST theory under consideration as an alternative description of dark energy is valid on the much higher energy scales where LIGO observations are made ($\sim 100\,$Hz $\sim 10^{-13}\,$eV). The validity of this assumption needs to be looked into carefully~\cite{deRham:2018red}. Similarly, gravity at much higher energies than this remains unconstrained. Therefore, modified gravity in the early universe is free from these gravitational wave constraints. A final remark is that, even if $c_{\rm GW}=1$, $f$ (or $G_4$) may depend on time, which gives rise to the extra contribution $\dot f/f$ to the friction term in the equation of motion for $h_{ij}$. This results in a modification of the amplitude of gravitational waves, which can be measurable~\cite{Belgacem:2017ihm,Amendola:2017ovw,Nunes:2018zot}. \section{Vainshtein screening}\label{sec:Vainshtein_mech} While a scalar-tensor theory as an alternative to dark energy is supposed to give rise to ${\cal O}(1)$ modification of gravity on cosmological scales, the extra force mediated by the scalar degree of freedom $\phi$ must be screened on small scales where general relativity has been tested to high precision. This occurs if $\phi$ is effectively massive in the vicinity of a source, or if $\phi$ is effectively weakly coupled to the source.
The former case corresponds to the chameleon mechanism~\cite{Khoury:2003aq,Khoury:2003rn}, in which the effective potential for $\phi$ depends on the local energy density through the coupling of $\phi$ to matter. The latter case is based on the idea of Vainshtein~\cite{Vainshtein:1972sx} (see also~\cite{Deffayet:2001uk}) and is called the Vainshtein mechanism. (There are other screening mechanisms called symmetron~\cite{Hinterbichler:2010es,Hinterbichler:2011ca} and k-Mouflage models~\cite{Babichev:2009ee}, both of which effectively suppress the coupling to matter.) The Vainshtein mechanism is relevant to the Galileon theories, and below we will review this screening mechanism in the context of the generalized Galileon/Horndeski theory. See also~\cite{Babichev:2013usa} for a nice review on the Vainshtein mechanism. \subsection{A Vainshtein primer} We start by emphasizing the need for a screening mechanism, and then introduce the Vainshtein mechanism. To see how, in a simple model, gravity is modified around matter and consequently fails to satisfy the experimental constraints, let us consider the theory \begin{align} S=\int{\rm d}^4x\sqrt{-g}\left[f(\phi)R +X\right] +S_m[g_{\mu\nu}, \psi_m],\label{action:vainshtein0} \end{align} where the matter fields (denoted as $\psi_{m}$) are minimally coupled to the metric $g_{\mu\nu}$. We investigate perturbations around a Minkowski background with a constant scalar $\phi$, \begin{align} g_{\mu\nu}=\eta_{\mu\nu}+M_{\rm Pl}^{-1}h_{\mu\nu}(t,\Vec{x}), \quad \phi =\phi_0+\varphi(t,\Vec{x}),\label{def:pert-vainshtein} \end{align} caused by the energy-momentum tensor for matter, $T_{\mu\nu}$ (the above theory admits the background solution $g_{\mu\nu}=\eta_{\mu\nu}$ and $\phi=\phi_0=\,$const). Here we write $f(\phi_0)=M_{\rm Pl}^2/2$ and define the metric perturbations so that $h_{\mu\nu}$ has the dimension of mass. Expanding~\eqref{action:vainshtein0} to second order in perturbations, we obtain the effective Lagrangian for the description of weak gravitational fields as \begin{align} {\cal L}_{\rm eff}= -\frac{1}{4}h^{\mu\nu}\hat {\cal E}_{\mu\nu}^{\alpha\beta}h_{\alpha\beta } -\frac{1}{2}\partial_\mu\varphi\partial^\mu\varphi -\xi h^{\mu\nu}X_{\mu\nu}^{(1)}+ \frac{1}{2M_{\rm Pl} }h^{\mu\nu}T_{\mu\nu},\label{action:vainshtein01} \end{align} where $\xi:=M_{\rm Pl}^{-1}{\rm d} f/{\rm d}\phi|_{\phi=\phi_0}$, \begin{align} X_{\mu\nu}^{(1)}&:=\eta_{\mu\nu}\Box\varphi-\varphi_{\mu\nu},\label{def:X1mn} \end{align} and \begin{align} \hat {\cal E}_{\mu\nu}^{\alpha\beta}h_{\alpha\beta }:=-\frac{1}{2}\Box h_{\mu\nu} +\partial^\lambda\partial_{(\mu}h_{\nu)\lambda}+\frac{1}{2}\eta_{\mu\nu}\Box h -\frac{1}{2}\eta_{\mu\nu}\partial_\lambda\partial_\rho h^{\lambda\rho} -\frac{1}{2}\partial_\mu\partial_\nu h \end{align} is the linearized Einstein tensor (divided by $M_{\rm Pl}$). Here, indices are raised and lowered by $\eta_{\mu\nu}$. The third term in the Lagrangian~\eqref{action:vainshtein01} signals the mixing of the scalar degree of freedom with the graviton.
This can be disentangled by making use of the field redefinition \begin{align} h_{\mu\nu}=\tilde h_{\mu\nu} -2\xi\varphi\eta_{\mu\nu}, \label{tr:conformal1} \end{align} leading to \begin{align} {\cal L}_{\rm eff}= -\frac{1}{4}\tilde h^{\mu\nu}\hat {\cal E}_{\mu\nu}^{\alpha\beta}\tilde h_{\alpha\beta } -\frac{1+6\xi^2}{2}\partial_\mu\varphi\partial^\mu\varphi + \frac{1}{2M_{\rm Pl} }\tilde h^{\mu\nu}T_{\mu\nu} -\frac{\xi}{M_{\rm Pl}}\varphi T .\label{action:vainshtein02} \end{align} The transformation~\eqref{tr:conformal1} is equivalent to the linear part of the conformal transformation to the Einstein frame, $\tilde g_{\mu\nu}=C(\phi)g_{\mu\nu}$ with $C= f(\phi)/f(\phi_0)$. In the new frame we have the nonminimal coupling of the form $\varphi T$, and the field equations are given by \begin{align} \hat {\cal E}_{\mu\nu}^{\alpha\beta}\tilde h_{\alpha\beta }&=M_{\rm Pl}^{-1}T_{\mu\nu}, \\ (1+6\xi^2)\Box\varphi &= M_{\rm Pl}^{-1}\xi T. \end{align} Thus, ${\cal O}(1)$ modification of gravity is expected for $\xi={\cal O}(1)$. To be more concrete, let us consider a spherical distribution of nonrelativistic matter, $T_{\mu\nu}=\rho(r)\delta_\mu^0\delta_\nu^0$, with $\tilde h_{00}=-2\tilde \Phi(r)$ and $\tilde h_{ij}=-2\tilde\Psi(r)\delta_{ij}$. Then, the field equations read \begin{align} \frac{1}{r^2}\left(r^2\tilde\Psi'\right)'&=\frac{\rho}{2M_{\rm Pl}}, \\ \tilde\Psi-\tilde\Phi&=0, \\ \frac{1}{r^2}\left(r^2\varphi'\right)'&=-\frac{\xi}{1+6\xi^2}\frac{\rho}{M_{\rm Pl}}, \end{align} where a prime stands for differentiation with respect to $r$. These equations can be integrated straightforwardly to give \begin{align} M_{\rm Pl}^{-1}\tilde\Phi'=M_{\rm Pl}^{-1}\tilde\Psi' = (8\pi M_{\rm Pl}^2)^{-1}\frac{{\cal M}(r)}{r^2}, \quad M_{\rm Pl}^{-1}\varphi'=- \frac{2\xi}{1+6\xi^2}\cdot (8\pi M_{\rm Pl}^2)^{-1}\frac{{\cal M}(r)}{r^2}, \label{sol:linear1} \end{align} where ${\cal M}(r)$ is the enclosed mass, ${\cal M}(r):=4\pi \int^r\rho(s)s^2{\rm d} s$. It follows from~\eqref{tr:conformal1} that the metric perturbations in the original frame are given by $\Phi = \tilde\Phi-\xi\varphi$ and $\Psi=\tilde\Psi+\xi\varphi$. Thus, the metric potentials outside the matter distribution are given by \begin{align} \Phi = -\frac{G_N{\cal M}}{r},\quad \Psi =\gamma \Phi, \end{align} with \begin{align} 8\pi G_N:=\frac{1+8\xi^2}{1+6\xi^2}\frac{1}{M_{\rm Pl}^2}, \quad \gamma-1=-\frac{4\xi^2}{1+8\xi^2}. \end{align} For $\xi={\cal O}(1)$ we have $\gamma-1={\cal O}(1)$, which clearly contradicts the solar-system experiments~\cite{Will:2014kxa}. Now we add a Galileon-like cubic interaction to~\eqref{action:vainshtein02}: \begin{align} {\cal L}_{\rm eff}= -\frac{1}{4}\tilde h^{\mu\nu}\hat {\cal E}_{\mu\nu}^{\alpha\beta}\tilde h_{\alpha\beta } -\frac{1+6\xi^2}{2}(\partial\varphi)^2 -\frac{1}{2\Lambda^3}(\partial\varphi)^2\Box\varphi + \frac{1}{2M_{\rm Pl} }\tilde h^{\mu\nu}T_{\mu\nu} -\frac{\xi}{M_{\rm Pl}}\varphi T. \label{action:cubic-vainshtein} \end{align} Then, the scalar-field equation of motion becomes \begin{align} (1+6\xi^2)\Box\varphi +\frac{1}{\Lambda^3} \left[(\Box\varphi)^2-\varphi_{\mu\nu}\varphi^{\mu\nu}\right]=\frac{\xi}{M_{\rm Pl}}T, \end{align} which, for a spherical matter distribution such as a star, gives \begin{align} (1+6\xi^2)r^2\varphi' + \frac{2}{\Lambda^3}r(\varphi')^2 = -\frac{\xi {\cal M}(r)}{4\pi M_{\rm Pl}}.
\end{align} This equation can be solved algebraically, yielding \begin{align} \varphi'= c \Lambda^3 r \left[ -1+\sqrt{1-\frac{\xi}{c^2}\left(\frac{r_V}{r}\right)^3} \right], \label{sol:vainshtein1} \end{align} where $c:=(1+6\xi^2)/4$ is an ${\cal O}(1)$ constant and we defined \begin{align} r_V:=\left(\frac{{\cal M}}{8\pi M_{\rm Pl}\Lambda^3}\right)^{1/3}.\label{def:vrad} \end{align} (We consider a stellar exterior so that now ${\cal M}(=\,$const) is the mass of the star.) For $r\gg r_V$, Eq.~\eqref{sol:vainshtein1} reproduces~\eqref{sol:linear1}. However, for $r\ll r_V$, we find \begin{align} \varphi'\simeq (-\xi)^{1/2}\left(\frac{r}{r_V}\right)^{3/2}\tilde\Phi'\ll \tilde\Phi' \quad\Rightarrow\quad \frac{\Phi}{M_{\rm Pl}}\simeq \frac{\Psi}{M_{\rm Pl}} \simeq -\frac{G_N{\cal M}}{r},\quad 8\pi G_N:=\frac{1}{M_{\rm Pl}^2}. \end{align} It turns out that the nonlinear interaction introduced in~\eqref{action:cubic-vainshtein} helps the recovery of standard gravity, and the solar-system constraints can thus be evaded if $r_V$ is sufficiently large. This is the {\em Vainshtein mechanism}, and $r_V$ is called the {\em Vainshtein radius}, within which general relativity is reproduced. Although we are considering small perturbations, we see that \begin{align} \frac{\Box\varphi}{\Lambda^3}\gtrsim{\cal O}(1)\quad {\rm for}\quad r\lesssim r_V. \label{O1_second_deri} \end{align} This tells us why nonlinearity is important even in a weak gravity environment. If the scalar degree of freedom accounts for the present accelerating expansion of the universe, $\Lambda$ is expected to be as small as \begin{align} \Lambda \sim (M_{\rm Pl} H_0^2)^{1/3},\label{estimate:Lambda} \end{align} where $H_0$ is the present Hubble scale. This is deduced from the estimate \begin{align} M_{\rm Pl}^2 H^2_0\sim \dot\phi^2\sim \frac{\dot\phi^2\ddot\phi}{\Lambda^3}, \quad \ddot\phi\sim H_0\dot\phi. \end{align} For ${\cal M}\sim M_{\odot}$, Eq.~\eqref{def:vrad} with~\eqref{estimate:Lambda} gives \begin{align} r_V\sim 100\,{\rm pc}, \end{align} which is much larger than the size of the solar system. \subsection{Vainshtein screening in Horndeski theory} We can repeat the same analysis in the Horndeski theory~\cite{Narikawa:2013pjr,Koyama:2013paa,DeFelice:2011th,Kase:2013uja}. We only consider the case with $G_5=0$ for a reason to be explained later. In order for the background $g_{\mu\nu}=\eta_{\mu\nu}$ with $\phi=\phi_0=\,$const to be a solution, we require that $G_2(\phi_0,0)=G_{2\phi}(\phi_0,0)=0$. In substituting~\eqref{def:pert-vainshtein} into the Horndeski action (now $M_{\rm Pl}$ is defined by $G_{4}(\phi_0,0)=M_{\rm Pl}^2/2$) and expanding it in terms of perturbations, one must carefully retain the nonlinear terms with second derivatives because they can be large on small scales as suggested by~\eqref{O1_second_deri}. More specifically, we have the terms of the following forms in the Lagrangian: \begin{align} (\partial h_{\mu\nu})^2,\quad (\partial\varphi)^2, \quad (\partial\varphi)^2(\partial^2\varphi)^n,\quad h_{\mu\nu}(\partial^2\varphi)^n. \end{align} However, as we are interested in the Vainshtein mechanism, we ignore the mass term $G_{2\phi\phi}\varphi^2$.
We thus find (in the original frame)~\cite{Koyama:2013paa} \begin{align} {\cal L}_{\rm eff}&= -\frac{1}{4}h^{\mu\nu}\hat {\cal E}_{\mu\nu}^{\alpha\beta} h_{\alpha\beta } -\frac{\eta}{2}(\partial\varphi)^2 +\frac{\mu}{\Lambda^3}{\cal L}_3^{\rm Gal}+\frac{\nu}{\Lambda^6}{\cal L}_4^{\rm Gal} \notag \\ &\quad -\xi h^{\mu\nu}X_{\mu\nu}^{(1)}-\frac{\alpha}{\Lambda^3}h^{\mu\nu}X_{\mu\nu}^{(2)} +\frac{1}{2M_{\rm Pl}}h^{\mu\nu}T_{\mu\nu}, \label{effLag:Vainshtein1} \end{align} where \begin{align} {\cal L}_3^{\rm Gal}&:=-\frac{1}{2}(\partial\varphi)^2\Box\varphi, \\ {\cal L}_4^{\rm Gal}&:=-\frac{1}{2}(\partial\varphi)^2\left[ (\Box\varphi)^2-\varphi_{\mu\nu}\varphi^{\mu\nu} \right], \end{align} $X_{\mu\nu}^{(1)}$ was already defined in Eq.~\eqref{def:X1mn}, and \begin{align} X_{\mu\nu}^{(2)}:=\varphi_\mu^\alpha \varphi_{\alpha\nu}-\Box\varphi \varphi_{\mu\nu} +\frac{1}{2}\eta_{\mu\nu}\left[(\Box\varphi)^2-\varphi_{\alpha\beta}\varphi^{\alpha\beta} \right ]. \end{align} We have defined the dimensionless parameters $\eta$, $\xi$, $\mu$, $\nu$, and $\alpha$ by \begin{align} &G_{4\phi}=M_{\rm Pl} \xi ,\quad G_{2X}-2G_{3\phi}=\eta, \quad G_{3X}-3G_{4\phi X}=-\frac{\mu}{\Lambda^3} \notag \\& G_{4X}=\frac{M_{\rm Pl}\alpha}{\Lambda^3}, \quad G_{4XX}=\frac{\nu}{\Lambda^6}, \end{align} with $\Lambda$ being some energy scale. These dimensionless parameters are assumed to be ${\cal O}(1)$ unless they vanish. The Lagrangian~\eqref{effLag:Vainshtein1} describes the effective theory for the Vainshtein mechanism. Note that this effective theory has the Galilean shift symmetry, $\varphi\to \varphi + b_\mu x^\mu + c$. One notices the presence of the new term representing the mixing of the scalar degree of freedom and the graviton: $h^{\mu\nu}X_{\mu\nu}^{(2)}$. This, as well as $h^{\mu\nu}X_{\mu\nu}^{(1)}$, can be demixed through the field redefinition~\cite{deRham:2010tw} \begin{align} h_{\mu\nu}=\tilde h_{\mu\nu}-2\xi\varphi \eta_{\mu\nu} +\frac{2\alpha}{\Lambda^3} \partial_\mu\varphi\partial_\nu\varphi. \end{align} The new piece $(2\alpha/\Lambda^3)\partial_\mu\varphi\partial_\nu\varphi$ is equivalent to a disformal transformation. After this transformation the effective Lagrangian~\eqref{effLag:Vainshtein1} reduces to \begin{align} {\cal L}_{\rm eff}&= -\frac{1}{4}\tilde h^{\mu\nu}\hat {\cal E}_{\mu\nu}^{\alpha\beta} \tilde h_{\alpha\beta } -\frac{\eta+6\xi^2}{2}(\partial\varphi)^2 +\frac{\mu+6\alpha\xi}{\Lambda^3}{\cal L}_3^{\rm Gal} +\frac{\nu+2\alpha^2}{\Lambda^6}{\cal L}_4^{\rm Gal} \notag \\ &\quad +\frac{1}{2M_{\rm Pl}}\tilde h^{\mu\nu}T_{\mu\nu}-\frac{\xi}{M_{\rm Pl}}\varphi T +\frac{\alpha}{M_{\rm Pl}\Lambda^3}\partial_\mu\varphi\partial_\nu\varphi T^{\mu\nu}. \label{effLag:Vainshtein2} \end{align} Things are more transparent in this Einstein frame than in the original Jordan frame. To see how the Vainshtein mechanism operates generically, let us again consider a spherically symmetric matter distribution. 
The field equation for $\varphi$, \begin{align} &(\eta+6\xi^2)\Box\varphi +\frac{\mu+6\alpha\xi}{\Lambda^3}\left[ (\Box\varphi)^2-\varphi_{\mu\nu}\varphi^{\mu\nu} \right] \notag \\ & +\frac{\nu+2\alpha^2}{\Lambda^6} \left[ (\Box\varphi)^3-3\varphi_{\mu\nu}\varphi^{\mu\nu}\Box\varphi +2 \varphi_{\mu\nu}\varphi^{\nu\lambda}\varphi_\lambda^\mu \right] =\frac{\xi}{M_{\rm Pl}}T+\frac{2\alpha}{M_{\rm Pl}}\varphi_{\mu\nu}T^{\mu\nu}, \end{align} can be written in the following form after being integrated once: \begin{align} \frac{\eta+6\xi^2}{2}x+(\mu+6\alpha\xi)x^2+(\nu+2\alpha^2)x^3=-\xi A, \label{poly:vain} \end{align} where we introduced the convenient dimensionless quantities \begin{align} x(r):=\frac{1}{\Lambda^3}\frac{\varphi'}{r}, \quad A(r):=\frac{1}{M_{\rm Pl}\Lambda^3}\frac{{\cal M}(r)}{8\pi r^3}. \end{align} The field equations for $\tilde h_{\mu\nu}$ imply \begin{align} \frac{1}{\Lambda^3}\frac{\tilde \Phi'}{r} =\frac{1}{\Lambda^3}\frac{\tilde \Psi'}{r} = A. \end{align} Since the special case with $\nu+2\alpha^2=0$ was already essentially analyzed in the previous subsection, we focus on the generic case with $\nu+2\alpha^2\neq 0$. We have, for $A\gg 1$, \begin{align} x\simeq \left(\frac{-\xi A}{\nu+2\alpha^2}\right)^{1/3}.\label{soln:x_vainshtein} \end{align} The metric perturbations in the Jordan frame are obtained from $\Phi=\tilde\Phi-\xi\varphi$ and $\Psi=\tilde\Psi + \xi\varphi -(\alpha/\Lambda^3)(\varphi')^2$, but we see from~\eqref{soln:x_vainshtein} that the extra scalar-field contributions are small, yielding $M_{\rm Pl}^{-1}\Phi'\simeq M_{\rm Pl}^{-1}\Psi' \simeq G_N{\cal M}/r^2$ where $8\pi G_N=M_{\rm Pl}^{-2}=[2 G_4(\phi_0,0)]^{-1}$. Since $A\propto r^{-3}$ outside the source, it is appropriate to define the Vainshtein radius $r_V:=({\cal M}/8\pi M_{\rm Pl}\Lambda^3)^{1/3}$ so that $A=(r_V/r)^3$. The nonlinearity in the scalar-field equation of motion thus helps to suppress the force mediated by $\varphi$, so that standard gravity is recovered inside the Vainshtein radius $r_V$. Though the expression is slightly more complicated, the complete effective Lagrangian from the Horndeski theory including the $G_5$ term can be obtained in the same way as above~\cite{Koyama:2013paa}. One then finds the quintic Galileon interaction for $\varphi$ and another mixing term between $\varphi$ and $h_{\mu\nu}$ in the effective Lagrangian. This mixing cannot be eliminated by a field redefinition~\cite{deRham:2010tw}. It can be shown that the screened region outside the spherically symmetric matter distribution is unstable against linear perturbations in the presence of this mixing~\cite{Koyama:2013paa}. So far we have considered the simplest background solution, $g_{\mu\nu}=\eta_{\mu\nu}$ with $\phi=\phi_0=\,$const, and static, spherically symmetric perturbations on top of the background. The above analysis has been extended to a cosmological background with time-dependent $\phi_0$ in~\cite{Kimura:2011dc}. On a cosmological background we start with the Newtonian gauge metric~\eqref{metric:Newtonian} rather than~\eqref{def:pert-vainshtein} because Lorentz invariance is spontaneously broken due to nonvanishing $\dot\phi_0(t)$.
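Before moving to the cosmological setting, the screening encoded in the cubic equation~\eqref{poly:vain} can be illustrated with a short numerical sketch (Python). The ${\cal O}(1)$ parameter values below are illustrative assumptions, and the physical branch is the one that is continuous with $x\to 0$ as $A\to 0$.
\begin{verbatim}
import numpy as np

# Solve (eta+6 xi^2)/2 x + (mu+6 alpha xi) x^2 + (nu+2 alpha^2) x^3 = -xi A
# for the branch continuous with x -> 0 as A -> 0. All O(1) parameters are
# set to 1 for illustration.
eta = xi = mu = alpha = nu = 1.0

def x_of_A(A):
    coeffs = [nu + 2 * alpha**2, mu + 6 * alpha * xi,
              (eta + 6 * xi**2) / 2, xi * A]   # highest power first
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-8].real
    return real[np.argmin(np.abs(real))]

for A in [1e-2, 1e2, 1e6]:
    x = x_of_A(A)
    print(f"A = {A:.0e}: x = {x:+.3e}, x/A = {x / A:+.2e}")
# Deep inside the Vainshtein radius (A >> 1) one finds |x|/A ~ A^(-2/3) -> 0:
# the scalar contribution becomes negligible compared to y = z = A.
\end{verbatim}
We now return to the cosmological background.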
Keeping the appropriate nonlinear terms, the Lagrangian under the quasi-static approximation is computed as \begin{align} {\cal L}_{\rm eff}&= a\left[M^2\left(-c_{\rm GW}^2\Psi\partial^2\Psi +2\Psi\partial^2\Phi\right) -\frac{\eta}{2}(\partial\varphi)^2 -2M\left( \xi_1\Phi -2\xi_2\Psi\right)X^{(1)} \right] \notag \\ & \quad + \frac{\mu}{a\Lambda^3}{\cal L}_{3}^{\rm Gal}+ \frac{\nu}{a^3\Lambda^6}{\cal L}_{4}^{\rm Gal} -\frac{2M}{a\Lambda^3}\left(\alpha_1\Phi-\alpha_2\Psi\right)X^{(2)}-a^3 \Phi\delta \rho, \label{effL:vain_cos} \end{align} where $\delta\rho$ is a density perturbation, \begin{align} M^2:={\cal G}_T, \quad X^{(1)}:=\partial^2\varphi, \quad X^{(2)}:=\frac{1}{2}\left[(\partial^2\varphi)^2-\partial_i\partial_j\varphi \partial^i\partial^j\varphi\right], \end{align} and \begin{align} {\cal L}_3^{\rm Gal}:=-\frac{1}{2}(\partial\varphi)^2 \partial^2\varphi, \quad {\cal L}_4^{\rm Gal}:=-\frac{1}{2}(\partial\varphi)^2\left[ ( \partial^2\varphi)^2 -\partial_i\partial_j\varphi \partial^i\partial^j\varphi\right]. \end{align} The coefficients are (slowly-varying) functions of time in general. Explicitly, we have \begin{align} \frac{M\alpha_1}{\Lambda^3}:=G_{4X}+2XG_{4XX}, \quad \frac{M\alpha_2}{\Lambda^3}:=G_{4X}, \quad \frac{\nu}{\Lambda^6}:=G_{4XX}, \end{align} while \begin{align} & M\xi_1\simeq -XG_{3X}+G_{4\phi}+2XG_{4\phi X}, \quad M\xi_2\simeq G_{4\phi}-2XG_{4\phi X}, \notag \\ & \frac{\mu}{\Lambda^3}\simeq -\left( G_{3X}-3G_{4\phi X}+2X G_{4\phi XX}\right), \end{align} where to simplify the expressions we ignored $\ddot \phi_0$ and $H$ in $\xi_1$, $\xi_2$, and $\mu$. The explicit expression for $\eta$ is not important here. Note that if $c_{\rm GW}^2=1$ then $\alpha_1=\alpha_2=\nu=0$. In the present case, we stay in the Jordan frame rather than try to disentangle the couplings between $\varphi$ and the metric potentials such as $\Phi X^{(1)}$. For a spherical overdensity, the field equations derived from~\eqref{effL:vain_cos} are written as \begin{align} y-c_{\rm GW}^2z&=-2\xi_2x-\alpha_2 x^2,\label{eq:v_cos1} \\ z&=A + \xi_1 x+\alpha_1 x^2,\label{eq:v_cos2} \end{align} and \begin{align} \frac{\eta}{2}x-\xi_1 y + 2\xi_2 z -2 (\alpha_1y-\alpha_2 z)x+\mu x^2+ \nu x^3=0, \label{eq:v_cos3} \end{align} where we introduced \begin{align} x:= \frac{1}{\Lambda^3}\frac{\varphi'}{r}, \quad y:=\frac{M}{\Lambda^3}\frac{\Phi'}{r}, \quad z:=\frac{M}{\Lambda^3}\frac{\Psi'}{r}, \quad A:=\frac{1}{M\Lambda^3}\frac{{\cal M}}{8\pi r^3}, \quad {\cal M}:=4\pi \int^r_0\delta\rho (t,s)s^2{\rm d} s, \end{align} and took $a\to 1$ for simplicity. Using Eqs.~\eqref{eq:v_cos1} and~\eqref{eq:v_cos2} one can remove $y$ and $z$ from Eq.~\eqref{eq:v_cos3} to get \begin{align} &\left[c_1+2(\alpha_2-c_{\rm GW}^2\alpha_1)A\right]x +c_2x^2+\left[ \nu+4\alpha_1\alpha_2-2c_{\rm GW}^2\alpha_1^2 \right]x^3 \notag \\ & =-(2\xi_2-c_{\rm GW}^2\xi_1)A,\label{poly:vain_cos} \end{align} where the coefficients $c_1$ and $c_2$ are written in terms of $\eta$, $\xi_1$, etc. This extends Eq.~\eqref{poly:vain} to a time-dependent background with $\dot\phi_0\neq 0$. In theories with $c_{\rm GW}^2=1$, Eq.~\eqref{poly:vain_cos} becomes \begin{align} c_1x+c_2x^2=\left(\frac{\eta}{2}-\xi_1^2+4\xi_1\xi_2\right)x+\mu x^2=-(2\xi_2-\xi_1)A. \end{align} The Vainshtein radius is defined by $A(r_V)=1$, and for $A\gg 1$ we have $x\sim A^{1/2}\ll A\simeq y\simeq z$. Thus, inside the Vainshtein radius the metric potentials obey \begin{align} \Phi'=\Psi'=\frac{G_N{\cal M}}{r^2},\quad 8\pi G_N:=\frac{1}{2G_4(\phi_0(t))}. 
\label{timeGN1} \end{align} The situation is similar to that for a static background, $\dot\phi_0=0$. However, it is interesting to see that, even in the minimally coupled theories with $G_{4}=\,$const, we still have $M \xi_1\simeq -XG_{3X}\neq 0$ if $\dot\phi_0$ is nonvanishing, so that $\varphi$ is coupled to the source via the $G_3$ term. In theories with $c_{\rm GW}^2\neq 1$, $A$ in the coefficient of the linear term plays an important role. For $A\gg 1$, Eq.~\eqref{poly:vain_cos} reduces to \begin{align} &2(\alpha_2-c_{\rm GW}^2\alpha_1)Ax +\left[ \nu+4\alpha_1\alpha_2-2c_{\rm GW}^2\alpha_1^2 \right]x^3\simeq 0 \notag \\&\Rightarrow \quad x^2\simeq -\frac{2(\alpha_2-c_{\rm GW}^2\alpha_1)}{\nu+4\alpha_1\alpha_2-2c_{\rm GW}^2\alpha_1^2}A. \label{soln:x_v_cos} \end{align} This is in contrast to the screened solution on a static background~\eqref{soln:x_vainshtein}, $x^3\sim A$. Substituting the solution~\eqref{soln:x_v_cos} into~\eqref{eq:v_cos1} and~\eqref{eq:v_cos2}, one finds that \begin{align} \Phi'=\Psi'=\frac{G_N{\cal M}}{r^2}, \quad 8\pi G_N=\left.\frac{1}{2[G_4-4X(G_{4X}+XG_{4XX})]}\right|_{\phi=\phi_0(t)}. \label{timeGN2} \end{align} As seen from Eqs.~\eqref{timeGN1} and~\eqref{timeGN2}, the standard gravitational law is apparently recovered.\footnote{The Friedmann equation at early times takes the form $3H^2\simeq 8\pi G_{\rm cos}\rho$, where ``cosmological $G$'' coincides with $G$ in Newton's law: $G_{\rm cos}=G_N$. Note that in general this $G_N$ is different from the effective gravitational coupling for gravitational waves, $G_{\rm GW}:=(8\pi {\cal F}_T)^{-1}$. The difference can be constrained from the Hulse-Taylor pulsar~\cite{Jimenez:2015bwa}.} However, the effective Newton ``constant'' on a cosmological background depends on time even inside the Vainshtein radius through the cosmological evolution of $\phi_0$~\cite{Babichev:2011iz}. Although it would be natural to think of a slow variation $|\dot G_N/G_N|={\cal O}(1)\times H_0 $, the observational bounds from Lunar Laser Ranging require a much slower variation, $|\dot G_N/G_N|< 0.02 H_0 $~\cite{Williams:2004qba}. This limit can be used to constrain cosmological scalar-tensor theories. We have seen how Vainshtein screening operates around a (quasi-)static, spherically symmetric body. The Vainshtein mechanism away from this simplified setup has been investigated in~\cite{Brax:2011sv,Hiramatsu:2012xj,deRham:2012fw,deRham:2012fg,Chagoya:2014fza,Bloomfield:2014zfa,Ogawa:2018srw,Dar:2018dra,Falck:2014jwa,Falck:2015rsa,Falck:2017rvl}. \subsection{Partial breaking of Vainshtein screening} In DHOST theories, the operation of the Vainshtein mechanism turns out to be very nontrivial. This can be seen as follows. Let us consider a simple DHOST theory whose Lagrangian is given by \begin{align} {\cal L}&=G_2-G_3\Box\phi +G_4R+G_{4X}\left[(\Box\phi)^2 -\phi_{\mu\nu}\phi^{\mu\nu}\right] \notag \\ &\quad +A_3\left\{ X\left[(\Box\phi)^2-\phi_{\mu\nu}\phi^{\mu\nu}\right] +\Box\phi \phi^\mu\phi^\nu\phi_{\mu\nu}- \nabla_\mu X\nabla^\mu X \right\}. \end{align} This theory belongs to the GLPV family (see Eq.~\eqref{GLPV4L}).
Due to the new term in the second line, the effective Lagrangian for the Vainshtein mechanism under the quasi-static approximation is now given by~\cite{Kobayashi:2014ida} (see also~\cite{DeFelice:2015sya,Kase:2015gxi}) \begin{align} {\cal L}_{\rm eff}&= a\left\{M^2\left[-c_{\rm GW}^2\Psi\partial^2\Psi +2(1+\alpha_H)\Psi\partial^2\Phi\right] -\frac{\eta}{2}(\partial\varphi)^2 -2M\left( \xi_1\Phi -2\xi_2\Psi\right)X^{(1)} \right\} \notag \\ & \quad + \frac{\mu}{a\Lambda^3}{\cal L}_{3}^{\rm Gal}+ \frac{\nu}{a^3\Lambda^6}{\cal L}_{4}^{\rm Gal} -\frac{2M}{a\Lambda^3}\left(\alpha_1\Phi-\alpha_2\Psi\right)X^{(2)}-a^3 \Phi\delta \rho \notag \\ &\quad +\frac{2aM^{3/2}}{\Lambda^{3/2}v} \alpha_H \dot\Psi \partial^2\varphi -\frac{2M}{a\Lambda^3v^2} \alpha_H \partial_i\Psi\partial_j\varphi \partial_i\partial_j\varphi, \label{effL:vain_GLPV} \end{align} where \begin{align} M^2&:=2\left(G_4-2XG_{4X}-2X^2A_3\right), \\ \frac{M\alpha_1}{\Lambda^3}&:=G_{4X}+2XG_{4XX}+X(5A_3+2XA_{3X}), \\ \frac{M\alpha_2}{\Lambda^3}&:=G_{4X}+XA_3, \\ \frac{\nu}{\Lambda^6}&:=G_{4XX}+2A_3+XA_{3X}, \\ M^2\alpha_H&:=4X^2A_3, \end{align} and we write $v:=\dot\phi_0/(M^{1/2}\Lambda^{3/2})\,(={\cal O}(1))$. Explicit expressions of the other coefficients are not important. The two terms in the third line are essentially new contributions in this DHOST theory. Note that even in the quasi-static regime one cannot, in general, neglect the first term in the third line because \begin{align} \frac{M^{3/2}}{\Lambda^{3/2}v}\alpha_H\dot\Psi\partial^2\varphi \sim \frac{M^{3/2}H_0}{\Lambda^{3/2}v}\alpha_H\Psi \partial^2\varphi \sim \frac{M \alpha_H}{v}\Psi X^{(1)}. \end{align} This term modifies the linear evolution equation for density perturbations. More specifically, the coefficient of the friction term ($\propto \dot\delta$) in the evolution equation for $\delta$ acquires an additional contribution other than the Hubble parameter~\cite{Gleyzes:2014dya,Gleyzes:2014qga}. In the regime where the nonlinear terms are dominant, one obtains \begin{align} (1+\alpha_H)y-c_{\rm GW}^2z&\simeq -\alpha_2 x^2-\frac{\alpha_H}{v^2} (x^2+rxx'),\label{eq:v_cos_dhost1} \\ (1+\alpha_H)z&\simeq A +\alpha_1 x^2,\label{eq:v_cos_dhost2} \end{align} and \begin{align} -2 (\alpha_1y-\alpha_2 z)x+ \nu x^3-\frac{\alpha_H}{v^2} \left(3xz+rxz'\right) \simeq 0, \label{eq:v_cos_dhost3} \end{align} where for simplicity we ignored the cosmic expansion by taking $a= 1$. Equations~\eqref{eq:v_cos_dhost1}--\eqref{eq:v_cos_dhost3} can be regarded as generalizations of Eqs.~\eqref{eq:v_cos1}--\eqref{eq:v_cos3}. Using Eqs.~\eqref{eq:v_cos_dhost1} and~\eqref{eq:v_cos_dhost2} one can express $y$ and $z$ in terms of $x$ and $x'$, and then eliminate $y$ and $z$ from~\eqref{eq:v_cos_dhost3}. After doing so one would obtain a differential equation for $x$. However, in fact all the derivative terms are canceled out, yielding an algebraic equation for $x$. This is the consequence of the degeneracy of the system. The resultant algebraic equation is solved to give $x^2 = (\cdots) A+(\cdots) A'$. 
Substituting this into Eqs.~\eqref{eq:v_cos_dhost1} and~\eqref{eq:v_cos_dhost2}, we arrive at \begin{align} y&=8\pi G_N M^2\left[ A+\frac{\Upsilon_1}{4}\frac{(r^3A)''}{r} \right],\label{solydhost} \\ z&=8\pi G_N M^2\left[ A-\frac{5\Upsilon_2}{4}\frac{(r^3A)'}{r^2} \right],\label{solzdhost} \end{align} where we defined the effective Newton ``constant''\footnote{This also coincides with ``$G_{\rm cos}$'' in the Friedmann equation.} \begin{align} 8\pi G_N&:=\left[2G_4-4X(G_{4X}+XG_{4XX})-4X^2(5A_3 + 2XA_{3X})\right]^{-1}, \end{align} and the dimensionless parameters \begin{align} \Upsilon_1&:=-\frac{4X^2A_3^2}{G_4(G_{4XX}+2A_3+XA_{3X})+G_{4X}(G_{4X}+XA_3)}, \\ \Upsilon_2&:=-\frac{4XA_3(G_{4X}+2XG_{4XX}+5XA_3+2X^2A_{3X})}{5[G_4(G_{4XX}+2A_3+XA_{3X})+G_{4X}(G_{4X}+XA_3)]}. \end{align} These two parameters characterize the deviation from the Horndeski theory. In terms of more familiar quantities, Eqs.~\eqref{solydhost} and~\eqref{solzdhost} are written as \begin{align} \Phi'&=G_N \left(\frac{{\cal M}}{r^2}+\frac{\Upsilon_1{\cal M}''}{4}\right), \label{VB:Phi} \\ \Psi'&=G_N \left(\frac{{\cal M}}{r^2}-\frac{5\Upsilon_2{\cal M}'}{4r}\right). \label{VB:Psi} \end{align} From Eqs.~\eqref{VB:Phi} and~\eqref{VB:Psi} we see the following: (i) the Vainshtein mechanism works outside a source because ${\cal M}=\,$const there; (ii) the Vainshtein mechanism breaks inside a source where ${\cal M}$ is no longer constant. This result implies that DHOST theories can be constrained by astronomical observations of stars, galaxies, and galaxy clusters~\cite{Koyama:2015oma,Saito:2015fza,Sakstein:2015zoa,Sakstein:2015aac,Jain:2015edg,Sakstein:2016ggl,Sakstein:2016lyj,Salzano:2017qac,Saltas:2018mxc}. This class of gravity modification can even be tested through the speed of sound in the atmosphere of the Earth~\cite{Babichev:2018rfj}. We have thus seen that gravity is modified inside a source in a simple DHOST theory. Such partial breaking of Vainshtein screening occurs in more general DHOST theories as well. Of particular interest are theories satisfying $c_{\rm GW}^2=1$ (i.e., theories satisfying Eqs.~\eqref{c1t1}--\eqref{c1t3}). After some tedious calculations one ends up with~\cite{Crisostomi:2017lbg,Langlois:2017dyl,Dima:2017pwp,Crisostomi:2017pjs} \begin{align} \Phi'&=G_N \left(\frac{{\cal M}}{r^2}+\frac{\Upsilon_1{\cal M}''}{4}\right), \label{VB:Phi2} \\ \Psi'&=G_N \left(\frac{{\cal M}}{r^2} -\frac{5\Upsilon_2{\cal M}'}{4r}+\Upsilon_3{\cal M}'' \right), \label{VB:Psi2} \end{align} where \begin{align} 8\pi G_N:=\left[2(f-Xf_X-3X^2A_3)\right]^{-1}, \end{align} and \begin{align} \Upsilon_1:=-\frac{(f_X-XA_3)^2}{A_3 f}, \quad \Upsilon_2:=\frac{8Xf_X}{5f}, \quad \Upsilon_3:=\frac{(f_X-XA_3)(f_X+XA_3)}{4A_3f}. \end{align} The three parameters are not independent: $2\Upsilon_1^2-5\Upsilon_1\Upsilon_2-32\Upsilon_3^2=0$. Note that in deriving the above result we have implicitly assumed that $A_3\neq 0$, which means that we need to be more careful when considering DHOST theories in which the decay of gravitational waves into $\phi$ is forbidden~\cite{Creminelli:2018xsv}. The Vainshtein regime of this special class of DHOST theories has been investigated recently in~\cite{Hirano:2019scf,Crisostomi:2019yfo}. Going beyond the weak gravity regime, relativistic stars in DHOST theories have been studied in~\cite{Babichev:2016jom,Sakstein:2016oel,Chagoya:2018lmv,Kobayashi:2018xvr}. As explained in the previous section, class Ia DHOST theories can be mapped to the Horndeski theory via a disformal transformation.
Therefore, DHOST theories with minimally coupled matter are equivalent to the Horndeski theory with disformally coupled matter. This is the reason why the behavior of gravity in DHOST theories is different from that in the Horndeski theory in the presence of matter. \section{Black holes in Horndeski theory and beyond}\label{sec:BHs} In general relativity, a black hole is characterized solely by its mass, angular momentum, and electric charge. This is the well-known no-hair theorem. In scalar-tensor theories, the scalar field would not be regular at the horizon in many cases unless it has a trivial profile. The no-hair theorem can thus be extended to cover a wider class of theories~\cite{Herdeiro:2015waa,Volkov:2016ehx}, though it can certainly be evaded, e.g., by a nonminimal coupling to the Gauss-Bonnet term~\cite{Kanti:1995vq,Antoniou:2017acq,Antoniou:2017hxj,Bakopoulos:2018nui}. In light of the modern reformulation of the Horndeski theory, it has been argued that nontrivial profiles of the Galileon field are not allowed around static and spherically symmetric black holes~\cite{Hui:2012qt}. The proof of~\cite{Hui:2012qt} is based on the shift symmetry of the scalar field and several other assumptions. It is therefore intriguing to explore how one can circumvent the no-hair theorem in the context of the Horndeski/beyond Horndeski theories. For example, by tuning the form of the Horndeski functions one can evade the no-hair theorem~\cite{Sotiriou:2013qea,Sotiriou:2014pfa}. Another possibility is relaxing the assumptions on the time independence of $\phi$ and/or its asymptotic behavior~\cite{Rinaldi:2012vy,Anabalon:2013oea,Minamitsuji:2013ura,Babichev:2013cya,Cisterna:2014nua}. In particular, it is important to notice that in shift-symmetric scalar-tensor theories the metric can be static even if the scalar field is linearly dependent on time~\cite{Babichev:2010kj}, \begin{align} {\rm d} s^2 &= -h(r){\rm d} t^2+\frac{{\rm d} r^2}{f(r)}+r^2 \left({\rm d}\theta^2+\sin^2\theta{\rm d}\varphi^2\right), \label{BH:ansatz_met} \\ \phi&=qt +\psi(r),\quad q={\rm const},\label{BH:ansatz_sc} \end{align} because the field equations depend on $\phi$ through $\partial_\mu\phi$ due to the shift symmetry. Starting from the ansatz~\eqref{BH:ansatz_met} and~\eqref{BH:ansatz_sc}, various hairy black hole solutions have been obtained in scalar-tensor theories with the derivative coupling of the form $\sim G^{\mu\nu}\phi_\mu\phi_\nu$ in~\cite{Babichev:2013cya}. The same strategy was then used to derive hairy black hole solutions from more general Lagrangians in the Horndeski family~\cite{Kobayashi:2014eva,Babichev:2016fbg,Tretyakova:2017lyg}, its bi-scalar extension~\cite{Charmousis:2014zaa}, and the GLPV/DHOST theories~\cite{Babichev:2016kdt,Babichev:2017guv,BenAchour:2018dap,Motohashi:2019jre} (see~\cite{Babichev:2016rlq,Lehebel:2018zga} for a review). Some of these solutions have the Schwarzschild(-(A)dS) geometry dressed with nontrivial scalar-field profiles, i.e., a stealth property. As explained in the previous section, strong constraints have been imposed on scalar-tensor theories as alternatives to dark energy after GW170817. Implications of the limit $c_{\rm GW}^2=1$ for black holes in scalar-tensor theories have been discussed in~\cite{Babichev:2017lmw,Tattersall:2018map,BenAchour:2018dap}.
Perturbations of black holes in scalar-tensor theories are also worth investigating for the same reasons as in the case of cosmological perturbations: one can judge the stability of a given black hole solution and give predictions for observations. As in general relativity, for a spherically symmetric background it is convenient to decompose metric perturbations into even parity (polar) and odd parity (axial) modes. The scalar field perturbations come into play only in the even parity sector. Since the Horndeski theory and its extensions preserve parity, the equations of motion for the even and odd modes are decoupled. Within the Horndeski theory, the quadratic actions and the stability conditions of the even and odd parity perturbations were derived for a general static and spherically symmetric background with a time-independent scalar field in~\cite{Kobayashi:2012kh,Kobayashi:2014wsa}.\footnote{These papers contain typos, some of which were pointed out in~\cite{Ganguly:2017ort,Mironov:2018uou}. The reader is recommended to refer to the latest versions of {\ttfamily arXiv:1202.4893} and {\ttfamily arXiv:1402.6740}.} See also~\cite{Cisterna:2015uya,Tattersall:2018nve}. General black hole perturbation theories covering a wider class of Lagrangians have been developed in~\cite{Kase:2014baa,Tattersall:2017erk}. Odd parity perturbations and the stability of static and spherically symmetric solutions with a linearly time-dependent scalar field are discussed in~\cite{Ogawa:2015pea,Takahashi:2016dnv}, but their conclusions about instabilities have been questioned~\cite{Babichev:2018uiw}. The perturbation analysis of spherically symmetric solutions in the Horndeski theory can be applied not only to black holes, but also to wormholes. The structure of the stability conditions for spherically symmetric solutions is analogous to that for cosmological solutions, which allows us to formulate the no-go theorem for stable wormholes in the Horndeski theory in a similar way to proving the no-go for nonsingular cosmologies introduced in Sec.~\ref{sec:nogotheo}~\cite{Rubakov:2015gza,Rubakov:2016zah,Kolevatov:2016ppi,Evseev:2017jek}. Also in the wormhole case, theories beyond Horndeski admit stable solutions~\cite{Franciolini:2018aad,Mironov:2018uou}. \section{Conclusion}\label{sec:final} In this review, we have discussed recent advances in the Horndeski theory~\cite{Horndeski:1974wa} and its healthy extensions, i.e., degenerate higher-order scalar-tensor (DHOST) theories~\cite{Langlois:2015cwa}. We have reviewed how the Horndeski theory was ``rediscovered'' in its modern form in the course of developing the Galileon theories~\cite{Deffayet:2011gz,Kobayashi:2011nu}. This rediscovery has stimulated extensive research on physics beyond the cosmological standard model using the general framework of scalar-tensor theories. Along with renewed interest in the Horndeski theory, the border of Ostrogradsky-stable scalar-tensor theories has expanded recently to include more general DHOST theories. Among DHOST theories, it is quite likely that only the Horndeski theory and its disformal relatives admit stable cosmological solutions and hence can potentially be viable~\cite{Langlois:2017mxy,deRham:2016wji}. In light of GW170817, we have seen that some of the free functions in DHOST Lagrangians can be strongly constrained upon imposing $c_{\rm GW}=1$~\cite{Creminelli:2017sry,Sakstein:2017xjx,Ezquiaga:2017ekz,Baker:2017hug,Bartolo:2017ibw,Kase:2018iwp,Ezquiaga:2018btd,Kase:2018aps}.
However, one must be careful about the range of the validity of modified gravity under consideration. The cutoff scale of modified gravity as an alternative to dark energy may be close to the energy scales observed at LIGO~\cite{deRham:2018red}, and modified gravity in the early universe (i.e., at much higher energies) is free from the constraint $c_{\rm GW}\simeq 1$. Even if one imposes $c_{\rm GW}=1$, there still is an interesting class of scalar-tensor theories, in which the nonstandard behavior of gravity arises only inside matter~\cite{Crisostomi:2017lbg,Langlois:2017dyl,Dima:2017pwp,Crisostomi:2017pjs}. Having obtained a general framework of healthy scalar-tensor theories, it would be exciting to test gravity with cosmological and astrophysical observations as well as to explore novel models of the early universe. Now we are at the dawn of gravitational-wave astrophysics and cosmology, and gravitational waves allow us to access physics at extremely high energies and in the strong-gravity regime. In view of this, we hope that the general framework presented in this review will prove more and more useful in exploring the fundamental nature of gravity. We also hope that generalizing gravity will result in gaining yet deeper insights into theoretical aspects of gravity. \section*{Acknowledgements} I am grateful to Shingo Akama, Yuji Akita, Antonio De Felice, Takashi Hiramatsu, Shin'ichi Hirano, Aya Iyonaga, Xian Gao, Kohei Kamada, Rampei Kimura, Taro Kunimitsu, Hayato Motohashi, Tatsuya Narikawa, Sakine Nishi, Atsushi Nishizawa, Hiromu Ogawa, Seiju Ohashi, Ryo Saito, Maresuke Shiraishi, Teruaki Suyama, Hiroaki W. H. Tahara, Kazufumi Takahashi, Tomo Takahashi, Yu-ichi Takamizu, Norihiro Tanahashi, Hiroyuki Tashiro, Shinji Tsujikawa, Yuki Watanabe, Kazuhiro Yamamoto, Masahide Yamaguchi, Daisuke Yamauchi, Jun'ichi Yokoyama, and Shuichiro Yokoyama for fruitful collaborations on the Horndeski theory and modified gravity over the recent years. The work of TK was supported by MEXT KAKENHI Grant Nos.~JP15H05888, JP16K17707, JP17H06359, JP18H04355, and MEXT-Supported Program for the Strategic Research Foundation at Private Universities, 2014-2018 (S1411024). \section*{References}
\section{Introduction} \label{sec:Intro} Quantum tunnelling is a fundamental and ubiquitous process that sparked a long-standing debate on its duration \cite{Landauer1989,Landsman2015} since the concept was first conceived \cite{MacColl1932}. Time is not an operator in quantum mechanics, but rather a parameter in the time-dependent Schr\"{o}dinger equation (see for example \cite{Pauli1980} p. 63). This fact is often used as a throw-away argument claiming that, in consequence, the question ``how long does it take for a quantum particle to tunnel through a potential barrier'' is not physically valid. On the other end of the debate scale, there is the notion that it should be ``easy, just follow the peak of the wave packet''. The peak of the wave packet is the relevant observable when determining the group delay of a dispersive wave packet \begin{equation} T_g = \frac{z}{v_g} = z\cdot \frac{dk}{d\omega} = \frac{d\phi}{d\omega}, \end{equation} where $v_g$ is the group velocity, $\phi$ is the phase of the wave packet for a particular energy component $\omega$, and $k$ is the corresponding wave number. The Wigner delay $\tau_W$, often applied to ionization delays \cite{Isinger2017} (see also section \ref{sec:SPI}), formally corresponds to the group delay, \begin{equation} \tau_W = \hbar \frac{d\phi}{dE} = \frac{d\phi}{d\omega} = T_g. \label{eq:WignerDelayGroupDelay} \end{equation} However, this concept relies on the spectrum of the wave packet being unchanged, a condition that is not satisfied in the tunnelling process. In particular, tunnelling acts as an energy filter, favouring higher-energy components of the incident wave packet, see figure \ref{fig:TransmissionFilter}. \begin{figure}[hb] \centering \includegraphics[width=0.5\linewidth]{figure01-eps-converted-to} \caption{A potential barrier acts as a high-pass filter for the wave packet, thus strongly modifying the energy components of the ionised wave packet.} \label{fig:TransmissionFilter} \end{figure} Or, in the words of M. B\"{u}ttiker: ``There is no conservation law for the peak of a wave packet.'' Additionally, the electron wave packet is chirped during the propagation in vacuum, unlike photon wave packets. The combination of this chirp with the energy filtering during the tunnelling process means that the Wigner formalism \cite{Isinger2017} for ionization delays is not applicable to the tunnelling ionization case \cite{Sabbar2015,Sabbar2017erratum,Gallmann2017}, where a valence electron tunnels through a potential barrier created by the superposition of the binding Coulomb potential with a strong laser field. The attoclock is a recently developed approach for the extraction of tunnelling time in the context of strong field ionization \cite{Eckle2008,Eckle2008a}. The most recent attoclock experimental measurements \cite{Landsman2014b}, which found sub-luminal tunnelling times over a wide intensity range, sparked a number of theoretical developments \cite{Zimmermann2016,Ni2016,Teeny2016,Torlina2015}. Two other independent attoclock experiments \cite{Camus2017,Sainadh2017} recently came to opposite conclusions regarding the duration of the tunnelling process. Additionally, an experiment on rubidium atoms tunnelling in a kicked optical lattice \cite{Fortun2016} also found finite tunnelling times on a much slower timescale of microseconds, due to the much heavier particles involved.
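The free-space chirp of an electron wave packet mentioned above is easily quantified. In atomic units ($\hbar=m_e=1$) a Gaussian wave packet of initial width $\sigma_0$ spreads as $\sigma(t)=\sigma_0\sqrt{1+[t/(2\sigma_0^2)]^2}$, with the faster spectral components running ahead of the slower ones. The following minimal sketch (Python; the value of $\sigma_0$ is an illustrative assumption) puts numbers to this:
\begin{verbatim}
import numpy as np

# Spreading of a free Gaussian electron wave packet in atomic units
# (hbar = m_e = 1): sigma(t) = sigma0 * sqrt(1 + (t / (2 sigma0^2))^2).
# The initial width sigma0 = 5 au is an illustrative assumption.
sigma0 = 5.0
for t_fs in [0.0, 1.0, 5.0]:
    t = t_fs * 41.34                  # fs -> au of time
    sigma = sigma0 * np.sqrt(1 + (t / (2 * sigma0**2))**2)
    print(f"t = {t_fs:3.1f} fs: sigma = {sigma:5.1f} au")
# Fast components run ahead of slow ones, so the packet is chirped and its
# peak is not a conserved marker of the electron's position.
\end{verbatim}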
It seems that ultrafast laser technology finally enabled experiments to provide evidence supporting a quote by Landauer in 1989 \cite{Landauer1989}: \begin{quote} More important than the exact result and its relation to theoretical controversies, is the fact that a timescale associated with the barrier traversal can be measured, and is a real (not imaginary) quantity. \end{quote} While most experiments seem to agree that quantum tunnelling does not happen instantaneously, no consensus has yet emerged among recent theoretical works \cite{Zimmermann2016,Torlina2015,Teeny2016,Sainadh2017,Yuan2017a,Wang2018a,Ni2018a,Bray2018b,Ivanov2018}. Here, we discuss the implications of recent new discoveries on the interpretation of attoclock experiments, as well as compare the variety of approaches used to extract tunnelling times in strong field ionization. This topic is important not only to the interpretation of time-resolved studies in attosecond physics, but also to the treatment of many experimental schemes in the atomic, molecular and optical physics community which are based on a semiclassical view of strong-field ionization \cite{Meckel2008,Lin2010,Bruner2015}. \label{sec:Terminology} For the sake of clarity, we will use the following terminology. \\ \begin{description} \item[transition point $t_s$:] The transition point $t_s$ is a complex moment in time, usually determined in a Strong Field Approximation (SFA) calculation as the saddle point time, and sometimes interpreted as the beginning of the tunnelling process \cite{Dykhne1960,Dykhne1962,PPT1,PPT2,Yudin2001,Anatomy,Klaiber2015}. \item[starting time $t_0$:] The starting time $t_0$ conceptually corresponds to the real part of the transition point, $\Re[t_s]$, meaning the beginning of the quantum tunnelling process on the real time axis. \item[ionization time $t_i$:] The ionization time $t_i$ denotes the moment in time when an electron wave packet appears in the continuum. It is typically real-valued \cite{PPT2,Klaiber2015}. \item[tunnelling time $\tau$:] The tunnelling time $\tau = t_i - t_0$ describes the potential barrier traversal time, or in other words, the duration of the tunnelling process. \item[attoclock delay $\tau_A$:] The attoclock delay $\tau_A$ describes the tunnelling time as defined in the attoclock method, $\tau_A = t_i - t_0$, where $t_0$ is assumed to be the moment when the electric field is maximized \cite{Eckle2008,Eckle2008a,Pfeiffer2012}, and $t_i$ is reconstructed from the measurements \cite{Pfeiffer2012,Landsman2014b}. \end{description} \subsection{Attoclock Experiment} \label{sec:AttoclockExperiment} The strong-field ionization process encodes the moment when an electron enters the continuum in the final asymptotic kinetic momentum $\mathbf{p}$ of the photoelectron measured on a detector \cite{Krausz2009}. This is due to the conservation of the canonical momentum \begin{equation} \mathbf{p} = \mathbf{v}(t)+e\mathbf{A}(t), \label{eq:ConservationCanonicalMomentum} \end{equation} where $\mathbf{v}$ denotes the velocity of a photoelectron at time $t$, and $\mathbf{A}$ the vector potential at the same time. This conservation law is valid under the assumption that, during the propagation of the freed electron, the influence of the parent ion Coulomb force can be neglected (Strong Field Approximation, SFA). Throughout the paper, atomic units (au) are used unless otherwise specified.
At the core of the attoclock experiment \cite{Eckle2008,Eckle2008a,Pfeiffer2012,Cirelli2013,Landsman2014b} lies the comparison of experimentally observed final momenta with calculated values from a semiclassical strong-field tunnel ionization model. For the measurement, an ellipticity of $\epsilon=0.87$, helium atoms as targets, and a near-infrared (near-IR) wavelength of $\lambda = 735\,\mathrm{nm}$ were chosen \cite{Eckle2008a,Landsman2014b}. This results in a rotating electric field with a rotation period of approximately $2.7\,\mathrm{fs}$, see figure \ref{fig:PovRayInputTemplate} for an example sketch. \begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{figure02-eps-converted-to} \caption{Example for a pulse wave form in the attoclock experiment. The field is elliptically polarized with $x$ as the major polarization axis and $y$ the minor axis. The envelope reaches its maximum value for $t=0$, but the field maximum might be shifted due to the carrier-envelope-offset (CEO) phase $\phi_{\mathrm{CEO}}$.} \label{fig:PovRayInputTemplate} \end{figure} The wave form used in the attoclock experiment can be described as \begin{equation} \mathbf{F}(t) = \frac{F_0}{\sqrt{1+\epsilon^2}}\left( \cos(\omega t + \phi_{\mathrm{CEO}})\hat{x} - \epsilon\sin(\omega t + \phi_{\mathrm{CEO}})\hat{y} \right)\cdot f(t), \label{eq:LaserField} \end{equation} where $F_0 = \sqrt{I}$ is the field strength constant related to the peak intensity $I$, $\omega = 0.062\, \mathrm{au}$ the angular frequency related to the central wavelength $\lambda = 735\,\mathrm{nm}$, the major axis of polarization is along $x$, and propagation is along the $z$ direction. The pulse envelope $f(t)$ with $f(0)=1$ defines a pulse duration of 6 fs (7 fs) FWHM for the lower (higher) intensity regime respectively. For our simulations (see section \ref{sec:ClassicalTrajectories}) we used a $\cos^2$ shaped envelope. The carrier-envelope-offset (CEO) phase $\phi_{\mathrm{CEO}}$ was not stabilized in the experiment \cite{Telle1999}, to prevent any artificial angular shifts due to stabilization fluctuations \cite{Eckle2008a,Smolarski2010a}. This leads to a random $\phi_{\mathrm{CEO}}$ for each pulse. The maximal field amplitude is therefore \begin{equation} F_{\mathrm{max}} = \frac{F_0}{\sqrt{1+\epsilon^2}}\cdot f(|\phi_{\mathrm{CEO}}/\omega|). \end{equation} It was shown in \cite{Eckle2008a,Eckle2008,Smolarski2010a} that a randomized CEO phase averages out to an effective $\phi_{\mathrm{CEO}} = 0$ for the observable of the most probable final momentum. This is due to the strong dependence of the ionization probability on the absolute field strength. Since the CEO phase was not stabilized in the experiment, corresponding calculations must either integrate over a random distribution of CEO phases as well, or be executed for the averaged effect of $\phi_{\mathrm{CEO}} = 0$ \cite{Eckle2008a}. The attoclock analysis of the experiment is only concerned with the most probable final momentum, or the highest probability density value \cite{Eckle2008a,Landsman2014b}. From now on, we assume $\phi_{\mathrm{CEO}} = 0$ in all calculations. The aforementioned conservation of canonical momentum is exploited by comparing the measured final momentum offset angle in the plane of polarization $\theta$ (see figure \ref{fig:VMIdataSCT}) to calculations assuming that the free propagation starts (for the most probable electron trajectory) exactly at the peak of the electric field $t_0 = 0$ \cite{Eckle2008a,Landsman2014b}.
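To make this zero-time reference concrete, the sketch below evaluates the SFA prediction for an electron released at rest exactly at the field maximum of the pulse~\eqref{eq:LaserField} with $\phi_{\mathrm{CEO}}=0$: its drift momentum is $\pm\mathbf{A}(t_0=0)$, where the sign depends on the charge convention in \eqref{eq:ConservationCanonicalMomentum} and only flips the direction. This is a minimal illustration, not the full experimental analysis; the intensity of $2\cdot10^{14}\,\mathrm{W/cm^2}$ is an assumed value within the experimental range.
\begin{verbatim}
import numpy as np

# SFA sketch of the attoclock zero-time reference: an electron born at rest
# at the field maximum t0 = 0 of the attoclock pulse defined above drifts
# with momentum +-A(t0). Assumed intensity: 2e14 W/cm^2.
eps, omega = 0.87, 0.062                  # ellipticity; 735 nm in au
f0 = np.sqrt(2e14 / 3.51e16) / np.sqrt(1 + eps**2)
T  = 6.0 * 41.34                 # cos^2 envelope: FWHM 6 fs, zero at |t| = T

t   = np.linspace(-T, 0.0, 20001)         # pulse from its start up to t0 = 0
env = np.cos(np.pi * t / (2 * T))**2
Fx  =  f0 * np.cos(omega * t) * env
Fy  = -f0 * eps * np.sin(omega * t) * env

dt = t[1] - t[0]
Ax, Ay = -Fx.sum() * dt, -Fy.sum() * dt   # A(t0) with F = -dA/dt
px, py = -Ax, -Ay                         # one choice of charge convention
print(f"|p| = {np.hypot(px, py):.2f} au, "
      f"angle from major axis = {np.degrees(np.arctan2(py, px)):.1f} deg")
# -> approximately 90 deg: in the SFA the most probable electron is streaked
#    onto the minor axis; any measured deviation is the offset angle theta.
\end{verbatim}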
This zero-time assumption of $t_0 = 0$ means that a polarization measurement determines the orientation of the polarization ellipse in the laboratory frame, yielding the reference for the streaking angle measurement, compare figures \ref{fig:PovRayInputTemplate} and \ref{fig:VMIdataSCT}. \begin{figure}[ht] \centering \includegraphics[height=0.38\linewidth]{figure03a-eps-converted-to} \hfill \includegraphics[height=0.4\linewidth]{figure03b-eps-converted-to} \caption{\textbf{Photoelectron momentum distribution (PMD):} Example of a PMD in the attoclock experiment, projected onto the polarization plane $xy$ \cite{Landsman2014b}. The major axis of polarization is along the $p_x$-axis. According to \eqref{eq:ConservationCanonicalMomentum}, the majority of photoelectrons should therefore have final momentum along the $p_y$-axis. The red line marks the final electron momentum direction with the highest photoelectron count rate, which corresponds to the most probable photoelectron trajectory. Any streaking angle deviating from 90 degrees (marked by the perpendicular white/gray line) constitutes an offset angle $\theta$, measured in the rotation direction of the laser field. Here, the measured offset angle $\theta$ is larger than the predicted streaking angle assuming instantaneous tunnelling (marked as a black dashed line) from a single classical trajectory (SCT) calculation. } \label{fig:VMIdataSCT} \end{figure} Consequently, the conclusions of the attoclock experiment depend on the characteristics of the zero-time reference, and the approximations going into it. These calculations were performed in a semiclassical framework, where an analytical calculation of the quantum tunnelled wave packet describes the probability distribution of initial conditions for classical trajectories. For a Classical Trajectory Monte Carlo (CTMC) simulation, this probability distribution is sampled for a cloud of trajectories, which then mimic the propagation of the electron wave packet after ionization \cite{Ehrenfest1927}. Taking only the most probable initial conditions for all parameters results in a Single Classical Trajectory (SCT). The SCT follows the highest probability density of the ionized wave packet, see section \ref{sec:SCTvsCTMC} for a detailed discussion. The classical trajectory numerical method allows one to fully account for the ion Coulomb force superposed with the strong laser field during the propagation \cite{Landsman2013b}, as well as other effects such as an induced dipole in the parent ion \cite{Pfeiffer2012}. The assumptions and approximations included in the complete attoclock experiment analysis are as follows. \begin{enumerate}[(i)] \item \emph{Dipole approximation}: the spatial dependence of the laser field is neglected, requiring that the wavelength be much larger than the target size and that the Lorentz force induced by the magnetic field be negligibly small \cite{Reiss2014,Ludwig2014,Danek2018,Maurer2018}. Also, the laser pulse is short enough that the electron does not travel any significant distance out of the focus before the pulse has finished. \item \emph{Single Active Electron (SAE) approximation}: it is assumed that the helium target atoms are only singly ionized, and the second electron remains in its (ionic) ground state. Furthermore, the approximation neglects any electron-electron interactions. Instead, it uses an effective Coulomb potential assuming that the remaining bound electron screens the ion perfectly \cite{Emmanouilidou2015,Majety2017}.
\item \emph{Adiabatic (A) approximation} or \emph{non-adiabatic (NA) framework}: In the adiabatic approximation, it is assumed that the temporal change of the laser field is relatively slow compared to the response time of the bound wave function, such that the wave function can adapt instantaneously. This also implies that the tunnelling process can be calculated in a quasistatic picture. On the other hand, in the non-adiabatic framework the temporal dynamics of the laser field and thus the temporal changes to the binding potential of the atom are considered. This has several consequences, including that the tunnelling electron gains some energy from the oscillating or rotating field \cite{PPT2,Mur2001,Hofmann2014,Ni2018}. \item \emph{Classical trajectories} mimicking the propagation of a quantum wave packet: Classical dynamics agree exactly with quantum dynamics as long as the spatial dependence of the driving potential is a polynomial of second order or lower \cite{Ehrenfest1927}. This is the case within the SFA, but not any more if the weak influence of the Coulomb potential is accounted for. However, as long as these classical trajectories stay far enough away from their parent ion, the quantum correction is negligible and the classical dynamics can represent the propagation of the photoelectron wave packet \cite{Shvetsov-Shilovski2013}. \item \emph{``Zero-time'' estimate} $t_0$: in order to derive a duration of the tunnelling process, an estimate for the beginning moment is required. In the attoclock analysis, the most probable starting point for tunnelling is assumed to be when the field strength is the strongest, corresponding to the shortest tunnelling barrier. \end{enumerate} In the forthcoming sections, we will take a closer look at different approximations. Recent research on their validity is presented, and implications for the interpretation of strong field ionization experiments in general and the attoclock experiment in particular are discussed. \subsection{Comparison with single photon ionization} \label{sec:SPI} Attosecond photoionization delays in atoms were first measured in the tunnel ionization \cite{Eckle2008a} and then in the single-photon ionization regime \cite{Schultze2010}. More detailed measurements and theory confirmed that in the simplest case, when the electron is promoted into a flat (non-resonant) continuum by direct single photon ionization, the corresponding ionization delay is given by the Wigner delay, which can be expressed as the energy derivative of the scattering phase and is equivalent to the group delay of the departing electron wave packet \cite{Isinger2017}, see also \eqref{eq:WignerDelayGroupDelay}. To date, different attosecond techniques have confirmed this result, taking into account a measurement-induced delay \cite{Dahlstrom2012,Nagele2011,Pazourek2015a}. This is in contrast to tunnel ionization, where our experimental results do not correspond to the Wigner delay because the centre of the wave packet makes a phase jump when a chirped wave packet propagates through an energy filter \cite{Gallmann2017,Landsman2014b,Sabbar2015,Sabbar2017erratum} (see Section \ref{sec:Intro}). In this case we lose the direct link to the classical trajectory, with the centre of the electron wave packet following Ehrenfest's theorem \cite{Ehrenfest1927}. However, with a flat continuum there is no such energy filter, and the ionization delay is correctly described by the Wigner delay.
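As a minimal numerical illustration of Eq.~\eqref{eq:WignerDelayGroupDelay}, the sketch below differentiates a model scattering phase with respect to energy. The Breit--Wigner resonance parameters are purely illustrative assumptions; a flat (non-resonant) phase would give a negligible delay.
\begin{verbatim}
import numpy as np

# Numerical illustration of the Wigner delay tau_W = hbar dphi/dE.
# The Breit-Wigner phase (resonance at E_r, width Gamma) is a model
# assumption chosen only to produce a nontrivial phase.
hbar = 0.6582119                      # eV fs
E_r, Gamma = 30.0, 0.1                # eV (illustrative values)
E = np.linspace(29.0, 31.0, 4001)
phase = np.arctan2(Gamma / 2, E_r - E)   # rises by pi across the resonance
tau_W = hbar * np.gradient(phase, E)     # Wigner delay in fs
print(f"peak delay = {tau_W.max():.1f} fs "
      f"(analytic 2*hbar/Gamma = {2 * hbar / Gamma:.1f} fs)")
# Away from the resonance the phase is flat and tau_W is negligible.
\end{verbatim}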
The situation becomes more complicated when ionization occurs in the vicinity of autoionizing states, which significantly affect the Wigner delay \cite{Sabbar2015,Sabbar2017erratum}. This was confirmed most recently with angle- and spectrally resolved measurements, where we could demonstrate, in collaboration with Anne L'Huillier, that not only is the phase of the photoelectron wave packet significantly distorted in the presence of these autoionization resonances in argon, but this distortion also depends on the electron emission angle \cite{Cirelli2018}. In this situation we again lose the direct link between the Wigner delay and the classical trajectory of the liberated electron. Angular streaking was initially applied to attosecond pulse measurements \cite{Constant1997,Zhao2005} before we applied it to the attoclock concept \cite{Eckle2008,Eckle2008a}. To characterize the temporal structure of ultrafast free-electron pulses \cite{Hartmann2018,Schweizer2018}, the ultrafast X-ray pulse promotes electrons of a target gas into the continuum by single-photon ionization, and these photoelectrons are subsequently streaked by a close-to-circularly polarized pulse of longer wavelength. However, moving away from a pump-probe scheme with circular polarization to a single pulse with elliptical polarization was the key idea to obtain a self-referencing ``time-zero'' calibration for the attoclock \cite{Eckle2008a}. These ideas have since been applied, for example, to measure the time-dependent polarization of an ultrashort pulse with sub-cycle resolution \cite{Boge2014a}. \subsection{Other experiments on tunnelling delay} Following the attoclock measurements performed in the Keller group \cite{Eckle2008,Eckle2008a,Pfeiffer2012,Boge2013,Landsman2014b}, a number of other experimental groups measured tunnelling time. A completely different approach outside the ultrafast physics community was pursued by Fortun and coworkers \cite{Fortun2016}. They studied rubidium atoms trapped in an optical lattice tunnelling from one potential well to the next when the lattice is suddenly kicked. The authors came to the conclusion that the atoms experienced a tunnelling time of the order of microseconds across potential barriers of width on the order of nanometers, since the tunnelled wave packets seemed to lag behind the reflected wave packets in their oscillation inside the neighbouring lattice cell \cite{Fortun2016}. An experimental-theoretical collaboration published their results \cite{Camus2017} comparing the attoclock observable of final momentum direction $\theta$ between two different target species, argon and krypton. They too found that a quantum calculation based on the Eisenbud-Wigner-Smith approach \cite{Wigner1955}, including both a finite real tunnelling time and an initial longitudinal momentum, reproduced their measurements, whereas calculations assuming instantaneous tunnelling failed to do so even qualitatively \cite{Camus2017}. Classical trajectories reproducing their measurements were not only required to start at a time $t_i>0$ after the peak of the pulse, they were also required to have some positive longitudinal momentum. An important feature of this experiment is the fact that the conclusions do not depend on the field strength calibration (see \cite{Boge2013,Hofmann2014,Hofmann2016,Cai2017} and section \ref{sec:NA_CTMC} for more details on this issue), since the observables are directly compared with respect to the average absolute momentum.
On the other hand, the authors assume that the SAE approximation is also valid for both argon and krypton targets, where the ionization happens out of 3p or 4p orbitals. Multi-electron effects in helium will be discussed in section \ref{sec:SAEtest}. More recently, Sainadh \textit{et al.}\, published an attoclock measurement on atomic hydrogen, comparing their experimental data to time-dependent Schr\"{o}dinger Equation (TDSE) calculations \cite{Sainadh2017}. They found that their codes reproduce the experimental values when the Coulomb potential is included, and yield zero streaking offset angle when a Yukawa short-range potential is employed, in agreement with prior findings \cite{Torlina2015}. This result was used by the authors as evidence of instantaneous tunnelling time in hydrogen \cite{Sainadh2017}. \subsection{More general concepts} \label{sec:GeneralConcepts} Apart from the above-discussed approximations and calculation concepts affecting the attoclock interpretation, there are a few more which are commonly found in strong-field ionization models. For most analytical calculations, the binding potential of the target atom is approximated as a short-range potential. This can mean that the extreme case of a delta-potential is used \cite{PPT2}, or a Yukawa potential exponentially suppressing the long-range Coulomb tail \cite{Torlina2015,Sainadh2017,Pazourek2015a}. For the propagation of a freed photoelectron, the long-range Coulomb potential induces a perturbation on the trajectory dominated by the strong laser field. Neglecting this Coulomb correction leads to the SFA. There are a few approaches where the Coulomb correction is taken into account as a first-order perturbation along the unperturbed trajectory \cite{Kaushal2013,Landsman2013b}. At high ellipticity or circular polarization, the Coulomb correction leads to an additional rotation of the final photoelectron momentum in the direction of rotation of the laser field \cite{Landsman2013b}. A strong electric field can induce polarization in bound atomic or ionic states and therefore also modify the ion Coulomb potential that the photoelectron feels while propagating in the continuum. But these higher-order terms can often be neglected, provided the $1/r$ Coulomb term is taken into account \cite{Yuan2017}. This paper is structured as follows. Section \ref{sec:Intro} introduced the attoclock method in its originally conceived form, along with all relevant approximations and assumptions. Furthermore, alternative experiments were summarized, and more general concepts and approximations of strong field ionization phenomena were presented. Sections \ref{sec:TDSE} to \ref{sec:StartingTime} form the core of this review. Each discusses recent research and important new developments for the attoclock method. Section \ref{sec:TDSE} presents an overview of different numerical approaches to the tunnelling time problem. In section \ref{sec:Dipole} the dipole approximation is investigated. The single active electron (SAE) approximation, as opposed to taking account of multi-electron effects, is discussed in section \ref{sec:SAEtest}. In section \ref{sec:2stepModelNA}, non-adiabatic effects and their manifestation in the 2-step model of strong-field ionization are presented. Classical trajectory simulations based on the 2-step model are a common tool.
Their details are discussed in section \ref{sec:ClassicalTrajectories}, with special focus on different predictions for the initial conditions probability distribution in phase space at the tunnel exit. Finally, section \ref{sec:StartingTime} summarizes work on the starting time of the tunnelling process. The paper concludes with section \ref{sec:Summary} summarizing the influences of the different approximations on the interpretation of the attoclock experimental data. \section{Numerical solutions of the time-dependent Schr\"{o}dinger equation} \label{sec:TDSE} Since the publication of the first attoclock measurements \cite{Eckle2008,Eckle2008a,Pfeiffer2012}, many groups have tried to numerically simulate the experiment by solving the time-dependent Schr\"{o}dinger equation (TDSE) \cite{Ivanov2014,Torlina2015,Teeny2016,Ni2016,Zimmermann2016,Yuan2017,Sainadh2017,Camus2017,Ni2018a,Bray2018b,Ivanov2018}. In the case of \cite{Ivanov2014}, the offset angle $\theta$ extracted from the TDSE calculations seems to match a non-adiabatic field strength calibration of the attoclock experiment data \cite{Boge2013}, see also section \ref{sec:2stepModelNA} and figure \ref{fig:My_Field_Angle_Ivanov2014}. The authors of \cite{Torlina2015,Sainadh2017,Bray2018b} chose an approach comparing TDSE calculations using a pseudopotential with TDSE results using Yukawa potentials. The pseudopotentials are chosen to mimic the screening of $N-1$ bound electrons, such that only a single-electron wave function (SAE approximation) is propagated. Of course this means that multi-electron effects and polarization of the ion due to the strong field are neglected in these calculations. Nevertheless, Yukawa potential calculations where the long-range Coulomb tail is completely suppressed routinely yield negligible streaking offset angles. This result is often taken as an argument that the observed streaking angle offset $\theta$ of the experiments must be solely due to long-range Coulomb effects \cite{Torlina2015,Sainadh2017,Bray2018b}. However, one should keep in mind that by replacing the Coulomb potential with a Yukawa potential, either the ionisation potential or the shape of the potential barrier is significantly altered. The authors of \cite{Camus2017} commented on this interpretation: ``[...] when the initial nonvanishing momentum of the electron near the tunnel exit is overlooked, the final photoelectron momentum distribution may be explained only with a negative time delay near the tunnel exit.'' Of course, negative tunnelling time would violate causality, illustrating that the choice of initial conditions at the tunnel exit is key to attoclock interpretation. In \cite{Camus2017}, a quantum mechanical Wigner trajectory \cite{Wigner1955} tracking the most probable photoelectron is calculated, and the results compared to attoclock measurements of argon and krypton. In their analysis, the authors find that a model based on these Wigner trajectories, which includes a finite initial longitudinal momentum at the tunnel exit and finite ionization delay, can reproduce their measurements. The issue of the photoelectron momentum at the tunnel exit will be discussed in more detail in section \ref{sec:InitLong}. However, multi-electron effects such as polarization of the ion, or ionization out of a p-orbital rather than an s-orbital, are neglected in this approach, by assuming that these effects are the same for both species, and therefore cancel out when studying the differences between the species \cite{Camus2017}.
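The caveat about Yukawa potentials raised above can be made concrete with a simple barrier comparison. The sketch below evaluates the one-dimensional potential cuts $V(z)=-1/z-Fz$ (Coulomb) and $V(z)=-e^{-z/a}/z-Fz$ (Yukawa) at a fixed energy $E=-I_p$; the field strength $F$, the screening length $a$, and $I_p=0.5\,$au (hydrogen-like) are illustrative assumptions, not the parameters used in the cited TDSE studies.
\begin{verbatim}
import numpy as np

# Barrier comparison at fixed energy E = -Ip (atomic units). F, a, and
# Ip are illustrative assumptions; Ip = 0.5 au mimics hydrogen.
Ip, F, a = 0.5, 0.05, 1.0
z = np.linspace(0.2, 40.0, 40001)     # coordinate along the field direction
for name, V in [("Coulomb", -1 / z - F * z),
                ("Yukawa ", -np.exp(-z / a) / z - F * z)]:
    barrier = z[V > -Ip]              # classically forbidden region
    print(f"{name}: barrier from {barrier[0]:.2f} to {barrier[-1]:.2f} au, "
          f"width {barrier[-1] - barrier[0]:.2f} au")
# Keeping E fixed, the Yukawa barrier is much wider; conversely, keeping the
# potential fixed changes the binding energy. Either way the tunnelling
# problem itself is altered, not only the long-range deflection.
\end{verbatim}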
An alternative approach is to monitor the instantaneous ionization rate during the pulse \cite{Teeny2016a,Teeny2016,Yuan2017,Yuan2017a} or to apply a tiny signal field \cite{Ivanov2018}, comparing the results to instantaneous tunnel ionization models. The probability density current through a virtual detector at the adiabatic tunnel exit point was found to be maximised a finite time $t_i>0$ after the peak of the field \cite{Teeny2016}. However, this calculation does not take non-adiabatic effects into account (see section \ref{sec:2stepModelNA} and \cite{Ni2018a}). In \cite{Yuan2017,Yuan2017a} the authors project their time-dependent wave function onto field-free bound states in order to determine the instantaneous ionization rate, finding it lagging behind the peak of the field. However, this method is not gauge invariant, unlike a projection executed after the laser pulse has passed \cite{Ivanov2018}. In the gauge-invariant approach to the instantaneous ionization rate, no asymmetry with respect to the peak of the field strength was found \cite{Ivanov2018}. This implies that the tunnelling process is also not asymmetric, meaning that a model assuming starting time $t_0 = 0$ and ionization time $t_i >0$ is not compatible with these results. Section \ref{sec:StartingTime} will provide more discussion of the starting time assumption. Classical backpropagation is yet another TDSE approach \cite{Ni2016,Ni2018,Ni2018a}, exploiting the correspondence between the classical turning point of an electron running up against a potential barrier and the tunnelling exit point. In these investigations, the authors defined different exit point criteria for the classical trajectories being propagated backwards in time, after sampling a fully quantum forward calculation. They found that if the radial velocity (or the velocity along the instantaneous field direction) is required to be zero at the exit point, the exit coordinates are even closer to the ion than in non-adiabatic derivations \cite{Ni2018a,PPT2}. The times $t_i$ when these criteria are satisfied are distributed closely around the peak of the laser field \cite{Ni2018a}. The authors of \cite{Zimmermann2016} calculated numerical solutions to the TDSE for strong-field tunnel ionization, and then extracted different tunnelling time predictions defined as derivatives of the complex transmission amplitude \cite{Landsman2015}. Their results show that for this particular approach, the SFA is a good approximation, as long as the field strength does not cross into the over-the-barrier regime, where the Coulomb potential is suppressed so much that a ground-state electron can escape classically. \section{Dipole Approximation} \label{sec:Dipole} The dipole approximation is easily satisfied in the experimental cases studied in the attoclock experiment and related calculations. The near-IR field of 735 nm at intensities of 0.3 up to $8\cdot 10^{14}\,\mathrm{W/cm^2}$ is a regime well within both limits, see figure \ref{fig:Ludwig_fig}(c). The wavelength is long enough that the photoelectrons do not feel the spatial dependence, and the influence of the magnetic field is negligibly small. \begin{figure}[ht] \centering \includegraphics[height=0.39\linewidth]{figure04a-eps-converted-to} \includegraphics[height=0.39\linewidth]{figure04b-eps-converted-to} \hfill \includegraphics[height=0.39\linewidth]{figure04c-eps-converted-to} \caption{The centre dot of the photoelectron momentum distributions (PMD) serves as a reference for absolute zero momentum.
The outer PMD ($|p_x|>0.1\,\mathrm{au}$, green circles in histograms) in panel (a) shows a shift in the direction opposite to the beam propagation, compared to the centre dot (orange squares) \cite{Ludwig2014,Maurer2018}. The shift can be explained by the onset of magnetic field effects when the laser parameters reach the ``magnetic displacement'' limit of the dipole approximation; see orange triangles in panel (c). Panel (b) shows no such shift for laser parameters as used in the attoclock experiment; see yellow area in panel (c). Figures adapted from \cite{Ludwig2014}. } \label{fig:Ludwig_fig} \end{figure} To illustrate this, the authors of \cite{Ludwig2014,Danek2018} performed photoelectron momentum measurements in linear polarization for $\lambda = 3.4\,\mathrm{\mu m}$ as well as $\lambda = 800 \, \mathrm{nm}$. As can be seen from figure \ref{fig:Ludwig_fig}(a), the effect of the magnetic field, causing a shift of the photoelectron momenta opposite to the beam propagation direction, is only visible when the experimental parameters reach beyond the ``dipole oasis''. The same effect is absent for experiments within both the upper and lower wavelength limits, as is the case for the attoclock measurements; compare figure \ref{fig:Ludwig_fig}(b). \section{Single Active Electron vs. Multi-electron effects} \label{sec:SAEtest} In semiclassical and quantum mechanical treatments of strong-field ionization, it is common to use the SAE approximation, assuming that only one valence electron will tunnel ionize, while the rest of the bound electrons end up in the ionic ground state. This of course invites questions about the validity of any model based on the SAE when interpreting experimental results for multi-electron atoms or molecules. The exchange and interaction between an ionized and a second bound electron in helium were studied with CTMC methods, focusing on the post-ionization dynamics \cite{Emmanouilidou2015}. It was found that an effective Coulomb potential with $Z = 1$, corresponding to perfect screening by the remaining bound electron(s), reproduced the final photoelectron momentum distribution (PMD) of two-electron calculations; see figure \ref{fig:Experiment}. Therefore, it is safe to neglect multi-electron effects during the continuum propagation of the ionized electron. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figure05.pdf} \caption{\textbf{Comparison of streaking offset angles $\theta_{\mathrm{SCT}}$ between three numerical models}: The single active electron (SAE) single classical trajectory (SCT) calculation shown as the black dashed line was used in the analysis of the attoclock measurement \cite{Landsman2014b,Landsman2014}. Classical trajectory Monte Carlo (CTMC) calculations were computed using an independent code \cite{Emmanouilidou2015}, once with the SAE approximation (blue solid line with $\ocircle$), and once as a two-electron (three-body) calculation (green solid line with $\times$). Calculations based on the SAE approximation agree with the calculation including both the electron-electron interaction and the electron-nuclear force. All calculations shown in this figure assume an adiabatic framework.
Figure adapted from \cite{Emmanouilidou2015}.} \label{fig:Experiment} \end{figure} Even if multi-electron effects are negligible once the ionized electron is already far away from the parent ion, there might still be significant electron-electron interaction during the actual tunnel ionization step, while the tunnelling electron is still at a distance from the nucleus comparable to that of the other bound electrons. A similar analysis for the tunnel ionization step, however, is challenging to perform since it requires a fully quantum mechanical treatment. Near-circular, but not perfectly circular, polarization prohibits coordinate reduction based on symmetry arguments, making the numerical solution of the TDSE computationally very expensive. Recently, Majety and Scrinzi \cite{Majety2017} published an approach that reduces the number of necessary basis functions with higher orbital angular momentum. The results of \cite{Majety2017} show that, similar to the propagation in the continuum, the tunnelling step can also be approximated with a single active electron for the case of helium. They could not find any observable differences in the final angular momentum spectrum between an SAE calculation and multi-channel calculations \cite{Majety2017}. Both these studies leave us with the conclusion that the single active electron approximation is valid, at the very least for helium targets and the laser parameter range studied in the attoclock experiment. For larger atoms, there is less prior work focusing on this aspect, though there is evidence that some multi-electron effects, specifically the polarization of the remaining parent ion in the strong laser field, can significantly influence the trajectory of the ionized photoelectron for the case of argon \cite{Shvetsov-Shilovski2012,Pfeiffer2012}. Even more so, for tunnel ionization of molecules the polarizability and electron-electron interactions are important to take into account \cite{Lezius2001}. \section{2-Step model with non-adiabatic effects} \label{sec:2stepModelNA} The original attoclock experiment was evaluated in the adiabatic approximation \cite{Eckle2008a,Landsman2014b}, characterized by a Keldysh parameter \cite{Keldysh1965} of \begin{equation} \gamma := \frac{\omega\sqrt{2 I_p}}{F} \ll 1, \end{equation} as is typical for strong-field experiments in a similar intensity range \cite{Shafir2012,Hickstein2012,Odenweller2014,Wolter2015,Landsman2013,Landsman2013b}. However, non-adiabatic effects already significantly influence the field strength calibration of strong-field ionization data for $\gamma \approx 1$ \cite{Boge2013,Hofmann2014,Hofmann2016,Cai2017}. This calls for a thorough reevaluation of the original attoclock data interpretation. Taking account of the dynamics of the strong electric field leads to several effects which are neglected in the adiabatic approximation. During the tunnelling process, the electron wave packet can gain energy from the oscillating field. This results in a shorter tunnel exit radius of the photoelectron compared to the quasistatic estimate \cite{Mur2001,Ni2016}; compare also figure 3 of \cite{Landsman2015}. Also, the ionization probability falls off more slowly with decreasing field strength than the adiabatic prediction (see figure 2 in \cite{Yudin2001}), and the PMDs are predicted to be wider in the non-adiabatic case than in the adiabatic approximation.
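As a quick numerical orientation for where the attoclock parameters sit relative to $\gamma \approx 1$, the following sketch (our own illustration; standard unit conversions) evaluates the Keldysh parameter for helium over the intensity range quoted in section \ref{sec:Dipole}, taking $F$ as the peak field along the major polarization axis:
\begin{verbatim}
import numpy as np

Ip = 24.587 / 27.2114      # helium ionization potential: eV -> atomic units
omega = 45.5633 / 735.0    # angular frequency in au for a 735 nm field

for I in [0.3e14, 1e14, 2.5e14, 8e14]:        # intensity in W/cm^2
    F = np.sqrt(I / 3.50945e16)               # peak field in atomic units
    gamma = omega * np.sqrt(2 * Ip) / F
    print(f"I = {I:.1e} W/cm^2 -> F = {F:.3f} au, gamma = {gamma:.2f}")
\end{verbatim}
The resulting $\gamma$ runs from about $2.9$ at the lowest intensity down to about $0.6$ at the highest, i.e., of order unity across the whole range, which is exactly the regime where the non-adiabatic corrections listed here become relevant.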
Furthermore, for the case of elliptical or circular polarization, the rotation of the field is imprinted onto the photoelectron, which exhibits an initial transverse momentum tangential to the rotation of the electric field at the tunnel exit \cite{PPT2}. This initial transverse momentum in turn yields a larger final absolute momentum for the same field strength compared to the adiabatic formalism, which strongly influences the field strength calibration of experimental data at lower intensities \cite{Hofmann2014,Hofmann2016,Cai2017}. For experimental data, the field strength which the photoelectrons experienced must be calibrated a posteriori from the measured PMD, by comparing a measured observable to predictions from a model \cite{Hofmann2016}. This leads to a shift of the (same) experimental data to lower field strengths if treated in the non-adiabatic framework. The same experimental data of \cite{Landsman2014b} has already been studied in another publication in order to assess non-adiabatic effects \cite{Boge2013}. The authors of \cite{Boge2013} focused on the influence of the initial transverse momentum on the angle of the most probable final momentum. On the other hand, for the calculation of the SCT and CTMC simulations, the shorter exit radius in the non-adiabatic framework was neglected in this particular work. The choice of such a mixed adiabatic/non-adiabatic model led the authors to conclude that the attoclock data does not exhibit non-adiabatic effects. This conclusion was questioned by later work, where fully non-adiabatic models were considered \cite{Ivanov2014,Hofmann2014,Hofmann2016,Cai2017}. Two other works \cite{Ivanov2014,Klaiber2015} looked at the original attoclock data in connection with non-adiabatic effects. The first calculated the numerical solution to the TDSE for a small range of intensities covered in the experiment \cite{Ivanov2014}. The second used an analytical model based on the standard SFA methodology \cite{Anatomy}, extending it to explicitly include non-adiabatic dynamics as well as the influence of the Coulomb potential during the tunnelling process and the propagation in the continuum \cite{Klaiber2015}. Furthermore, the authors of \cite{Cai2017} combined ideas of \cite{Boge2013} and \cite{Hofmann2016} to check for non-adiabatic effects with TDSE calculations as well as directly in the attoclock offset angle measurements. They also concluded that non-adiabatic effects must be taken into account, and that the sub-barrier quantum motion is important and should not be neglected in strong-field ionization models \cite{Cai2017}. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{figure06-eps-converted-to} \caption{\textbf{Effect of field strength calibration:} Comparison of measured streaking offset angles with single classical trajectory (SCT) reference calculations assuming instantaneous tunnelling ($\tau = 0$). The red solid line shows the prediction by the non-adiabatic SCT simulation, while the blue dashed line represents the adiabatic prediction. For the case of helium, the adiabatic and non-adiabatic SCT yield the same angle prediction for a large range of field strengths. Also shown are the values extracted from the TDSE by \cite{Ivanov2014} as green triangles.
} \label{fig:My_Field_Angle_Ivanov2014} \end{figure} Figure \ref{fig:My_Field_Angle_Ivanov2014} shows the attoclock data \cite{Boge2013,Landsman2014b} in adiabatic and non-adiabatic calibration (blue and red dots), compared to the TDSE calculation for the final streaking angle by \cite{Ivanov2014} (green triangles). Evidently, the calculation agrees with the non-adiabatic calibration of the measurement data. Additionally, SCT calculations of the expected streaking offset angle $\theta_{\mathrm{SCT}}$ for instantaneous tunnelling are shown as a blue dashed line for the adiabatic approximation, and a red solid line including non-adiabatic effects. \section{Classical trajectories} \label{sec:ClassicalTrajectories} In attoclock experiments, the experimental observable (offset angle $\theta$) is compared to a zero-time calibration calculated within a model assuming instantaneous tunnelling, where the angle is typically computed using Classical Trajectory Monte Carlo (CTMC) simulations. The computational cost of CTMC simulations is very low compared to quantum simulations. Therefore, CTMC simulations can achieve highly precise converged results. This gives them a distinct advantage over analytic approaches, which may only be applicable over a narrow range of conditions, such as in \cite{Bray2018b} (see the next paragraph for details). The driving laser field, the Coulomb potential of the residual ion, dipole effects in the ion due to induced polarization by the laser field, and even electron-electron correlation \cite{Emmanouilidou2015} can all be fully and explicitly taken into account. Of course, the accuracy of the resulting calibration hinges on the distribution function and the sampling of the initial conditions. The analytical probability distribution functions in phase space must accurately describe the ionized part of the wave function after a quantum tunnel ionization process. Recently, Bray, Eckart and Kheifets suggested an analytic approach that neglects the laser field during propagation, estimating the Coulomb correction using the Rutherford scattering angle in an attractive potential \cite{Bray2018b}. This so-called Keldysh-Rutherford (KR) model uses the adiabatic approximation, which neglects the energy gain during the tunnelling process and the initial transverse momentum of the photoelectron at the tunnel exit, although these non-adiabatic effects are increasingly prominent for low intensities. The impact parameter $\rho$ is assumed to be the same as the exit radius, although formally $r_{\mathrm{exit}} < \rho$, unless the energy of the scattering particle is infinite. Since the Rutherford formula gives the scattering angle in the absence of any time-dependent fields, the KR formula becomes increasingly accurate when the laser field has less of an impact, meaning for weaker intensities and shorter pulses. Hence, it may not be applicable to any existing attoclock experimental data, which requires sufficiently strong laser fields to achieve tunnel ionization. It may nevertheless be instructive to apply the KR formula to the recent experimental data on hydrogen, as suggested by the authors in \cite{Bray2018b}: ``Because of its simplicity, the Keldysh - Rutherford formula can be easily applied to attoclock experiments with arbitrary polarization though modification of the above formalism to account for nonunitary ellipticity.
One such case being the recent attoclock measurements on atomic hydrogen \cite{Sainadh2017}, where the signature field intensity scaling of the KR model $I^{0.5}$ was indeed observed.'' Following the above quote, we plotted the KR formula alongside attoclock measurements on atomic hydrogen \cite{Sainadh2017}. The results are shown in figure \ref{fig:RutherfordVSainadh}, alongside TDSE simulations also presented in \cite{Sainadh2017}. As figure \ref{fig:RutherfordVSainadh} illustrates, there remains an angle difference between the KR estimate, the TDSE calculations and the experimental data, suggesting non-negligible tunnelling time. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{figure07-eps-converted-to} \caption{Applying the Keldysh-Rutherford model (KR) \cite{Bray2018b} to the attoclock experiment on atomic hydrogen by \cite{Sainadh2017}. Red squares and blue triangles show the offset angle $\theta$ extracted from two different TDSE calculations \cite{Sainadh2017}, while the green dots with error bars are the experimental values \cite{Sainadh2017}. The KR model predicts that the offset angle due to Coulomb scattering is smaller than the measured or calculated total offset angle, suggesting significant tunnelling time. Note, however, that the KR model may be inapplicable to existing strong field ionization experiments (see text for detail). } \label{fig:RutherfordVSainadh} \end{figure} \subsection{General CTMC and SCT} \label{sec:SCTvsCTMC} The classical trajectories for the attoclock configuration start at an exit radius of approximately 8 au or larger from the ion core \cite{Landsman2015}. Due to the elliptical polarization of the field, which creates a transverse drift in electron momentum, these trajectories typically never return to the vicinity of the parent ion. Due to the weak influence of the Coulomb potential after ionization (particularly in the case of elliptically polarized light), and the absence of resonances or other strong phase shifts (compare section \ref{sec:SPI}), the quantum-classical correspondence is valid \cite{Ehrenfest1927}. Additionally, a single classical trajectory (SCT) launched with the most probable initial conditions follows the propagation of the highest probability density in the full CTMC simulation; see figure \ref{fig:CTMCdensityframe0300}. \begin{figure} \centering \includegraphics[width=\linewidth]{figure08-eps-converted-to} \caption{\textbf{Most probable trajectory:} The colour scale shows the classical trajectory Monte Carlo (CTMC) simulation real space (left) or momentum space (right) probability density after the laser pulse has passed. The orange line traces a single classical trajectory (SCT). The target was helium, irradiated by a laser field with the following parameters: $\epsilon=0.89$ (indicated as the green solid polarization ellipse), $\lambda = 735\,\mathrm{nm}$, pulse duration FWHM $9\,\mathrm{fs}$, $I = 2.5\cdot 10^{14}\,\mathrm{W/cm^2}$. The influence of the ion Coulomb force on the electron during the propagation is included. An SCT initiated with the most probable initial conditions traces the highest probability density of the wave packet. See supplemental material for a movie version \cite{Supp}.} \label{fig:CTMCdensityframe0300} \end{figure} More details on our implementation of CTMC simulations based on adiabatic Ammosov, Delone \& Krainov (ADK) models \cite{ADK1986,Delone1991} can be found in \cite{Hofmann2013,Hofmann2016thesis}.
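To make the CTMC procedure concrete, the following heavily stripped-down sketch (our own illustration; all numerical values and the simple quasistatic rate are stand-ins, not the production settings of \cite{Hofmann2013,Hofmann2016thesis}) samples ionization times from an exponential ADK-like rate, launches each trajectory at the quasistatic exit radius $I_{\mathrm{p}}/|F|$ with a Gaussian transverse velocity, and propagates it in the laser field plus a soft-core Coulomb potential:
\begin{verbatim}
import numpy as np

# Atomic units; illustrative parameters only. Unoptimized reference loop.
F0, eps, omega = 0.08, 0.87, 0.0620   # peak field, ellipticity, ~735 nm
Ip, a_soft = 0.9036, 0.1              # helium Ip; soft-core parameter
T = 2 * np.pi / omega                 # laser period
n_cyc = 4.0                           # short cos^2 pulse
rng = np.random.default_rng(1)

def envelope(t):
    t = np.asarray(t, dtype=float)
    inside = np.abs(t) < 0.5 * n_cyc * T
    return np.where(inside, np.cos(np.pi * t / (n_cyc * T)) ** 2, 0.0)

def field(t):   # elliptically polarized field in the x-y plane
    return F0 * envelope(t) * np.array([np.cos(omega * t),
                                        eps * np.sin(omega * t)])

def accel(r, t):   # laser force plus soft-core Coulomb force, Z_eff = 1
    return -field(t) - r / (r @ r + a_soft) ** 1.5

# 1) ionization times within the central cycle, weighted by the
#    quasistatic exponential rate ~ exp(-2 (2 Ip)^{3/2} / (3 |F(t)|))
tg = np.linspace(-0.5 * T, 0.5 * T, 2001)
Fm = np.linalg.norm(field(tg), axis=0)
w = np.exp(-2 * (2 * Ip) ** 1.5 / (3 * np.maximum(Fm, 1e-6)))
t_ion = rng.choice(tg, size=300, p=w / w.sum())

# 2) launch at the exit point opposite the field, propagate (velocity Verlet)
angles = []
for t0 in t_ion:
    f = field(t0); fm = np.linalg.norm(f); u = f / fm
    r = -(Ip / fm) * u                           # quasistatic tunnel exit
    sigma = np.sqrt(fm / (2 * np.sqrt(2 * Ip)))  # ADK-like transverse width
    v = rng.normal(0.0, sigma) * np.array([-u[1], u[0]])
    t, dt = t0, 0.1
    a = accel(r, t)
    while t < 0.5 * n_cyc * T + 10 * T:          # well past the pulse
        r = r + v * dt + 0.5 * a * dt ** 2
        a_new = accel(r, t + dt)
        v = v + 0.5 * (a + a_new) * dt
        a, t = a_new, t + dt
    angles.append(np.degrees(np.arctan2(v[1], v[0])))

hist, edges = np.histogram(angles, bins=72, range=(-180, 180))
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print("most probable emission angle:", peak, "deg")
\end{verbatim}
A full implementation would additionally Stark-shift $I_{\mathrm{p}}$, include the induced dipole of the ion, and convert final velocities to asymptotic momenta, as summarized in table \ref{tab:CTMCoverview}.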
\subsection{Implementation of non-adiabatic effects} \label{sec:NA_CTMC} The most popular non-adiabatic strong-field ionization theory was developed by Perelomov, Popov and Terent'ev (PPT) \cite{PPT1,PPT2}, and later recast in \cite{Mur2001}. This analytical approach describes the final photoelectron momentum probability distribution averaged over one laser cycle, for arbitrary ellipticity of the ionizing field. Non-adiabatic models deriving an instantaneous ionization rate $\Gamma(t)$ are typically only valid for linear (and sometimes circular) polarization \cite{Yudin2001,Bondar2008,Li2016}. In order to describe classical trajectories starting at different times during the laser pulse, we introduced the time dependence by letting the Keldysh parameter $\gamma$ depend on the instantaneous field strength $|F(t)|$. The energy gain of the photoelectron during the tunnelling process results in a shorter exit radius compared to the adiabatic version, and the initial transverse momenta follow a Gaussian distribution centred about the most probable initial transverse momentum. For more details on the non-adiabatic CTMC implementation, please refer to \cite{Hofmann2014,Hofmann2016thesis}. Table \ref{tab:CTMCoverview} compares the main characteristics of the CTMC simulations concerning both the sampling of initial conditions and the classical propagation. \begin{table}[h] \caption{Overview of different characteristics of the different classical trajectory simulations, based on either adiabatic Ammosov, Delone \& Krainov (ADK) \cite{ADK1986,Delone1991} theory, or non-adiabatic Perelomov, Popov \& Terent'ev (PPT) \cite{PPT1,PPT2,Mur2001} theory.} \raisebox{-0.5\height}{\includegraphics[width=0.45\linewidth]{figure10a-eps-converted-to}} \begin{tabularx}{0.48\textwidth}{X} {\footnotesize The figure illustrates the definition of longitudinal ($||$) and orthogonal ($\perp$) momentum components, relative to the instantaneous field direction at the starting time $t_i$ for a classical trajectory.} \end{tabularx} \label{tab:CTMCoverview} \begin{tabular}{lll} \toprule characteristic & adiabatic CTMC \cite{Hofmann2013} & non-adiabatic CTMC \cite{Hofmann2014} \\ \midrule \multicolumn{3}{l}{\emph{starting conditions:}} \\ $\Gamma(t)$ & exponential ADK & PPT, with modified $\gamma(t)$ \\ $\mathbf{p}_{\perp}^i$ and $\sigma_{\perp}$, $\sigma_{\perp,\mathrm{ip}}$& ADK & PPT \\ $p_{||}^i$ and $\sigma_{||}$& 0 & 0 \\ $r_{e}$ & parabolic coordinates \cite{Landau1965,Fu2001,Shvetsov-Shilovski2012} & PPT \\ $I_{\mathrm{p}}$ & Stark shift included \cite{Shvetsov-Shilovski2012} & Stark shift included \\ \midrule \multicolumn{3}{l}{\emph{propagation for both adiabatic and non-adiabatic case:}} \\ ion Coulomb: & \multicolumn{2}{l}{soft-core potential: $V(r) = \frac{-1}{\sqrt{r^2+a}}$, with $a = 0.1 \,\mathrm{au}^2$} \\ induced dipole: & \multicolumn{2}{l}{same soft-core constant $a$} \\ electric field: & \multicolumn{2}{l}{always included using dipole approximation} \\ bound electrons: & \multicolumn{2}{l}{single active electron approximation, $Z_{\mathrm{eff}}=1$} \\ \bottomrule \end{tabular} \end{table} Figure \ref{fig:My_Field_Angle_Ivanov2014} demonstrates the difference in the field strength calibration of the measured data, where the blue dots are the values from \cite{Landsman2014b}, and the red dots are recalibrated based on the PPT theory \cite{PPT2}.
The red solid line in figure \ref{fig:My_Field_Angle_Ivanov2014} shows the non-adiabatic PPT \cite{PPT2} and the blue dashed line the adiabatic \cite{Pfeiffer2012,Shvetsov-Shilovski2012} prediction of the streaking offset angle. These SCT simulations yield the Coulomb correction on the field-induced streaking angle assuming instantaneous tunnelling. For the case of helium, the two non-adiabatic effects of initial transverse momentum and shorter exit radius seem to cancel each other out over a large range of field strengths, essentially predicting the same final streaking angle offset as the adiabatic approximation. Within the attoclock framework, the angle difference between the measurements and the zero-time reference SCT calculation is then interpreted as being due to a delayed release of the electron into the continuum. Evidently, taking non-adiabatic effects into account still results in a significant streaking angle difference between what is measured and what is expected under the assumption of instantaneous tunnelling. \subsection{Influence of initial longitudinal momentum distribution} \label{sec:InitLong} A core approximation in many analytical descriptions of strong-field ionization is zero momentum of the photoelectron at the tunnel exit parallel to the direction of the electric field (longitudinal), $p_{||}^i=0$ \cite{Anatomy,PPT2,Ni2016}. However, there are several independent works suggesting that the initial longitudinal momentum should have a finite spread \cite{Wavepacket,Hofmann2013,Hofmann2014,Bondar2008}, and possibly even a non-zero most probable value \cite{Teeny2016,Camus2017}. Ni \textit{et al.}\, found complementary results with their classical backpropagation method. The classical turning point, when the photoelectron has zero momentum parallel to the electric field, was located at a position even closer to the ion than what PPT predicts \cite{Ni2016,Ni2018a}. This could intuitively be understood as the photoelectron having gained some outward momentum already by the time it passes the exit radius predicted by PPT. Taking account of a positive most probable initial longitudinal momentum leads to a \emph{reduction} of the SCT prediction for the final streaking angle $\theta_{\mathrm{SCT}}$. Based on the conservation of canonical momentum, the final momentum is shifted by $\mathbf{p}_{||}^i$ compared to the simulations assuming zero initial longitudinal momentum. This effect is visible in figure \ref{fig:e87_s0_EndVelocitiesDRaC}, where the panel on the right includes a non-zero initial longitudinal momentum, and the white arrow denotes the rotation sense of the driving field. \begin{figure}[ht] \centering \includegraphics[trim=0cm 0cm 3cm 0cm,clip=true,width=0.42\linewidth]{figure09a-eps-converted-to} \includegraphics[trim=3cm 0cm 0cm 0cm,clip=true,width=0.42\linewidth]{figure09b-eps-converted-to} \caption{Final photoelectron momentum distribution (PMD) calculated by adiabatic classical trajectory Monte Carlo (CTMC) simulations in the $v_y>0$ half-plane. The two panels compare a CTMC with zero (left) or finite (right) most probable initial longitudinal momentum $\mathbf{p}_{||}^i$. The laser field is rotating clockwise, as indicated by the white arrows. The figures illustrate that the offset angle $\theta_{\mathrm{SCT}}$ becomes smaller when $\mathbf{p}_{||}^i>0$ is assumed.
This would lead to a larger angle difference $\theta - \theta_{\mathrm{SCT}}$ between the measured offset angle $\theta$ and the zero-time calibration $\theta_{\mathrm{SCT}}$ (compare also figure \ref{fig:VMIdataSCT}). } \label{fig:e87_s0_EndVelocitiesDRaC} \end{figure} This observation leads to a related question: does the influence of the Coulomb force result in an asymmetric deformation of the photoelectron wave packet? For figure \ref{fig:DeltaCoM}, the centres of mass (CoM) of CTMC calculations with varying spread $\sigma_{||}$ and most probable value $\mathbf{p}_{||}^i$ for the initial longitudinal momentum distribution at the tunnel exit are extracted. Their values are then compared to the naive expectation of \begin{equation} \mathrm{CoM}^{\mathrm{expected}} = \mathrm{CoM}(\sigma_{||}=0, \mathbf{p}_{||}^i=0) + \mathbf{p}_{||}^i, \end{equation} which is based on simple vector addition within the conservation of canonical momentum. The difference between the actually extracted CoM and this expected value is plotted in figure \ref{fig:DeltaCoM} (colourmap). All determined shift-differences are smaller than one bin size of the PMD in figure \ref{fig:e87_s0_EndVelocitiesDRaC}, and thus negligible, with the sole exception of the extreme case of $\sigma_{||}=0.8\,\mathrm{au}, \mathbf{p}_{||}^i=0.1\,\mathrm{au}$. \begin{figure} \centering \raisebox{-0.5\height}{\includegraphics[width=0.45\linewidth]{figure10a-eps-converted-to}} \hfill \raisebox{-0.5\height}{\includegraphics[width=0.54\linewidth]{figure10b-eps-converted-to}} \caption{\textbf{Coulomb deformation due to an initial longitudinal momentum:} The colour scale represents the deviation $\Delta \mathrm{CoM}$ of the centre of mass of the final photoelectron momentum distribution (PMD) compared to an expected shift of $\mathbf{p}_{||}^i$ based on the vector addition of the conservation of momentum. The values were extracted from adiabatic classical trajectory Monte Carlo simulations. The majority of the tested range of initial most probable momentum $\mathbf{p}_{||}^i$ and initial longitudinal momentum spread $\sigma_{||}$ exhibits only very small deviations of the CoM away from the pure vector addition. The sketch on the left again shows the coordinate definitions (adapted from \cite{Hofmann2016}).} \label{fig:DeltaCoM} \end{figure} Therefore, we can conclude that the asymmetric influence of the Coulomb force is negligibly small, and SCTs remain a valid and simple approach to determine the classical trajectory prediction for the most probable final momentum. \section{Starting time assumption} \label{sec:StartingTime} For determining the duration of the tunnel ionization process, knowing the moment when an electron exits the potential barrier is of course not sufficient. The starting time $t_0$, when an electron enters the potential barrier (in a pseudo-classical picture), must be defined too. In strong-field ionization models such as PPT \cite{PPT1,PPT2}, ADK \cite{ADK1986,Delone1991,Anatomy}, or many others \cite{Yudin2001,Bondar2008,Klaiber2015,Li2016,Li2017}, this intuitive definition is typically assigned to the complex transition point $t_s$, which is a time calculated by the saddle point approximation \cite{Anatomy}.
However, most of these models, with the notable exception of \cite{Klaiber2015}, then either define the ionization time $t_i$ to be the real part of $t_s$, or the calculation automatically yields that relation due to a short-range potential approximation and the neglect of non-adiabatic effects during the tunnelling process \cite{Anatomy,Klaiber2015}. This then leads to the interpretation that there is no (real) time passing while the electron tunnels through the potential barrier, since \begin{equation} \tau = t_i - t_0 = \Re{t_s} - \Re{t_s} = 0. \end{equation} There are several publications suggesting that the starting time should be before the ionizing field reaches its maximum. In \cite{Teeny2016,Teeny2016a}, the authors monitor the probability current density in a one-dimensional TDSE calculation of strong field ionization. At the classical tunnel entry point $x_{\mathrm{in}}$, they find that the outflowing current is maximized clearly before the electric field reaches its maximum value. Furthermore, for a large range of intensities tested in \cite{Ni2016}, the classical backpropagation (after a two-dimensional TDSE forward calculation) reaches classical turning points at times $t_i \lesssim 0$. By causality, the starting times must therefore also satisfy $t_0 \lesssim 0$. In the Coulomb-corrected non-adiabatic calculation of \cite{Klaiber2015}, the complex transition point $t_s$ has a negative real part. Interestingly, the corresponding ionization time $t_i$, which is the first time at which the trajectory reaches the real axis, is larger than zero; see figure 2d of \cite{Klaiber2015}. In consequence, this particular formalism predicts that nonzero real time passes while the photoelectron tunnels through the potential barrier. A starting time $t_0$ before and a corresponding ionization time $t_i$ after the peak of the laser field leads to a very intuitive picture of an optimization problem. Assuming a photoelectron spends some finite time $\tau$ in the classically forbidden region, the probability of the tunnelling process is maximized if the integrated barrier width during $\tau$ is minimized. In the attoclock method, a numerical value for $t_0$ was necessary so that the reference calculations, which assume zero tunnelling time, could be launched at the appropriate initial time. The estimate for $t_0$ is based on the instantaneous tunnelling assumption, and the fact that the tunnelling probability rate depends exponentially on the field strength, thus reacting very sensitively to even slight changes in the field magnitude at large ellipticity. Consequently, ionization would happen preferentially at the moments of maximal field strength, along the major axis of polarization. Therefore, the SCT simulations were launched at \begin{equation} t_i = t_0 + \tau = t_0 + 0, \end{equation} where $t_0$ was assumed to be the peak of the field. For a wave form as defined in \eqref{eq:LaserField} with $\phi_{\mathrm{CEO}} = 0$, this meant $t_0 = 0$. Therefore, the tunnelling times $\tau_A$ extracted from the attoclock experiment are in reference to this starting time $t_0 = 0$. However, based on the earlier discussion in this section, the physical $t_0$ should possibly be chosen before the peak of the field. Additionally, the instantaneous ionization rate analysis as presented in \cite{Ivanov2018} seems to exclude an asymmetric distribution of the tunnelling time such as $\tau = t_i - 0$ with respect to the laser field.
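The statement $\tau = \Re{t_s} - \Re{t_s} = 0$ can be made explicit in the simplest setting. For a monochromatic linearly polarized field $F(t) = F\cos(\omega t)$ with vector potential $A(t) = -(F/\omega)\sin(\omega t)$, the saddle-point equation $\frac{1}{2}\left[p + A(t_s)\right]^2 = -I_p$ for a photoelectron with $p=0$ gives $\sin(\omega t_s) = \pm i \gamma$, i.e., $\omega t_s = i\,\mathrm{arcsinh}\,\gamma$ is purely imaginary and the ionization time $\Re{t_s}$ sits exactly at the field peak. A short numerical check (our own illustration):
\begin{verbatim}
import numpy as np

Ip, omega, F = 0.9036, 0.0620, 0.08        # helium, ~735 nm, atomic units
gamma = omega * np.sqrt(2 * Ip) / F
t_s = 1j * np.arcsinh(gamma) / omega       # candidate saddle point

A = lambda t: -(F / omega) * np.sin(omega * t)
residual = 0.5 * A(t_s) ** 2 + Ip          # saddle-point equation at p = 0
print(f"gamma = {gamma:.3f}, Re t_s = {t_s.real:.1e}, "
      f"|residual| = {abs(residual):.1e}")
\end{verbatim}
Any mechanism that moves $\Re{t_s}$ away from zero (such as the Coulomb tail or non-adiabatic sub-barrier dynamics, as in \cite{Klaiber2015}) therefore directly changes the implied starting time.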
None of the investigations mentioned above predict a numerical value for what $t_0$ should be in the particular case of a helium target in an elliptical laser field, in three dimensions. Therefore, we cannot perform a quantitative analysis of the experimental data with a modified $t_0$ assumption. Nevertheless, we can state that any correction of $t_0$ from the peak to before the peak would lead to larger extracted tunnelling times $\tau>\tau_A$ compared to the attoclock delay $\tau_A$ presented, for example, in figure \ref{fig:MyFieldDelayNA}. \section{Summarized Influence on Attoclock Interpretation} \label{sec:Summary} Looking at all these individual aspects of strong-field tunnel ionization, we can conclude the following. \begin{figure}[ht] \centering \includegraphics*[width=0.8\textwidth]{figure11-eps-converted-to} \caption{Streaking offset angles of the recalibrated data set (red dots) compared to the original adiabatic field strength calibration data (blue dots). Even if the non-adiabatic field strength calibration is used, an angle difference between the measured streaking angle $\theta$ and the zero tunnelling time prediction $\theta_{\mathrm{SCT}}$ calculated from single classical trajectories (SCT) remains. The offset angle difference $\theta - \theta_{\mathrm{SCT}}$ can be explained by a tunnelling delay time corresponding to $\theta_{\tau}$.} \label{fig:MyFieldAngle} \end{figure} Within the attoclock framework, the offset angle difference \begin{equation} \theta-\theta_{\mathrm{SCT}} = \theta_{\tau} + \theta_{\epsilon} \label{eq:theta_sums} \end{equation} (compare again figure \ref{fig:VMIdataSCT}) is explained as a tunnelling delay time (orange band in figure \ref{fig:MyFieldAngle}) $\theta_{\tau}$, plus an additional streaking angle $\theta_{\epsilon}$ (green band in figure \ref{fig:MyFieldAngle}). The $\theta_{\epsilon}$ is due to the elliptical polarization of the ionizing laser field. Only when the electric field happens to point along either the major or minor axis of the polarization ellipse are the electric field vector and the vector potential orthogonal to each other. So if a photoelectron enters the continuum at any time $t_i$ other than those precise moments, the ellipticity of the laser field leads to non-90-degree streaking angles, even in a purely field-driven case ignoring any other influences on the trajectory. $\theta_{\epsilon}$ therefore depends on the ionization time $t_i$ at which a trajectory enters the continuum and on the ellipticity of the driving field, $\epsilon = 0.87$, and can be estimated as follows. The total field-induced streaking angle \begin{equation} \theta_{\mathrm{field}} = \frac{\pi}{2} + \theta_{\epsilon}, \end{equation} with the ellipticity correction $\theta_{\epsilon}$ to the 90 degree streaking angle, is given by the angle between $\mathbf{F}(t_i)$ and $\mathbf{A}(t_i)$. Therefore, we can write \begin{equation} \cos(\theta_{\mathrm{field}}) = \frac{\mathbf{F}(t_i)\cdot\mathbf{A}(t_i)}{|F(t_i)||A(t_i)|} = \frac{-\sin(\omega t_i) \cos(\omega t_i) + \epsilon^2 \cos(\omega t_i) \sin(\omega t_i)}{\sqrt{\cos^2(\omega t_i) + \epsilon^2 \sin^2(\omega t_i)}\sqrt{\sin^2(\omega t_i) + \epsilon^2 \cos^2(\omega t_i)}}. \end{equation} Taking the Taylor expansions up to first order on both sides individually, for $\theta_{\mathrm{field}} \approx \frac{\pi}{2}$ and $t_i \approx 0$ respectively, leads to \begin{equation} \theta_{\mathrm{field}} - \frac{\pi}{2} = \theta_{\epsilon} = \frac{(1-\epsilon^2)\omega t_i}{\epsilon}.
\label{eq:theta_epsilon} \end{equation} The remaining angle difference $\theta_{\tau}$ is then interpreted as the time interval $\tau_A = t_i-t_0 = t_i - 0$ after the peak of the electric field until the electron exits the tunnelling barrier and enters the continuum, \begin{equation} \theta_{\tau} = \arctan\left( \frac{\epsilon \sin(\omega t_i)}{\cos(\omega t_i)}\right) \approx \epsilon \omega t_i. \label{eq:theta_tau} \end{equation} Combining equations \eqref{eq:theta_sums}, \eqref{eq:theta_epsilon} and \eqref{eq:theta_tau}, the attoclock delay can finally be extracted as \begin{align} \theta-\theta_{\mathrm{SCT}} &= \frac{(1-\epsilon^2)\omega t_i}{\epsilon} + \epsilon \omega t_i \nonumber \\ \tau_A := t_i -0 &= \frac{\theta - \theta_{\mathrm{SCT}}}{\omega \left( \frac{1-\epsilon^2}{\epsilon} + \epsilon \right)} \end{align} from a measured streaking offset angle $\theta$ and a calculated zero-time reference $\theta_{\mathrm{SCT}}$. Multi-electron effects do not significantly influence the final photoelectron momentum spectrum \cite{Emmanouilidou2015,Majety2017}, so the single active electron approximation for the single classical trajectory reference calculation, which yields $\theta_{\mathrm{SCT}}$, is valid. Contrary to prior work \cite{Boge2013}, the SCT prediction in the fully non-adiabatic framework shows the same qualitative behaviour as in the adiabatic approximation, if all initial conditions of the classical trajectories are calculated non-adiabatically; see figures \ref{fig:My_Field_Angle_Ivanov2014} and \ref{fig:MyFieldAngle}. Consequently, the values of the extracted tunnelling delay times as defined in the attoclock method are comparable to the results published in \cite{Landsman2014,Landsman2014b}. However, these values are shifted to lower field strengths due to the calibration method, which includes the initial transverse momentum predicted for elliptical polarization in the non-adiabatic case \cite{PPT2}; see figure \ref{fig:MyFieldDelayNA}. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{figure12-eps-converted-to} \caption{Extracted attoclock delay times $\tau_A$ corresponding to the non-adiabatically calibrated data (red dots), compared to adiabatically calibrated previous results (blue dots). The lines show the predictions of a Feynman Path Integral (FPI) calculation \cite{Landsman2014b} for both adiabatic and non-adiabatic barrier width (blue dotted and red dot-dashed respectively), as well as the Larmor time \cite{Buttiker1982,Buttiker1983,Landsman2015} (solid orange).} \label{fig:MyFieldDelayNA} \end{figure} Theoretical predictions too, or rather their evaluation, are affected by non-adiabatic effects. The effective barrier width is shorter in the non-adiabatic framework. This has a noticeable influence on the Feynman Path Integral predictions for the tunnelling time \cite{Landsman2015}, where the transmission wave function is evaluated at the calculated exit point. Both the adiabatic version published in \cite{Landsman2014b} (blue dotted) and a non-adiabatic version (red dot-dashed) are plotted in figure \ref{fig:MyFieldDelayNA}. The only difference between the two versions is the exit radius; all other parameters of the calculation are identical. The Larmor time is defined as \cite{Baz1967,Rybachenko1967} \begin{equation} \tau_{LM} = \frac{\partial \phi}{\partial V}, \end{equation} where $\phi$ is the phase of the transmission amplitude through the potential barrier, and $V$ is the barrier height.
Interestingly, the same non-adiabatic effect of a shorter exit radius only leads to a tiny shift, much smaller than the error bars of the data, for the Larmor time values. Therefore, figure \ref{fig:MyFieldDelayNA} only shows the values for the non-adiabatic case (orange solid line). Of course, the extracted tunnelling times can also be plotted versus the length of the tunnelling barrier. For figure \ref{fig:My_TimeBarrier}, the barrier width $W$ was always estimated by the corresponding short-range potential width \begin{equation} W \approx \frac{I_{\mathrm{p}}}{F_{\mathrm{max}}}. \end{equation} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figure13-eps-converted-to} \caption{Extracted attoclock tunnelling delay times $\tau_A$ in the adiabatic (blue \cite{Landsman2014b}) and non-adiabatic version (red), compared to the corresponding Feynman Path Integral (FPI) estimates (blue dotted for the adiabatic and red dot-dashed for the non-adiabatic case) and the non-adiabatic version of the Larmor time (orange solid line). Motion at the speed of light (green solid line) would correspond to much shorter times than those extracted.} \label{fig:My_TimeBarrier} \end{figure} Since the non-adiabatic field strength calibration yields smaller values for the maximal field strengths for the same data sets, the corresponding barrier widths are significantly larger, meaning that the photoelectron travels a much larger distance in the same time than was originally deduced. Still, the green solid line in figure \ref{fig:My_TimeBarrier} shows the times corresponding to motion at the speed of light. All the experimental times are significantly larger, implying sub-luminal speeds of the photoelectrons inside the potential barrier. Looking at the longitudinal momentum distribution of the photoelectron wave packet at the tunnel exit, there are some results indicating that it should have a finite spread \cite{Bondar2008,Wavepacket,Hofmann2013,Hofmann2014} (compatible with the uncertainty principle) and might have a non-zero most probable value pointing away from the ion \cite{Teeny2016,Camus2017}. Proof-of-principle CTMC calculations, however, showed that any combination of these effects either leads only to insignificant shifts of the final angular PMD, or to shifts that are essentially explained by the simple conservation of canonical momentum (figure \ref{fig:DeltaCoM}). On the other hand, the angular shift introduced by a non-zero most probable initial longitudinal momentum would reduce $\theta_{\mathrm{SCT}}$, thereby increasing the estimated tunnelling time. Consequently, the streaking angle offset difference between experiment and $\theta_{\mathrm{SCT}}$ either stays the same or would only increase, leading to an even larger extracted tunnelling time $\tau_A$. Last but not least, several publications either directly found a starting time before the peak of the electric field \cite{Klaiber2015,Teeny2016}, or their results suggest that this might be an option \cite{Ni2016,Torlina2015}. This of course is another effect that acts to increase the extracted tunnelling time in experiments based on the attoclock idea \cite{Landsman2014b,Camus2017,Sainadh2017}. However, none of these approaches immediately yields a quantitative prediction of either the most probable initial longitudinal momentum or the starting time for the case of a three-dimensional helium atom.
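Two quick numbers make the scales in figure \ref{fig:My_TimeBarrier} concrete. First, equations \eqref{eq:theta_sums}--\eqref{eq:theta_tau} turn a measured offset-angle difference into a delay; second, the barrier width and its light-crossing time set the sub-luminal benchmark. The sketch below is our own illustration; the $3^\circ$ offset and the field value are made-up representative inputs, not data points:
\begin{verbatim}
import numpy as np

eps, omega = 0.87, 45.5633 / 735.0   # ellipticity; angular frequency (au)
Ip, C_AU = 0.9036, 137.036           # helium Ip; speed of light (au)
AU_TIME_AS = 24.18884                # one atomic unit of time in attoseconds

# (1) offset angle difference -> attoclock delay, eqs. above
delta_theta = np.radians(3.0)        # made-up example value, not a data point
tau_A = delta_theta / (omega * ((1 - eps**2) / eps + eps)) * AU_TIME_AS
print(f"delta_theta = 3 deg -> tau_A = {tau_A:.1f} as")

# (2) short-range barrier width and its light-crossing time
F = 0.06                             # representative maximal field (au)
W = Ip / F
print(f"W = {W:.1f} au -> light-crossing time = {W / C_AU * AU_TIME_AS:.1f} as")
\end{verbatim}
With these inputs, a $3^\circ$ offset difference corresponds to roughly $18\,\mathrm{as}$, while light would cross the $\sim 15\,\mathrm{au}$ barrier in under $3\,\mathrm{as}$; the extracted delays are far larger than the latter, consistent with sub-luminal motion under the barrier.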
\section{Conclusion and Outlook} \label{sec:Conclusion} To summarize, a number of recent findings affect either the underlying semiclassical model or the data calibration in the attoclock experiment, such that this updated analysis finds different values for the tunnelling time than were originally published in \cite{Landsman2014b}. In particular, the attoclock-measured tunnelling delays shift to lower intensity values, because the experimental calibration of intensity changes when the non-adiabatic transverse velocity at the tunnel exit is taken into account. Many other approximations, however, were confirmed to be valid once again, such as the single active electron approximation, which neglects multi-electron effects. However, we were unable to find any effect or model that would render the experimental tunnelling time significantly smaller or even close to zero for the case under consideration. Two more independent experiments have since been peer-reviewed and published, both also finding finite and real tunnelling times. The analytical models used to explain these experiments are fully quantum (based on the Wigner approach) in the case of \cite{Camus2017}, and quasi-classical in the case of \cite{Fortun2016}. On the other hand, a vast range of theoretical approaches exists, but each often uses a different set of approximations and, even more crucially, a different definition of tunnelling time. Consequently, there is still no clear theoretical consensus. Neither the initial longitudinal momentum nor the starting time before the peak can be quantified yet for the helium target (or any larger atom, for that matter). So we have to leave it at a qualitative statement for now: taking account of these effects should increase the extracted tunnelling times. This points to a need for further theoretical investigation of the quantum description of the strong-field tunnel ionization process. Finally, it is important to recognize that exact definitions matter, and influence both the outcome and interpretation of a study. Most of the presented works use their own individual observables and definitions of the system under investigation. This makes direct comparisons a challenging task. Nevertheless, experimental data can be quantitatively explained by models including some form of finite tunnelling time, while most models assuming instantaneous tunnelling have so far not been able to reproduce the measurements. For more definitive tests, it is desirable to do more comprehensive studies of atomic hydrogen, where multi-electron effects can be neglected. While calculations for atomic hydrogen are more definitive, experimental measurements are considerably more challenging than the corresponding measurements on noble gases. Longer wavelengths, which approach the adiabatic tunnelling regime, would also provide a more convincing test and allow for comparison with the more non-adiabatic experiments. On the analytic front, it is important to further explore the time-zero assumption, namely that tunnelling starts at the peak of the laser field. Any change to this time-zero calibration would obviously have a direct impact on the extraction of tunnelling time from attoclock experiments. \section*{Funding} A.S.L. acknowledges the Max Planck Centre of Attosecond Science (MPC-AS).
This research was supported by the National Centre of Competence in Research Molecular Ultrafast Science and Technology (NCCR MUST), funded by the Swiss National Science Foundation (SNSF) and an ERC Advanced Grant (Attoclock-320401) within the seventh framework programme of the European Union. \section*{Disclosure statement} The authors declare no competing financial interests. \clearpage \bibliographystyle{tfp}
\section{Introduction} The three-body problem is one of the oldest problems in classical dynamics that continues to throw up surprises. It has challenged scientists from Newton's time to the present. It arose in an attempt to understand the Sun's effect on the motion of the Moon around the Earth. This was of much practical importance in marine navigation, where lunar tables were necessary to accurately determine longitude at sea (see Box 1). The study of the three-body problem led to the discovery of the planet Neptune (see Box 2); it explains the location and stability of the Trojan asteroids and has furthered our understanding of the stability of the solar system \cite{laskar}. Quantum mechanical variants of the three-body problem are relevant to the Helium atom and water molecule \cite{gutzwiller-book}. \begin{center} \begin{mdframed} {\bf Box 1:} The {\bf Longitude Act} (1714) of the British Parliament offered \pounds 20,000 for a method to determine the longitude at sea to an accuracy of half a degree. This was important for marine navigation at a time of exploration of the continents. In the absence of accurate clocks that could function at sea, a lunar table along with the observed position of the Moon was the principal method of estimating the longitude. Leonhard Euler\footnote{Euler had gone blind when he developed much of his lunar theory!}, Alexis Clairaut and Jean-Baptiste d'Alembert competed to develop a theory accounting for solar perturbations to the motion of the Moon around the Earth. For a delightful account of this chapter in the history of the three-body problem, including Clairaut's explanation of the annual $40^\circ$ rotation of the lunar perigee (which had eluded Newton), see \cite{bodenmann-lunar-battle}. Interestingly, Clairaut's use of Fourier series in the three-body problem (1754) predates their use by Joseph Fourier in the analysis of heat conduction! \end{mdframed} \end{center} \begin{center} \begin{mdframed} {\bf Box 2: Discovery of Neptune:} The French mathematical astronomer Urbain Le Verrier (1846) was intrigued by discrepancies between the observed and Keplerian orbits of Mercury and Uranus. He predicted the existence of Neptune (as was widely suspected) and calculated its expected position based on its effects on the motion of Uranus around the Sun (the existence and location of Neptune was independently inferred by John Adams in Britain). The German astronomer Johann Galle (working with his graduate student Heinrich d'Arrest) discovered Neptune within a degree of Le Verrier's predicted position on the very night that he received the latter's letter. It turned out that both Adams' and Le Verrier's heroic calculations were based on incorrect assumptions about Neptune; they were extremely lucky to stumble upon the correct location! \end{mdframed} \end{center} The three-body problem admits many `regular' solutions such as the collinear and equilateral periodic solutions of Euler and Lagrange as well as the more recently discovered figure-8 solution. On the other hand, it can also display chaos, as serendipitously discovered by Poincar\'e. Though a general solution in closed form is not known, Sundman, while studying binary collisions, discovered an exceptionally slowly converging series representation of solutions in fractional powers of time. The importance of the three-body problem goes beyond its application to the motion of celestial bodies.
As we will see, attempts to understand its dynamics have led to the discovery of many phenomena (e.g., abundance of periodic motions, resonances (see Box 3), homoclinic points, collisional and non-collisional singularities, chaos and KAM tori) and techniques (e.g., Fourier series, perturbation theory, canonical transformations and regularization of singularities) with applications across the sciences. The three-body problem provides a context in which to study the development of classical dynamics as well as a window into several areas of mathematics (geometry, calculus and dynamical systems). \begin{center} \begin{mdframed} {\bf Box 3: Orbital resonances:} The simplest example of an orbital resonance occurs when the periods of two orbiting bodies (e.g., Jupiter and Saturn around the Sun) are in a ratio of small whole numbers ($T_S/T_J \approx 5/2$). Resonances can enhance their gravitational interaction and have both stabilizing and destabilizing effects. For instance, the moons Ganymede, Europa and Io are in a stable $1:2:4$ orbital resonance around Jupiter. The Kirkwood gaps in the asteroid belt are probably due to the destabilizing resonances with Jupiter. Resonances among the natural frequencies of a system (e.g., Keplerian orbits of a pair of moons of a planet) often lead to difficulties in naive estimates of the effect of a perturbation (say of the moons on each other). \end{mdframed} \end{center} \section{Review of the Kepler problem} As preparation for the three-body problem, we begin by reviewing some key features of the two-body problem. If we ignore the non-zero size of celestial bodies, Newton's second law for the motion of two gravitating masses states that \begin{equation} m_1 \ddot {\bf r}_1 = \al \frac{ ({\bf r}_2 - {\bf r}_1)}{|{\bf r}_1 - {\bf r}_2|^3} \quad \text{and} \quad m_2 \ddot {\bf r}_2 = \al \frac{ ({\bf r}_1 - {\bf r}_2)}{|{\bf r}_1 - {\bf r}_2|^3}. \label{e:two-body-newton-ode} \end{equation} Here, $\al = G m_1 m_2$ measures the strength of the gravitational attraction and dots denote time derivatives. This system has six degrees of freedom, say the three Cartesian coordinates of each mass ${\bf r}_1 = (x_1,y_1,z_1)$ and ${\bf r}_2 = (x_2,y_2,z_2)$. Thus, we have a system of 6 nonlinear (due to division by $|{\bf r}_1-{\bf r}_2|^3$), second-order ordinary differential equations (ODEs) for the positions of the two masses. It is convenient to switch from ${\bf r}_1$ and ${\bf r}_2$ to the center of mass (CM) and relative coordinates \begin{equation} {\bf R} = \frac{m_1 {\bf r}_1 + m_2 {\bf r}_2}{m_1 + m_2} \quad \text{and} \quad {\bf r} = {\bf r}_2 - {\bf r}_1. \end{equation} In terms of these, the equations of motion become \begin{equation} M \ddot {\bf R} = 0 \quad \text{and} \quad m \ddot {\bf r} = - \frac{\al}{|{\bf r}|^3} {\bf r}. \end{equation} Here, $M = m_1 + m_2$ is the total mass and $m = m_1 m_2/M$ the `reduced' mass. An advantage of these variables is that in the absence of external forces the CM moves at constant velocity, which can be chosen to vanish by going to a frame moving with the CM. The motion of the relative coordinate ${\bf r}$ decouples from that of ${\bf R}$ and describes a system with three degrees of freedom ${\bf r} = (x,y,z)$. 
Expressing the conservative gravitational force in terms of the gravitational potential $V = - \alpha/|{\bf r}|$, the equation for the relative coordinate ${\bf r}$ becomes \begin{equation} \dot {\bf p} \equiv m \ddot {\bf r} = - {\bf \nabla}_{\bf r} V = - \left(\dd{V}{x}, \dd{V}{y}, \dd{V}{z} \right) \end{equation} where ${\bf p} = m \dot {\bf r}$ is the relative momentum. Taking the dot product with the `integrating factor' $\dot {\bf r} = (\dot x, \dot y, \dot z)$, we get \begin{equation} m \dot {\bf r} \cdot \ddot {\bf r} = \fr{d}{dt}\left(\frac{1}{2} m \dot {\bf r}^2 \right) = - \left( \dd{V}{x} \; \dot x + \dd{V}{y} \; \dot y + \dd{V}{z} \; \dot z \right) = - \fr{dV}{dt}, \label{e:energy-kepler-cm-frame} \end{equation} which implies that the energy $E \equiv \frac{1}{2} m \dot {\bf r}^2 + V$ or Hamiltonian $\frac{{\bf p}^2}{2m} + V$ is conserved. The relative angular momentum ${\bf L} = {\bf r} \times m \dot {\bf r} = {\bf r} \times {\bf p}$ is another constant of motion as the force is central\footnote{The conservation of angular momentum in a central force is a consequence of rotation invariance: $V = V(|{\bf r}|)$ is independent of polar and azimuthal angles. More generally, Noether's theorem relates continuous symmetries to conserved quantities.}: $\dot {\bf L} = \dot {\bf r} \times {\bf p} + {\bf r} \times \dot {\bf p} = 0 + 0$. The constancy of the direction of ${\bf L}$ implies planar motion in the CM frame: ${\bf r}$ and ${\bf p}$ always lie in the `ecliptic plane' perpendicular to ${\bf L}$, which we take to be the $x$-$y$ plane with origin at the CM (see Fig.~\ref{f:lrl-vector}). The Kepler problem is most easily analyzed in plane-polar coordinates ${\bf r} = (r, \tht)$ in which the energy $E = \frac{1}{2} m \dot r^2 + V_{\rm eff}(r)$ is the sum of a radial kinetic energy and an effective potential energy $V_{\rm eff} = \fr{L_z^2}{2 m r^2} + V(r)$. Here, $L_z = m r^2 \dot \tht$ is the vertical component of angular momentum and the first term in $V_{\rm eff}$ is the centrifugal `angular momentum barrier'. Since ${\bf L}$ (and therefore $L_z$) is conserved, $V_{\rm eff}$ depends only on $r$. Thus, $\tht$ does not appear in the Hamiltonian: it is a `cyclic' coordinate. Conservation of energy constrains $r$ to lie between `turning points', i.e., zeros of $E - V_{\rm eff}(r)$ where the radial velocity $\dot r$ momentarily vanishes. One finds that the orbits are Keplerian ellipses for $E < 0$ along with parabolae and hyperbolae for $E \geq 0$: $r(\tht) = \rho(1 + \eps \cos \tht)^{-1}$ \cite{goldstein,hand-finch}. Here, $\rho = L_z^2/m\al$ is the radius of the circular orbit corresponding to angular momentum $L_z$, $\eps$ the eccentricity and $E = - \frac{\al}{2\rho} (1 - \eps^2)$ the energy. \begin{figure}[h] \center \includegraphics[width=8cm]{lrl-vector.pdf} \caption{\footnotesize Keplerian ellipse in the ecliptic plane of motion showing the constant LRL vector ${\bf A}$. The constant angular momentum ${\bf L}$ points out of the ecliptic plane.} \label{f:lrl-vector} \end{figure} In addition to $E$ and ${\bf L}$, the Laplace-Runge-Lenz (LRL) vector ${\bf A} = {\bf p} \times {\bf L} - m \al \: \hat r$ is another constant of motion. It points along the semi-major axis from the CM to the perihelion and its magnitude determines the eccentricity of the orbit. Thus, we have $7$ conserved quantities: energy and three components each of ${\bf L}$ and ${\bf A}$. 
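Before turning to the relations among these quantities, a quick numerical check (our own illustration, with $m = \al = 1$ in arbitrary units) confirms that $E$, ${\bf L}$ and ${\bf A}$ are indeed constant along a Kepler orbit, and already hints at the two relations quoted below:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m = alpha = 1.0   # reduced mass and coupling constant, arbitrary units

def rhs(t, y):                 # y = (x, y, z, px, py, pz)
    r, p = y[:3], y[3:]
    return np.concatenate([p / m, -alpha * r / np.linalg.norm(r) ** 3])

def invariants(y):
    r, p = y[:3], y[3:]
    E = p @ p / (2 * m) - alpha / np.linalg.norm(r)
    L = np.cross(r, p)
    A = np.cross(p, L) - m * alpha * r / np.linalg.norm(r)  # LRL vector
    return E, L, A

y0 = np.array([1.0, 0.0, 0.0, 0.0, 0.8, 0.1])   # bound orbit: E < 0
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-10, atol=1e-12)

E0, L0, A0 = invariants(y0)
E1, L1, A1 = invariants(sol.y[:, -1])
print("drift in E, L, A:", abs(E1 - E0),
      np.linalg.norm(L1 - L0), np.linalg.norm(A1 - A0))
print("L.A =", L0 @ A0,
      "   A^2 - m^2 alpha^2 - 2 m E L^2 =",
      A0 @ A0 - m**2 * alpha**2 - 2 * m * E0 * (L0 @ L0))
\end{verbatim}
The drifts print at the level of the integration tolerance, and the two final combinations vanish identically, anticipating the relations between $E$, ${\bf L}$ and ${\bf A}$ discussed next.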
However, a system with three degrees of freedom has a six-dimensional phase space (space of coordinates and momenta, also called the state space) and if it is to admit continuous time evolution, it cannot have more than 5 independent conserved quantities. The apparent paradox is resolved once we notice that $E$, ${\bf L}$ and ${\bf A}$ are not all independent; they satisfy two relations\footnote{Wolfgang Pauli (1926) derived the quantum mechanical spectrum of the Hydrogen atom using the relation between $E, {\bf L}^2$ and ${\bf A}^2$ before the development of the Schr\"odinger equation. Indeed, if we postulate circular Bohr orbits which have zero eccentricity (${\bf A} = 0$) and quantized angular momentum ${\bf L}^2 = n^2 \hbar^2$, then $E_n = - \fr{m \al^2 }{2 \hbar^2 n^2}$ where $\al = e^2/4 \pi \epsilon_0$ is the electromagnetic analogue of $G m_1 m_2$.}: \begin{equation} {\bf L} \cdot {\bf A} = 0 \quad \text{and} \quad E = \frac{{\bf A}^2 - m^2 \alpha^2}{2 m {\bf L}^2}. \end{equation} Newton used the solution of the two-body problem to understand the orbits of planets and comets. He then turned his attention to the motion of the Moon around the Earth. However, lunar motion is significantly affected by the Sun. For instance, ${\bf A}$ is {\it not} conserved and the lunar perigee rotates by $40^\circ$ per year. Thus, he was led to study the Moon-Earth-Sun three-body problem. \section{The three-body problem} We consider the problem of three point masses ($m_a$ with position vectors ${\bf r}_a$ for $a = 1,2,3$) moving under their mutual gravitational attraction. This system has 9 degrees of freedom, whose dynamics is determined by 9 coupled second order nonlinear ODEs: \begin{equation} m_a \fr{d^2{\bf r}_a}{dt^2} = \sum_{b \neq a} G m_a m_b \fr{{\bf r}_b-{\bf r}_a}{|{\bf r}_b-{\bf r}_a |^3} \quad \text{for} \quad a = 1,2 \; \text{and} \; 3. \label{e:newtonian-3body-ODE} \end{equation} As before, the three components of momentum ${\bf P} = \sum_a m_a \dot {\bf r}_a$, three components of angular momentum ${\bf L} = \sum_a {\bf r}_a \times {\bf p}_a$ and energy \begin{equation} E = \frac{1}{2} \sum_{a=1}^3 m_a \dot {\bf r}_a^2 - \sum_{a < b} \frac{G m_a m_b}{|{\bf r}_a - {\bf r}_b|} \equiv T + V \end{equation} furnish $7$ independent conserved quantities. Lagrange used these conserved quantities to reduce the above equations of motion to 7 first order ODEs (see Box 4). \begin{center} \begin{mdframed} {\bf Box 4: Lagrange's reduction from 18 to 7 equations:} The 18 phase space variables of the 3-body problem (components of ${\bf r}_1, {\bf r}_2, {\bf r}_3, {\bf p}_1, {\bf p}_2, {\bf p}_3$) satisfy 18 first order ordinary differential equations (ODEs) $\dot {\bf r}_a = {\bf p}_a$, $\dot {\bf p}_a = -{\bf \nabla}_{{\bf r}_a} V$. Lagrange (1772) used the conservation laws to reduce these ODEs to a system of 7 first order ODEs. Conservation of momentum determines 6 phase space variables comprising the location ${\bf R}_{\rm CM}$ and momentum ${\bf P}$ of the center of mass. Conservation of angular momentum ${\bf L} = \sum {\bf r}_a \times {\bf p}_a$ and energy $E$ lead to 4 additional constraints. By using one of the coordinates as a parameter along the orbit (in place of time), Lagrange reduced the three-body problem to a system of $7$ first order nonlinear ODEs. \end{mdframed} \end{center} {\bf Jacobi vectors} (see Fig.~\ref{f:jacobi-coords}) generalize the notion of CM and relative coordinates to the 3-body problem \cite{Rajeev}. 
They are defined as \begin{equation} \label{e:jacobi-coord} {\bf J}_1 = {\bf r}_2 - {\bf r}_1, \quad {\bf J}_2 = {\bf r}_3 - \fr{m_1 {\bf r}_1 + m_2 {\bf r}_2}{m_1+m_2} \quad \text{and} \quad {\bf J}_3 = \fr{m_1 {\bf r}_1 + m_2 {\bf r}_2 + m_3 {\bf r}_3}{m_1 + m_2 +m_3}. \end{equation} ${\bf J}_3$ is the coordinate of the CM, ${\bf J}_1$ the position vector of $m_2$ relative to $m_1$ and ${\bf J}_2$ that of $m_3$ relative to the CM of $m_1$ and $m_2$. A nice feature of Jacobi vectors is that the kinetic energy $T = \frac{1}{2} \sum_{a = 1,2,3} m_a \dot {\bf r}_a^2$ and moment of inertia $I = \sum_{a = 1,2,3} m_a {\bf r}_a^2$, regarded as quadratic forms, remain diagonal\footnote{A quadratic form $\sum_{a,b} r_a Q_{ab} r_b$ is diagonal if $Q_{ab} = 0$ for $a \ne b$. Here, $\ov{M_1} = \ov{m_1} + \ov{m_2}$ is the reduced mass of the first pair, $\ov{M_2} = \ov{m_1+m_2}+\ov{m_3}$ is the reduced mass of $m_3$ and the ($m_1$, $m_2$) system and $M_3 = m_1 + m_2 + m_3$ the total mass.}: \begin{equation} \label{e:jacobi-coord-ke-mom-inertia} T = \frac{1}{2} \sum_{1 \leq a \leq 3} M_a \dot {\bf J}_a^2 \quad \text{and} \quad I = \sum_{1 \leq a \leq 3} M_a {\bf J}_a^2. \end{equation} What is more, just as the potential energy $- \al/|{\bf r}|$ in the two-body problem is a function only of the relative coordinate ${\bf r}$, here the potential energy $V$ may be expressed entirely in terms of ${\bf J}_1$ and ${\bf J}_2$: \begin{equation} V = - \frac{G m_1 m_2}{|{\bf J}_1|} - \frac{G m_2 m_3}{|{\bf J}_2 - \mu_1 {\bf J}_1|} - \frac{G m_3 m_1}{|{\bf J}_2 + \mu_2 {\bf J}_1|} \quad \text{where} \quad \mu_{1,2} = \frac{m_{1,2}}{m_1 + m_2}. \label{e:jacobi-coord-potential} \end{equation} Thus, the components of the CM vector ${\bf J}_3$ are cyclic coordinates in the Hamiltonian $H = T + V$. In other words, the center of mass motion ($\ddot {\bf J}_3 = 0$) decouples from that of ${\bf J}_1$ and ${\bf J}_2$. An instantaneous configuration of the three bodies defines a triangle with masses at its vertices. The moment of inertia about the center of mass $I_{\rm CM} = M_1 {\bf J}_1^2 + M_2 {\bf J}_2^2$ determines the size of the triangle. For instance, particles suffer a triple collision when $I_{\rm CM} \to 0$ while $I_{\rm CM} \to \infty $ when one of the bodies flies off to infinity. \begin{figure}[h] \center \includegraphics[width=6cm]{jacobi-vectors-resonance.pdf} \caption{\footnotesize Jacobi vectors ${\bf J}_1, {\bf J}_2$ and ${\bf J}_3$ for the three-body problem. {\bf O} is the origin of the coordinate system while CM$_{12}$ is the center of mass of particles 1 and 2.} \label{f:jacobi-coords} \end{figure} \section{Euler and Lagrange periodic solutions} The planar three-body problem is the special case where the masses always lie on a fixed plane. For instance, this happens when the CM is at rest ($\dot {\bf J}_3 = 0$) and the angular momentum about the CM vanishes (${\bf L}_{\rm CM} = M_1 {\bf J}_1 \times \dot {\bf J}_1 + M_2 {\bf J}_2 \times \dot {\bf J}_2 = 0$). In 1767, the Swiss scientist Leonhard Euler discovered simple periodic solutions to the planar three-body problem where the masses are always collinear, with each body traversing a Keplerian orbit about their common CM. The line through the masses rotates about the CM with the ratio of separations remaining constant (see Fig.~\ref{f:euler-periodic}). 
The Italian/French mathematician Joseph-Louis Lagrange rediscovered Euler's solution in 1772 and also found new periodic solutions where the masses are always at the vertices of equilateral triangles whose size and angular orientation may change with time (see Fig.~\ref{f:lagrange-periodic}). In the limiting case of zero angular momentum, the three bodies move toward/away from their CM along straight lines. These implosion/explosion solutions are called Lagrange homotheties. \begin{figure} \centering \begin{subfigure}[t]{3in} \centering \includegraphics[width=5cm]{euler-three-body.pdf} \caption{\footnotesize Masses traverse Keplerian ellipses with one focus at the CM.} \label{f:euler-periodic} \end{subfigure} \quad \begin{subfigure}[t]{3in} \centering \includegraphics[width=3cm]{euler-soln-eq-mass.pdf} \caption{\footnotesize Two equal masses $m$ in a circular orbit around a third mass $M$ at their CM.} \label{f:euler-eq-mass} \end{subfigure} \caption{\footnotesize Euler collinear periodic solutions of the three-body problem. The constant ratios of separations are functions of the mass ratios alone.} \label{f:three-body-periodic} \end{figure} It is convenient to identify the plane of motion with the complex plane $\mathbb{C}$ and let the three complex numbers $z_{a=1,2,3}(t)$ denote the positions of the three masses at time $t$. E.g., the real and imaginary parts of $z_1$ denote the Cartesian components of the position vector ${\bf r}_1$ of the first mass. In Lagrange's solutions, $z_a(t)$ lie at vertices of an equilateral triangle while they are collinear in Euler's solutions. In both cases, the force on each body is always toward the common center of mass and proportional to the distance from it. For instance, the force on $m_1$ in a Lagrange solution is \begin{equation} {\bf F}_1 = G m_1 m_2 \fr{{\bf r}_2 - {\bf r}_1}{|{\bf r}_2 - {\bf r}_1|^3} + G m_1 m_3 \fr{{\bf r}_3 - {\bf r}_1}{|{\bf r}_3 - {\bf r}_1|^3} = \fr{Gm_1}{d^3} \left( m_1 {\bf r}_1 + m_2 {\bf r}_2 + m_3 {\bf r}_3 - M_3 {\bf r}_1 \right) \end{equation} where $d = |{\bf r}_2 - {\bf r}_1| = |{\bf r}_3 - {\bf r}_1|$ is the side-length of the equilateral triangle and $M_3 = m_1 + m_2 + m_3$. Recalling that ${\bf r}_{\rm CM} = (m_1 {\bf r}_1 + m_2 {\bf r}_2 + m_3 {\bf r}_3)/M_3,$ we get \begin{equation} {\bf F}_1 = \fr{Gm_1}{d^3} M_3 \left( {\bf r}_{\rm CM} - {\bf r}_1 \right) \equiv G m_1 \delta_1 \fr{{\bf r}_{\rm CM} - {\bf r}_1}{|{\bf r}_{\rm CM} - {\bf r}_1|^3} \end{equation} where $\delta_1 = M_3 |{\bf r}_{\rm CM} - {\bf r}_1|^3/d^3$ is a function of the masses alone\footnote{Indeed, ${\bf r}_{\rm CM} - {\bf r}_1 = \left(m_2 ({\bf r}_2-{\bf r}_1) + m_3 ({\bf r}_3 - {\bf r}_1) \right)/M_3 \equiv \left( m_2 {\bf b} + m_3 {\bf c} \right)/ M_3$ where ${\bf b}$ and ${\bf c}$ are two of the sides of the equilateral triangle of length $d$. This leads to $|({\bf r}_{\rm CM}-{\bf r}_1)/d| = \sqrt{m_2^2 + m_3^2 + m_2 m_3 }/M_3$ which is a function of masses alone. }. Thus, the equation of motion for $m_1$, \begin{equation} m_1 \ddot {\bf r}_1 = G m_1 \delta_1 \fr{{\bf r}_{\rm CM} - {\bf r}_1}{|{\bf r}_{\rm CM} - {\bf r}_1|^3}, \end{equation} takes the same form as in the two-body Kepler problem (see Eq.~\ref{e:two-body-newton-ode}). The same applies to $m_2$ and $m_3$. So if $z_a(0)$ denote the initial positions, the curves $z_a(t) = z(t) z_a(0)$ are solutions of Newton's equations for three bodies provided $z(t)$ is a Keplerian orbit for an appropriate two-body problem. 
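This can be checked numerically: if the three bodies start at the vertices of an equilateral triangle of side $d$ and rotate rigidly about the CM with angular speed $\om = \sqrt{G M_3/d^3}$, the full equations of motion (\ref{e:newtonian-3body-ODE}) preserve the equilateral shape. The Python sketch below (our own, with $G = 1$ and arbitrarily chosen masses) integrates the three-body equations from such an initial condition and prints the three side lengths, which should remain close to $d$. Since the equilateral configuration of comparable masses is unstable (see Routh's criterion below), numerical noise eventually grows, so the integration time is kept to a few rotations.
\begin{verbatim}
# Our numerical check (G = 1): a rigidly rotating Lagrange configuration.
import numpy as np
from scipy.integrate import solve_ivp

G, m, d = 1.0, np.array([1.0, 2.0, 3.0]), 1.0
omega = np.sqrt(G * m.sum() / d**3)        # angular speed of the solution

ang = np.array([0.0, 2*np.pi/3, 4*np.pi/3])
vert = d / np.sqrt(3.0) * np.column_stack([np.cos(ang), np.sin(ang)])
r0 = vert - (m[:, None] * vert).sum(0) / m.sum()     # shift CM to origin
v0 = omega * np.column_stack([-r0[:, 1], r0[:, 0]])  # rigid rotation about CM

def rhs(t, y):
    r, a = y[:6].reshape(3, 2), np.zeros((3, 2))
    for b in range(3):
        for c in range(3):
            if b != c:
                s = r[c] - r[b]
                a[b] += G * m[c] * s / np.linalg.norm(s)**3
    return np.concatenate([y[6:], a.ravel()])

sol = solve_ivp(rhs, (0.0, 5.0), np.concatenate([r0.ravel(), v0.ravel()]),
                rtol=1e-10, atol=1e-12)
r = sol.y[:6, -1].reshape(3, 2)
print([np.linalg.norm(r[b] - r[(b+1) % 3]) for b in range(3)])  # ~ (d, d, d)
\end{verbatim}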
In other words, each mass traverses a rescaled Keplerian orbit about the common center of mass. A similar analysis applies to the Euler collinear solutions as well: the locations of the masses are determined by the requirement that the force on each one is toward the CM and proportional to the distance from it (see Box 5 on central configurations). \begin{figure} \centering \includegraphics[width=6cm]{lagrange-three-body.pdf} \caption{\footnotesize Lagrange's periodic solution with three bodies at vertices of equilateral triangles. The constant ratios of separations are functions of the mass ratios alone.} \label{f:lagrange-periodic} \end{figure} \begin{center} \begin{mdframed} {\bf Box 5: Central configurations:} Three-body configurations in which the acceleration of each particle points towards the CM and is proportional to its distance from the CM (${\bf a}_b= \om^2 ({\bf R}_{\rm CM} - {\bf r}_b)$ for $b = 1,2,3$) are called `central configurations'. A central configuration rotating at angular speed $\om$ about the CM automatically satisfies the equations of motion (\ref{e:newtonian-3body-ODE}). Euler collinear and Lagrange equilateral configurations are the only central configurations in the three-body problem. In 1912, Karl Sundman showed that triple collisions are asymptotically central configurations. \end{mdframed} \end{center} \section{Restricted three-body problem} The restricted three-body problem is a simplified version of the three-body problem where one of the masses $m_3$ is assumed much smaller than the primaries $m_1$ and $m_2$. Thus, $m_1$ and $m_2$ move in Keplerian orbits which are not affected by $m_3$. The Sun-Earth-Moon system provides an example where we further have $m_2 \ll m_1$. In the planar circular restricted three-body problem, the primaries move in fixed circular orbits around their common CM with angular speed $\Omega = (G (m_1 + m_2)/d^3 )^{1/2}$ given by Kepler's third law and $m_3$ moves in the same plane as $m_1$ and $m_2$. Here, $d$ is the separation between the primaries. This system has $2$ degrees of freedom associated with the planar motion of $m_3$, and therefore a 4-dimensional phase space just like the planar Kepler problem for the reduced mass. However, unlike the latter which has three conserved quantities (energy, $z$-component of angular momentum and direction of LRL vector) and is exactly solvable, the planar restricted three-body problem has only one known conserved quantity, the `Jacobi integral', which is the energy of $m_3$ in the co-rotating (non-inertial) frame of the primaries: \begin{equation} E = \left[ \frac{1}{2} m_3 \dot r^2 + \frac{1}{2} m_3 r^2 \dot \phi^2 \right] - \frac{1}{2} m_3 \Om^2 r^2 - G m_3 \left( \fr{m_1}{r_1} + \fr{m_2}{r_2} \right) \equiv T + V_{\rm eff}. \end{equation} Here, $(r,\phi)$ are the plane polar coordinates of $m_3$ in the co-rotating frame of the primaries with origin located at their center of mass while $r_1$ and $r_2$ are the distances of $m_3$ from $m_1$ and $m_2$ (see Fig.~\ref{f:restricted-3body-setup}). The `Roche' effective potential $V_{\rm eff}$, named after the French astronomer \'Edouard Albert Roche, is a sum of centrifugal and gravitational energies due to $m_1$ and $m_2$. \begin{figure}[h] \center \includegraphics[width=5cm]{restricted-three-body.pdf} \caption{\footnotesize The secondary $m_3$ in the co-rotating frame of primaries $m_1$ and $m_2$ in the restricted three-body problem.
The origin is located at the center of mass of $m_1$ and $m_2$ which coincides with the CM of the system since $m_3 \ll m_{1,2}$.} \label{f:restricted-3body-setup} \end{figure} A system with $n$ degrees of freedom needs at least $n$ constants of motion to be exactly solvable\footnote{A Hamiltonian system with $n$ degrees of freedom is exactly solvable in the sense of Liouville if it possesses $n$ independent conserved quantities in involution, i.e., with vanishing pairwise Poisson brackets (see Boxes 6 and 10).}. For the restricted 3-body problem, Henri Poincar\'e (1889) proved the nonexistence of any conserved quantity (other than $E$) that is analytic in small mass ratios ($m_3/m_2$ and $(m_3+m_2)/m_1$) and orbital elements (${\bf J}_1$, $M_1 \dot {\bf J}_1$, ${\bf J}_2$ and $M_2 \dot {\bf J}_2$) \cite{diacu-holmes,musielak-quarles,barrow-green-poincare-three-body}. This was an extension of a result of Heinrich Bruns who had proved in 1887 the nonexistence of any new conserved quantity algebraic in Cartesian coordinates and momenta for the general three-body problem \cite{whittaker}. Thus, roughly speaking, Poincar\'e showed that the restricted three-body problem is not exactly solvable. In fact, as we outline in \S\ref{s:delaunay-hill-poincare}, he discovered that it displays chaotic behavior. {\noindent \bf Euler and Lagrange points}\footnote{Lagrange points $L_{1-5}$ are also called libration (literally, balance) points.} (denoted $L_{1-5}$) of the restricted three-body problem are the locations of a third mass ($m_3 \ll m_1, m_2$) in the co-rotating frame of the primaries $m_1$ and $m_2$ in the Euler and Lagrange solutions (see Fig.~\ref{f:euler-lagrange-points}). Their stability would allow an asteroid or satellite to occupy a Lagrange point. Euler points $L_{1,2,3}$ are saddle points of the Roche potential while $L_{4,5}$ are maxima (see Fig.~\ref{f:effective-potential}). This suggests that they are all unstable. However, $V_{\rm eff}$ does not include the effect of the Coriolis force since it does no work. A more careful analysis shows that the Coriolis force stabilizes $L_{4,5}$. It is a bit like a magnetic force which does no work but can stabilize a particle in a Penning trap. Euler points are always unstable\footnote{Stable `Halo' orbits around Euler points have been found numerically.} while the Lagrange points $L_{4,5}$ are stable to small perturbations iff $(m_1+m_2)^2 \geq 27 m_1 m_2$ \cite{symon}. More generally, in the unrestricted three-body problem, the Lagrange equilateral solutions are stable iff \begin{equation} (m_1 + m_2 + m_3)^2 \geq 27(m_1 m_2 + m_2 m_3 + m_3 m_1). \end{equation} The above criterion due to Edward Routh (1877) is satisfied if one of the masses dominates the other two. For instance, $L_{4,5}$ for the Sun-Jupiter system are stable and occupied by the Trojan asteroids. \begin{figure}[h] \center \includegraphics[width=5cm]{L-points.pdf} \caption{\footnotesize The positions of Euler $(L_{1,2,3})$ and Lagrange $(L_{4,5})$ points when $m_1 \gg m_2 \gg m_3$. $m_2$ is in an approximately circular orbit around $m_1$. $L_3$ is almost diametrically opposite to $m_2$ and a bit closer to $m_1$ than $m_2$ is. $L_1$ and $L_2$ are symmetrically located on either side of $m_2$. 
$L_4$ and $L_5$ are equidistant from $m_1$ and $m_2$ and lie on the circular orbit of $m_2$.} \label{f:euler-lagrange-points} \end{figure} \begin{figure}[h] \center \includegraphics[width=8cm]{effective-potential-restricted-3-body-v1.pdf} \caption{\footnotesize Level curves of the Roche effective potential energy $V_{\rm eff}$ of $m_3$ in the co-rotating frame of the primaries $m_1$ and $m_2$ in the circular restricted three-body problem for $G = 1$, $m_1 = 15, m_2 = 10$ and $m_3 = .1$. Lagrange points $L_{1-5}$ are at extrema of $V_{\rm eff}$. The trajectory of $m_3$ for a given energy $E$ must lie in the Hill region defined by $V_{\rm eff}(x,y) \leq E$. E.g., for $E=-6$, the Hill region is the union of two neighborhoods of the primaries and a neighborhood of the point at infinity. The lobes of the $\infty$-shaped level curve passing through $L_1$ are called Roche's lobes. The saddle point $L_1$ is like a mountain pass through which material could pass between the lobes.} \label{f:effective-potential} \end{figure} \section{Planar Euler three-body problem} Given the complexity of the restricted three-body problem, Euler (1760) proposed the even simpler problem of a mass $m$ moving in the gravitational potential of two {\it fixed} masses $m_1$ and $m_2$. Initial conditions can be chosen so that $m$ always moves on a fixed plane containing $m_1$ and $m_2$. Thus, we arrive at a one-body problem with two degrees of freedom and energy \begin{equation} E = \frac{1}{2} m \left(\dot x^2 + \dot y^2 \right) -\frac{\mu_1}{r_1} - \frac{\mu_2}{r_2}. \label{e:euler-three-body-energy} \end{equation} Here, $(x,y)$ are the Cartesian coordinates of $m$, $r_a$ the distances of $m$ from $m_a$ and $\mu_a = G m_a m$ for $a = 1,2$ (see Fig.~\ref{f:elliptic-coordinates}). Unlike in the restricted three-body problem, here the rest-frame of the primaries is an inertial frame, so there are no centrifugal or Coriolis forces. This simplification allows the Euler three-body problem to be exactly solved. Just as the Kepler problem simplifies in plane-polar coordinates $(r, \tht)$ centered at the CM, the Euler 3-body problem simplifies in an elliptical coordinate system $(\xi, \eta)$. The level curves of $\xi$ and $\eta$ are mutually orthogonal confocal ellipses and hyperbolae (see Fig.~\ref{f:elliptic-coordinates}) with the two fixed masses at the foci $2f$ apart: \begin{equation} x = f \: \cosh\xi \: \cos\eta \quad \text{and} \quad y = f \: \sinh\xi \: \sin\eta. \label{e:elliptical-coordinates-transformation} \end{equation} Here, $\xi$ and $\eta$ are like the radial distance $r$ and angle $\tht$, whose level curves are mutually orthogonal concentric circles and radial rays. The distances of $m$ from $m_{1,2}$ are $r_{1,2}= f (\cosh \xi \mp \cos \eta)$. \begin{figure}[h] \center \includegraphics[width=6cm]{elliptic-coordinates-2.pdf} \caption{\footnotesize Elliptical coordinate system for the Euler 3-body problem. Two masses are at the foci $(\pm f,0)$ of an elliptical coordinate system with $f=2$ on the $x$-$y$ plane. The level curves of $\xi$ and $\eta$ (confocal ellipses and hyperbolae) are indicated. } \label{f:elliptic-coordinates} \end{figure} The above confocal ellipses and hyperbolae are Keplerian orbits when a single fixed mass ($m_1$ or $m_2$) is present at one of the foci $(\pm f,0)$. Remarkably, these Keplerian orbits survive as orbits of the Euler 3-body problem. 
This is a consequence of Bonnet's theorem, which states that if a curve is a trajectory in two separate force fields, it remains a trajectory in the presence of both. If $v_1$ and $v_2$ are the speeds of the Keplerian trajectories when only $m_1$ or $m_2$ was present, then $v = \sqrt{v_1^2 + v_2^2}$ is the speed when both are present. Bonnet's theorem however does not give us all the trajectories of the Euler 3-body problem. More generally, we may integrate the equations of motion by the method of separation of variables in the Hamilton-Jacobi equation (see \cite{mukunda-hamilton} and Boxes 6, 7 \& 8). The system possesses {\it two} independent conserved quantities: energy and Whittaker's constant \footnote{When the primaries coalesce at the origin ($f \to 0$), Whittaker's constant reduces to the conserved quantity ${\bf L}^2$ of the planar 2-body problem.} \cite{gutzwiller-book, whittaker} \begin{equation} w = {\bf L}_1 \cdot {\bf L}_2 + 2 m f \left( -\mu_1\cos\theta_1 + \mu_2\cos\theta_2 \right) = m^2 r_1^2 \, r_2^2 \; \dot \tht_1 \dot \tht_2 + 2f m \left( -\mu_1\cos\theta_1 + \mu_2\cos\theta_2 \right). \label{e:whittakers-constant} \end{equation} Here, $\tht_a$ are the angles between the position vectors ${\bf r}_a$ and the positive $x$-axis and ${\bf L}_{1,2} = m r_{1,2}^2 \dot \tht_{1,2} \hat z$ are the angular momenta about the two force centers (Fig.~\ref{f:elliptic-coordinates}). Since $w$ is conserved, it Poisson commutes with the Hamiltonian $H$. Thus, the planar Euler 3-body problem has two degrees of freedom and two conserved quantities in involution. Consequently, the system is integrable in the sense of Liouville. More generally, in the three-dimensional Euler three-body problem, the mass $m$ can revolve (non-uniformly) about the line joining the force centers ($x$-axis) so that its motion is no longer confined to a plane. Nevertheless, the problem is exactly solvable as the equations admit three independent constants of motion in involution: energy, Whittaker's constant and the $x$ component of angular momentum \cite{gutzwiller-book}. \begin{center} \begin{mdframed} {\bf Box 6: Canonical transformations:} We have seen that the Kepler problem is more easily solved in polar coordinates and momenta $(r, \tht, p_r, p_\tht)$ than in Cartesian phase space variables $(x, y, p_x, p_y)$. This change is an example of a canonical transformation (CT). More generally, a CT is a change of canonical phase space variables $({\bf q}, {\bf p}) \to ({\bf Q} ({\bf p}, {\bf q}, t), {\bf P}({\bf p}, {\bf q}, t))$ that preserves the form of Hamilton's equations. For one degree of freedom, Hamilton's equations $\dot q = \dd{H}{p}$ and $\dot p = -\dd{H}{q}$ become $\dot Q = \dd{K}{P}$ and $\dot P = -\dd{K}{Q}$ where $K(Q,P,t)$ is the new Hamiltonian (for a time independent CT, the old and new Hamiltonians are related by substitution: $H(q,p) = K(Q(q,p),P(q,p))$). The form of Hamilton's equations is preserved provided the basic Poisson brackets do not change i.e., \begin{equation} \{ q,p \} = 1, \;\; \{ q,q \} = \{ p,p\} = 0 \quad \Rightarrow \quad \{ Q,P \} = 1, \;\; \{ Q,Q \} = \{ P,P \} = 0. \end{equation} Here, the Poisson bracket of two functions on phase space $f(q,p)$ and $g(q,p)$ is defined as \begin{equation} \{ f(q,p), g(q,p) \} = \dd{f}{q} \dd{g}{p} - \dd{f}{p} \dd{g}{q}. \end{equation} For one degree of freedom, a CT is simply an area and orientation preserving transformation of the $q$-$p$ phase plane. 
Indeed, the condition $\{ Q, P \} = 1$ simply states that the Jacobian determinant $J = \det \left( \dd{Q}{q}, \dd{Q}{p} \;\vert\; \dd{P}{q}, \dd{P}{p} \right) = 1$ so that the new area element $dQ \, dP = J \, dq \, dp$ is equal to the old one. A CT can be obtained from a suitable generating function, say of the form $S(q,P,t)$, in the sense that the equations of transformation are given by partial derivatives of $S$: \begin{equation} p = \dd{S}{q}, \quad Q = \dd{S}{P} \quad \text{and} \quad K = H + \dd{S}{t}. \end{equation} For example, $S = qP$ generates the identity transformation ($Q = q$ and $P = p$) while $S = - qP$ generates a rotation of the phase plane by $\pi$ ($Q = -q$ and $P = -p$). \end{mdframed} \end{center} \begin{center} \begin{mdframed} {\bf Box 7: Hamilton Jacobi equation:} The Hamilton Jacobi (HJ) equation is an alternative formulation of Newtonian dynamics. Let $i = 1, \ldots, n$ label the degrees of freedom of a mechanical system. Cyclic coordinates $q^i$ (i.e., those that do not appear in the Hamiltonian $H({\bf q},{\bf p},t)$ so that $\partial H/ \partial q^i = 0$) help to understand Newtonian trajectories, since their conjugate momenta $p_i$ are conserved ($\dot p_i = \dd{H}{q^i} = 0$). If all coordinates are cyclic, then each of them evolves linearly in time: $q^i(t) = q^i(0) + \dd{H}{p_i} t$. Now time-evolution is {\it even simpler} if $\dd{H}{p_i} = 0$ for all $i$ as well, i.e., if $H$ is independent of both coordinates and momenta! In the HJ approach, we find a CT from old phase space variables $({\bf q}, {\bf p})$ to such a coordinate system $({\bf Q},{\bf P})$ in which the new Hamiltonian $K$ is a constant (which can be taken to vanish by shifting the zero of energy). The HJ equation is a nonlinear, first-order partial differential equation for Hamilton's principal function $S({\bf q},{\bf P},t)$ which generates the canonical transformation from $({\bf q},{\bf p})$ to $({\bf Q},{\bf P})$. As explained in Box 6, this means $p_i = \dd{S}{q^i}$, $Q^j = \dd{S}{P_j}$ and $K = H + \dd{S}{t}$. Thus, the HJ equation \begin{equation} H\left({\bf q}, \dd{S}{{\bf q}},t \right) + \dd{S}{t} = 0 \end{equation} is simply the condition for the new Hamiltonian $K$ to vanish. If $H$ is time-independent, we may `separate' the time-dependence of $S$ by writing $S({\bf q},{\bf P},t) = W({\bf q},{\bf P}) - Et$ where the `separation constant' $E$ may be interpreted as energy. Thus, the time independent HJ-equation for Hamilton's characteristic function $W$ is \begin{equation} H\left({\bf q},\frac{\partial W}{\partial {\bf q}}\right) = E. \label{e:time-indep-HJ} \end{equation} E.g., for a particle in a potential $V({\bf q})$, it is the equation $\ov{2m}\left( \fr{\partial W}{\partial {\bf q}}\right)^2 + V({\bf q}) = E$. By solving (\ref{e:time-indep-HJ}) for $W$, we find the desired canonical transformation to the new conserved coordinates ${\bf Q}$ and momenta ${\bf P}$. By inverting the relation $(q,p) \mapsto (Q,P)$ we find $(q^i(t),p_j(t))$ given their initial values. $W$ is said to be a {\it complete integral} of the HJ equation if it depends on $n$ constants of integration, which may be taken to be the new momenta $P_1, \ldots, P_n$. When this is the case, the system is said to be integrable via the HJ equation. However, it is seldom possible to find such a complete integral. In favorable cases, {\it separation of variables} can help to solve the HJ equation (see Box 8). 
\end{mdframed} \end{center} \begin{center} \begin{mdframed} {\bf Box 8:} {\bf Separation of variables:} In the planar Euler 3-body problem, Hamilton's characteristic function $W$ depends on the two `old' elliptical coordinates $\xi$ and $\eta$. The virtue of elliptical coordinates is that the time-independent HJ equation can be solved by separating the dependence of $W$ on $\xi$ and $\eta$: $W(\xi, \eta) = W_1(\xi) + W_2(\eta)$. Writing the energy (\ref{e:euler-three-body-energy}) in elliptical coordinates (\ref{e:elliptical-coordinates-transformation}) and using $p_\xi = W_1'(\xi)$ and $p_\eta = W_2'(\eta)$, the time-independent HJ equation (\ref{e:time-indep-HJ}) becomes \begin{equation} E = \frac{W_1'(\xi)^2 + W_2'(\eta)^2 - 2mf(\mu_1+\mu_2)\cosh\xi -2mf(\mu_1-\mu_2)\cos\eta}{2mf^2(\cosh^2\xi-\cos^2\eta)}. \end{equation} Rearranging, \begin{equation} W_1'^2 - 2Emf^2\cosh^2\xi - 2mf(\mu_1+\mu_2)\cosh\xi = -W_2'^2 -2Emf^2\cos^2\eta + 2mf(\mu_1-\mu_2)\cos\eta. \end{equation} Since the LHS and RHS are functions only of $\xi$ and $\eta$ respectively, they must both be equal to a `separation constant' $\al$. Thus, the HJ partial differential equation separates into a pair of decoupled ODEs for $W_1(\xi)$ and $W_2(\eta)$. The latter may be integrated using elliptic functions. Note that Whittaker's constant $w$ (\ref{e:whittakers-constant}) may be expressed as $w = - 2 m f^2 E - \al$. \end{mdframed} \end{center} \section{Some landmarks in the history of the 3-body problem} \label{s:delaunay-hill-poincare} The importance of the three-body problem lies in part in the developments that arose from attempts to solve it \cite{diacu-holmes,musielak-quarles}. These have had an impact all over astronomy, physics and mathematics. Can planets collide, be ejected from the solar system or suffer significant deviations from their Keplerian orbits? This is the question of the stability of the solar system. In the $18^{\rm th}$ century, Pierre-Simon Laplace and J. L. Lagrange obtained the first significant results on stability. They showed that to first order in the ratio of planetary to solar masses ($M_p/M_S$), there is no unbounded variation in the semi-major axes of the orbits, indicating stability of the solar system. Sim\'eon Denis Poisson extended this result to second order in $M_p/M_S$. However, in what came as a surprise, the Romanian Spiru Haretu (1878) overcame significant technical challenges to find secular terms (growing linearly and quadratically in time) in the semi-major axes at third order! This was an example of a perturbative expansion, where one expands a physical quantity in powers of a small parameter (here the semi-major axis was expanded in powers of $M_p/M_S \ll 1$). Haretu's result however did not prove instability as the effects of his secular terms could cancel out (see Box 9 for a simple example). But it effectively put an end to the hope of proving the stability/instability of the solar system using such a perturbative approach. The development of Hamilton's mechanics and its refinement in the hands of Carl Jacobi was still fresh when the French dynamical astronomer Charles Delaunay (1846) began the first extensive use of canonical transformations (see Box 6) in perturbation theory \cite{gutzwiller-three-body}. The scale of his hand calculations is staggering: he applied a succession of 505 canonical transformations to a $7^{\rm th}$ order perturbative treatment of the three-dimensional elliptical restricted three-body problem. 
He arrived at the equation of motion for $m_3$ in Hamiltonian form using $3$ pairs of canonically conjugate orbital variables (3 angular momentum components, the true anomaly, longitude of the ascending node and distance of the ascending node from perigee). He obtained the latitude and longitude of the moon in trigonometric series of about $450$ terms with secular terms (see Box 9) eliminated. It wasn't till 1970-71 that Delaunay's heroic calculations were checked and extended using computers at the Boeing Scientific Laboratories \cite{gutzwiller-three-body}! The Swede Anders Lindstedt (1883) developed a systematic method to approximate solutions to nonlinear ODEs when naive perturbation series fail due to secular terms (see Box 9). The technique was further developed by Poincar\'e. Lindstedt assumed the series to be generally convergent, but Poincar\'e soon showed that they are divergent in most cases. Remarkably, nearly 70 years later, Kolmogorov, Arnold and Moser showed that in many of the cases where Poincar\'e's arguments were inconclusive, the series are in fact convergent, leading to the celebrated KAM theory of integrable systems subject to small perturbations (see Box 10). \begin{center} \begin{mdframed} {\bf Box 9: Poincar\'e-Lindstedt method:} The Poincar\'e-Lindstedt method is an approach to finding series solutions to a system such as the anharmonic oscillator $\ddot x + x + g x^3 = 0$, which for small $g$, is a perturbation of the harmonic oscillator $m \ddot x + k x = 0$ with mass $m = 1$ and spring constant $k = 1$. The latter admits the periodic solution $x_0(t) = \cos t$ with initial conditions $x(0) = 1$, $\dot x(0) = 0$. For a small perturbation $0 < g \ll 1$, expanding $x(t) = x_0(t) + g x_1(t) + \cdots$ in powers of $g$ leads to a linearized equation for $x_1(t)$ \begin{equation} \label{e:x1-lindstedt} \ddot x_1 + x_1 + \cos^3 t = 0. \end{equation} However, the perturbative solution \begin{equation} x(t) = x_0 + g x_1 + {\cal O}(g^2) = \cos t + g \left[ \ov{32} (\cos 3t - \cos t) - \fr{3}{8} t \sin t \right] + {\cal O}(g^2) \end{equation} is unbounded due to the linearly growing {\it secular} term $(-3/8)t \sin t$. This is unacceptable as the energy $E =\frac{1}{2} \dot x^2 + \frac{1}{2} x^2 + \ov{4} g x^4$ must be conserved and the particle must oscillate between turning points of the potential $V = \frac{1}{2} x^2 + \fr{g}{4} x^4$. The Poincar\'e-Lindstedt method avoids this problem by looking for a series solution of the form \begin{equation} x(t) = x_0(\tau) + g \tl x_1(\tau) + \cdots \end{equation} where $\tau = \om t$ with $\om = 1 + g \om_1 + \cdots$. The constants $\om_1, \om_2, \cdots$ are chosen to ensure that the coefficients of the secular terms at order $g, g^2, \cdots$ vanish. In the case at hand we have \begin{equation} x(t) = \cos (t + g \om_1 t) + g \tl x_1(t) + {\cal O}(g^2) = \cos t + g \tl {\tl x}_1(t) + {\cal O}(g^2) \quad \text{where} \quad \tl {\tilde x}_1(t) = \tl x_1(t) - \om_1 t \sin t. \end{equation} $\tl {\tilde x}_1$ satisfies the same equation (\ref{e:x1-lindstedt}) as $x_1$ did, leading to \begin{equation} \tl x_1(t) = \ov{32} (\cos 3t - \cos t) + \left(\om_1 - \fr{3}{8} \right) t \sin t. \end{equation} The choice $\om_1 = 3/8$ ensures cancellation of the secular term at order $g$, leading to the approximate bounded solution \begin{equation} x(t) = \cos \left(t + \frac{3}{8} g t \right) + \frac{g}{32} \left(\cos 3t - \cos t \right) + {\cal O}\left(g^2 \right). 
\end{equation} \end{mdframed} \end{center} \begin{center} \begin{mdframed} {\bf Box 10: Action-angle variables and invariant tori:} Time evolution is particularly simple if all the generalized coordinates $\tht^j$ are cyclic so that their conjugate momenta $I_j$ are conserved: $\dot I_j = - \dd{H}{\tht^j} = 0$. A Hamiltonian system with $n$ degrees of freedom is integrable in the sense of Liouville if it admits $n$ canonically conjugate ($\{ \tht^j, I_k \} = \del^j_k$\footnote{The Kronecker symbol $\del^j_k$ is equal to one for $j = k$ and zero otherwise.}) pairs of phase space variables $(\tht^j, I_j)$ with all the $\tht^j$ cyclic, so that its Hamiltonian depends only on the momenta, $H = H({\bf I})$. Then the `angle' variables $\tht^j$ evolve linearly in time $(\tht^j(t) = \tht^j(0) + \om^j \: t)$ while the momentum or `action' variables $I_j$ are conserved. Here, $\om^j = \dot \tht^j = \dd{H}{I_j}$ are $n$ constant frequencies. Typically, the angle variables are periodic, so that the $\tht^j$ parametrize circles. The common level sets of the action variables $I_j = c_j$ are therefore a family of tori that foliate the phase space. Recall that a torus is a Cartesian product of circles. For instance, for one degree of freedom, $\tht^1$ labels points on a circle $S^1$ while for 2 degrees of freedom, $\tht^1$ and $\tht^2$ label points on a 2-torus $S^1 \times S^1$ which looks like a vada or doughnut. Trajectories remain on a fixed torus determined by the initial conditions. Under a sufficiently small and smooth perturbation $H({\bf I}) + g H'({\bf I}, {\vec \tht})$, Andrei Kolmogorov, Vladimir Arnold and J\"urgen Moser showed that some of these `invariant' tori survive provided the frequencies $\om^i$ are sufficiently `non-resonant' or `incommensurate' (i.e., their integral linear combinations do not get `too small'). \end{mdframed} \end{center} George William Hill was motivated by discrepancies in lunar perigee calculations. His celebrated paper on this topic was published in 1877, while he was working with Simon Newcomb at the American Ephemeris and Nautical Almanac\footnote{Simon Newcomb's project of revising all the orbital data in the solar system established the missing $42''$ in the $566''$ centennial precession of Mercury's perihelion. This played an important role in validating Einstein's general theory of relativity.}. He found a new family of periodic orbits in the circular restricted (Sun-Earth-Moon) 3-body problem by using a frame rotating with the Sun's angular velocity instead of that of the Moon. The solar perturbation to lunar motion around the Earth results in differential equations with periodic coefficients. He used Fourier series to convert these ODEs to an infinite system of linear algebraic equations and developed a theory of infinite determinants to solve them and obtain a rapidly converging series solution for lunar motion. He also discovered new `tight binary' solutions to the 3-body problem where two nearby masses are in nearly circular orbits around their center of mass CM$_{12}$, while CM$_{12}$ and the faraway third mass in turn orbit each other in nearly circular trajectories. The French mathematician/physicist/engineer Henri Poincar\'e began by developing a qualitative theory of differential equations from a global geometric viewpoint of the dynamics on phase space. This included a classification of the types of equilibria (zeros of vector fields) on the phase plane (nodes, saddles, foci and centers, see Fig.~\ref{f:zeroes-classification}).
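Poincar\'e's classification is easily automated: near an equilibrium, the flow is governed by the eigenvalues of its linearization. The toy Python function below (our own illustration; degenerate and borderline cases are ignored) reads off the type of a planar equilibrium from the two eigenvalues of the $2 \times 2$ stability matrix.
\begin{verbatim}
# Sketch (ours): classify a planar equilibrium of x' = A x by eigenvalues.
import numpy as np

def classify(A):
    lam = np.linalg.eigvals(A)
    re, im = lam.real, lam.imag
    if np.all(np.isclose(re, 0)) and np.any(im != 0):
        return "center"                      # purely imaginary pair
    if re[0] * re[1] < 0:
        return "saddle"                      # real parts of opposite sign
    kind = "stable" if np.all(re < 0) else "unstable"
    return kind + " " + ("focus" if np.any(im != 0) else "node")

print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # center
print(classify(np.array([[-1.0, 0.0], [0.0, -2.0]])))  # stable node
print(classify(np.array([[1.0, -2.0], [2.0, 1.0]])))   # unstable focus
print(classify(np.array([[2.0, 0.0], [0.0, -1.0]])))   # saddle
\end{verbatim}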
His 1890 memoir on the three-body problem was the prize-winning entry in King Oscar II's $60^{\rm th}$ birthday competition (for a detailed account see \cite{barrow-green-poincare-three-body}). He proved the divergence of series solutions for the 3-body problem developed by Delaunay, Hugo Gyld\'en and Lindstedt (in many cases) and the convergence of Hill's infinite determinants. To investigate the stability of 3-body motions, Poincar\'e defined his `surfaces of section' and a discrete-time dynamics via the `return map' (see Fig.~\ref{f:poincare-return-map}). A Poincar\'e surface $S$ is a two-dimensional surface in phase space transversal to trajectories. The first return map takes a point $q_1$ on $S$ to $q_2$, which is the next intersection of the trajectory through $q_1$ with $S$. Given a saddle point $p$ on a surface $S$, he defined its stable and unstable spaces $W_s$ and $W_u$ as points on $S$ that tend to $p$ upon repeated forward or backward applications of the return map (see Fig.~\ref{f:homoclinic-points}). He initially assumed that $W_s$ and $W_u$ on a surface could not intersect and used this to argue that the solar system is stable. This assumption turned out to be false, as he discovered with the help of Lars Phragm\'en. In fact, $W_s$ and $W_u$ can intersect transversally on a surface at a homoclinic point\footnote{Homoclinic refers to the property of being `inclined' both forward and backward in time to the same point.} if the state space of the underlying continuous dynamics is at least three-dimensional. What is more, he showed that if there is one homoclinic point, then there must be infinitely many accumulating at $p$. Moreover, $W_s$ and $W_u$ fold and intersect in a very complicated `homoclinic tangle' in the vicinity of $p$. This was the first example of what we now call chaos. Chaos is usually manifested via an extreme sensitivity to initial conditions (exponentially diverging trajectories with nearby initial conditions). \begin{figure} \centering \begin{subfigure}[t]{3cm} \centering \includegraphics[width=3cm]{centre.pdf} \caption{\footnotesize center} \label{f:centre} \end{subfigure} \quad \begin{subfigure}[t]{3cm} \centering \includegraphics[width=3cm]{node.pdf} \caption{\footnotesize (stable) node} \label{f:node} \end{subfigure} \begin{subfigure}[t]{3cm} \centering \includegraphics[width=2.1cm]{spiral.pdf} \caption{\footnotesize (unstable) focus} \label{f:focus} \end{subfigure} \quad \begin{subfigure}[t]{3cm} \centering \includegraphics[width=3cm]{saddle.pdf} \caption{\footnotesize saddle} \label{f:saddle} \end{subfigure} \quad \caption{\footnotesize Poincar\'e's classification of zeros of a vector field (equilibrium or fixed points) on a plane. (a) Center is always stable with oscillatory motion nearby, (b,c) nodes and foci (or spirals) can be stable or unstable and (d) saddles are unstable except in one direction.} \label{f:zeroes-classification} \end{figure} \begin{figure}[h] \center \includegraphics[width=4cm]{poincare-return-map.pdf} \caption{\footnotesize A Poincar\'e surface $S$ transversal to a trajectory is shown. The trajectory through $q_1$ on $S$ intersects $S$ again at $q_2$. The map taking $q_1$ to $q_2$ is called Poincar\'e's first return map.} \label{f:poincare-return-map} \end{figure} \begin{figure}[h] \center \includegraphics[width=4cm]{homoclinic-points.pdf} \caption{\footnotesize The saddle point $p$ and its stable and unstable spaces $W_s$ and $W_u$ are shown on a Poincar\'e surface through $p$.
The points at which $W_s$ and $W_u$ intersect are called homoclinic points, e.g., $h_0,$ $h_1$ and $h_{-1}$. Points on $W_s$ (or $W_u$) remain on $W_s$ (or $W_u$) under forward and backward iterations of the return map. Thus, the forward and backward images of a homoclinic point under the return map are also homoclinic points. In the figure, $h_0$ is a homoclinic point whose image is $h_1$ on the segment $[h_0,p]$ of $W_s$. Thus, $W_u$ must fold back to intersect $W_s$ at $h_1$. Similarly, if $h_{-1}$ is the backward image of $h_0$ on $W_u$, then $W_s$ must fold back to intersect $W_u$ at $h_{-1}$. Further iterations produce an infinite number of homoclinic points accumulating at $p$. The first example of a homoclinic tangle was discovered by Poincar\'e in the restricted 3-body problem and is a signature of its chaotic nature.} \label{f:homoclinic-points} \end{figure} When two gravitating point masses collide, their relative speed diverges and solutions to the equations of motion become singular at the collision time $t_c$. More generally, a singularity occurs when either a position or velocity diverges in finite time. The Frenchman Paul Painlev\'e (1895) showed that binary and triple collisions are the only possible singularities in the three-body problem. However, he conjectured that non-collisional singularities (e.g., where the separation between a pair of bodies goes to infinity in finite time) are possible for four or more bodies. It took nearly a century for this conjecture to be proven, culminating in the work of Donald Saari and Zhihong Xia (1992) and Joseph Gerver (1991) who found explicit examples of non-collisional singularities in the $5$-body and $3n$-body problems for $n$ sufficiently large \cite{saari}. In Xia's example, a particle oscillates with ever-growing frequency and amplitude between two pairs of tight binaries. The separation between the binaries diverges in finite time, as does the velocity of the oscillating particle. The Italian mathematician Tullio Levi-Civita (1901) attempted to avoid singularities and thereby `regularize' collisions in the three-body problem by a change of variables in the differential equations. For example, the ODE for the one-dimensional Kepler problem $\ddot x = - k/x^2$ is singular at the collision point $x=0$. This singularity can be regularized\footnote{Solutions which could be smoothly extended beyond collision time (e.g., the bodies elastically collide) were called regularizable. Those that could not were said to have an essential or transcendent singularity at the collision.} by introducing a new coordinate $x = u^2$ and a reparametrized time $ds = dt/u^2$, which satisfy the nonsingular oscillator equation $u''(s) = E u/2$ with conserved energy $E = (2 \dot u^2 - k)/u^2$. Such regularizations could shed light on near-collisional trajectories (`near misses') provided the differential equations remain physically valid\footnote{Note that the point particle approximation to the equations for celestial bodies of non-zero size breaks down due to tidal effects when the bodies get very close.}. The Finnish mathematician Karl Sundman (1912) began by showing that binary collisional singularities in the 3-body problem could be regularized by a reparametrization of time, $s = |t_1-t|^{1/3}$, where $t_1$ is the binary collision time \cite{siegel-moser}.
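Levi-Civita's change of variables is simple to implement. The Python sketch below (our own, with $k = 1$ and a fall from rest at $x = 1$) integrates the regular oscillator equation $u''(s) = E u/2$ together with $dt/ds = u^2$; the coordinate $x = u^2$ passes smoothly through the collision, and the collision time read off from the regularized flow agrees with the exact free-fall time $\pi/(2\sqrt 2)$ for these parameters.
\begin{verbatim}
# Our sketch of Levi-Civita regularization for x'' = -k/x^2 (k = 1).
import numpy as np
from scipy.integrate import solve_ivp

k, x0, v0 = 1.0, 1.0, 0.0            # released from rest: falls to x = 0
E = 0.5 * v0**2 - k / x0             # conserved energy (here E = -1)

def rhs(s, y):                       # y = (u, du/ds, t);  dt/ds = u^2
    u, du, t = y
    return [du, 0.5 * E * u, u**2]

u0, du0 = np.sqrt(x0), 0.5 * v0 * np.sqrt(x0)   # from x = u^2, dx/dt = 2u'/u
sol = solve_ivp(rhs, (0.0, 2.3), [u0, du0, 0.0],
                rtol=1e-12, atol=1e-12, dense_output=True)

s = np.linspace(0.0, 2.3, 1001)
u, du, t = sol.sol(s)
x = u**2                             # passes smoothly through u = 0
t_c = t[np.argmin(np.abs(u))]        # collision time from the regular flow
print(t_c, np.pi / (2 * np.sqrt(2))) # both ~ 1.1107
\end{verbatim}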
Sundman used this to find a {\it convergent} series representation (in powers of $s$) of the general solution of the 3-body problem in the absence of triple collisions\footnote{Sundman showed that for non-zero angular momentum, there are no triple collisions in the three-body problem.}. The possibility of such a convergent series had been anticipated by Karl Weierstrass in proposing the 3-body problem for King Oscar's 60th birthday competition. However, Sundman's series converges exceptionally slowly and has not been of much practical or qualitative use. The advent of computers in the $20^{\rm th}$ century allowed numerical investigations into the 3-body (and more generally the $n$-body) problem. Such numerical simulations have made possible the accurate placement of satellites in near-Earth orbits as well as our missions to the Moon, Mars and the outer planets. They have also facilitated theoretical explorations of the three-body problem including chaotic behavior, the possibility of ejection of one body at high velocity (seen in hypervelocity stars \cite{hypervelocity-stars}) and, quite remarkably, the discovery of new periodic solutions. For instance, in 1993, Chris Moore discovered the zero angular momentum figure-8 `choreography' solution. It is a stable periodic solution with bodies of equal masses chasing each other on an $\infty$-shaped trajectory while separated equally in time (see Fig.~\ref{f:figure-8}). Alain Chenciner and Richard Montgomery \cite{montgomery-notices-ams} proved its existence using an elegant geometric reformulation of Newtonian dynamics that relies on the variational principle of Euler and Maupertuis. \begin{figure}[h] \center \includegraphics[width=5cm]{figure-8.pdf} \caption{\footnotesize Equal-mass zero-angular momentum figure-8 choreography solution to the 3-body problem. A choreography is a periodic solution where all masses traverse the same orbit separated equally in time.} \label{f:figure-8} \end{figure} \section{Geometrization of mechanics} Fermat's principle in optics states that light rays extremize the optical path length $\int n({\bf r}(\tau)) \: d\tau$ where $n({\bf r})$ is the (position dependent) refractive index and $\tau$ a parameter along the path\footnote{The optical path length $\int n({\bf r}) \, d\tau$ is proportional to $\int d\tau/\la$, which is the geometric length in units of the local wavelength $\la({\bf r}) = c/n({\bf r}) \nu$. Here, $c$ is the speed of light in vacuum and $\nu$ the constant frequency.}. The variational principle of Euler and Maupertuis (1744) is a mechanical analogue of Fermat's principle \cite{lanczos}. It states that the curve that extremizes the abbreviated action $\int_{{\bf q}_1}^{{\bf q}_2} {\bf p}\cdot d{\bf q}$ holding energy $E$ and the end-points ${\bf q}_1$ and ${\bf q}_2$ fixed has the same shape as the Newtonian trajectory. By contrast, Hamilton's principle of extremal action (1835) states that a trajectory going from ${\bf q}_1$ at time $t_1$ to ${\bf q}_2$ at time $t_2$ is a curve that extremizes the action\footnote{The action is the integral of the Lagrangian $S = \int_{t_1}^{t_2} L({\bf q},\dot {\bf q}) \: dt$. Typically, $L = T - V$ is the difference between kinetic and potential energies.}. It is well-known that the trajectory of a free particle (i.e., subject to no forces) moving on a plane is a straight line. Similarly, trajectories of a free particle moving on the surface of a sphere are great circles.
More generally, trajectories of a free particle moving on a curved space (Riemannian manifold $M$) are geodesics (curves that extremize length). Precisely, for a mechanical system with configuration space $M$ and Lagrangian $L = \frac{1}{2} m_{ij}({\bf q}) \dot q^i \dot q^j$, Lagrange's equations $\DD{p_i}{t} = \dd{L}{q^i}$ are equivalent to the geodesic equations with respect to the `kinetic metric' $m_{ij}$ on $M$\footnote{A metric $m_{ij}$ on an $n$-dimensional configuration space $M$ is an $n \times n$ matrix at each point ${\bf q} \in M$ that determines the square of the distance ($ds^2 = \sum_{i,j = 1}^n m_{ij} dq^i dq^j$) from ${\bf q}$ to a nearby point ${\bf q} + d {\bf q}$. We often suppress the summation symbol and follow the convention that repeated indices are summed from $1$ to $n$.}: \begin{equation} m_{ij} \: \ddot q^j(t) = - \frac{1}{2} \left(m_{ji,k} + m_{ki,j} - m_{jk,i} \right) \dot q^j(t) \: \dot q^k(t). \label{e:Lagrange-eqns-kin-metric-and-V} \end{equation} Here, $m_{ij,k} = \partial m_{ij}/\partial q^k$ and $p_i = \dd{L}{\dot q^i} = m_{ij}\dot q^j$ is the momentum conjugate to coordinate $q^i$. For instance, the kinetic metric ($m_{rr} = m$, $m_{\tht \tht} = m r^2$, $m_{r \tht} = m_{\tht r} = 0$) for a free particle moving on a plane may be read off from the Lagrangian $L = \frac{1}{2} m (\dot r^2 + r^2 \dot \tht^2)$ in polar coordinates, and the geodesic equations shown to reduce to Lagrange's equations of motion $\ddot r = r \dot \tht^2$ and $d(m r^2 \dot \tht)/dt = 0$. Remarkably, the correspondence between trajectories and geodesics continues to hold even in the presence of conservative forces derived from a potential $V$. Indeed, trajectories of the Lagrangian $L = T - V = \frac{1}{2} m_{ij}({\bf q}) \dot q^i \dot q^j - V({\bf q})$ are {\it reparametrized}\footnote{The shapes of trajectories and geodesics coincide but the Newtonian time along trajectories is not the same as the arc-length parameter along geodesics.} geodesics of the Jacobi-Maupertuis (JM) metric $g_{ij} = (E- V({\bf q})) m_{ij}({\bf q})$ on $M$ where $E = T + V$ is the energy. This geometric formulation of the Euler-Maupertuis principle (due to Jacobi) follows from the observation that the square of the metric line element \begin{equation} ds^2 = g_{ij} dq^i dq^j = (E-V) m_{ij} dq^i dq^j = \frac{1}{2} m_{kl} \fr{dq^k}{dt} \fr{dq^l}{dt} m_{ij} dq^i dq^j = \frac{1}{2} \left( m_{ij} \dot q^i dq^j \right)^2 = \ov{2} ({\bf p} \cdot d{\bf q})^2, \end{equation} so that the extremization of $\int {\bf p} \cdot d{\bf q}$ is equivalent to the extremization of arc length $\int ds$. Loosely, the potential $V({\bf q})$ on the configuration space plays the role of an inhomogeneous refractive index. Though trajectories and geodesics are the same curves, the Newtonian time $t$ along trajectories is in general different from the arc-length parameter $s$ along geodesics. They are related by $\DD{s}{t} = \sqrt{2} (E-V)$ \cite{govind-himalaya}. This geometric reformulation of classical dynamics allows us to assign a local curvature to points on the configuration space. For instance, the Gaussian curvature $K$ of a surface at a point (see Box 11) measures how nearby geodesics behave (see Fig. \ref{f:geodesic-separation}), they oscillate if $K > 0$ (as on a sphere), diverge exponentially if $K < 0$ (as on a hyperboloid) and linearly separate if $K = 0$ (as on a plane). Thus, the curvature of the Jacobi-Maupertuis metric defined above furnishes information on the stability of trajectories. 
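Before turning to curvature, here is a quick symbolic check of this correspondence (our own sketch, using Python's sympy): for the flat kinetic metric $m_{rr} = m$, $m_{\tht\tht} = m r^2$ of the free particle in polar coordinates, the geodesic equations (\ref{e:Lagrange-eqns-kin-metric-and-V}) reduce to $\ddot r = r \dot \tht^2$ and $\ddot \tht = -2 \dot r \dot \tht/r$, the latter being equivalent to $d(m r^2 \dot \tht)/dt = 0$.
\begin{verbatim}
# Our symbolic check: geodesics of the kinetic metric in polar coordinates.
import sympy as sp

t, m = sp.symbols("t m", positive=True)
r, th = sp.Function("r")(t), sp.Function("theta")(t)
q = [r, th]
g = sp.Matrix([[m, 0], [0, m * r**2]])      # kinetic metric
ginv = g.inv()

def geodesic_rhs(i):
    # qddot^i = -Gamma^i_{jk} qdot^j qdot^k, with Gamma built from the metric
    expr = 0
    for j in range(2):
        for k in range(2):
            for l in range(2):
                Gamma = ginv[i, l] * (sp.diff(g[l, j], q[k])
                                      + sp.diff(g[l, k], q[j])
                                      - sp.diff(g[j, k], q[l])) / 2
                expr -= Gamma * sp.diff(q[j], t) * sp.diff(q[k], t)
    return sp.simplify(expr)

print(geodesic_rhs(0))   # r(t)*theta'(t)**2,  i.e.  r'' = r theta'^2
print(geodesic_rhs(1))   # -2 r'(t) theta'(t)/r(t),  i.e.  d(r^2 theta')/dt = 0
\end{verbatim}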
Negativity of curvature leads to sensitive dependence on initial conditions and can be a source of chaos. \begin{center} \begin{mdframed} {\bf Box 11: Gaussian curvature:} Given a point $p$ on a surface $S$ embedded in three dimensions, a normal plane through $p$ is one that is orthogonal to the tangent plane at $p$. Each normal plane intersects $S$ along a curve whose best quadratic approximation at $p$ is called its osculating circle. The principal radii of curvature $R_{1,2}$ at $p$ are the maximum and minimum radii of osculating circles through $p$. The Gaussian curvature $K(p)$ is defined as $1/R_1 R_2$ and is taken positive if the centers of the corresponding osculating circles lie on the same side of $S$ and negative otherwise. \end{mdframed} \end{center} \begin{figure} \centering \begin{subfigure}[t]{5cm} \centering \includegraphics[width=4cm]{geodesic-planar.pdf} \caption{\footnotesize Nearby geodesics on a plane ($K = 0$) separate linearly.} \label{f:planar-geodesics} \end{subfigure} \quad \begin{subfigure}[t]{5cm} \centering \includegraphics[width=2.3cm]{geodesic-spherical.pdf} \caption{\footnotesize Distance between neighboring geodesics on a sphere ($K > 0$) oscillates.} \label{f:spherical-geodesics} \end{subfigure} \quad \begin{subfigure}[t]{5cm} \centering \includegraphics[width=4cm]{geodesic-hyperbolic.pdf} \caption{\footnotesize Geodesics on a hyperbolic surface ($K < 0$) deviate exponentially} \label{f:hyperbolic-geodesics} \end{subfigure} \caption{\footnotesize Local behavior of nearby geodesics on a surface depends on the sign of its Gaussian curvature $K$.} \label{f:geodesic-separation} \end{figure} In the planar Kepler problem, the Hamiltonian (\ref{e:energy-kepler-cm-frame}) in the CM frame is \begin{equation} H = \fr{p_x^2+p_y^2}{2m} - \fr{\al}{r} \quad \text{where} \quad \al = GMm > 0 \;\;\text{and} \;\; r^2 = x^2 + y^2. \end{equation} The corresponding JM metric line element in polar coordinates is $ds^2 = m\left(E+\fr{\al}{r}\right)\left(dr^2+r^2d\theta^2\right)$. Its Gaussian curvature $K = -E \al/2m(\al + Er)^3$ has a sign opposite to that of energy everywhere. This reflects the divergence of nearby hyperbolic orbits and oscillation of nearby elliptical orbits. Despite negativity of curvature and the consequent sensitivity to initial conditions, hyperbolic orbits in the Kepler problem are not chaotic: particles simply fly off to infinity and trajectories are quite regular. On the other hand, negativity of curvature without any scope for escape can lead to chaos. This happens with geodesic motion on a compact Riemann surface\footnote{ A compact Riemann surface is a closed, oriented and bounded surface such as a sphere, a torus or the surface of a pretzel. The genus of such a surface is the number of handles: zero for a sphere, one for a torus and two or more for higher handle-bodies. Riemann surfaces with genus two or more admit metrics with constant negative curvature.} with constant negative curvature: most trajectories are very irregular. \section{Geometric approach to the planar 3-body problem} We now sketch how the above geometrical framework may be usefully applied to the three-body problem. The configuration space of the planar 3-body problem is the space of triangles on the plane with masses at the vertices. It may be identified with six-dimensional Euclidean space ($\mathbb{R}^6$) with the three planar Jacobi vectors ${\bf J}_{1,2,3}$ (see (\ref{e:jacobi-coord}) and Fig.~\ref{f:jacobi-coords}) furnishing coordinates on it. 
A simultaneous translation of the position vectors of all three bodies ${\bf r}_{1,2,3} \mapsto {\bf r}_{1,2,3} + {\bf r}_0$ is a symmetry of the Hamiltonian $H = T+V$ of Eqs. (\ref{e:jacobi-coord-ke-mom-inertia},\ref{e:jacobi-coord-potential}) and of the Jacobi-Maupertuis metric \begin{equation} \label{e:jm-metric-in-jacobi-coordinates-on-c3} ds^2 = \left( E - V({\bf J}_1, {\bf J}_2) \right) \sum_{a=1}^3 M_a \: |d{\bf J}_a|^2. \end{equation} This is encoded in the cyclicity of ${\bf J}_3$. Quotienting by translations allows us to define a center of mass configuration space $\mathbb{R}^4$ (the space of centered triangles on the plane with masses at the vertices) with its quotient JM metric. Similarly, rotations ${\bf J}_a \to \colvec{2}{\cos \tht & -\sin \tht}{\sin \tht & \cos \tht} {\bf J}_a$ for $a = 1,2,3$ are a symmetry of the metric, corresponding to rigid rotations of a triangle about a vertical axis through the CM. The quotient of ${\mathbb{R}}^4$ by such rotations is the {\it shape space} ${\mathbb{R}}^3$, which is the space of congruence classes of centered oriented triangles on the plane. Translations and rotations are symmetries of any central inter-particle potential, so the dynamics of the three-body problem in any such potential admits a consistent reduction to geodesic dynamics on the shape space ${\mathbb{R}}^3$. Interestingly, for an {\it inverse-square} potential (as opposed to the Newtonian `$1/r$' potential) \begin{equation} V = -\sum_{a < b} \fr{G m_a m_b}{|{\bf r}_a - {\bf r}_b|^2} = -\fr{G m_1 m_2}{|{\bf J}_1|^2} - \fr{G m_2 m_3}{|{\bf J}_2 - \mu_1 {\bf J}_1|^2} - \fr{G m_3 m_1}{|{\bf J}_2+\mu_2 {\bf J}_1|^2} \quad \text{with} \quad \mu_{1,2}= \frac{m_{1,2}}{m_1 + m_2}, \end{equation} the zero-energy JM metric (\ref{e:jm-metric-in-jacobi-coordinates-on-c3}) is also invariant under the scale transformation ${\bf J}_a \to \la {\bf J}_a$ for $a = 1,2$ and $3$ (see Box 12 for more on the inverse-square potential and for why the zero-energy case is particularly interesting). This allows us to further quotient the shape space ${\mathbb{R}}^3$ by scaling to get the shape sphere ${\mathbb{S}}^2$, which is the space of similarity classes of centered oriented triangles on the plane\footnote{Though scaling is not a symmetry for the Newtonian gravitational potential, it is still useful to project the motion onto the shape sphere.}. Note that collision configurations are omitted from the configuration space and its quotients. Thus, the shape sphere is topologically a $2$-sphere with the three binary collision points removed. In fact, with the JM metric, the shape sphere looks like a `pair of pants' (see Fig.~\ref{f:horn-shape-sphere}). \begin{figure} \centering \begin{subfigure}[t]{3in} \centering \includegraphics[width=5cm]{horn-shape-sphere.pdf} \caption{\footnotesize The negatively curved `pair of pants' metric on the shape sphere ${\mathbb{S}}^2$.} \label{f:horn-shape-sphere} \end{subfigure} \quad \begin{subfigure}[t]{3in} \centering \includegraphics[width=5cm]{round-shape-sphere-resonance.pdf} \caption{\footnotesize Locations of Lagrange, Euler and collision points on a geometrically {\it unfaithful} depiction of the shape sphere ${\mathbb{S}}^2$. The negative curvature of ${\mathbb{S}}^2$ is indicated in Fig.~\ref{f:horn-shape-sphere}. 
Syzygies are instantaneous configurations where the three bodies are collinear (eclipses).} \label{f:round-shape-sphere} \end{subfigure} \caption{\footnotesize `Pair of pants' metric on shape sphere and Lagrange, Euler and collision points.} \label{f:shape-sphere} \end{figure} For equal masses and $E=0$, the quotient JM metric on the shape sphere may be put in the form \begin{equation} \label{e:jm-metric-zero-energy-shape-sphere} ds^2 = Gm^3 h(\eta,\xi_2) \left(d\eta^2+\sin^2 2\eta \;d\xi_2^2\right). \end{equation} Here, $0 \le 2 \eta \le \pi$ and $0 \le 2 \xi_2 \le 2 \pi$ are polar and azimuthal angles on the shape sphere ${\mathbb{S}}^2$ (see Fig.~\ref{f:round-shape-sphere}). The function $h$ is invariant under the above translations, rotations and scalings and therefore a function on ${\mathbb{S}}^2$. It may be written as $v_1 + v_2 + v_3$ where $v_1 = I_{\rm CM}/(m |{\bf r}_2 - {\bf r}_3|^2)$ etc., are proportional to the inter-particle potentials \cite{govind-himalaya}. As shown in Fig.~\ref{f:horn-shape-sphere}, the shape sphere has three cylindrical horns that point toward the three collision points, which lie at an infinite geodesic distance. Moreover, this equal-mass, zero-energy JM metric (\ref{e:jm-metric-zero-energy-shape-sphere}) has negative Gaussian curvature everywhere except at the Lagrange and collision points where it vanishes. This negativity of curvature implies geodesic instability (nearby geodesics deviate exponentially) as well as the uniqueness of geodesic representatives in each `free' homotopy class, when they exist. The latter property was used by Montgomery \cite{montgomery-notices-ams} to establish uniqueness of the `figure-8' solution (up to translation, rotation and scaling) for the inverse-square potential. The negativity of curvature on the shape sphere for equal masses extends to negativity of scalar curvature\footnote{Scalar curvature is an average of the Gaussian curvatures in the various tangent planes through a point} on the CM configuration space for both the inverse-square and Newtonian gravitational potentials \cite{govind-himalaya}. This could help to explain instabilities and chaos in the three-body problem. \begin{center} \begin{mdframed} {\bf Box 12:} {\bf The inverse-square potential} is somewhat simpler than the Newtonian one due to the behavior of the Hamiltonian $H = \sum_a {\bf p}_a^2/2m_a - \sum_{a < b} G m_a m_b/|{\bf r}_a -{\bf r}_b|^2$ under scale transformations ${\bf r}_a \to \la {\bf r}_a$ and ${\bf p}_a \to \la^{-1} {\bf p}_a$: $H(\la {\bf r}, \la^{-1} {\bf p}) = \la^{-2} H({\bf r}, {\bf p})$ \cite{Rajeev}. The infinitesimal version ($\la \approx 1$) of this transformation is generated by the dilatation operator $D = \sum_a {\bf r}_a \cdot {\bf p}_a$ via Poisson brackets $\{{\bf r}_a, D \} = {\bf r}_a$ and $\{{\bf p}_a, D \} = - {\bf p}_a$. Here, the Poisson bracket between coordinates and momenta are $\{ r_{ai}, p_{bj} \} = \del_{ab} \del_{ij}$ where $a,b$ label particles and $i,j$ label Cartesian components. In terms of Poisson brackets, time evolution of any quantity $f$ is given by $\dot f = \{ f, H \}$. It follows that $\dot D = \{ D, H \} = 2 H$, so scaling is a symmetry of the Hamiltonian (and $D$ is conserved) only when the energy vanishes. To examine long-time behavior we consider the evolution of the moment of inertia in the CM frame $I_{\rm CM} = \sum_a m_a {\bf r}_a^2$ whose time derivative may be expressed as $\dot I = 2D$. 
This leads to the Lagrange-Jacobi identity $\ddot I = \{\dot I, H \} = \{2D, H \} = 4 E$ or $I = I(0) + \dot I(0) \: t + 2E \: t^2$. Hence when $E > 0$, $I \to \infty$ as $t \to \infty$ so that bodies fly apart asymptotically. Similarly, when $E < 0$ they suffer a triple collision. When $E = 0$, the sign of $\dot I(0)$ controls asymptotic behavior leaving open the special case when $E = 0$ and $\dot I(0) = 0$. By contrast, for the Newtonian potential, the Hamiltonian transforms as $H(\la^{-2/3} {\bf r}, \la^{1/3} {\bf p}) = \la^{2/3} H({\bf r}, {\bf p})$ leading to the Lagrange-Jacobi identity $\ddot I = 4E - 2V$. This is however not adequate to determine the long-time behavior of $I$ when $E < 0$. \end{mdframed} \end{center}
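The quadratic evolution of the moment of inertia stated above is easy to verify numerically. The following sketch (not part of the original discussion; it assumes units with $G = 1$, equal masses and arbitrary collision-free initial data) integrates the planar three-body equations of motion for the inverse-square potential and compares $I(t)$ against $I(0) + \dot I(0)\, t + 2 E t^2$.
\begin{verbatim}
# Check I(t) = I(0) + Idot(0) t + 2 E t^2 for the inverse-square potential.
import numpy as np
from scipy.integrate import solve_ivp

m = np.array([1.0, 1.0, 1.0])             # equal masses, G = 1

def rhs(t, y):
    r, v = y[:6].reshape(3, 2), y[6:].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:                    # V = -m_i m_j / |r_i - r_j|^2
                d = r[i] - r[j]
                a[i] += -2.0 * m[j] * d / np.dot(d, d) ** 2
    return np.concatenate([v.ravel(), a.ravel()])

# centered initial data with vanishing total momentum
r0 = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
v0 = np.array([[0.0, 0.4], [0.3, -0.2], [-0.3, -0.2]])
E = 0.5 * np.sum(m[:, None] * v0**2) - sum(
    m[i] * m[j] / np.sum((r0[i] - r0[j])**2)
    for i in range(3) for j in range(i + 1, 3))
I0 = np.sum(m[:, None] * r0**2)
Idot0 = 2.0 * np.sum(m[:, None] * r0 * v0)

sol = solve_ivp(rhs, (0.0, 2.0), np.concatenate([r0.ravel(), v0.ravel()]),
                rtol=1e-10, atol=1e-12, dense_output=True)
for t in (0.5, 1.0, 2.0):
    r = sol.sol(t)[:6].reshape(3, 2)
    print(t, np.sum(m[:, None] * r**2), I0 + Idot0 * t + 2.0 * E * t**2)
\end{verbatim}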
\section{Introduction} Synthetic active colloids are microscopic particles that harness a catalytic chemical reaction to self-propel \cite{paxton2004catalytic,howse2007self}. These synthetic particles display biological-like features in that they are able to turn the chemical energy available in the environment into motion, like bacteria or eukaryotic cells. However, since their surface can be functionalized and their surface chemistry can be controlled during the manufacturing process, they represent potential candidates for novel cancer therapies \cite{hortelao2018targeting,tang2020enzyme,hortelao2020monitoring}, cargo transport \cite{baraban2012transport} or environmental remediation \cite{parmar2018micro}. Such promising applications have sparked the development of many different synthetic active particles that propel through different mechanisms \cite{eloul2020reactive,de2020self,gallino2018physics}. A common feature of synthetic active colloids is that, to move in fluidic environments, they operate out of equilibrium to convert chemical energy into mechanical stresses, potentially leading to spontaneous symmetry-breaking instabilities \cite{Michelin_2013,de2013self,izri2014self,Narinder_2018,De_Corato_2020,De_Corato_2021}. Therefore, their behavior can be understood using the framework of nonequilibrium thermodynamics. In a recent series of papers, Gaspard, Kapral and coauthors showed using thermodynamic considerations that, close to equilibrium, the velocity and the reaction rate of a chemically active particle are linearly related to an external force and to the chemical affinity \cite{gaspard2017communication,gaspard2018fluctuating,huang2018dynamics}. This chemo-mechanical coupling originates from the Onsager reciprocal relations and implies that, if a reaction rate drives self-propulsion in a certain direction, then a force applied in that direction drives a reaction rate. A consequence of the Onsager reciprocal relations is that it is possible to use external forces to drive chemical reactions. Similar examples of chemo-mechanical coupling are very common in biological settings: for instance, the adsorption of proteins on cell membranes can change their preferential curvature \cite{tozzi2019out}, and forces are known to impact reaction rates, as in the case of mechanophores \cite{makarov2016perspective} or enzymatic reactions \cite{gumpp2009triggering}. In their work, Gaspard and Kapral \cite{gaspard2017communication} demonstrated that such chemo-mechanical coupling is relevant also for synthetic active colloids that propel through chemical reactions, but they did not discuss the physical mechanism responsible for it. On the other hand, the self-propulsion of chemically-active colloids has been successfully explained using the framework of self-phoresis, which uses thin boundary layer asymptotics \cite{golestanian2007designing,moran2017phoretic}. According to this approach, the surface reaction generates a gradient of reactants and products that interact through a short-ranged potential with the surface of the active colloid \cite{anderson1982motion,anderson1989colloid}. This mechanism results in the development of a phoretic slip velocity within a few nanometers of the particle surface, which, in turn, drives the motion of the active colloid. While this framework successfully explains how a surface reaction results in self-propulsion, it is not clear how an external force can generate a reaction rate.
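To make the slip-velocity mechanism concrete, its magnitude is set by a phoretic mobility that, in the thin-boundary-layer limit, reduces to the classical expression $b = (k_B T/\eta)\int_0^\infty y\,\left(e^{-\Phi(y)/k_B T}-1\right)dy$, with $y$ the distance from the surface \cite{anderson1989colloid}. The sketch below evaluates this integral numerically; it is purely illustrative (the exponential potential and all parameter values are assumptions, not taken from the works cited above, and sign conventions for the slip velocity vary between references).
\begin{verbatim}
# Sketch: thin-boundary-layer phoretic mobility for a model attractive
# potential Phi(y) = -Phi0 * exp(-y/lam); illustrative values only.
import numpy as np
from scipy.integrate import quad

kT, eta = 4.1e-21, 1.0e-3         # J, Pa s (room temperature, water)
Phi0, lam = 1.0 * 4.1e-21, 5e-9   # 1 kT deep, 5 nm range

integrand = lambda y: y * (np.exp(Phi0 * np.exp(-y / lam) / kT) - 1.0)
I, _ = quad(integrand, 0.0, 50.0 * lam)
b = kT * I / eta                  # mobility; slip ~ b * surface gradient of c
print(b)
\end{verbatim}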
In these studies, the advective transport of the reactant and product species is usually neglected, and the transport of species is solved independently of the velocity field. As a consequence, the reaction rate is decoupled from the flow field and the symmetry of the Onsager relations appears to be broken. In this paper, we address this point by investigating the physical mechanism leading to the chemo-mechanical coupling highlighted by Gaspard and Kapral \cite{gaspard2017communication}. To do so, we use integral relations, a perturbation expansion and numerical simulations. We show that, by solving the transport equations around a chemically active colloid without assuming a short-ranged interaction potential \cite{sharifi2013diffusiophoretic}, we recover a symmetric Onsager matrix. Our analysis reveals that the advection of the reactant and product species, which is often neglected, is the physical mechanism leading to the symmetry of the chemo-mechanical coupling discovered by Gaspard and Kapral \cite{gaspard2017communication}. Consistently taking into account advection is crucial to preserve the symmetry of the Onsager reciprocal relations in the case of self-propelled chemically-active particles. Finally, since many experiments are carried out far from thermodynamic equilibrium, we investigate the validity of the Onsager reciprocal relations around a nonequilibrium steady state. In this case, there is a net entropy production at steady state that breaks the detailed balance and the microreversibility of the molecular trajectories. This does not necessarily break the reciprocal relations, because the fulfilment of the detailed balance implies the Onsager reciprocal relations but not vice-versa. In fact, there are some situations in which the Onsager reciprocal relations and fluctuation-dissipation relations hold around nonequilibrium steady states despite the breakdown of the detailed balance \cite{gabrielli1996onsager,gabrielli1999onsager,dal2019linear}. The paper is organized as follows. In Section \ref{sec1}, we briefly recall the Onsager reciprocal relations demonstrated by Gaspard and Kapral in the case of a chemically-active colloid. In Sections \ref{sec2}-\ref{sec4}, we define the problem and the governing equations and derive their dimensionless form. In Section \ref{sec5} we report the governing equations linearized around a generic steady state. In Section \ref{sec6} we address the Onsager reciprocal relations around equilibrium using perturbative analysis and numerical simulations. In Section \ref{sec7} we address the Onsager reciprocal relations around a nonequilibrium steady state. Finally, Section \ref{sec8} contains conclusions and discussions. \section{Onsager reciprocal relations for a chemically-active colloid}\label{sec1} In a series of papers, Gaspard, Kapral and coauthors \cite{gaspard2017communication,gaspard2018fluctuating,huang2018dynamics} showed that for small thermodynamic forces, i.e.
in the linear response regime, the velocity of the active particle, $\boldsymbol{V}$, and the net reaction rate, $W$, are linearly related to the thermodynamic forces: \begin{equation}\label{fullonsager} \left(\begin{array}{c} \boldsymbol{V} \\ W \end{array}\right) = \left(\begin{array}{cc} D_{VF} & D_{VA} \, \boldsymbol{u} \\ D_{WF} \, \boldsymbol{u} & D_{WA} \end{array}\right) \cdot \left(\begin{array}{c} \frac{\boldsymbol{F}}{k_BT} \\ A_{\text{rxn}} \end{array}\right) \,\, , \end{equation} where $D_{VF}$ is the translational diffusion coefficient, $D_{WA}$ is the reaction-diffusion coefficient, and the cross-coupling coefficients, $D_{VA}$ (relating the velocity to the chemical affinity) and $D_{WF}$ (relating the reaction rate to the external force), are equal: $D_{WF}=D_{VA}$. In Eq. \eqref{fullonsager}, the unit vector $\boldsymbol{u}$ determines the direction of motion induced by a nonzero chemical activity. Here we consider an axisymmetric case, in which the unit vector $\boldsymbol{u}$ coincides with the z-axis unit vector, $\boldsymbol{u}= \boldsymbol{e}_z$, and the velocity is determined by its z-component $V$. The thermodynamic forces are given by the chemical affinity, $A_{\text{rxn}}$, and by the mechanical affinity, $\boldsymbol{F}/k_BT$, which need to be small for Eq. \eqref{fullonsager} to be valid. Without any loss of generality, we consider the external force $\boldsymbol{F}$ to be acting along the z-axis, $\boldsymbol{F} = F \boldsymbol{e}_z$. The matrix that appears in Eq. \eqref{fullonsager} is called the Onsager matrix and, near equilibrium, it must be symmetric and positive definite. The properties of the Onsager matrix are a cornerstone result of thermodynamics and follow from the microscopic reversibility of the molecular trajectories at equilibrium. The application of the Onsager reciprocal relations to the case of a self-propelled chemically active colloid implies that, if a nonzero chemical affinity leads to the motion of a colloid along the z-axis, then an external force directed along the z-axis results in a reaction rate \cite{gaspard2017communication}. While the Onsager reciprocal relations are rigorously derived near equilibrium, there are instances where they hold also when the linearization is performed around a nonequilibrium steady state \cite{gabrielli1996onsager,gabrielli1999onsager,dal2019linear}. In what follows, we show, using a perturbative expansion and numerical simulations, that the Onsager reciprocal relations are valid around equilibrium. Numerical simulations show that the reciprocal relations are broken around a nonequilibrium steady state. \section{Problem definition}\label{sec2} To investigate the validity of the Onsager reciprocal relations for an axisymmetric chemically active colloid, we study a thermodynamic system similar to that analyzed by Sabass and Seifert \cite{sabass2012dynamics} and depicted schematically in Figure \ref{fig_schem}. We consider an isothermal system comprising a spherical particle of radius $R$ suspended in a dilute solution of two neutral species A and B whose chemical potentials are given by: \begin{equation}\label{chempota} \mu_A= k_B T \ln{c_A} \,\, , \end{equation} \begin{equation}\label{chempotb} \mu_B= k_B T \ln{c_B}+\Phi(r,\theta) \,\, , \end{equation} with $c_A$ and $c_B$ the number densities of species A and B, $k_B$ the Boltzmann constant, and $T$ the absolute temperature.
We assume that the chemical potentials of the two species differ because of the interaction of species B with the particle surface through the potential $\Phi(r,\theta)$, with $r$ and $\theta$ the radial and polar coordinates of a spherical coordinate system fixed at the particle center. We assume that the reversible reaction A$\rightleftharpoons$B takes place at the surface of the colloid according to the reaction rate per unit surface \cite{pagonabarraga1997fluctuating,bedeaux2011concentration}: \begin{equation}\label{reacrate} w = L_r (\theta) \left(1-\exp{\left(\frac{\mu_B-\mu_A}{k_B T}\right)}\right) \, \, \, \text{at} \, \, \, r=R \, \, , \end{equation} with $L_r (\theta)$ the Onsager coefficient that relates the local chemical potential difference to the local reaction rate. The total reaction rate, $W$, is given by the integral of $w$ over the active particle surface \begin{equation} W= \int_{S} w \, \, dS \, \, , \end{equation} with $S$ the surface of the particle. To model the chemically active colloids used in experiments, which are coated with a catalyst on only part of their surface, we let the reactivity change along the particle surface as $L_r (\theta) = L_r \, g(\theta)$, where $g(\theta)$ is a positive dimensionless function and $L_r$ specifies the magnitude of the Onsager coefficient. To achieve self-propelled motion, the spherical symmetry of the problem needs to be broken \cite{golestanian2007designing,Uspal_2018,Burelbach_2019,Poehnl_2021}, which happens if the potential energy $\Phi(r,\theta)$ or the Onsager coefficient $L_r (\theta)$ changes along the polar angle. Here, we consider a potential energy of the form $\Phi(r,\theta) = \Phi_0 f(r, \theta)$, with characteristic magnitude $\Phi_0$, varying in space according to the dimensionless function $f(r, \theta)$, which we assume to be axisymmetric around the z-axis. It follows that the molecules of B interact more strongly with one side of the surface than with the other. Finally, we assume that the interaction potential decays to zero at large distances from the surface of the colloid: $\Phi(r,\theta) \rightarrow 0$ as $r \rightarrow \infty$. At thermodynamic equilibrium all the fluxes vanish, the suspending fluid is quiescent, the chemical potentials are uniform, and the distributions of the species A and B are given by the Boltzmann distribution: \begin{equation} \label{equilibriumca} c_A=c_{A, eq} = \text{const.} \, \, , \end{equation} and \begin{equation} \label{equilibriumcb} c_B=c_{B, eq}= c_{A, eq} \exp{\left(-\frac{\Phi_0}{k_BT} f(r,\theta)\right)} \, \, . \end{equation} For $r\rightarrow \infty$ the concentrations of species A and B are equal because $\Phi(r,\theta)$ decays to zero. \begin{figure}[h!] \centering \includegraphics[width=0.7\textwidth]{schematics.pdf} \caption{Schematic of the system investigated. A chemically-active colloid is suspended in an incompressible fluid and a chemical reaction between two solute species, A and B, is catalyzed at the surface of the colloid. An external force may act along the z-axis. The concentrations of species A and B are fixed far from the colloid. An inhomogeneous interaction between the B solute molecules and the colloid surface drives the self-propulsion of the active particle.
} \label{fig_schem} \end{figure} \section{General steady-state equations}\label{sec3} By following the framework of nonequilibrium thermodynamics, we assume that the local thermodynamic forces and the local fluxes are linearly related even if the system is globally driven out of equilibrium \cite{de1962non}. We present the governing equations at steady state and in a reference frame attached to the center of the active particle. The momentum balance is given by \begin{equation}\label{mom_bal} \eta \boldsymbol{\nabla}^2 \boldsymbol{v} - \boldsymbol{\nabla} p = c_B \,\boldsymbol{\nabla} \mu_B +c_A \,\boldsymbol{\nabla} \mu_A \, \, , \end{equation} where $\eta$ is the shear viscosity of the liquid, $\boldsymbol{v}$ is the velocity field and $p$ is the pressure. We neglect the inertia of the liquid in Eq. \eqref{mom_bal}, as it is typically negligible at the colloidal scale. By substituting the expressions for the chemical potentials $\mu_A$ and $\mu_B$, given by Eqs. \eqref{chempota}-\eqref{chempotb}, in the momentum balance, we obtain \begin{equation} \eta \boldsymbol{\nabla}^2 \boldsymbol{v} - \boldsymbol{\nabla} P = c_B \,\boldsymbol{\nabla} \Phi \, \, , \end{equation} where we have defined the pressure $P$ as the sum of the hydrodynamic pressure and of the osmotic pressure, $P= p+k_BT \, \left(c_A+c_B\right)$. We assume that the fluid mixture is incompressible, therefore the continuity equation is given by \begin{equation} \boldsymbol{\nabla} \cdot \boldsymbol{v} = 0 \, \, , \end{equation} with boundary conditions at infinity $r\rightarrow \infty$ given by: \begin{equation} \boldsymbol{v} = -V \boldsymbol{e}_z \, \, , \end{equation} and at the surface of the particle $r=R$: \begin{equation} \boldsymbol{v} = \boldsymbol{0} \, \, . \end{equation} The force balance on the active particle reads \begin{equation}\label{force_bal} \int_S \boldsymbol{T} \cdot \boldsymbol{n} \, dS = -\int_\Omega c_B \, \boldsymbol{\nabla} \Phi \, d\Omega - F \boldsymbol{e}_z\, \, , \end{equation} where $\boldsymbol{T}$ is the stress tensor defined as $\boldsymbol{T}= \eta \left(\boldsymbol{\nabla}\boldsymbol{v}+\boldsymbol{\nabla}\boldsymbol{v}^T\right) - P \boldsymbol{I}$, $\boldsymbol{n}$ is the normal to the particle surface pointing into the fluid, and $\Omega$ is the volume outside the sphere. The steady-state balance of the species A and B is given by \begin{equation} \boldsymbol{\nabla} \cdot \, \boldsymbol{J_A} =\boldsymbol{\nabla} \cdot \, \boldsymbol{J_B} =0 \, \, , \end{equation} where $\boldsymbol{J_A}$ and $\boldsymbol{J_B}$ are the fluxes of the species A and B, defined as \begin{equation}\label{difffluxA} \boldsymbol{J_A} = -\frac{L_{AA}}{T}\boldsymbol{\nabla} \mu_A +c_A \boldsymbol{v}\, \, , \end{equation} \begin{equation}\label{difffluxB} \boldsymbol{J_B} = -\frac{L_{BB}}{T}\boldsymbol{\nabla} \mu_B +c_B \boldsymbol{v}\, \, . \end{equation} The coefficients $L_{AA}$ and $L_{BB}$ are the Onsager transport coefficients. They are related to the diffusion coefficients of the species A and B through $L_{AA}=D_A \,T \, c_A$ and $L_{BB}=D_B \, T \, c_B$, with $D_A$ and $D_B$ the diffusion coefficients of species A and B. In the definition of the diffusive fluxes, we have neglected the cross-coupling coefficients because we are considering dilute species. Nevertheless, the conclusions of the present work should hold in the case of cross-diffusing species.
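As a quick symbolic sanity check (a sketch, not part of the original derivation; the radial potential $\Phi(r)$ is left arbitrary), the Boltzmann profile of Eq. \eqref{equilibriumcb} makes $\mu_B$ uniform, so the diffusive flux of Eq. \eqref{difffluxB} vanishes in a quiescent fluid, as required at equilibrium.
\begin{verbatim}
# Symbolic check that grad(mu_B) = 0 for the Boltzmann profile.
import sympy as sp

r = sp.symbols('r', positive=True)
kT, cAeq = sp.symbols('kT cAeq', positive=True)
Phi = sp.Function('Phi')(r)              # arbitrary interaction potential
cB = cAeq * sp.exp(-Phi / kT)            # Boltzmann profile, Eq. (equilibriumcb)
muB = kT * sp.log(cB) + Phi              # chemical potential, Eq. (chempotb)
print(sp.simplify(sp.diff(muB, r)))      # prints 0, hence J_B = 0 when v = 0
\end{verbatim}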
At the surface of the active particle $r=R$, the fluxes of species A and B are related to the local reaction rate $w$, given by Eq. \eqref{reacrate}, and read \begin{equation} \boldsymbol{J_A} \cdot \boldsymbol{n} = -w\, \, , \end{equation} \begin{equation} \boldsymbol{J_B} \cdot \boldsymbol{n} = w\, \, . \end{equation} The net reaction rate is obtained by integrating $w$ over the surface of the active particle, \begin{equation} W = \int_S w \, dS \, \, . \end{equation} Far from the particle, $r\rightarrow \infty $, the chemical potential of species A is fixed, while the chemical potential of B relaxes to its equilibrium value: \begin{equation} \mu_A \rightarrow \mu_{A, \infty}\, \, , \end{equation} \begin{equation} \mu_B \rightarrow \mu_{B,eq}\, \, . \end{equation} The difference between the chemical potentials of the two species, normalized by $k_B T$, defines the chemical affinity, which is the driving force of the chemical reaction at the surface of the active particle. We define the chemical affinity, $A_{\text{rxn}}$, using the chemical potentials of the species far from the particle, \begin{equation}\label{chemical_affinity} A_{\text{rxn}}= \frac{\left(\mu_{A, \infty}-\mu_{B, eq} \right)}{k_B T} \, \, , \end{equation} which is typically how the reaction is driven in experimental systems. The thermodynamic forces that drive the active particle out of equilibrium are given by the mechanical affinity $F/k_BT$, acting directly on the particle, and by the chemical affinity, $A_{\text{rxn}}$, that drives the chemical reaction. \section{Dimensionless equations}\label{sec4} We make the governing equations dimensionless by using the following characteristic quantities: \begin{equation} r= R \, \tilde{r} ; \, \, \, \, \boldsymbol{v} = \frac{k_B T \, R \, c_{A,eq}}{\eta } \, \tilde{\boldsymbol{v}}; \, \, \, \, P = k_B T \, c_{A,eq} \, \tilde{P}; \, \, \, \, c_{A}= c_{A,eq} \, \tilde{c}_{A}; \, \, \, \, c_{B}= c_{A,eq} \, \tilde{c}_{B} \, \, . \end{equation} In the rest of the paper, we will consider dimensionless quantities only, and we omit the tilde superscript for clarity. The dimensionless momentum balance reads: \begin{equation}\label{dimensionlessmombal} \boldsymbol{\nabla}^2 \boldsymbol{v} - \boldsymbol{\nabla} P = \epsilon \, c_B \,\boldsymbol{\nabla} f(r,\theta) \, \, , \end{equation} with $\epsilon = \Phi_0/k_B T$ the dimensionless characteristic interaction energy between species B and the surface of the particle. The mass balance reads: \begin{equation} \boldsymbol{\nabla} \cdot \boldsymbol{v} = 0 \, \, , \end{equation} with boundary conditions at infinity $r\rightarrow \infty$ given by: \begin{equation} \boldsymbol{v} = -V \, \boldsymbol{e}_z \, \, , \end{equation} and at the surface of the particle $r=1$: \begin{equation} \boldsymbol{v} = \boldsymbol{0} \, \, . \end{equation} The dimensionless balance of number density of species A and B is given by: \begin{equation} \boldsymbol{\nabla}^2 \, c_A -\frac{Pe}{\beta} \, \boldsymbol{v} \cdot \boldsymbol{\nabla}c_A = 0 \, \, , \end{equation} \begin{equation} \boldsymbol{\nabla}^2 \, c_B + \epsilon \, \boldsymbol{\nabla} \cdot \left[ c_B \, \boldsymbol{\nabla} f(r,\theta) \right] - Pe \, \boldsymbol{v} \cdot \boldsymbol{\nabla} c_B= 0 \, \, , \end{equation} where the P\'eclet number is defined as $Pe = k_BT c_{A,eq} R^2 / \eta D_B $. In defining the P\'eclet number, we took as the characteristic velocity the one generated by the solute-surface interactions rather than the velocity of the particle.
This choice is dictated by the fact that the velocity of the active particle is unknown and is obtained from the solution of the equations. Alternatively, another velocity scale could be constructed using the external force $F$, but this choice results in the mechanical affinity being included in the P\'eclet number. As a consequence, one could not decouple the effects of an external force from the effects of advection. In the limit $r \rightarrow \infty$, the chemical potential of species A is kept at a constant value, which fixes its number density \begin{equation} c_{A} \rightarrow c_{A,\infty} \, \, . \end{equation} It is a nonzero chemical affinity that drives the reaction out of equilibrium. The number density of B decays to its equilibrium value far from the particle, $r \rightarrow \infty$: \begin{equation} c_{B} \rightarrow 1 \, \, . \end{equation} The chemical affinity that drives the chemical reaction is given by \begin{equation} A_\text{rxn}= \ln{c_{A,\infty}} \, \, . \end{equation} At the surface of the particle, $r=1$, the species react according to the reversible reaction: \begin{equation} -\boldsymbol{\nabla} c_{A} \cdot \boldsymbol{n} = -Da \, g(\theta) \, \left( 1-\frac{c_B}{ c_A} \, \exp{\left(\epsilon \, f(1,\theta)\right)}\right)\, \, , \end{equation} \begin{equation} -\left[\boldsymbol{\nabla} c_{B} + \epsilon \, c_B \, \boldsymbol{\nabla} f(1,\theta)\right]\cdot \boldsymbol{n} = Da \, \beta \, g(\theta) \, \left( 1-\frac{c_B}{ c_A} \, \exp{\left(\epsilon \, f(1,\theta)\right)}\right)\, \, , \end{equation} where $Da= L_r R /(D_A c_{A,eq})$ is the Damk{\"o}hler number, defined with the diffusion coefficient of species A, and $\beta = D_A/D_B$ is the ratio of the diffusion coefficients of the two species. The net reaction rate can be evaluated by integrating the net consumption of A over the particle surface $S$: \begin{equation} W = Da \, \int_S \, g(\theta) \, \left( 1-\frac{c_B}{ c_A} \, \exp{\left(\epsilon \, f(1,\theta)\right)}\right) \, dS \, \, . \end{equation} The particle is dragged by an external force along the z-axis. The dimensionless force balance on the particle gives: \begin{equation}\label{force_bal_dim} \int_S \boldsymbol{T} \cdot \boldsymbol{n} \, dS = -\epsilon \, \int_\Omega \, c_B \, \boldsymbol{\nabla} f(r,\theta) \, d\Omega - \frac{\beta}{Pe} \, F^* \, \boldsymbol{e}_z\, \, , \end{equation} with the dimensionless force $F^*= F/(\eta D_A)$. In the present form, Eqs. \eqref{dimensionlessmombal}-\eqref{force_bal_dim} are nonlinear and they must be linearized to connect the velocity of the particle $V$ and the reaction rate $W$ to the thermodynamic forces through a linear relation. In the following section, we linearize Eqs. \eqref{dimensionlessmombal}-\eqref{force_bal_dim} around a generic steady state. \section{Linearization around a steady state}\label{sec5} To derive the Onsager reciprocal relations directly from the transport equations, Eqs. \eqref{mom_bal}-\eqref{chemical_affinity}, we consider small deviations of the thermodynamic forces around their steady-state values, $A_{\text{rxn}}=A_{\text{rxn},0}+\delta A_{\text{rxn}}$ and $F^* = F^*_0+ \delta F^*$, where $\delta A_{\text{rxn}}$ and $\delta F^*$ are small. We thus linearize the governing equations around the steady state. The number densities of A and B, the velocity and the pressure fields are then expanded as $c_A=c_{A,0}+\delta c_A$, $c_B=c_{B,0}+\delta c_B$, $\boldsymbol{v}=\boldsymbol{v}_0+\delta \boldsymbol{v}$ and $P=P_0+\delta P$.
Similarly, the velocity of the particle is given by $V=V_0+\delta V$ and the reaction rate by $W=W_0+\delta W$. The base-state unknowns $c_{A,0}$, $c_{B,0}$, $\boldsymbol{v}_0$, $P_0$, $V_0$ and $W_0$ satisfy the same equations, Eqs. \eqref{dimensionlessmombal}-\eqref{force_bal_dim}. The equations for the deviations are obtained by substituting the expansions in the dimensionless equations, Eqs. \eqref{dimensionlessmombal}-\eqref{force_bal_dim}, and neglecting the nonlinear terms. The linearized momentum and mass balance read: \begin{equation}\label{lindimensionlessmombal} \boldsymbol{\nabla}^2 \delta \boldsymbol{v} - \boldsymbol{\nabla} \delta P = \epsilon \, \delta c_B \,\boldsymbol{\nabla} f(r,\theta) \, \, , \end{equation} \begin{equation} \boldsymbol{\nabla} \cdot \delta \boldsymbol{v} = 0 \, \, , \end{equation} with boundary conditions at infinity $r \rightarrow \infty$ given by: \begin{equation} \delta \boldsymbol {v} = -\delta V \, \boldsymbol{e}_z \, \, , \end{equation} and at the surface of the particle $r=1$: \begin{equation} \delta \boldsymbol{v} = \boldsymbol{0} \, \, . \end{equation} The force balance reads \begin{equation}\label{force_bal1} \int_S \left[ \left(\boldsymbol{\nabla}\delta\boldsymbol{v}+\boldsymbol{\nabla} \delta \boldsymbol{v}^T\right) - \delta P \, \boldsymbol{I} \right] \cdot \boldsymbol{n} \, dS = -\epsilon \, \int_\Omega \, \delta c_B \, \boldsymbol{\nabla} f(r,\theta) \, d\Omega - \frac{\beta}{Pe} \, \delta F^* \, \boldsymbol{e}_z\, \, . \end{equation} The linearized transport of the species A and B reads: \begin{equation} \boldsymbol{\nabla}^2 \delta c_A =0 \, \, , \end{equation} \begin{equation} \boldsymbol{\nabla}^2 \delta c_B + \epsilon \, \boldsymbol{\nabla} \cdot \left[ \delta c_B \, \boldsymbol{\nabla} f(r,\theta) \right]- Pe \, \delta \boldsymbol{v} \cdot \boldsymbol{\nabla} c_{B,0} - Pe \,\boldsymbol{v}_0 \cdot \boldsymbol{\nabla} \delta c_{B}=0 \, \, . \end{equation} The reaction rate is also linearized, leading to the linearized boundary conditions at $r=1$: \begin{equation} -\boldsymbol{\nabla} \delta c_{A} \cdot \boldsymbol{n} = Da \, g(\theta) \, \left( \frac{c_{A,0} \delta c_B-c_{B,0} \delta c_A}{ c_{A,0}^2} \right) \, \exp{\left(\epsilon \, f(1,\theta)\right)} \, \, , \end{equation} \begin{equation} -\left[\boldsymbol{\nabla} \delta c_{B} + \epsilon \, \delta c_B \, \boldsymbol{\nabla} f(1,\theta)\right]\cdot \boldsymbol{n} = - Da \, \beta \, g(\theta) \, \left( \frac{c_{A,0} \delta c_B-c_{B,0} \delta c_A}{ c_{A,0}^2} \right) \, \exp{\left(\epsilon \, f(1,\theta)\right)}\, \, . \end{equation} The deviations of the concentrations from the steady state far from the particle yield the boundary conditions: \begin{equation} \delta c_A \rightarrow \delta c_{A, \infty} \, \, \text{as} \, \, r \rightarrow \infty, \end{equation} \begin{equation}\label{lindimensionlesscbboundcond} \delta c_B \rightarrow 0 \, \, \text{as} \, \, r \rightarrow \infty . \end{equation} The deviation of the chemical affinity, $\delta A_{\text{rxn}}$, is related to the deviation of the far-field concentration, $\delta c_{A, \infty}$, through $ \delta A_{\text{rxn}}= \delta c_{A, \infty}/ \exp{(A_{\text{rxn},0})}$, where $A_{\text{rxn},0}$ is the chemical affinity of the steady state around which the linearization is performed.
The reaction rate $\delta W$ can be evaluated by integrating the net consumption of A over the particle surface $S$: \begin{equation} \delta W = Da \, \int_S \, g(\theta) \, \left( \frac{c_{A,0} \delta c_B-c_{B,0} \delta c_A}{ c_{A,0}^2} \right) \, \exp{\left(\epsilon \, f(1,\theta)\right)} \, dS \, \, . \end{equation} The velocity of the particle can be computed using the Lorentz reciprocal theorem \cite{masoud2019reciprocal}: \begin{equation}\label{rectheorem} \delta V = \frac{\beta}{6 \pi \, Pe} \, \delta F^*-\frac{\epsilon}{6 \pi} \, \int_\Omega \delta c_B \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega \, \, , \end{equation} where $\hat{\boldsymbol{v}}_\text{Stokes}$ is the Stokes flow past a sphere given by \begin{equation} \hat{\boldsymbol{v}}_\text{Stokes} = \left(\frac{3}{2r}-\frac{1}{2r^3}-1 \right) \cos{(\theta)} \boldsymbol{e}_r-\left(\frac{3}{4r}+\frac{1}{4r^3}-1 \right) \sin{(\theta)} \boldsymbol{e}_\theta \,\, , \end{equation} and $\boldsymbol{e}_r$ and $\boldsymbol{e}_\theta$ are the unit vectors along the radial and polar direction. Upon linearizing the governing equations, the deviations of the particle velocity, $\delta V$, and of the reaction rate, $\delta W$, are linearly related to the deviations of the thermodynamic forces: \begin{equation}\label{dimensionlessfullonsager} \left(\begin{array}{c} \delta V \\ \delta W \end{array}\right) = \left(\begin{array}{cc} D_{VF} & D_{VA} \\ D_{WF} & D_{WA} \end{array}\right) \cdot \left(\begin{array}{c} \frac{\beta}{Pe} \delta F^* \\ \delta A_{\text{rxn}} \end{array}\right) \,\, . \end{equation} To investigate the validity of the Onsager reciprocal relations, we are interested in calculating the cross-coupling coefficients $D_{VA}$ and $D_{WF}$ for a given steady state. To compute $D_{WF}$, we first apply an external force $\delta F^*$, solve the system of equations given by Eqs. \eqref{lindimensionlessmombal}-\eqref{lindimensionlesscbboundcond}, and evaluate the reaction rate $\delta W$. The coefficient relating the applied force to the reaction rate is the Onsager coefficient $D_{WF}$. Likewise, to compute $D_{VA}$, we apply a chemical affinity $\delta A_{\text{rxn}}$, solve the system of equations given by Eqs. \eqref{lindimensionlessmombal}-\eqref{lindimensionlesscbboundcond}, and calculate the particle velocity $\delta V$. \section{Reciprocal relations around equilibrium}\label{sec6} In the case of a base state given by the thermodynamic equilibrium, the Onsager matrix given by Eq. \eqref{dimensionlessfullonsager} must be symmetric and positive semi-definite. This property follows from the microscopic reversibility of the trajectories under time reversal. In what follows, we answer the question: in the case of a base state given by the thermodynamic equilibrium, do the transport equations, Eqs. \eqref{lindimensionlessmombal}-\eqref{lindimensionlesscbboundcond}, result in a symmetric positive semi-definite Onsager matrix? We address this question in the following subsections using a perturbation expansion and numerical simulations. At thermodynamic equilibrium the base state is given by $c_{A,0}=1$, $c_{B,0}=\exp{(-\epsilon \, f(r,\theta) )}$, $\boldsymbol{v}_0=\boldsymbol{0}$, $P_0=c_{A,0}+c_{B,0}$, $V_0=0$ and $W_0=0$.
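In practice, each cross-coefficient follows from one solve of the linearized problem with only one affinity switched on. The schematic below summarizes this bookkeeping; \texttt{solve\_linearized} is a hypothetical placeholder for any discretization of Eqs. \eqref{lindimensionlessmombal}-\eqref{lindimensionlesscbboundcond} (for instance, the finite-element solver used below), not an interface taken from this work.
\begin{verbatim}
def cross_coefficients(solve_linearized, beta, Pe):
    """Extract D_WF and D_VA from two solves of the linearized equations.

    solve_linearized(dF, dA) must return (dV, dW), the deviations of the
    particle velocity and of the reaction rate."""
    # unit force, zero chemical affinity; mechanical affinity is (beta/Pe)*dF
    _, dW = solve_linearized(dF=1.0, dA=0.0)
    D_WF = dW / (beta / Pe)
    # zero force, unit chemical affinity
    dV, _ = solve_linearized(dF=0.0, dA=1.0)
    D_VA = dV
    return D_WF, D_VA  # near equilibrium, symmetry requires D_WF == D_VA
\end{verbatim}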
\subsection{Perturbation expansion for weak interaction potentials and small Damk{\"o}hler numbers} Even if the system of equations given by Eqs. \eqref{lindimensionlessmombal}-\eqref{lindimensionlesscbboundcond} is linear, its analytical solution is complicated by the fact that the chemical activity and the potential energy vary with the polar angle $\theta$. To circumvent this difficulty, we perform a perturbation expansion of the linearized equations, which is valid for small $\epsilon$ and small $Da$. \begin{alignat}{1} & \delta\boldsymbol{v} = \delta \boldsymbol{v}^{0,0} + \epsilon \, \delta \boldsymbol{v}^{1,0}+ Da \, \delta \boldsymbol{v}^{0,1}+ \epsilon^2 \, \delta \boldsymbol{v}^{2,0} + \epsilon \, Da \, \delta \boldsymbol{v}^{1,1}+ Da^2 \, \delta \boldsymbol{v}^{0,2}+\mathcal{O}(\epsilon^3, Da^2 \epsilon, \epsilon^2 Da, Da^3) \, \, ,\\ & \delta P = \delta P^{0,0} + \epsilon \, \delta P^{1,0} + Da \, \delta P^{0,1}+ \epsilon^2 \, \delta P^{2,0} + \epsilon \, Da \, \delta P^{1,1}+ Da^2 \, \delta P^{0,2}+\mathcal{O}(\epsilon^3, Da^2 \epsilon, \epsilon^2 Da, Da^3) \, \, ,\\ & \delta c_A = \epsilon \, \delta c^{1,0}_A + Da \,\delta c^{0,1}_A+ \epsilon^2 \, \delta c^{2,0}_A + \epsilon \, Da \, \delta c^{1,1}_A+ Da^2 \, \delta c^{0,2}_A+\mathcal{O}(\epsilon^3, Da^2 \epsilon, \epsilon^2 Da, Da^3) \, \, ,\\ & \delta c_B = \epsilon \, \delta c^{1,0}_B + Da \,\delta c^{0,1}_B+ \epsilon^2 \, \delta c^{2,0}_B + \epsilon \, Da \, \delta c^{1,1}_B+ Da^2 \, \delta c^{0,2}_B +\mathcal{O}(\epsilon^3, Da^2 \epsilon, \epsilon^2 Da, Da^3) \, \, ,\\ & \delta W = \delta W^{0,0} + \epsilon \, \delta W^{1,0} + Da \, \delta W^{0,1}+ \epsilon^2 \, \delta W^{2,0} + \epsilon \, Da \, \delta W^{1,1}+ Da^2 \, \delta W^{0,2} +\mathcal{O}(\epsilon^3, Da^2 \epsilon, \epsilon^2 Da, Da^3) \, \, ,\\ & \delta V = \delta V^{0,0} + \epsilon \, \delta V^{1,0} + Da \, \delta V^{0,1}+ \epsilon^2 \, \delta V^{2,0} + \epsilon \, Da \, \delta V^{1,1}+ Da^2 \, \delta V^{0,2} +\mathcal{O}(\epsilon^3, Da^2 \epsilon, \epsilon^2 Da, Da^3) \, \, . \end{alignat} Some of these terms can be shown to be zero based on simple considerations. The terms $Da \, \delta \boldsymbol{v}^{0,1}$, $Da^2 \, \delta \boldsymbol{v}^{0,2}$, $Da \, \delta V^{0,1}$ and $Da^2 \, \delta V^{0,2}$ are zero, because in the absence of a potential energy, $\epsilon=0$, the momentum balance is decoupled from the transport of mass and a reaction cannot generate fluid motion. Similarly, since the reaction rate is proportional to $Da$, there is no reaction rate if $Da=0$ and the terms $\delta W^{0,0}$, $\epsilon \, \delta W^{1,0}$ and $\epsilon^2 \, \delta W^{2,0}$ are zero. In addition, we identify the field $\delta \boldsymbol{v}^{0,0}$ as the dimensionless Stokes flow past a sphere, $\delta \boldsymbol{v}^{0,0}=\frac{\beta}{6 \pi \, Pe} \, \delta F^* \, \hat{\boldsymbol{v}}_\text{Stokes}$, and the velocity $\delta V^{0,0}=\frac{\beta}{6 \pi \, Pe} \, \delta F^*$. With these simplifications in mind, the velocity of the active particle and the net reaction rate can be obtained from an expansion of the Onsager matrix: \begin{equation}\label{expanded_onsager} \left(\begin{array}{c} \delta V \\ \delta W \end{array}\right) = \left(\begin{array}{cc} \frac{1}{6 \pi}+\epsilon \left( \, D^{1,0}_{VF}+\epsilon \,D^{2,0}_{VF}+ \, Da\, D^{1,1}_{VF} \right) & \epsilon \, Da \, D^{1,1}_{VA} \\ \epsilon \, Da\,D^{1,1}_{WF} & Da \left( \, D^{0,1}_{WA}+\epsilon \, D^{1,1}_{WA}+Da \, D^{0,2}_{WA} \right) \end{array}\right) \cdot \left(\begin{array}{c} \frac{\beta}{Pe} \delta F^* \\ \delta A_{\text{rxn}} \end{array}\right) \,\, .
\end{equation} To leading order, the eigenvalues of the Onsager matrix, given by Eq. \eqref{expanded_onsager}, are $1/6\pi$ and $Da \, D^{0,1}_{WA}$. Therefore, to demonstrate that the matrix is positive semi-definite, we need to show that $D^{0,1}_{WA} \geq 0$. To show that it is also symmetric, we need to prove that $D^{1,1}_{VA}=D^{1,1}_{WF}$. To do so, we plug the expansion into the governing equations above and solve order by order. The objective is to find the coefficient $D^{1,1}_{VA}$ that relates $\delta V$ and the chemical affinity, $\delta A_{rxn}$, and to show that it is equal to the coefficient $D^{1,1}_{WF}$. To this end, we divide the problem into two steps. We first consider the case of a zero external force $\delta F^*=0$ and a nonzero chemical affinity $\delta A_{rxn}$, and calculate $\delta V^{1,1}$. The entry of the Onsager matrix $D^{1,1}_{VA}$ is simply given by the coefficient that relates $\delta A_{rxn}$ and $\delta V^{1,1}$. We then impose a nonzero $\delta F^*$ while keeping the chemical affinity at zero, $\delta A_{rxn}=0$, calculate $\delta W^{1,1}$, and obtain $D^{1,1}_{WF}$ as the coefficient that relates $\delta F^*$ and $\delta W^{1,1}$. The first-order reaction rate $\delta W^{1,1}$ and the velocity $\delta V^{1,1}$ are obtained using integral relations \cite{masoud2019reciprocal} that do not require the solution of all the fields. By substituting the expansion in the governing equations, given by Eqs. \eqref{dimensionlessmombal}-\eqref{rectheorem}, we find that the net reaction rate $\delta W^{1,1}$ is given by \begin{equation} \delta W^{1,1} = \int_S \, g(\theta) \, \delta c^{1,0}_B \, dS \, \, , \end{equation} and that the velocity $\delta V^{1,1}$ of the particle is given by \begin{equation}\label{rectheorem1} \delta V^{1,1} = -\frac{1}{6 \pi} \, \int_\Omega \, \delta c^{0,1}_B \, \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega \, \, . \end{equation} It follows that, to compute $\delta W^{1,1}$ and $\delta V^{1,1}$, we need to calculate the first-order fields $\delta c^{1,0}_B$ and $\delta c^{0,1}_B$ only. \subsubsection{Fixing the chemical affinity and calculating the particle velocity and reaction rate} In order to find an expression for $\delta c^{0,1}_B$, we substitute the expansion in powers of $\epsilon$ and $Da$ and we keep all the terms linear in $Da$: \begin{equation}\label{fixingchemaff1} \boldsymbol{\nabla}^2 \, \delta c^{0,1}_B= 0 \, \, , \end{equation} with boundary condition at $r=1$ given by \begin{equation}\label{fixingchemaff2} -\boldsymbol{\nabla} \, \delta c^{0,1}_B \cdot \boldsymbol{n}= \beta \, g(\theta) \, \delta A_{\text{rxn}} \, \, , \end{equation} and with $\delta c^{0,1}_B=0$ as $r \rightarrow \infty$. The reaction rate $\delta W^{0,1}$ is simply given by the integral of the boundary flux of Eq. \eqref{fixingchemaff2} over the surface. This allows us to identify the coefficient $D^{0,1}_{WA}=\beta \, \int_S \, g(\theta) \, dS$. Since $\beta$ is always positive and $g(\theta)$ is a positive function, it follows that $D^{0,1}_{WA}\geq 0$, which proves that the Onsager matrix is positive semi-definite. The solution of Eqs. \eqref{fixingchemaff1}-\eqref{fixingchemaff2} is obtained by expanding the distribution of the kinetic constant, $g(\theta)$, in Legendre polynomials as $g(\theta) = \sum_{l=0}^{\infty} g_l P_l(\cos{(\theta)})$, with $P_l$ the Legendre polynomial of order $l$.
The solution then reads \begin{equation} \delta c^{0,1}_B = \beta \, \delta A_{\text{rxn}} \, \sum_{l=0}^{\infty} \, \frac{g_l}{l+1} r^{-l-1} \, P_l(\cos{(\theta)}) \, \, . \end{equation} Substituting this expression into Eq. \eqref{rectheorem1}, we obtain \begin{equation}\label{rectheorem2} \delta V^{1,1} = -\frac{\beta \, \delta A_{\text{rxn}}}{6 \pi} \, \sum_{l=0}^{\infty} \, \frac{g_l}{l+1} \, \int_\Omega r^{-l-1} \, P_l(\cos{(\theta)}) \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega \, \, . \end{equation} Eq. \eqref{rectheorem2} allows us to identify the coefficient $D^{1,1}_{VA}$ as the proportionality constant between $\delta A_{\text{rxn}}$ and $\delta V^{1,1}$: \begin{equation}\label{xi11} D^{1,1}_{VA} = -\frac{\beta }{6 \pi} \, \sum_{l=0}^{\infty} \frac{g_l}{l+1} \int_\Omega \, r^{-l-1} \, P_l(\cos{(\theta)}) \, \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega \, \, . \end{equation} \subsubsection{Fixing the external force and calculating the reaction rate} In order to find an expression for $\delta c^{1,0}_B$, we substitute the expansion in powers of $\epsilon$ and $Da$ and we keep all the terms linear in $\epsilon$: \begin{equation} \boldsymbol{\nabla}^2 \, \delta c^{1,0}_B= - \frac{\beta}{6 \pi } \, \delta F^* \, \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, \, , \end{equation} with boundary condition at $r=1$ given by \begin{equation} -\boldsymbol{\nabla} \, \delta c^{1,0}_B \cdot \boldsymbol{n}= 0 \, \, , \end{equation} and at infinity $\delta c^{1,0}_B=0$. Green's second identity states that the following integral relation holds between $\delta c^{1,0}_B$ and an auxiliary field $\Psi$, which satisfies $\boldsymbol{\nabla}^2 \,\Psi =0$ \begin{equation}\label{greentheorem} \frac{\beta}{6 \pi } \, \delta F^* \, \int_\Omega \Psi \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega = \int_S \, \delta c^{1,0}_B \boldsymbol{\nabla} \Psi \cdot \boldsymbol{n} \, dS \, \, . \end{equation} Since the function $\Psi$ satisfies the Laplace equation and decays at infinity, it can be expanded as $\Psi = \sum_{l=0}^{\infty} r^{-l-1} P_l(\cos{(\theta)})$, which we substitute in the expression above to obtain \begin{equation}\label{greentheorem1} -\frac{\beta}{6 \pi } \, \delta F^* \, \int_\Omega \sum_{l=0}^{\infty} \frac{r^{-l-1}}{l+1} P_l(\cos{(\theta)}) \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega = \sum_{l=0}^{\infty} \int_S \delta c^{1,0}_B P_l(\cos{(\theta)}) \, dS \, \, . \end{equation} We now expand the function $\delta c^{1,0}_B$, evaluated at the surface of the colloid, in a series of Legendre polynomials, $\delta c^{1,0}_B = \sum_{l=0}^{\infty} \delta c^{1,0, l}_B \, P_l (\cos{(\theta)})$. We plug this expansion into the right-hand side of Eq. \eqref{greentheorem1}, apply the orthogonality property of the Legendre polynomials, and equate term by term to get \begin{equation}\label{greentheorem2} -\frac{\beta}{6 \pi } \, \delta F^* \, \int_\Omega \frac{r^{-l-1}}{l+1} P_l(\cos{(\theta)}) \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega = \frac{2}{2l+1}\, \delta c^{1,0, l}_B\, \, . \end{equation} The equation above yields all the Legendre modes of the distribution $\delta c^{1,0, l}_B$ at the surface of the colloid.
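The Legendre projections used throughout this subsection are straightforward to evaluate numerically. As an illustration (a sketch, using the activity profile $g(\theta)=1+\cos\theta$ adopted in the simulations below), Gauss--Legendre quadrature recovers $g_0=g_1=1$ and vanishing higher modes:
\begin{verbatim}
# Legendre modes g_l = (2l+1)/2 * int_{-1}^{1} g(x) P_l(x) dx, x = cos(theta)
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

x, w = leggauss(32)            # quadrature nodes and weights on [-1, 1]
g = 1.0 + x                    # g(theta) = 1 + cos(theta)
for l in range(4):
    Pl = Legendre.basis(l)(x)  # P_l evaluated at the nodes
    gl = 0.5 * (2 * l + 1) * np.sum(w * g * Pl)
    print(l, gl)               # expect 1, 1, ~0, ~0
\end{verbatim}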
We can use Eq. \eqref{greentheorem2} to evaluate the net reaction rate: \begin{equation} \delta W^{1,1} = \int_S g(\theta) \, \delta c^{1,0}_B \, dS \, \, , \end{equation} where we now expand both $g(\theta)$ and $ \delta c^{1,0}_B$ in series of Legendre polynomials. By further using the orthogonality property of the Legendre polynomials, we obtain: \begin{equation} \delta W^{1,1} = \sum_{l=0}^{\infty} \frac{2}{2l+1} g_l \, \delta c^{1,0, l}_B \, \, . \end{equation} We now substitute $\delta c^{1,0, l}_B$ obtained from Eq. \eqref{greentheorem2} to obtain: \begin{equation}\label{finaleqreac} \delta W^{1,1} = -\frac{\beta}{6 \pi } \, \delta F^* \, \sum_{l=0}^{\infty} \frac{g_l }{l+1} \int_\Omega r^{-l-1} \, P_l(\cos{(\theta)}) \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega \, \, . \end{equation} Eq. \eqref{finaleqreac} relates the reaction rate to the mechanical affinity. The coefficient of proportionality between the reaction rate and the mechanical affinity yields the Onsager coefficient $D^{1,1}_{WF}$, which is identical to that obtained in Eq. \eqref{xi11}: \begin{equation}\label{finaleqreac1} D^{1,1}_{WF} = -\frac{\beta}{6 \pi } \, \sum_{l=0}^{\infty} \frac{g_l }{l+1} \, \int_\Omega r^{-l-1}P_l(\cos{(\theta)}) \boldsymbol{\nabla} f(r,\theta) \cdot \hat{\boldsymbol{v}}_\text{Stokes} \, d\Omega = D^{1,1}_{VA}\, \, . \end{equation} This result also proves that, to leading order, the Onsager matrix given by Eq. \eqref{expanded_onsager} is symmetric for any choice of the distribution of the chemical activity, $g(\theta) \geq 0$, and for any choice of the distribution of the interaction energy $f(r,\theta)$. Interestingly, to leading order, neither $D_{WF}$ nor $D_{VA}$ depends on the P\'eclet number, which suggests a negligible impact of advection on the cross-coupling coefficients. As a consequence, one would be tempted to neglect this mechanism when modeling chemically active colloids. Yet, neglecting \textit{a priori} the transport due to advection in the diffusive fluxes, given by Eqs. \eqref{difffluxA}-\eqref{difffluxB}, implies $D^{1,1}_{WF}=0$, thus breaking the symmetry of the Onsager matrix. \subsubsection{Comparison of the self-diffusiophoretic velocity with previous results} We can compare the self-diffusiophoretic velocity of the active colloid predicted by Eq. \eqref{rectheorem2} to that obtained by Sabass and Seifert \cite{sabass2012dynamics}, in the limit of a short-range interaction potential, zero P\'eclet number and equal diffusivity of the two species A and B. The authors calculated the velocity of an active particle using a matched asymptotic expansion, which is valid for an interaction potential that decays quickly for $r>1$. In the case of an interaction potential that is only a function of the radius, $\Phi(r)= \epsilon \, f(r)$, they find that the velocity depends on the dipolar mode of the reaction rate. Rewriting their result in dimensionless form, in the limit of slow reaction rates, $Da \ll 1$, and weak interaction potentials, $\epsilon \ll 1$, gives: \begin{equation}\label{previousstudy} V_{dph} = -\frac{Da \, \epsilon \, g_1 \, A_\text{rxn} \, \beta}{3} \int_1^{\infty} (r-1) f(r) \, dr \, \, , \end{equation} where $Da \, g_1 \, A_\text{rxn}$ represents the dipolar component of the reaction rate occurring at the surface of the active colloid, and $f(r)$ is a quickly decaying function. To compare with Eq. \eqref{previousstudy}, we rewrite Eq.
\eqref{rectheorem2} for the case of the interaction potential being a function of the radial distance only, $f(r,\theta)=f(r)$: \begin{equation}\label{rectheorem3} \delta V^{1,1} = -\frac{\beta \, \delta A_{\text{rxn}}}{6 \pi} \, \sum_{l=0}^{\infty} \frac{g_l}{l+1} \int_\Omega r^{-l-1} \, P_l(\cos{(\theta)}) \frac{\partial}{\partial r} f(r) \left(\frac{3}{2r}-\frac{1}{2r^3}-1 \right) \cos{(\theta)} \, d\Omega \, \, . \end{equation} We rewrite the integral above in spherical coordinates and carry out the integration along the azimuthal direction, which is trivial because the integrand does not depend on the azimuthal angle: \begin{equation}\label{rectheorem4} \delta V^{1,1} = -\frac{\beta \, \delta A_{\text{rxn}}}{3} \, \sum_{l=0}^{\infty} \frac{g_l}{l+1} \int_1^{\infty} \int_0^{\pi} r^{-l+1} \, \sin{\theta} \, P_l(\cos{(\theta)}) \frac{\partial}{\partial r} f(r) \left(\frac{3}{2r}-\frac{1}{2r^3}-1 \right) \cos{(\theta)} \, dr \, d\theta \, \, . \end{equation} We remove the radial derivative on the potential energy using integration by parts and we use the fact that $f(r) \rightarrow 0 $ as $r\rightarrow \infty$ and that $\left(\frac{3}{2r}-\frac{1}{2r^3}-1 \right)=0$ at $r=1$ to obtain \begin{equation} \delta V^{1,1} = \frac{\beta \, \delta A_{\text{rxn}}}{3} \sum_{l=0}^{\infty} \frac{g_l}{l+1} \int_1^{\infty} \int_0^{\pi} \, f(r) P_l(\cos{(\theta)}) \sin{\theta} \cos{(\theta)} \frac{\partial}{\partial r} \left[r^{-l+1} \left(\frac{3}{2r}-\frac{1}{2r^3}-1 \right) \right] \, dr \, d\theta \, \, . \end{equation} We carry out the integral along the polar angle first. Since $\cos{(\theta)}=P_1(\cos{(\theta)})$, we can apply the orthogonality property of the Legendre polynomials, $\int_0^{\pi} P_l(\cos{(\theta)})P_{l'}(\cos{(\theta)}) \sin{(\theta)} d\theta = \delta_{ll'} 2/(2l+1)$, which identifies the mode $l=1$ as the only contribution in the summation: \begin{equation} \delta V^{1,1} = \frac{\beta \, g_1 \, \delta A_{\text{rxn}}}{9} \, \int_1^{\infty} \, f(r) \left(\frac{3}{2r^4}-\frac{3}{2r^2} \right) \, dr \, \, . \end{equation} We are left with an integration of the product of two functions along the radial coordinate. Since $f(r)$ decays quickly to zero, we can Taylor expand the term in the bracket around $r=1$ and retain only the leading (linear) term \cite{perturbationmethods}. By doing this, we obtain the leading-order propulsion velocity \begin{equation}\label{sameresult} \delta V = -\frac{Da \, \beta \, \epsilon \, g_1 \, \delta A_{\text{rxn}}}{3} \, \int_1^{\infty} \, f(r) \left(r-1 \right) \, dr \, \, , \end{equation} which for $\beta=1$ is exactly the same result as in Eq. \eqref{previousstudy}. Our results, which are derived from a model where the advective transport of species is considered, coincide with those where advection is neglected \cite{sabass2012dynamics}. This suggests that, in the limit of a rapidly decaying interaction potential or weak interaction energy, the advective transport of solute does not contribute to the propulsion velocity. \subsubsection{Onsager relations using numerical simulations around equilibrium} We extend the perturbative analysis presented in the previous sections to non-vanishing values of $Da$ and $\epsilon$ by solving Eqs. \eqref{lindimensionlessmombal}-\eqref{lindimensionlesscbboundcond} using the finite element method.
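As a quick consistency check on the short-range reduction just performed (and independent of the finite-element machinery described next), the exact $l=1$ radial integral can be compared against its $(r-1)$ approximation by quadrature. The sketch below is illustrative; the exponential profile anticipates the one used in the simulations, and the ratio of the two columns approaches one as $\lambda$ grows.
\begin{verbatim}
# Exact l = 1 radial integral vs. its short-range (r - 1) approximation
# for f(r) = exp(-lam*(r-1)); -3/lam^2 is -3*int f(r)(r-1)dr analytically.
import numpy as np
from scipy.integrate import quad

for lam in (2.0, 10.0, 50.0):
    f = lambda r, lam=lam: np.exp(-lam * (r - 1.0))
    exact, _ = quad(lambda r: f(r) * (1.5 / r**4 - 1.5 / r**2), 1.0, np.inf)
    approx = -3.0 / lam**2
    print(lam, exact, approx)   # exact/approx -> 1 for short-ranged potentials
\end{verbatim}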
We consider the case of an asymmetric chemical activity given by $g(\theta)= 1+\cos{(\theta)}$ and an interaction potential that decays exponentially over a dimensionless lengthscale $\lambda^{-1}$ and is fore-aft asymmetric, $f(r,\theta)= \exp{[-\lambda (r-1)]}\left(\cos(\theta)-1\right)$. We further assume equal species diffusivities, $\beta=1$. The computational domain is axisymmetric and it is divided into triangular elements, with a more refined mesh near the particle surface and coarser elements further away. To avoid finite-size effects, the computational domain is chosen to be 500 times the radius of the active particle. A quadratic interpolation is used for the velocity field and the solute concentration fields, and a linear interpolation is used for the pressure field. To compute the Onsager cross-coupling coefficients, we first fix $\delta F^* =1$ and $\delta A_\text{rxn}=0$ and, by evaluating the reaction rate, obtain the coefficient $D_{WF}$. We then fix $\delta F^* =0$ and $\delta A_\text{rxn}=1$ and, by evaluating the particle velocity, obtain the coefficient $D_{VA}$. In Figure \ref{fig2}, we report the coefficients $D_{WF}$ and $D_{VA}$ for different values of $Da$ and $\epsilon$. In panel (a), the Onsager coefficients, $D_{WF}$ and $D_{VA}$, are reported as a function of $\epsilon$ for $Da=0.1$, while in panel (b) the coefficients are plotted against $Da$ for $\epsilon=0.1$. For the particular choice of parameters, the numerical results confirm the symmetry of the Onsager matrix and show that the perturbative approximations, given by Eq. \eqref{xi11} and Eq. \eqref{finaleqreac1}, are accurate for the cases shown in Figure \ref{fig2}. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figure2.pdf} \caption{Onsager cross-coupling coefficients, $D_{WF}$ and $D_{VA}$, computed using numerical simulations of the governing equations linearized around equilibrium. In panel (a) we fix $Da=0.1$ and vary $\epsilon$, while in panel (b) we fix $\epsilon =0.1$ and vary $Da$. The remaining dimensionless numbers are $\beta=1$, $\lambda=1$ and $Pe=1$.} \label{fig2} \end{figure} In Figure \ref{fig4}, we show the Onsager cross-coupling coefficients for values of $\epsilon$ and $Da$ that are beyond the range of applicability of the perturbation expansion. The numerical results show that $D_{VA}=D_{WF}$ for all the parameters investigated, thus confirming that the Onsager reciprocal relations are fulfilled by the governing equations even beyond the range of applicability of the perturbation expansion. Interestingly, in the limit $Pe\rightarrow 0$, the cross-coupling coefficients attain a constant value that is independent of $Pe$ and depends only on $\epsilon$ and $Da$. The range of $Pe$ for which $D_{VA}$ and $D_{WF}$ are constant depends on the range of the interaction potential $\lambda^{-1}$. For short-ranged potentials, $\lambda^{-1} \ll 1$, the coupling coefficients are constant up to very large values of $Pe$. To investigate the effect of the range of the interaction potential, $\lambda^{-1}$, in Figure \ref{fig5} we plot the cross-coupling coefficients, normalized by their value at $Pe \rightarrow 0$, as a function of $Pe \, \lambda^{-3}$. The results show that $D_{VA}$ and $D_{WF}$ calculated for different interaction ranges, $\lambda^{-1}$, collapse onto a master curve that depends only on $\epsilon$ and $Da$.
For $Pe \, \lambda^{-3} \ll 1$ the cross-coupling coefficients are constant, and they start to decay to zero when $Pe \, \lambda^{-3} \approx 1$. This scaling is in agreement with the findings of Michelin and Lauga \cite{michelin2014phoretic}, who found that, in the short-range limit $\lambda \gg 1$, the advection of species becomes important within the thin boundary layer only if $Pe \approx \lambda^{3}$. Our numerical simulations suggest that, for $Pe \, \lambda^{-3} \ll 1$, advection can be safely neglected if one is interested in the propulsion of chemically-active colloids. However, one should retain advection in cases where external forces are present, since neglecting it leads to $D_{WF}=0$, thus breaking the Onsager reciprocal relations. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figure4.pdf} \caption{Onsager cross-coupling coefficients, $D_{WF}$ and $D_{VA}$, computed using numerical simulations of the governing equations linearized around equilibrium. The dimensionless numbers are $\beta=1$, $\lambda=1$ and $\epsilon=1$.} \label{fig4} \end{figure} Our results suggest that the momentum balance and the transport of solute are coupled even in the limit $Pe\rightarrow 0$. Such coupling is necessary for an external force to drive a chemical reaction and preserve the symmetry of the Onsager relations. Indeed, the force balance, given by Eq. \eqref{force_bal_dim}, reveals that the velocity field must scale as $v \propto F^*/Pe$ in the limit $Pe\rightarrow 0$. Substituting this scaling into the transport equation of species B, the factor of $Pe$ multiplying the advective term cancels against the $1/Pe$ in the scaling $v \propto F^*/Pe$. From a physical standpoint, in the limit $Pe\rightarrow 0$, the phoretic velocity scale used in the definition of the P\'eclet number becomes irrelevant and the only relevant velocity scale can be constructed using the external force $F$. One can define a new P\'eclet number using this velocity scale, which would contain the mechanical affinity in its definition. The immediate consequence is that one cannot simultaneously consider a finite mechanical affinity and vanishing advective effects. Our results are in agreement with the recent work by Gaspard and Kapral \cite{gaspard2018nonequilibrium}, who propose that, in the limit of short-range potentials, there is a coupling between the tangential component of the traction exerted by the fluid and the tangential transport of species. Such coupling is independent of the P\'eclet number and couples the transport of solute and the transport of momentum even if the advective transport outside the boundary layer is negligible. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figure5.pdf} \caption{Onsager cross-coupling coefficient $D_{VA}$, computed using numerical simulations of the governing equations linearized around equilibrium. In panel (a) we show the case of $Da=1$ and in panel (b) the case of $Da=2$. The dimensionless numbers are $\beta=1$ and $\epsilon=1$. The data computed for different interaction potential ranges, $\lambda^{-1}$, collapse onto a master curve up to $Pe \, \lambda^{-3} \approx 1$.} \label{fig5} \end{figure} \section{Onsager reciprocal relations around a nonequilibrium steady state}\label{sec7} In this section we investigate the validity of the Onsager reciprocal relations around a nonequilibrium steady state.
We use the finite element method to solve the base state, given by Eqs. \eqref{dimensionlessmombal}-\eqref{force_bal_dim}, and compute the steady-state quantities. We assume that the base state is given by an active particle driven by an external force $F^*_0$ or by a chemical affinity $A_{\text{rxn},0}$. As we did in the previous section, we fix the chemical activity as $g(\theta)= 1+\cos{(\theta)}$ and the interaction potential as $f(r,\theta)= \exp{[-\lambda (r-1)]}\left(\cos(\theta)-1\right)$. The nonlinear system of equations is solved using the Newton-Raphson method, starting from an initial guess given by the equilibrium distribution of species. Once the base state is computed, we solve the linearized equations, Eqs. \eqref{lindimensionlessmombal}-\eqref{lindimensionlesscbboundcond}, using the same mesh as for the base state. In Figure \ref{fig3}, we report $D_{WF}$ and $D_{VA}$ for a base state driven out of equilibrium by an external force or by the chemical affinity for the case $Pe=\beta=1$, $\epsilon = 0.1$ and $Da=0.1$. In panels (a) and (b) of Figure \ref{fig3} it is apparent that, for small thermodynamic forces, the system is near equilibrium and $D_{WF}=D_{VA}$, with their value agreeing with the asymptotic approximation given by Eq. \eqref{xi11} and Eq. \eqref{finaleqreac1}. However, Figure \ref{fig3} shows that far from equilibrium the two coefficients are different, meaning that the Onsager reciprocal relations break down. Here, we also find that considering a generalized chemical affinity as proposed in \cite{pagonabarraga1997fluctuating,bedeaux2011concentration} does not restore the symmetry of the Onsager relations. Since in experimental conditions the active particles are usually driven by a chemical reaction that is far from equilibrium, we expect the Onsager reciprocal relations to be broken. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{figure3.pdf} \caption{Onsager cross-coupling coefficients, $D_{WF}$ and $D_{VA}$, computed using numerical simulations of the governing equations linearized around a nonequilibrium steady state. In panel (a) the base state is driven out of equilibrium by a nonzero chemical affinity $A_{\text{rxn},0}$, while in panel (b) the base state is driven out of equilibrium by an external force $F_{0}^*$. The remaining dimensionless numbers are $\beta=Pe=1$, $\epsilon = 0.1$ and $Da=0.1$.} \label{fig3} \end{figure} \section{Conclusions}\label{sec8} We investigate the Onsager reciprocal relations for a chemically-active colloid. We assume that the active colloid is suspended in an incompressible solution of two species, A and B, with the species B interacting through a potential with the surface of a spherical particle. The two species undergo a reversible reaction at the surface of the colloid. In the case of the thermodynamic system investigated here, the Onsager reciprocal relations link the total surface reaction rate and the velocity of the active colloid to the chemical and the mechanical affinity. Such chemo-mechanical coupling can be formalized using the Onsager matrix, which must be symmetric and positive definite around equilibrium. Here we derive the Onsager reciprocal relations, starting from the local transport equations of the number density of species, the balance of momentum, and the continuity equation. These equations are defined in the volume outside the active colloid and are derived using the framework of nonequilibrium thermodynamics and the assumption of local equilibrium.
\section{Conclusions}\label{sec8} We investigate the Onsager reciprocal relations for a chemically-active colloid. We assume that the active colloid is suspended in an incompressible solution of two species, A and B, with species B interacting through a potential with the surface of a spherical particle. The two species undergo a reversible reaction at the surface of the colloid. In the case of the thermodynamic system investigated here, the Onsager reciprocal relations link the total surface reaction rate and the velocity of the active colloid to the chemical and the mechanical affinity. Such chemo-mechanical coupling can be formalized using the Onsager matrix, which must be symmetric positive definite around equilibrium. Here we derive the Onsager reciprocal relations starting from the local transport equations for the number density of species, the balance of momentum, and the continuity equation. These equations are defined in the volume outside the active colloid and are derived using the framework of nonequilibrium thermodynamics and the assumption of local equilibrium. Since the resulting governing equations are nonlinear, we linearize them around a generic steady state. Using a perturbation expansion and numerical simulations, we compute the Onsager matrix. We show that the Onsager reciprocal relations are recovered when the equations are linearized around the thermodynamic equilibrium. This is expected, since at equilibrium the microscopic equations of motion obey detailed balance. In addition, our results agree with the self-phoretic velocity calculated in previous works using matched asymptotic expansions \cite{sabass2012dynamics}. We find that accounting for the advection of the reacting species is crucial to preserve the symmetry of the Onsager matrix, even in the case of short-ranged interaction potentials or vanishing P\'eclet numbers. Neglecting the advective transport of the solute breaks the symmetry of the Onsager relations. In the limit of vanishing P\'eclet numbers, the only relevant velocity scale can be defined using the mechanical affinity. As a consequence, the mechanical affinity enters the definition of the P\'eclet number, and one cannot simultaneously neglect the advective transport of the solutes and consider a finite mechanical affinity: a nonzero mechanical affinity implies nonzero advective effects. Finally, we investigated the validity of the Onsager reciprocal relations around a nonequilibrium steady state (NESS). The active particle is driven by an external force or by a nonzero chemical affinity, and we consider small perturbations around this nonequilibrium steady state. Previous works have shown that the reciprocal relations might hold around a NESS even if the detailed balance of the underlying dynamics is broken \cite{gabrielli1996onsager,gabrielli1999onsager,dal2019linear}. Here, we found that the symmetry of the Onsager reciprocal relations breaks down and one cannot define an effective temperature that preserves the symmetry of the Onsager matrix \cite{Hargus_2021,JClub_Grosberg}. Indeed, most of the active particles used in experiments are driven far from equilibrium, and we should expect their Brownian motion to be qualitatively different from that experienced at equilibrium \cite{Golestanian_2009,Gomez_Solano_2016}. \begin{acknowledgments} M.D.C. acknowledges funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie action (GA 712754), the Severo Ochoa programme (SEV-2014-0425), the CERCA Programme/Generalitat de Catalunya, and the Spanish Ministry of Science and Innovation (MCIN) under the Juan de la Cierva (IJC2018-035270-I) postdoctoral grant and the Retos de Investigación project PID2020-113033GB-I00. I.P. acknowledges support from MINECO/FEDER Project No. PGC2018-098373-B-I00, DURSI Project No. 2017SGR-884, and SNF Project No. 200021-175719. M.D.C. and I.P. acknowledge funding from the H2020 research and innovation program under the FET open project NanoPhlow (GA 766972). \end{acknowledgments}
\section{Introduction}\label{s000} Our work is motivated by E.~Akin's book ``General Topology of Dynamical Systems'' \cite{A}, where dynamical systems using closed relations are presented, and by S.~Kolyada's and L.~Snoha's paper ``Minimal dynamical systems'' \cite{KS}, where a wonderful overview of minimal dynamical systems is given. Our work is also motivated by J.~Kennedy's and G.~Erceg's paper ``Topological entropy on closed sets in $[0,1]^2$'' \cite{EK}, and by I.~Bani\v c's, J.~Kennedy's and G.~Erceg's paper ``Closed relations with non-zero entropy that generate no periodic points'' \cite{BEK}, where the idea of topological entropy is generalized from standard dynamical systems $(X,f)$ to dynamical systems $(X,G)$ with closed relations $G$ on compact metric spaces $X$. In dynamical systems theory, the study of chaotic behaviour of a dynamical system is often based on some topological properties or properties of continuous functions. One of the commonly used properties is the minimality of a dynamical system $(X,f)$ or the minimality of the function $f$. According to \cite{KS}, minimal dynamical systems were defined by Birkhoff in 1912 \cite{B} as the systems which have no nontrivial closed subsystems. They are considered to be the most fundamental dynamical systems; see \cite{KS}, where more references can be found. Minimal dynamical systems $(X,f)$ (i.e., systems with a minimal map $f$) have the property that each point moves under iteration of $f$ from one non-empty open set to another. This property has been studied intensively by mathematicians since it is an important property in dynamical systems theory. In this paper, we generalize the notion of topological dynamical systems to topological dynamical systems with closed relations and introduce the notion of minimality of such dynamical systems. A similar generalization of a dynamical object was presented in 2004 by Ingram and Mahavier \cite{ingram,mah}, who introduced inverse limits of inverse sequences of compact metric spaces $X$ with upper semi-continuous set-valued bonding functions $f$ (their graphs $\Gamma(f)$ are examples of closed relations on $X$ with certain additional properties). These inverse limits provide a valuable extension to the role of inverse limits in the study of dynamical systems and continuum theory. For example, Kennedy and Nall have developed a simple method for constructing families of $\lambda$-dendroids \cite{KN}. Their method involves inverse limits of inverse sequences with upper semi-continuous set-valued functions on closed intervals with simple bonding functions. Such generalizations have proven to be useful (also in applied areas); frequently, when constructing a model for empirical data, continuous (single-valued) functions fall short, and the data are better modelled by upper semi-continuous set-valued functions, or sometimes, even closed relations that are not set-valued functions are required. The Christiano-Harrison model from macroeconomics is one such example \cite{christiano}. The study of inverse limits of inverse sequences with upper semi-continuous set-valued functions is rapidly gaining momentum; the recent books by Ingram \cite{ingram-knjiga} and by Ingram and Mahavier \cite{ingram} give a comprehensive exposition of this research prior to 2012. Also, several papers on the topic of dynamical systems with (upper semi-continuous) set-valued functions have appeared recently; see \cite{CP,LP,LYY,LWZ,KN,KW,MRT,R,SWS}, where more references may be found.
However, not much is known about such dynamical systems, and therefore many properties of such set-valued dynamical systems are yet to be studied. In this paper, we study the minimality of such dynamical systems. We also extend the notion of dynamical systems with (upper semi-continuous) set-valued functions to dynamical systems with closed relations. We proceed as follows. In the sections that follow Section \ref{s00}, where basic definitions are given, we discuss the following topics: \begin{enumerate} \item Minimal dynamical systems with closed relations and invariant sets (Section \ref{s1}). \item Minimal dynamical systems with closed relations and forward orbits (Section \ref{s2}). \item Minimal dynamical systems with closed relations and omega limit sets (Section \ref{s3}). \item Backward minimal dynamical systems with closed relations (Section \ref{s4}). \item Minimal dynamical systems with closed relations and backward orbits (Section \ref{s5}). \item Minimal dynamical systems with closed relations and alpha limit sets (Section \ref{s6}). \item Preserving minimality by topological conjugation (Section \ref{s8}). \end{enumerate} In Sections \ref{s1}, \ref{s2}, \ref{s3}, \ref{s5}, \ref{s6}, and \ref{s8}, we first revisit minimal dynamical systems $(X,f)$ and then generalize the asserted property from dynamical systems $(X,f)$ to dynamical systems with closed relations $(X,G)$ by making the identification $(X,f)=(X,\Gamma(f))$. Results about dynamical systems $(X,f)$, presented in the first part of each of the above mentioned sections, are well-known. Their proofs are short, rather straightforward and elementary. Since they are important for the purpose of this paper, we state and prove each of the presented results. \section{Definitions and notation} \label{s00} In this section, basic definitions and well-known results that are needed later in the paper are presented. \begin{definition} Let $X$ and $Y$ be metric spaces, and let $f:X\rightarrow Y$ be a function. We use $$ \Gamma(f)=\{(x,y)\in X\times Y \ | \ y=f(x)\} $$ to denote \emph{ \color{blue} the graph of the function $f$}. \end{definition} \begin{definition} If $X$ is a compact metric space, then \emph{ \color{blue} $2^X$ }denotes the set of all non-empty closed subsets of $X$. \end{definition} \begin{definition} Let $X$ be a compact metric space and let $G\subseteq X\times X$ be a relation on $X$. If $G\in 2^{X\times X}$, then we say that $G$ is \emph{ \color{blue} a closed relation on $X$}. \end{definition} \begin{definition} Let $X$ be a set and let $G$ be a relation on $X$. Then we define $$ G^{-1}=\{(y,x)\in X\times X \ | \ (x,y)\in G\} $$ to be \emph{ \color{blue} the inverse relation of the relation $G$} on $X$. \end{definition} \begin{definition} Let $X$ be a compact metric space and let $G$ be a closed relation on $X$. Then we call $$ \star_{i=1}^{m}G=\Big\{(x_1,x_2,x_3,\ldots ,x_{m+1})\in \prod_{i=1}^{m+1}X \ | \ \textup{ for each } i\in \{1,2,3,\ldots ,m\}, (x_{i},x_{i+1})\in G\Big\} $$ for each positive integer $m$, \emph{ \color{blue} the $m$-th Mahavier product of $G$}, and $$ \star_{i=1}^{\infty}G=\Big\{(x_1,x_2,x_3,\ldots )\in \prod_{i=1}^{\infty}X \ | \ \textup{ for each positive integer } i, (x_{i},x_{i+1})\in G\Big\} $$ \emph{ \color{blue} the infinite Mahavier product of $G$}. \end{definition} \begin{observation} Let $X$ be a compact metric space and let $f:X\rightarrow X$ be a continuous function. Then $$ \star_{n=1}^{\infty}\Gamma(f)^{-1}=\varprojlim(X,f). $$ \end{observation}
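Although the Mahavier product is defined above for closed relations on compact metric spaces, its combinatorial content is easy to illustrate on a finite relation. The following Python sketch (our own illustration; the function name \texttt{mahavier\_product} is not standard notation) computes the $m$-th Mahavier product of a finite relation given as a set of pairs; the toy relation $G=\{(1,0)\}$, which reappears in an example in Section \ref{s2}, shows that higher Mahavier products may be empty.

\begin{verbatim}
def mahavier_product(G, m):
    # The m-th Mahavier product of G: all (m+1)-tuples
    # (x_1, ..., x_{m+1}) with (x_i, x_{i+1}) in G for each i.
    succ = {}
    for x, y in G:
        succ.setdefault(x, []).append(y)
    tuples = [(x,) for x in succ]   # candidate first coordinates, p_1(G)
    for _ in range(m):
        tuples = [t + (y,) for t in tuples for y in succ.get(t[-1], [])]
    return tuples

# For G = {(1, 0)} the 1st Mahavier product is non-empty,
# while all higher Mahavier products are empty.
print(mahavier_product({(1, 0)}, 1))   # [(1, 0)]
print(mahavier_product({(1, 0)}, 2))   # []
\end{verbatim}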
\section{Minimal dynamical systems with closed relations}\label{s1} First, we revisit minimal dynamical systems and then we introduce dynamical systems with closed relations and generalize the notion of minimality of a dynamical system to minimality of dynamical systems with closed relations. \begin{definition} Let $X$ be a compact metric space and let $f:X\rightarrow X$ be a continuous function. We say that $(X,f)$ is \emph{ \color{blue} a dynamical system}. \end{definition} \begin{definition} Let $(X,f)$ be a dynamical system and let $A\subseteq X$. We say that \begin{enumerate} \item $A$ is \emph{ \color{blue} $f$-invariant}, if $f(A)\subseteq A$. \item $A$ is \emph{ \color{blue} strongly $f$-invariant}, if $f(A)=A$. \end{enumerate} \end{definition} \begin{definition} Let $(X,f)$ be a dynamical system. We say that $(X,f)$ is \emph{ \color{blue} a minimal dynamical system}, if for each closed subset $A$ of $X$, $$ A \textup{ is } f\textup{-invariant} \Longrightarrow A\in \{\emptyset ,X\}. $$ \end{definition} The following are well-known results. Since the proofs are short and elementary, we give them here. \begin{theorem}\label{enolicni} Let $(X,f)$ be a dynamical system. The following statements are equivalent. \begin{enumerate} \item \label{ena} $(X,f)$ is a minimal dynamical system. \item\label{dva} For each closed subset $A$ of $X$, $$ f(A)= A \Longrightarrow A\in \{\emptyset ,X\}. $$ \end{enumerate} \end{theorem} \begin{proof} Let $(X,f)$ be a minimal dynamical system and let $A$ be a closed subset of $X$ such that $f(A)=A$. Then $f(A)\subseteq A$. Since $(X,f)$ is minimal, it follows that $A\in \{\emptyset ,X\}$. Next, suppose that for each closed subset $A$ of $X$, $$ f(A)= A \Longrightarrow A\in \{\emptyset ,X\}. $$ To prove that $(X,f)$ is minimal, let $A$ be a non-empty closed subset of $X$ such that $f(A)\subseteq A$. Set $$ \mathcal U=\{B\in 2^{A} \ | \ f(B)\subseteq B\}. $$ We use Zorn's lemma to show that there is a minimal element of $(\mathcal U,\subseteq)$. Let $\mathcal V$ be a chain (a totally ordered set) in $\mathcal U$ and set $B_0=\bigcap_{B\in \mathcal V}B$. Since $\mathcal V$ is a chain of non-empty closed subsets of the compact space $X$, it has the finite intersection property, and therefore $B_0$ is non-empty; hence $B_0\in 2^A$. Moreover, $$ f(B_0)=f\Big(\bigcap_{B\in \mathcal V}B\Big)\subseteq \bigcap_{B\in \mathcal V}f(B)\subseteq \bigcap_{B\in \mathcal V}B=B_0. $$ Therefore, $B_0\in \mathcal U$ and $B_0\subseteq B$ for any $B\in \mathcal V$. By Zorn's lemma, there is a minimal element of $(\mathcal U,\subseteq)$. Let $A_0$ be a minimal element of $(\mathcal U,\subseteq)$. Then $A_0$ is a non-empty closed subset of $X$. Note that $f(A_0)\subseteq A_0$ since $A_0$ is an element of $(\mathcal U,\subseteq)$. Suppose that $f(A_0)\neq A_0$ and let $A_1=f(A_0)$. Then $A_1$ is a proper subset of $A_0$ such that $f(A_1)\subseteq A_1$ (note that $f(A_1)=f(f(A_0))\subseteq f(A_0)= A_1$ since $f(A_0)\subseteq A_0$). This contradicts the minimality of $A_0$ in $(\mathcal U,\subseteq)$. Therefore, $A_0$ is a non-empty closed subset of $X$ such that $f(A_0)=A_0$. By \ref{dva}, it follows that $A_0=X$ since $A_0\neq \emptyset$. Therefore, $A=X$ since $A_0\subseteq A$. We have just proved that \ref{ena} is equivalent to \ref{dva}. \end{proof} Next, we introduce dynamical systems with closed relations. Before we do that, we give an obvious Proposition \ref{prop1cc}, which will serve as a motivation for the rest of this section. \begin{definition} Let $X$ be a metric space.
We use $$ p_1,p_2:X\times X\rightarrow X $$ to denote \emph{ \color{blue} the standard projections} defined by $$ p_1(x,y)=x \textup{ and } p_2(x,y)=y $$ for all $(x,y)\in X\times X$. \end{definition} \begin{proposition}\label{prop1cc} Let $(X,f)$ be a dynamical system and let $A\subseteq X$. The following statements are equivalent: \begin{enumerate} \item $A$ is $f$-invariant. \item For each $(x,y)\in \Gamma(f)$, $$ x\in A \Longrightarrow y\in A. $$ \item For each $x\in A$, $$ x\in p_1(\Gamma(f)) \Longrightarrow \textup{ there is } y\in A \textup{ such that } (x,y)\in \Gamma(f). $$ \end{enumerate} \end{proposition} \begin{proof} Suppose that $A$ is $f$-invariant and let $(x,y)\in \Gamma(f)$ be such that $x\in A$. Then $y=f(x)$ and, since $A$ is $f$-invariant, it follows that $y\in A$. Suppose that for each $(x,y)\in \Gamma(f)$, $$ x\in A \Longrightarrow y\in A, $$ and let $x\in A$ be such that $x\in p_1(\Gamma(f))$. Set $y=f(x)$. Then $(x,y)\in \Gamma(f)$ and $y\in A$ follows. Suppose that for each $x\in A$, $$ x\in p_1(\Gamma(f)) \Longrightarrow \textup{ there is } y\in A \textup{ such that } (x,y)\in \Gamma(f), $$ and let $x\in A$. Since $p_1(\Gamma(f))=X$, there is $y\in A$ such that $(x,y)\in \Gamma(f)$. Then $y=f(x)$ and $f(x)\in A$ follows. Therefore, $A$ is $f$-invariant. \end{proof} Motivated by Proposition \ref{prop1cc}, we introduce two different types of invariant sets with respect to a closed relation on a compact metric space. \begin{definition} Let $X$ be a compact metric space and let $G$ be a non-empty closed relation on $X$. We say that $(X,G)$ is \emph{ \color{blue} a dynamical system with a closed relation} or, briefly, \emph{ \color{blue} a CR-dynamical system}. \end{definition} \begin{definition} Let $(X,G)$ be a CR-dynamical system and let $A\subseteq X$. We say that the set $A$ is \begin{enumerate} \item \emph{ \color{blue} $1$-invariant in $(X,G)$}, if for each $x\in A$, $$ x\in p_1(G) \Longrightarrow \textup{ there is } y\in A \textup{ such that } (x,y)\in G. $$ \item \emph{ \color{blue} $\infty$-invariant in $(X,G)$}, if for each $(x,y)\in G$, $$ x\in A \Longrightarrow y\in A. $$ \end{enumerate} \end{definition} \begin{observation}\label{gorane1} Let $(X,G)$ be a CR-dynamical system, let $\mathbf x=(x_1,x_2,x_3,\ldots ) \in \star_{i=1}^{\infty}G$, and let $A$ be an $\infty$-invariant set in $(X,G)$. If $x_1\in A$, then $x_k\in A$ for any positive integer $k$. \end{observation} \begin{observation}\label{obs1} Let $(X,f)$ be a dynamical system and let $A\subseteq X$. Then $(X,\Gamma(f))$ is a CR-dynamical system and, by Proposition \ref{prop1cc}, the following statements are equivalent. \begin{enumerate} \item The set $A$ is $f$-invariant. \item The set $A$ is $1$-invariant in $(X,\Gamma(f))$. \item The set $A$ is $\infty$-invariant in $(X,\Gamma(f))$. \end{enumerate} \end{observation} Next, we show that every $\infty$-invariant set in $(X,G)$ is also $1$-invariant in $(X,G)$. \begin{proposition}\label{mimika} Let $(X,G)$ be a CR-dynamical system and let $A\subseteq X$. If $A$ is $\infty$-invariant in $(X,G)$, then $A$ is $1$-invariant in $(X,G)$. \end{proposition} \begin{proof} Suppose that $A$ is $\infty$-invariant in $(X,G)$. If $A\cap p_1(G)=\emptyset$, then there is nothing to show, so, let $x\in A\cap p_1(G)$ and let $y\in X$ be any point such that $(x,y)\in G$. Such a point exists since $x\in p_1(G)$. Since $x\in A$ and since $A$ is $\infty$-invariant in $(X,G)$, $y\in A$. So, there is a point $y\in A$ such that $(x,y)\in G$. Therefore, $A$ is $1$-invariant in $(X,G)$. \end{proof}
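Before turning to examples, we note that the two notions of invariance are easy to test mechanically for finite relations. The following Python sketch (our own illustration, not part of the formal development) checks $1$-invariance and $\infty$-invariance, and is run on a finite analogue of the `cross' relation of Example \ref{ex1} below, with centre $c$ playing the role of $\frac{1}{2}$.

\begin{verbatim}
def is_1_invariant(G, A):
    # A is 1-invariant: every x in A that has a successor under G
    # has at least one successor inside A.
    for x in A:
        successors = [y for (u, y) in G if u == x]
        if successors and not any(y in A for y in successors):
            return False
    return True

def is_inf_invariant(G, A):
    # A is infinity-invariant: every successor of a point of A lies in A.
    return all(y in A for (x, y) in G if x in A)

X = range(5)
c = 2
G = {(x, c) for x in X} | {(c, y) for y in X}   # a finite 'cross'
A = {c}
print(is_1_invariant(G, A))    # True:  (c, c) lies in G
print(is_inf_invariant(G, A))  # False: (c, 0) lies in G but 0 is not in A
\end{verbatim}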
The following example shows that there are CR-dynamical systems $(X,G)$ and subsets $A$ of $X$ such that $A$ is $1$-invariant in $(X,G)$ but not $\infty$-invariant in $(X,G)$. \begin{figure}[h!] \centering \includegraphics[width=13em]{cross.pdf} \caption{The relation $G$ from Example \ref{ex1}} \label{figure1} \end{figure} \begin{example} \label{ex1} Let $X=[0,1]$ and let $G=([0,1]\times \{\frac{1}{2}\})\cup(\{\frac{1}{2}\}\times [0,1])$, see Figure \ref{figure1}. Then $(X,G)$ is a CR-dynamical system. Let $A=\{\frac{1}{2}\}$. Then $A$ is $1$-invariant in $(X,G)$ (since $(\frac{1}{2},\frac{1}{2})\in G$) but it is not $\infty$-invariant in $(X,G)$ (since, for instance, $(\frac{1}{2},0)\in G$ and $0\not\in A$). \end{example} \begin{definition} Let $(X,G)$ be a CR-dynamical system. We say that \begin{enumerate} \item $(X,G)$ is \emph{ \color{blue} $1$-minimal} if for each closed subset $A$ of $X$, $$ A \textup{ is } 1\textup{-invariant in } (X,G) \Longrightarrow A\in \{\emptyset, X\}. $$ \item $(X,G)$ is \emph{ \color{blue} $\infty$-minimal} if for each closed subset $A$ of $X$, $$ A \textup{ is } \infty\textup{-invariant in } (X,G) \Longrightarrow A\in \{\emptyset, X\}. $$ \end{enumerate} \end{definition} \begin{theorem}\label{judyK} Let $(X,G)$ be a CR-dynamical system. If $(X,G)$ is $1$-minimal, then $(X,G)$ is $\infty$-minimal. \end{theorem} \begin{proof} Let $(X,G)$ be $1$-minimal and let $A$ be a non-empty closed subset of $X$ such that $A$ is $\infty$-invariant in $(X,G)$. Then $A$ is $1$-invariant in $(X,G)$ by Proposition \ref{mimika}. Therefore, $A=X$ and it follows that $(X,G)$ is $\infty$-minimal. \end{proof} In the following example, we show that there is an $\infty$-minimal CR-dynamical system which is not $1$-minimal. \begin{example}\label{goranH} Let $X=[0,1]$ and let $G=([0,1]\times \{\frac{1}{2}\})\cup(\{\frac{1}{2}\}\times [0,1])$, see Figure \ref{figure1}. Then $(X,G)$ is $\infty$-minimal but it is not $1$-minimal. Let $A=\{\frac{1}{2}\}$. Then $A$ is $1$-invariant in $(X,G)$ but $A\not \in \{\emptyset,X\}$. Therefore, $(X,G)$ is not $1$-minimal. To see that $(X,G)$ is $\infty$-minimal, let $A$ be a non-empty closed subset of $X$ such that $A$ is $\infty$-invariant in $(X,G)$. Let $x\in A$. Then $(x,\frac{1}{2})\in G$ and $\frac{1}{2}\in A$ follows. Since $(\frac{1}{2},t)\in G$ for each $t\in X$, it follows that $t\in A$ for each $t\in X$. Therefore, $A=X$ and it follows that $(X,G)$ is $\infty$-minimal. \end{example} \section{Minimality and forward orbits}\label{s2} First, we revisit orbits of dynamical systems $(X,f)$ and then we generalize these to orbits of CR-dynamical systems $(X,G)$. \begin{definition} Let $(X,f)$ be a dynamical system and let $x_0\in X$. The sequence $$ \mathbf x=(x_0,f(x_0),f^2(x_0),f^3(x_0),\ldots)\in \star_{i=1}^{\infty}\Gamma(f) $$ is called \emph{ \color{blue} the forward orbit of $x_0$.} The set $$ \mathcal O_f^{\oplus}(x_0)=\{x_0,f(x_0),f^2(x_0),f^3(x_0),\ldots\} $$ is called \emph{ \color{blue} the forward orbit set of $x_0$}; when convenient, we also denote this set by $\mathcal O_f^{\oplus}(\mathbf x)$. \end{definition} The following is a well-known result. Since its proof is short and elementary, we give it here. \begin{theorem}\label{enolicnicc} Let $(X,f)$ be a dynamical system. The following statements are equivalent. \begin{enumerate} \item \label{enacc} $(X,f)$ is a minimal dynamical system. \item \label{tricc} For each $x\in X$, $$ \Cl (\mathcal{O}_f^{\oplus}(x))=X. $$ \end{enumerate} \end{theorem} \begin{proof} First, we prove that \ref{enacc} implies \ref{tricc}. Let $(X,f)$ be a minimal dynamical system and let $x\in X$ be any point. Let $A=\Cl(\mathcal{O}_f^{\oplus}(x))$.
Then $A$ is a non-empty closed subset of $X$ such that $f(A)\subseteq A$. Since $(X,f)$ is minimal, it follows that $A=X$. Therefore, $\Cl(\mathcal{O}_f^{\oplus}(x))=X$. Finally, suppose that for each $x\in X$, $\Cl (\mathcal{O}_f^{\oplus}(x))=X$. To show that $(X,f)$ is minimal, let $A$ be a non-empty closed subset of $X$ such that $f(A)\subseteq A$. Let $x\in A$. Then $\Cl(\mathcal{O}_f^{\oplus}(x))\subseteq A$, and $A=X$ follows from $\Cl(\mathcal{O}_f^{\oplus}(x))=X$. Therefore, $(X,f)$ is minimal. We have proved that \ref{enacc} is equivalent to \ref{tricc}. \end{proof} \begin{definition} Let $X$ be a compact metric space. For each positive integer $k$, we use $\pi_k:\prod_{i=1}^{\infty}X\rightarrow X$ to denote \emph{ \color{blue} the $k$-th standard projection} from $\prod_{i=1}^{\infty}X$ to $X$. \end{definition} \begin{definition} Let $(X,G)$ be a CR-dynamical system and let $x_0\in X$. We use \emph{ \color{blue} $T_G^{+}(x_0)$} to denote the set $$ T_G^{+}(x_0)=\{\mathbf x\in \star_{i=1}^{\infty}G \ | \ \pi_1(\mathbf x)=x_0\}\subseteq \star_{i=1}^{\infty}G. $$ \end{definition} \begin{definition} Let $(X,G)$ be a CR-dynamical system, let $\mathbf x\in \star_{i=1}^{\infty}G$, and let $x_0\in X$. We \begin{enumerate} \item say that $\mathbf x$ is \emph{\color{blue}a forward orbit of $x_0$ in $(X,G)$}, if $\pi_1(\mathbf x)=x_0$. \item use \emph{ \color{blue} $\mathcal O_G^{\oplus}(\mathbf x)$} to denote the set $$ \mathcal O_G^{\oplus}(\mathbf x)=\{\pi_k(\mathbf x) \ | \ k \textup{ is a positive integer}\}\subseteq X. $$ \item use \emph{ \color{blue} $\mathcal U_G^{\oplus}(x_0)$} to denote the set $$ \mathcal U_G^{\oplus}(x_0)=\bigcup_{\mathbf x\in T_G^{+}(x_0)}\mathcal O_G^{\oplus}(\mathbf x)\subseteq X. $$ \end{enumerate} \end{definition} \begin{example} Let $X=[0,1]$ and let $G=\{(1,0)\}$. Then $\star_{i=1}^{1}G\neq \emptyset $ and for each $m\neq 1$, $\star_{i=1}^{m}G=\emptyset$. Therefore, in this CR-dynamical system, there are no forward orbits in $(X,G)$. \end{example} \begin{definition} Let $(X,G)$ be a CR-dynamical system. We say that \begin{enumerate} \item $(X,G)$ is \emph{ \color{blue} $1^{\oplus}$-minimal} if for each $x\in X$, $T_G^{+}(x)\neq \emptyset$, and for each $\mathbf x\in \star_{i=1}^{\infty}G$, $$ \Cl\Big(\mathcal{O}_G^{\oplus}(\mathbf x)\Big)=X. $$ \item $(X,G)$ is \emph{ \color{blue} $2^{\oplus}$-minimal }if for each $x\in X$ there is $\mathbf x\in T_G^{+}(x)$ such that $$ \Cl\Big(\mathcal{O}_G^{\oplus}(\mathbf x)\Big)=X. $$ \item $(X,G)$ is \emph{ \color{blue} $3^{\oplus}$-minimal} if for each $x\in X$, $$ \Cl\Big(\mathcal U_G^{\oplus}(x)\Big)=X. $$ \end{enumerate} \end{definition} \begin{theorem}\label{main1} Let $(X,G)$ be a CR-dynamical system. Then the following holds. \begin{enumerate} \item \label{1dva} $(X,G)$ is $1$-minimal if and only if $(X,G)$ is $1^{\oplus}$-minimal. \item \label{2dva} If $(X,G)$ is $1^{\oplus}$-minimal, then $(X,G)$ is $2^{\oplus}$-minimal. \item \label{3dva} If $(X,G)$ is $2^{\oplus}$-minimal, then $(X,G)$ is $3^{\oplus}$-minimal. \item \label{4dva} If $(X,G)$ is $3^{\oplus}$-minimal, then $(X,G)$ is $\infty$-minimal. \end{enumerate} \end{theorem} \begin{proof} Let $(X,G)$ be a $1$-minimal CR-dynamical system. To prove that $(X,G)$ is $1^{\oplus}$-minimal, let $x\in X$. To prove that $$ T_G^{+}(x)\neq \emptyset, $$ we show first that $p_2(G)\subseteq p_1(G)$. Suppose that $p_2(G)\not \subseteq p_1(G)$ and let $x_0\in p_2(G)\setminus p_1(G)$. 
Then $A=\{x_0\}$ is trivially $1$-invariant in $(X,G)$---a contradiction since $A\neq X$. Therefore, $p_2(G)\subseteq p_1(G)$. Next, we prove that $p_2(G)=X$. Let $A=p_2(G)$ and let $x\in A$ be any point. Since $A\subseteq p_1(G)$, it follows that $x\in p_1(G)$. Then there is $y\in p_2(G)$ such that $(x,y)\in G$. This proves that $A$ is $1$-invariant in $(X,G)$. Since $A$ is closed in $X$ and $A\neq \emptyset$, it follows that $A=X$ since $(X,G)$ is $1$-minimal. Therefore, $p_2(G)=X$. Also, $p_1(G)=X$ follows since $p_2(G)\subseteq p_1(G)$. Since $p_1(G)=X$, every finite $G$-path can be extended; hence there is a point $\mathbf x\in \star_{i=1}^{\infty}G$ such that $\pi_1(\mathbf x)=x$. This completes the proof that $T_G^{+}(x)\neq \emptyset$. Next, let $\mathbf x\in \star_{i=1}^{\infty}G$. We show that $\Cl (\mathcal{O}_G^{\oplus}(\mathbf x))=X$. Let $A=\Cl (\mathcal{O}_G^{\oplus}(\mathbf x))$. Then $A$ is a non-empty closed subset of $X$. Let $x\in A$ be such that $x\in p_1(G)$. We consider the following possible cases. \begin{enumerate} \item[(i)] $x\in \mathcal{O}_G^{\oplus}(\mathbf x)$. Let $m$ be a positive integer such that $\pi_m(\mathbf x)=x$ and let $y=\pi_{m+1}(\mathbf x)$. Then $y\in A$ and $(x,y)\in G$. \item[(ii)] $x\not \in \mathcal{O}_G^{\oplus}(\mathbf x)$. Let $(z_n)$ be a sequence of points in $\mathcal{O}_G^{\oplus}(\mathbf x)$ such that $\displaystyle \lim_{n\to \infty}z_n=x$. For each positive integer $n$, let $i_n$ be a positive integer such that $\pi_{i_n}(\mathbf x)=z_n$. For each positive integer $n$, let $y_n=\pi_{i_n+1}(\mathbf x)$, let $(y_{j_n})$ be a convergent subsequence of the sequence $(y_n)$, and let $\displaystyle \lim_{n\to \infty}y_{j_n}=y$. Note that for each positive integer $n$, $(z_{j_n},y_{j_n})\in G$. Since $G$ is closed in $X\times X$ and since $\displaystyle \lim_{n\to \infty}(z_{j_n},y_{j_n})=(x,y)$, it follows that $(x,y)\in G$. Since $A$ is closed in $X$ and since $y_{j_n}\in \mathcal{O}_G^{\oplus}(\mathbf x)$ for each positive integer $n$, it follows from $\mathcal{O}_G^{\oplus}(\mathbf x)\subseteq A$ that $y\in A$. \end{enumerate} We proved that there is $y\in A$ such that $(x,y)\in G$. It follows that $A$ is $1$-invariant in $(X,G)$ and, therefore, $A=X$. This proves that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))=X$ and it follows that $(X,G)$ is $1^{\oplus}$-minimal. This proves the first implication of \ref{1dva}. To prove the other implication, suppose that $(X,G)$ is $1^{\oplus}$-minimal and let $A$ be a non-empty closed subset of $X$ which is $1$-invariant in $(X,G)$. Let $a_1\in A$ be any point. Since $T_G^{+}(a_1)\neq \emptyset$, there is $\mathbf x_1\in T_G^{+}(a_1)$. Choose such an element $\mathbf x_1\in T_G^{+}(a_1)$ and set $x=\pi_2(\mathbf x_1)$. Then $(a_1,x)\in G$ and $a_1\in p_1(G)$ follows. Since $A$ is $1$-invariant in $(X,G)$, there is a point $a_2\in A$ such that $(a_1,a_2)\in G=\star_{i=1}^{1}G$. Fix such a point $a_2$. Let $n>1$ be a positive integer and suppose that we have already constructed the points $a_1,a_2,a_3,\ldots ,a_n\in A$ such that $(a_1,a_2,a_3,\ldots ,a_n)\in \star_{i=1}^{n-1}G$. Since $T_G^{+}(a_n)\neq \emptyset$, there is $\mathbf x_n\in T_G^{+}(a_n)$. Choose such an element $\mathbf x_n\in T_G^{+}(a_n)$ and set $x=\pi_2(\mathbf x_n)$. Then $(a_n,x)\in G$ and $a_n\in p_1(G)$ follows. Since $A$ is $1$-invariant in $(X,G)$, there is a point $a_{n+1}\in A$ such that $(a_n,a_{n+1})\in G$. Fix such a point $a_{n+1}$. Let $$ \mathbf x=(a_1,a_2,a_3,\ldots).
$$ Then $\mathbf x\in \star_{i=1}^{\infty}G$ and $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))=X$; since $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))\subseteq A$, it follows that $A=X$. This proves that $(X,G)$ is $1$-minimal and we have just proved \ref{1dva}. To prove \ref{2dva}, suppose that $(X,G)$ is $1^{\oplus}$-minimal. Let $x\in X$ be any point. Since $(X,G)$ is $1^{\oplus}$-minimal, there is a point $\mathbf x\in \star_{i=1}^{\infty}G$ such that $\pi_1(\mathbf x)=x$. Since $(X,G)$ is $1^{\oplus}$-minimal, it follows that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))=X$. Therefore, $(X,G)$ is $2^{\oplus}$-minimal. To prove \ref{3dva}, suppose that $(X,G)$ is $2^{\oplus}$-minimal. Let $x\in X$ be any point. Since $(X,G)$ is a $2^{\oplus}$-minimal dynamical system, there is a point $\mathbf x_0\in \star_{i=1}^{\infty}G$ such that $\pi_1(\mathbf x_0)=x$ and $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x_0))=X$. It follows from $\mathcal{O}_G^{\oplus}(\mathbf x_0)\subseteq \mathcal U_G^{\oplus}(x)$ that $\Cl(\mathcal U_G^{\oplus}(x))=X$. Therefore, $(X,G)$ is $3^{\oplus}$-minimal. Finally, to prove \ref{4dva}, suppose that $(X,G)$ is $3^{\oplus}$-minimal. Let $A$ be a non-empty closed subset of $X$ such that $A$ is $\infty$-invariant in $(X,G)$. Let $x\in A$. Since $(X,G)$ is $3^{\oplus}$-minimal, it follows that $\Cl(\mathcal U_G^{\oplus}(x))=X$. We show that $A=X$ by showing that $\Cl(\mathcal U_G^{\oplus}(x))\subseteq A$. First, we show that $\mathcal U_G^{\oplus}(x)\subseteq A$. Let $y\in \mathcal U_G^{\oplus}(x)$ and let $\mathbf x_0\in T_G^{+}(x)$ be such that $y\in \mathcal{O}_G^{\oplus}(\mathbf x_0)$. Since $x\in A$ and since $A$ is $\infty$-invariant in $(X,G)$, it follows that $y\in A$ by Observation \ref{gorane1}. Therefore, $\mathcal U_G^{\oplus}(x)\subseteq A$ and, since $A$ is closed in $X$, it follows that $\Cl(\mathcal U_G^{\oplus}(x))\subseteq A$. \end{proof} Among other things, the following theorem says that $1$-, $1^{\oplus}$-, $2^{\oplus}$-, $3^{\oplus}$-, and $\infty$-minimality of CR-dynamical systems are all generalizations of the notion of the minimality of dynamical systems. \begin{theorem} Let $(X,f)$ be a dynamical system. The following statements are equivalent. \begin{enumerate} \item \label{1ena} $(X,f)$ is minimal. \item \label{2ena} $(X,\Gamma(f))$ is $1$-minimal. \item \label{3ena} $(X,\Gamma(f))$ is $1^{\oplus}$-minimal. \item \label{4ena} $(X,\Gamma(f))$ is $2^{\oplus}$-minimal. \item \label{5ena} $(X,\Gamma(f))$ is $3^{\oplus}$-minimal. \item \label{6ena} $(X,\Gamma(f))$ is $\infty$-minimal. \end{enumerate} \end{theorem} \begin{proof} Suppose that $(X,f)$ is minimal. To prove that $(X,\Gamma(f))$ is $1$-minimal, let $A$ be a closed subset of $X$ such that $A$ is $1$-invariant in $(X,\Gamma(f))$. By Observation \ref{obs1}, $A$ is $f$-invariant. Therefore, $ A\in \{\emptyset, X\}$ since $(X,f)$ is minimal. This proves the implication from \ref{1ena} to \ref{2ena}. The implications from \ref{2ena} to \ref{3ena}, from \ref{3ena} to \ref{4ena}, from \ref{4ena} to \ref{5ena}, and from \ref{5ena} to \ref{6ena} follow from Theorem \ref{main1}. Suppose that $(X,\Gamma(f))$ is $\infty$-minimal. To prove that $(X,f)$ is minimal, let $A$ be a closed subset of $X$ such that $A$ is $f$-invariant. By Observation \ref{obs1}, $A$ is $\infty$-invariant in $(X,\Gamma(f))$. Therefore, $ A\in \{\emptyset, X\}$ since $(X,\Gamma(f))$ is $\infty$-minimal. This proves the implication from \ref{6ena} to \ref{1ena}. \end{proof}
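For finite relations, the sets $\mathcal U_G^{\oplus}(x)$ can be computed directly. The Python sketch below (again only an illustration) computes the set of points reachable from $x$ along finite $G$-paths by breadth-first search; when $p_1(G)=X$ (which, by the next theorem, holds in every minimal system), every finite path extends to an infinite forward orbit, so this reachable set equals $\mathcal U_G^{\oplus}(x)$, and $3^{\oplus}$-minimality of a finite system amounts to every such set being all of $X$.

\begin{verbatim}
from collections import deque

def forward_reach(G, x):
    # Points reachable from x along finite G-paths (breadth-first
    # search); equals U_G^+(x) whenever p_1(G) = X.
    succ = {}
    for u, v in G:
        succ.setdefault(u, set()).add(v)
    seen, queue = {x}, deque([x])
    while queue:
        u = queue.popleft()
        for v in succ.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# the finite 'cross' relation from the earlier sketch:
X = range(5)
c = 2
G = {(x, c) for x in X} | {(c, y) for y in X}
print(all(forward_reach(G, x) == set(X) for x in X))   # True
\end{verbatim}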
\begin{theorem}\label{surjektivnost} Let $(X,G)$ be a CR-dynamical system and let $k\in \{1,1^{\oplus},2^{\oplus},3^{\oplus},\infty\}$. If $(X,G)$ is $k$-minimal, then $$ p_1(G)=p_2(G)=X. $$ \end{theorem} \begin{proof} Suppose that $(X,G)$ is $\infty$-minimal. First, we show that $p_2(G)\subseteq p_1(G)$. Suppose that $p_2(G)\not \subseteq p_1(G)$ and let $x_0\in p_2(G)\setminus p_1(G)$. Then $A=\{x_0\}$ is trivially $\infty$-invariant in $(X,G)$---a contradiction. Therefore, $p_2(G)\subseteq p_1(G)$. Next, we prove that $p_2(G)=X$. Let $A=p_2(G)$ and let $(x,y)\in G$ be any point such that $x\in A$. Since $A\subseteq p_1(G)$, it follows that $y\in p_2(G)$, meaning that $y\in A$. This proves that $A$ is $\infty$-invariant in $(X,G)$. Since $A$ is closed in $X$ and $A\neq \emptyset$, it follows that $A=X$ since $(X,G)$ is $\infty$-minimal. Therefore, $p_2(G)=X$. Also, $p_1(G)=X$ follows since $p_2(G)\subseteq p_1(G)$. Next, let $k\in\{1,1^{\oplus},2^{\oplus},3^{\oplus}\}$ and suppose that $(X,G)$ is $k$-minimal. It follows from Theorem \ref{main1} that $(X,G)$ is also $\infty$-minimal. Therefore, $p_1(G)=p_2(G)=X$. \end{proof} In the following example, we show that there is a $2^{\oplus}$-minimal CR-dynamical system which is not $1^{\oplus}$-minimal. \begin{example} \label{ex2} Let $X=[0,1]$ and let $G=([0,1]\times \{\frac{1}{2}\})\cup(\{\frac{1}{2}\}\times [0,1])$, see Figure \ref{figure1}. To show that $(X,G)$ is not $1^{\oplus}$-minimal, let $\mathbf x=(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \ldots)\in \star_{i=1}^{\infty}G$. Then $\Cl(\mathcal O_G^{\oplus}(\mathbf x))=\{\frac{1}{2}\}\neq X$. Therefore, $(X,G)$ is not $1^{\oplus}$-minimal. To show that $(X,G)$ is $2^{\oplus}$-minimal, let $x\in X$ be any point. We show that there is $\mathbf x\in T_G^{+}(x)$ such that $\Cl\Big(\mathcal{O}_G^{\oplus}(\mathbf x)\Big)=X$. Let $[0,1]\cap \mathbb Q=\{q_1,q_2,q_3,\ldots\}$ be an enumeration of the rationals in $[0,1]$, let $x_1=x$, for each positive integer $n$ let $x_{2n}=\frac{1}{2}$ and $x_{2n+1}=q_n$, and let $\mathbf x=(x_1,x_2,x_3,\ldots)$. Then $\mathbf x\in T_G^{+}(x)$ and $\Cl\Big(\mathcal{O}_G^{\oplus}(\mathbf x)\Big)=X$, since the orbit set contains all the rationals in $[0,1]$. \end{example}
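The construction in Example \ref{ex2} can be carried out explicitly. The following Python sketch (an illustration only; the chosen enumeration of the rationals is one of many possible) builds a finite initial segment of the orbit $\mathbf x=(x,\frac{1}{2},q_1,\frac{1}{2},q_2,\ldots)$ and checks that every point of the grid $\{\frac{k}{8} \ | \ k=0,1,\ldots ,8\}$ already appears among its terms, in line with the density claim.

\begin{verbatim}
from fractions import Fraction

def rationals_in_unit_interval():
    # enumerate the rationals in [0, 1]: 0, 1, 1/2, 1/3, 2/3, 1/4, ...
    yield Fraction(0)
    yield Fraction(1)
    q = 2
    while True:
        for p in range(1, q):
            frac = Fraction(p, q)
            if frac.denominator == q:   # keep only reduced fractions
                yield frac
        q += 1

def ex2_orbit_segment(x, n_pairs):
    # x_1 = x, x_{2n} = 1/2, x_{2n+1} = q_n; consecutive pairs always
    # lie on the cross G = ([0,1] x {1/2}) u ({1/2} x [0,1]).
    qs = rationals_in_unit_interval()
    orbit = [Fraction(x)]
    for _ in range(n_pairs):
        orbit.append(Fraction(1, 2))
        orbit.append(next(qs))
    return orbit

orbit = ex2_orbit_segment(Fraction(1, 3), 100)
grid = [Fraction(k, 8) for k in range(9)]
print(max(min(abs(t - g) for t in orbit) for g in grid))   # 0
\end{verbatim}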
In the following example, we show that there is an $\infty$-minimal CR-dynamical system which is not $3^{\oplus}$-minimal. \begin{example} \label{ex22} Let $X=[0,1]$ and let $G$ be the union of the following line segments: \begin{enumerate} \item the line segment with endpoints $(0,\frac{1}{2})$ and $(1,1)$, \item the line segment with endpoints $(1,0)$ and $(1,1)$, \end{enumerate} see Figure \ref{figure2}. \begin{figure}[h!] \centering \includegraphics[width=15em]{cross1.pdf} \caption{The relation $G$ from Example \ref{ex22}} \label{figure2} \end{figure} To show that $(X,G)$ is not $3^{\oplus}$-minimal, let $x_0=0$. Then $$ \Cl(\mathcal U_G^{\oplus}(0))=\Cl\Big(\Big\{0,\frac{1}{2},\frac{3}{4}, \frac{7}{8},\frac{15}{16},\ldots\Big\}\Big)=\Big\{0,\frac{1}{2},\frac{3}{4}, \frac{7}{8},\frac{15}{16},\ldots\Big\}\cup\{1\}\neq X. $$ Therefore, $(X,G)$ is not $3^{\oplus}$-minimal. To show that $(X,G)$ is $\infty$-minimal, let $A$ be a non-empty closed subset of $X$ such that $A$ is $\infty$-invariant in $(X,G)$. First, we show that $1\in A$. Since $A\neq \emptyset$, there is $x\in A$; choose any such element $x$. If $x=1$, we are done. Suppose that $x<1$ and let $f:[0,1]\rightarrow [0,1]$ be defined by $f(t)=\frac{1}{2}t+\frac{1}{2}$ for each $t\in [0,1]$. Note that the graph of $f$ is the line segment from $(0,\frac{1}{2})$ to $(1,1)$. Since $A$ is $\infty$-invariant in $(X,G)$, it follows from Observation \ref{gorane1} that $$ \mathcal{O}_f^{\oplus}(x)=\{x,f(x),f^{2}(x),f^{3}(x),\ldots\}\subseteq A. $$ Since $A$ is closed in $X$, it follows that $$ \Cl(\mathcal{O}_f^{\oplus}(x))=\{x,f(x),f^{2}(x),f^{3}(x),\ldots\}\cup \{1\}\subseteq A. $$ Therefore, $1\in A$. Next, let $y\in X$ be any point. Then $(1,y)\in G$ and, since $1\in A$, it follows from the fact that $A$ is $\infty$-invariant in $(X,G)$ that $y\in A$. Therefore, $A=X$. \end{example} We conclude this section by stating the following problem. { \begin{problem} Is there an example of a $3^{\oplus}$-minimal CR-dynamical system which is not $2^{\oplus}$-minimal? \end{problem}} \section{Minimality and omega limit sets}\label{s3} Theorem \ref{main666}, where relations between omega limit sets in CR-dynamical systems $(X,G)$ and minimality are presented, is the main result of this section. First, we revisit omega limit sets in dynamical systems $(X,f)$. \begin{definition} Let $(X,f)$ be a dynamical system, let $x_0\in X$, and let $\mathbf x\in T_{\Gamma(f)}^{+}(x_0)$ be the forward orbit of $x_0$. The set $$ \omega_f(x_0)=\{x\in X \ | \ \textup{ there is a subsequence of the sequence } \mathbf x \textup{ with limit } x\} $$ is called \emph{ \color{blue} the omega limit set of $x_0$}. \end{definition} The following is a well-known result. We present its proof for the completeness of the paper. \begin{theorem}\label{omomg} Let $(X,f)$ be a dynamical system. The following statements are equivalent. \begin{enumerate} \item \label{tritri} $(X,f)$ is minimal. \item \label{stiri} For each $x\in X$, $$ \omega_f(x)=X. $$ \end{enumerate} \end{theorem} \begin{proof} First, we show that \ref{tritri} implies \ref{stiri}. Suppose that $(X,f)$ is minimal; by Theorem \ref{enolicnicc}, $\Cl (\mathcal{O}_f^{\oplus}(\mathbf x))=X$ for each $\mathbf x\in \star_{i=1}^{\infty}\Gamma(f)$. Let $x_0\in X$ be any point and let $\mathbf x_0\in T_{\Gamma(f)}^{+}(x_0)$ be the forward orbit of $x_0$. Obviously, $\omega_f(x_0)\subseteq X$. To show that $X\subseteq \omega_f(x_0)$, let $x\in X$ and let $\mathbf x\in T_{\Gamma(f)}^{+}(x)$ be the forward orbit of $x$. To show that $x\in \omega_f(x_0)$, we consider the following possible cases. \begin{enumerate} \item There is a positive integer $n$ such that $f^n(x)=x$. Then $x$ is a periodic point and it follows from $\Cl(\mathcal O_f^{\oplus}(\mathbf x))=X$ and from the fact that $\mathcal O_f^{\oplus}(\mathbf x)$ is finite, that $X=\mathcal O_f^{\oplus}(\mathbf x)$, so $X$ is finite. Then $\mathcal O_f^{\oplus}(\mathbf x)=\mathcal O_f^{\oplus}(\mathbf x_0)$ and it follows that $x\in \omega_f(x_0)$. \item For each positive integer $n$, $f^n(x)\neq x$. We show that $x\in \omega_f(x_0)$. Suppose the contrary, that $x\not\in \omega_f(x_0)$. Then $x$ is not a limit point of the sequence $\mathbf x_0$. Therefore, there is an open set $U$ in $X$ such that $x\in U$ and $U\cap \mathcal O_f^{\oplus}(\mathbf x_0)$ only contains one element. Choose such an open set $U$ in $X$ and let $U\cap \mathcal O_f^{\oplus}(\mathbf x_0)=\{z_0\}$. First, we show that $U\subseteq \mathcal O_f^{\oplus}(\mathbf x_0)$. Suppose that $U\not \subseteq \mathcal O_f^{\oplus}(\mathbf x_0)$. Then let $y\in U\setminus \mathcal O_f^{\oplus}(\mathbf x_0)$, let $r_1=d(y,z_0)$, let $r_2=d(y,X\setminus U)$, and let $$ r=\min\{r_1,r_2\}. $$ Then $r>0$ and $B(y,\frac{r}{2})=\{z\in X \ | \ d(z,y)<\frac{r}{2}\}\subseteq X\setminus \mathcal O_f^{\oplus}(\mathbf x_0)$.
Therefore, $\mathcal O_f^{\oplus}(\mathbf x_0)$ is not dense in $X$---a contradiction. Therefore, $U$ is a subset of $ \mathcal O_f^{\oplus}(\mathbf x_0)$. Since $U\cap \mathcal O_f^{\oplus}(\mathbf x_0)$ is finite, it follows that $U$ is a finite subset of $ \mathcal O_f^{\oplus}(\mathbf x_0)$. Let $V=\{x\}$ and let $\mathbf y\in T_{\Gamma(f)}^{+}(f(x))$ be the forward orbit of $f(x)$. Note that $V$ is an open set in $X$: the set $U$ is open and finite, so $V=U\setminus (U\setminus \{x\})$ is open. Moreover, $x\not \in \mathcal O_f^{\oplus}(\mathbf y)$ since for each positive integer $n$, $f^n(x)\neq x$. Therefore, $V\cap \mathcal O_f^{\oplus}(\mathbf y)=\emptyset$ and it follows that $\mathcal O_f^{\oplus}(\mathbf y)$ is not dense in $X$---a contradiction. Therefore, $x\in \omega_f(x_0)$. \end{enumerate} We have just proved that also $X\subseteq \omega_f(x_0)$. So, $ \omega_f(x_0)=X$ follows. Finally, we show that \ref{stiri} implies \ref{tritri}. Suppose that for each $x\in X$, $\omega_f(x)=X$, let $x_0\in X$ be any point and let $\mathbf x_0\in T_{\Gamma(f)}^{+}(x_0)$ be the forward orbit of $x_0$. Obviously, $\Cl (\mathcal{O}_f^{\oplus}(\mathbf x_0))\subseteq X$. To show that $X\subseteq \Cl (\mathcal{O}_f^{\oplus}(\mathbf x_0))$, let $x\in X$. Since $X=\omega_f(x_0)$, there is a subsequence $(f^{i_n}(x_0))$ of $\mathbf x_0$ such that $$ \lim_{n\to \infty}f^{i_n}(x_0)=x. $$ Therefore, $x\in \Cl (\mathcal{O}_f^{\oplus}(\mathbf x_0))$. Hence $\Cl (\mathcal{O}_f^{\oplus}(\mathbf x_0))=X$ for each $x_0\in X$ and, by Theorem \ref{enolicnicc}, $(X,f)$ is minimal. This completes the proof. \end{proof} Next, we generalize the notion of omega limit sets from dynamical systems to CR-dynamical systems. \begin{definition} Let $(X,G)$ be a CR-dynamical system and let $\mathbf x\in \star_{i=1}^{\infty}G$. The set $$ \omega_G(\mathbf x)=\{x\in X \ | \ \textup{ there is a subsequence of the sequence } \mathbf x \textup{ with limit } x\} $$ is called \emph{ \color{blue} the omega limit set of $\mathbf x$}. For each $x\in X$, we use \emph{ \color{blue} $\psi_G(x)$} to denote the set $$ \psi_G(x)=\bigcup_{\mathbf x\in T_G^{+}(x)}\omega_G(\mathbf x). $$ \end{definition} \begin{observation}\label{gorane2} Note that for each $\mathbf x\in \star_{i=1}^{\infty}G$, $\omega_G(\mathbf x)\subseteq \Cl(\mathcal O_G^{\oplus}(\mathbf x))$. \end{observation} \begin{definition} Let $(X,G)$ be a CR-dynamical system. We say that \begin{enumerate} \item $(X,G)$ is \emph{ \color{blue} $1^{\omega}$-minimal}, if for each $x\in X$, $T_G^{+}(x)\neq \emptyset$, and for each $\mathbf x\in \star_{i=1}^{\infty}G$, $$ \omega_G(\mathbf x)=X. $$ \item $(X,G)$ is \emph{ \color{blue} $2^{\omega}$-minimal}, if for each $x\in X$ there is $\mathbf x\in T_G^{+}(x)$ such that $$ \omega_G(\mathbf x)=X. $$ \item $(X,G)$ is \emph{ \color{blue} $3^{\omega}$-minimal}, if for each $x\in X$, $$ \psi_G(x)=X. $$ \end{enumerate} \end{definition} \begin{observation} Let $(X,G)$ be a CR-dynamical system. Then the following hold. \begin{enumerate} \item If $(X,G)$ is $1^{\omega}$-minimal, then $(X,G)$ is $2^{\omega}$-minimal. \item If $(X,G)$ is $2^{\omega}$-minimal, then $(X,G)$ is $3^{\omega}$-minimal. \end{enumerate} \end{observation} Next, we construct an example of a CR-dynamical system that is $2^{\omega}$-minimal but not $1^{\omega}$-minimal. \begin{example}\label{omegaPLUS} Let $X=[0,1]$ and let $G=([0,1]\times \{\frac{1}{2}\})\cup(\{\frac{1}{2}\}\times [0,1])$, see Figure \ref{figure1}. Then $(X,G)$ is $2^{\omega}$-minimal but it is not $1^{\omega}$-minimal. To show that $(X,G)$ is not $1^{\omega}$-minimal, let $\mathbf x=(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \ldots)\in T_G^{+}(\frac{1}{2})$.
Then $\omega_G(\mathbf x)=\{\frac{1}{2}\}\neq X$. Therefore, $(X,G)$ is not $1^{\omega}$-minimal. To show that $(X,G)$ is $2^{\omega}$-minimal, let $x\in X$ be any point. We show that there is $\mathbf x\in T_G^{+}(x)$ such that $\omega_G(\mathbf x)=X$. Let $[0,1]\cap \mathbb Q=\{q_1,q_2,q_3,\ldots\}$ be an enumeration of the rationals in $[0,1]$, let $x_1=x$, for each positive integer $n$ let $x_{2n}=\frac{1}{2}$ and $x_{2n+1}=q_n$, and let $\mathbf x=(x_1,x_2,x_3,\ldots)$. Then $\mathbf x\in T_G^{+}(x)$ and $\omega_G(\mathbf x)=X$. \end{example} \begin{observation} Let $(X,f)$ be a dynamical system, let $x\in X$ and let $\mathbf x\in T_{\Gamma(f)}^{+}(x)$ be the forward orbit of $x$. Then $$ \omega_f(x)=\omega_{\Gamma(f)}(\mathbf x)=\psi_{\Gamma(f)}(x) $$ and the following statements are equivalent. \begin{enumerate} \item $(X,f)$ is minimal. \item $(X,\Gamma(f))$ is $1^{\omega}$-minimal. \item $(X,\Gamma(f))$ is $2^{\omega}$-minimal. \item $(X,\Gamma(f))$ is $3^{\omega}$-minimal. \end{enumerate} \end{observation} \begin{theorem}\label{main666} Let $(X,G)$ be a CR-dynamical system. Then the following hold. \begin{enumerate} \item \label{1tre} $(X,G)$ is $1^{\omega}$-minimal if and only if $(X,G)$ is $1^{\oplus}$-minimal. \item \label{2tre} $(X,G)$ is $2^{\omega}$-minimal if and only if $(X,G)$ is $2^{\oplus}$-minimal. \item \label{3tre} If $(X,G)$ is $3^{\omega}$-minimal, then $(X,G)$ is $3^{\oplus}$-minimal. \end{enumerate} \end{theorem} \begin{proof} To prove \ref{1tre}, first suppose that $(X,G)$ is $1^{\oplus}$-minimal. Clearly, for each $x\in X$, $T_G^{+}(x)\neq \emptyset$. Let $\mathbf x\in \star_{i=1}^{\infty}G$. Obviously, $\omega_G(\mathbf x)\subseteq X$. To prove that $X\subseteq \omega_G(\mathbf x)$, let $x\in X$. To show that $x\in \omega_G(\mathbf x)$, we treat the following possible cases. \begin{enumerate} \item[(i)] $x\not \in \mathcal{O}_G^{\oplus}(\mathbf x)$. Since $\mathcal{O}_G^{\oplus}(\mathbf x)$ is dense in $X$, it follows that for any open set $U$ in $X$, $$ U\neq \emptyset \Longrightarrow U \cap \mathcal{O}_G^{\oplus}(\mathbf x)\neq \emptyset. $$ Therefore, for any open set $U$ in $X$, $$ x\in U \Longrightarrow U\cap \mathcal{O}_G^{\oplus}(\mathbf x)\neq \emptyset $$ and, since $x\not\in \mathcal{O}_G^{\oplus}(\mathbf x)$, it follows that for any open set $U$ in $X$, $$ x\in U \Longrightarrow (U\setminus \{x\})\cap \mathcal{O}_G^{\oplus}(\mathbf x)\neq \emptyset. $$ Therefore, $x$ is a limit point of the sequence $\mathbf x$ and $x\in \omega_G(\mathbf x)$ follows. \item[(ii)] $x \in \mathcal{O}_G^{\oplus}(\mathbf x)$. Suppose that $x\not \in \omega_G(\mathbf x)$. Then there is an open set $U$ in $X$ such that $U\cap \mathcal{O}_G^{\oplus}(\mathbf x)=\{x\}$ and $\pi_k(\mathbf x)=x$ only for finitely many positive integers $k$. Let $$ n_0=\max\{n \ | \ n \textup{ is a positive integer such that } \pi_n(\mathbf x)=x\} $$ and let $$ \mathbf y=(\pi_{n_0+1}(\mathbf x),\pi_{n_0+2}(\mathbf x),\pi_{n_0+3}(\mathbf x),\ldots). $$ Then $U\cap \mathcal{O}_G^{\oplus}(\mathbf y)=\emptyset$---a contradiction since $(X,G)$ is $1^{\oplus}$-minimal and, therefore, $\Cl(\mathcal{O}_G^{\oplus}(\mathbf y))=X$. It follows that $x\in \omega_G(\mathbf x)$. \end{enumerate} Next, suppose that for each $x\in X$, $T_G^{+}(x)\neq \emptyset$, and for each $\mathbf x\in \star_{i=1}^{\infty}G$, $$ \omega_G(\mathbf x)=X.
$$ Therefore, for each $x\in X$, $T_G^{+}(x)\neq \emptyset$, and by Observation \ref{gorane2}, for each $\mathbf x\in \star_{i=1}^{\infty}G$, $$ X=\omega_G(\mathbf x)\subseteq \Cl(\mathcal{O}_G^{\oplus}(\mathbf x))\subseteq X $$ and $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))= X$ follows. This completes the proof of \ref{1tre}. Next, we prove \ref{2tre}. Suppose that $(X,G)$ is $2^{\omega}$-minimal, i.e., that for each $x\in X$ there is $\mathbf x\in T_G^{+}(x)$ such that $$ \omega_G(\mathbf x)=X. $$ To show that $(X,G)$ is $2^{\oplus}$-minimal, let $x_0\in X$ be any point and let $\mathbf x_0\in T_G^{+}(x_0)$ be such that $\omega_G(\mathbf x_0)=X$. By Observation \ref{gorane2}, $$ X=\omega_G(\mathbf x_0)\subseteq \Cl(\mathcal{O}_G^{\oplus}(\mathbf x_0))\subseteq X. $$ Therefore, $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x_0))= X$. This completes the proof of one implication of \ref{2tre}. Next, suppose that $(X,G)$ is $2^{\oplus}$-minimal. To show that $(X,G)$ is $2^{\omega}$-minimal, let $x\in X$ be any point. We will construct $\mathbf x\in T_G^{+}(x)$ such that $\omega_G(\mathbf x)=X$. Let $$ \mathbf x_1=(x_1^1,x_2^1,x_3^1,\ldots )\in T_G^{+}(x) $$ be such that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x_1))=X$. For each positive integer $n$, let $\ell_n$ be a positive integer and let $y_1^{n},y_2^{n},y_3^{n},\ldots ,y_{\ell_n}^{n}\in X$ be such that $$ \mathcal U_n=\Big\{B(y_1^{n},\frac{1}{n}),B(y_2^{n},\frac{1}{n}),B(y_3^{n},\frac{1}{n}),\ldots ,B(y_{\ell_n}^{n},\frac{1}{n})\Big\} $$ is a finite open cover for $X$. We proceed in the following steps. {\bf Step 1.} Let $m_1$ be a positive integer such that for each $i\in \{1,2,3,\ldots ,\ell_1\}$, $$ \{x_1^{1},x_2^{1},x_3^{1},\ldots ,x_{m_1}^{1}\}\cap B(y_i^{1},1)\neq \emptyset. $$ {\bf Step 2.} Let $$ \mathbf x_2=(x_1^{2},x_2^{2},x_3^{2},\ldots )\in T_G^{+}(x_{m_1}^{1}) $$ be such that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x_2))=X$ and let $m_2$ be a positive integer such that for each $i\in \{1,2,3,\ldots ,\ell_{2}\}$, $$ \{x_1^{2},x_2^{2},x_3^{2},\ldots ,x_{m_2}^{2}\}\cap B(y_i^{2},\frac{1}{2})\neq \emptyset. $$ {\bf Step 3.} Let $$ \mathbf x_3=(x_1^{3},x_2^{3},x_3^{3},\ldots )\in T_G^{+}(x_{m_2}^{2}) $$ be such that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x_3))=X$ and let $m_3$ be a positive integer such that for each $i\in \{1,2,3,\ldots ,\ell_{3}\}$, $$ \{x_1^{3},x_2^{3},x_3^{3},\ldots ,x_{m_3}^{3}\}\cap B(y_i^{3},\frac{1}{3})\neq \emptyset. $$ We continue inductively. For each positive integer $j$, the step $j$ is as follows. {\bf Step $\mathbf j$.} Let $$ \mathbf x_j=(x_1^{j},x_2^{j},x_3^{j},\ldots )\in T_G^{+}(x_{m_{j-1}}^{j-1}) $$ be such that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x_j))=X$ and let $m_j$ be a positive integer such that for each $i\in \{1,2,3,\ldots ,\ell_{j}\}$, $$ \{x_1^{j},x_2^{j},x_3^{j},\ldots ,x_{m_j}^{j}\}\cap B(y_i^{j},\frac{1}{j})\neq \emptyset. $$ Finally, let $$ \mathbf x=(x_1^{1},x_2^{1},x_3^{1},\ldots ,x_{m_1}^{1}=x_1^{2},x_2^{2},x_3^{2},\ldots ,x_{m_2}^{2}=x_1^{3},x_2^{3},x_3^{3},\ldots ,x_{m_3}^{3},\ldots). $$ Then $\mathbf x\in T_G^{+}(x)$ and $\omega_G(\mathbf x)=X$, since for each positive integer $j$, every point of $X$ lies within $\frac{2}{j}$ of some term of the $j$-th block of $\mathbf x$. Finally, we prove \ref{3tre}. Suppose that for each $x\in X$, $ \psi_G(x)=X$. To prove that $(X,G)$ is $3^{\oplus}$-minimal, let $x_0\in X$ be any point; we show that $\Cl(\mathcal{U}_G^{\oplus}(x_0))=X$. Obviously, $\Cl(\mathcal{U}_G^{\oplus}(x_0))\subseteq X$. To show that $X\subseteq \Cl(\mathcal{U}_G^{\oplus}(x_0))$, let $x\in X$. Then $x\in \psi_G(x_0)$. Let $\mathbf x_0\in T_G^{+}(x_0)$ be such that $x\in \omega_G(\mathbf x_0)$. Since $\omega_G(\mathbf x_0)\subseteq \Cl(\mathcal{O}_G^{\oplus}(\mathbf x_0))$, it follows that $x\in \Cl(\mathcal{O}_G^{\oplus}(\mathbf x_0))$.
Since $\mathcal{O}_G^{\oplus}(\mathbf x_0)\subseteq \mathcal{U}_G^{\oplus}(x_0)$, it follows that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x_0))\subseteq \Cl(\mathcal{U}_G^{\oplus}(x_0))$ and, therefore, $$ x\in \Cl(\mathcal{U}_G^{\oplus}(x_0)). $$ \end{proof} We conclude this section by stating the following problems. { \begin{problem} Is there an example of a $3^{\omega}$-minimal CR-dynamical system which is not $2^{\omega}$-minimal? \end{problem}} { \begin{problem} Is there an example of a $3^{\oplus}$-minimal CR-dynamical system which is not $3^{\omega}$-minimal? \end{problem}} \section{Backward minimal dynamical systems with closed relations}\label{s4} In this section, we define backward dynamical systems with closed relations. \begin{definition} Let $(X,G)$ be a CR-dynamical system and let $A\subseteq X$. We say that the set $A$ is \begin{enumerate} \item \emph{ \color{blue} $1$-backward invariant in $(X,G)$}, if for each $y\in A$, $$ y\in p_2(G) \Longrightarrow \textup{ there is } x\in A \textup{ such that } (x,y)\in G. $$ \item \emph{ \color{blue} $\infty$-backward invariant in $(X,G)$}, if for each $(x,y)\in G$, $$ y\in A \Longrightarrow x\in A. $$ \end{enumerate} \end{definition} \begin{observation}\label{ema} Let $(X,G)$ be a CR-dynamical system and let $A\subseteq X$. Note that \begin{enumerate} \item $A$ is $1$-backward invariant in $(X,G)$ if and only if $A$ is $1$-invariant in $(X,G^{-1})$. \item $A$ is $\infty$-backward invariant in $(X,G)$ if and only if $A$ is $\infty$-invariant in $(X,G^{-1})$. \end{enumerate} \end{observation} \begin{proposition}\label{mimika11} Let $(X,G)$ be a CR-dynamical system and let $A\subseteq X$. If $A$ is $\infty$-backward invariant in $(X,G)$, then $A$ is $1$-backward invariant in $(X,G)$. \end{proposition} \begin{proof} The proposition follows from Proposition \ref{mimika} and Observation \ref{ema}. \end{proof} \begin{example} \label{ex1b} Let $X=[0,1]$ and let $G=([0,1]\times \{\frac{1}{2}\})\cup(\{\frac{1}{2}\}\times [0,1])$, see Figure \ref{figure1}. The set $A=\{\frac{1}{2}\}$ is $1$-backward invariant in $(X,G)$ but it is not $\infty$-backward invariant in $(X,G)$. \end{example} \begin{definition} Let $(X,G)$ be a CR-dynamical system. We say that \begin{enumerate} \item $(X,G)$ is \emph{ \color{blue} $1$-backward minimal} if for each closed subset $A$ of $X$, $$ A \textup{ is } {1}\textup{-backward invariant in } (X,G) \Longrightarrow A\in \{\emptyset, X\}. $$ \item $(X,G)$ is \emph{ \color{blue} $\infty$-backward minimal} if for each closed subset $A$ of $X$, $$ A \textup{ is } \infty\textup{-backward invariant in } (X,G) \Longrightarrow A\in \{\emptyset, X\}. $$ \end{enumerate} \end{definition} \begin{observation}\label{333bbb} Let $(X,G)$ be a CR-dynamical system and let $k\in \{1,\infty\}$. Then the following holds. $$ (X,G) \textup{ is } k\textup{-backward minimal } \Longleftrightarrow (X,G^{-1}) \textup{ is } k\textup{-minimal}. $$ \end{observation} \begin{theorem}\label{judyKkkk} Let $(X,G)$ be a CR-dynamical system. If $(X,G)$ is $1$-backward minimal, then $(X,G)$ is $\infty$-backward minimal. \end{theorem} \begin{proof} Let $(X,G)$ be $1$-backward minimal and let $A$ be a non-empty closed subset of $X$ such that $A$ is $\infty$-backward invariant in $(X,G)$. Then $A$ is $1$-backward invariant in $(X,G)$ by Proposition \ref{mimika11}. Therefore, $A=X$ and it follows that $(X,G)$ is $\infty$-backward minimal. \end{proof}
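Observation \ref{ema} also gives a practical recipe: backward notions for $G$ are forward notions for $G^{-1}$. The following Python sketch (an illustration only) applies this to the finite `cross' relation used in the earlier sketches; that relation is symmetric, so $G^{-1}=G$ and the backward behaviour mirrors Example \ref{ex1b}.

\begin{verbatim}
def inverse(G):
    # the inverse relation G^{-1}
    return {(y, x) for (x, y) in G}

def is_1_backward_invariant(G, A):
    # A is 1-backward invariant in (X, G) iff A is 1-invariant in
    # (X, G^{-1}): every y in A with a predecessor under G has a
    # predecessor inside A.
    for y in A:
        predecessors = [x for (x, v) in G if v == y]
        if predecessors and not any(x in A for x in predecessors):
            return False
    return True

def is_inf_backward_invariant(G, A):
    # every predecessor of a point of A lies in A
    return all(x in A for (x, y) in G if y in A)

X = range(5)
c = 2
G = {(x, c) for x in X} | {(c, y) for y in X}   # symmetric 'cross'
print(inverse(G) == G)                  # True
print(is_1_backward_invariant(G, {c}))  # True, as in the example above
print(is_inf_backward_invariant(G, {c}))  # False
\end{verbatim}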
Note that Example \ref{goranH} is an example of an $\infty$-backward minimal CR-dynamical system which is not $1$-backward minimal. In Theorem \ref{666}, we show (using backward orbits, which are defined in Section \ref{s5}) that for a CR-dynamical system $(X,G)$, the following holds: $$ (X,G) \text{ is }1\text{-backward minimal} \Longleftrightarrow (X,G) \text{ is }1\text{-minimal.} $$ The following example gives a CR-dynamical system which is $\infty$-minimal but is not $\infty$-backward minimal. \begin{example}\label{bluy} Let $X=[0,1]$ and let $G$ be the union of the following line segments: \begin{enumerate} \item the line segment with endpoints $(0,\frac{1}{2})$ and $(1,1)$, \item the line segment with endpoints $(1,0)$ and $(1,1)$, \end{enumerate} see Figure \ref{figure2}. We proved that $(X,G)$ is $\infty$-minimal in Example \ref{ex22}. To show that $(X,G)$ is not $\infty$-backward minimal, let $$ A=\{0,\frac{1}{2}, \frac{3}{4}, \frac{7}{8},\ldots \}\cup\{1\}. $$ Then $A$ is a non-empty closed subset of $X$. Let $(x,y)\in G$ be such that $y\in A$. If $y=0$, then $x=1$ and, therefore, $x\in A$. If $y=\frac{1}{2}$, then $x=0$ and, therefore, $x\in A$. If $y=1$, then $x=1$ and, therefore, $x\in A$. If $y=\frac{2^{n+1}-1}{2^{n+1}}$ for some positive integer $n$, then $x=\frac{2^{n}-1}{2^{n}}$ or $x=1$; therefore, $x\in A$. This proves that $A$ is $\infty$-backward invariant in $(X,G)$. Since $A\neq X$, it follows that $(X,G)$ is not $\infty$-backward minimal. \end{example} The following example gives a CR-dynamical system which is $\infty$-backward minimal but is not $\infty$-minimal. \begin{example} Let $X=[0,1]$ and let $H$ be the union of the following line segments: \begin{enumerate} \item the line segment with endpoints $(0,\frac{1}{2})$ and $(1,1)$, \item the line segment with endpoints $(1,0)$ and $(1,1)$, \end{enumerate} and let $G=H^{-1}$. Then, using a similar approach as in Example \ref{bluy}, one can easily prove that $(X,G)$ is $\infty$-backward minimal but is not $\infty$-minimal. \end{example} \section{Minimality and backward orbits}\label{s5} First, we return to dynamical systems and revisit a well-known result saying that a dynamical system $(X,f)$ is minimal if and only if $f$ is surjective and every backward orbit in $(X,f)$ is dense in $X$. \begin{definition} Let $(X,f)$ be a dynamical system and let $x_0\in X$. We use \emph{ \color{blue} $T_f^{-}(x_0)$} to denote the set $$ T_f^{-}(x_0)=\{\mathbf x\in \star_{i=1}^{\infty}\Gamma(f)^{-1} \ | \ \pi_1(\mathbf x)=x_0\}. $$ \end{definition} \begin{definition} Let $(X,f)$ be a dynamical system, let $x_0\in X$ be any point and let $$ \mathbf x=(x_1,x_2,x_3,\ldots) \in \star_{i=1}^{\infty}\Gamma(f)^{-1}. $$ The sequence $\mathbf x$ is called \emph{ \color{blue} a backward orbit of $x_0$}, if $\pi_1(\mathbf x)=x_0$. We use \emph{ \color{blue} $\mathcal{O}_f^{\ominus}(\mathbf x)$ } to denote the set $$ \mathcal{O}_f^{\ominus}(\mathbf x)=\{x_1,x_2,x_3,\ldots\}. $$ \end{definition} The following is a well-known result; see \cite{KST,M} for more details. We present its proof for the completeness of the paper. \begin{theorem}\label{minimalback} Let $(X,f)$ be a dynamical system. The following statements are equivalent. \begin{enumerate} \item \label{tritritri} $(X,f)$ is minimal. \item \label{pet} For each $x\in X$, $T_f^{-}(x)\neq \emptyset$ and for each $\mathbf x\in \star_{i=1}^{\infty}\Gamma(f)^{-1}$, $$ \Cl(\mathcal{O}_f^{\ominus}(\mathbf x))=X.
$$ \end{enumerate} \end{theorem} \begin{proof} Let $(X,f)$ be a minimal dynamical system. Then $f$ is surjective (indeed, $f(X)$ is a non-empty closed subset of $X$ such that $f(f(X))\subseteq f(X)$, so $f(X)=X$ by minimality) and, therefore, for each $x\in X$, $T_f^{-}(x)\neq \emptyset$. Let $\mathbf x\in \star_{i=1}^{\infty}\Gamma(f)^{-1}$. We show that $$ \Cl(\mathcal{O}_f^{\ominus}(\mathbf x))=X. $$ Let $A$ be the set of all limit points of the sequence $\mathbf x$. Then $A\neq \emptyset$ (by the compactness of $X$) and $$ A\subseteq \Cl(\mathcal{O}_f^{\ominus}(\mathbf x)). $$ To show that $A$ is closed in $X$, let $(s_n)$ be a sequence in $A$ and let $s\in X$ be such that $\displaystyle \lim_{n\to \infty}s_n=s$. To show that $s\in A$, let $U$ be an open set in $X$ such that $s\in U$. Let $n_0$ be a positive integer such that $s_{n_0}\in U$. Then infinitely many terms of the sequence $\mathbf x$ are in $U$, since $s_{n_0}$ is a limit point of the sequence. Therefore, $s\in A$ and this proves that $A$ is closed in $X$. Next, we show that $f(A)\subseteq A$. Let $s\in A$ and let $(x_{i_n})$ be a subsequence of the sequence $\mathbf x$ such that $\displaystyle \lim_{n\to \infty} x_{i_n}=s$. Then, since $f$ is continuous and $f(x_{i})=x_{i-1}$ for each $i>1$ (because $\mathbf x\in \star_{i=1}^{\infty}\Gamma(f)^{-1}$), $$ f(s)=\lim_{n\to\infty}f(x_{i_n})=\lim_{n\to\infty}x_{i_n-1}\in A. $$ It follows that $f(A)\subseteq A$. Since $(X,f)$ is minimal, it follows that $A=X$. Therefore, $\Cl(\mathcal{O}_f^{\ominus}(\mathbf x))=X$. Next, suppose that for each $x\in X$, $T_f^{-}(x)\neq \emptyset$ and for each $\mathbf x\in \star_{i=1}^{\infty}\Gamma(f)^{-1}$, $$ \Cl(\mathcal{O}_f^{\ominus}(\mathbf x))=X. $$ To show that $(X,f)$ is minimal, let $A$ be a non-empty closed subset of $X$ such that $f(A)=A$ (by Theorem \ref{enolicni}, it suffices to consider such sets $A$). Suppose that $A\neq X$. Since $f(A)=A$, every point of $A$ has a preimage in $A$; hence $\star_{i=1}^{\infty}\Gamma(f|_A)^{-1}\neq \emptyset$. Let $\mathbf x\in \star_{i=1}^{\infty}\Gamma(f|_A)^{-1}$. Then $$ \Cl(\mathcal{O}_f^{\ominus}(\mathbf x))\subseteq A\neq X, $$ which is a contradiction. Therefore, $(X,f)$ is minimal. \end{proof} Next, we generalize the notion of backward orbits in $(X,f)$ to the notion of backward orbits in $(X,G)$. \begin{definition} Let $(X,G)$ be a CR-dynamical system and let $x_0\in X$. We use \emph{ \color{blue} $T_G^{-}(x_0)$} to denote the set $$ T_G^{-}(x_0)=\{\mathbf x\in \star_{i=1}^{\infty}G^{-1} \ | \ \pi_1(\mathbf x)=x_0\}. $$ \end{definition} \begin{definition} Let $(X,G)$ be a CR-dynamical system, let $\mathbf x\in \star_{i=1}^{\infty}G^{-1}$, and let $x_0\in X$. We \begin{enumerate} \item say that $\mathbf x$ is \emph{ \color{blue} a backward orbit of $x_0$ in $(X,G)$}, if $\pi_1(\mathbf x)=x_0$. \item use \emph{ \color{blue} $\mathcal O_G^{\ominus}(\mathbf x)$} to denote the set $$ \mathcal O_G^{\ominus}(\mathbf x)=\{\pi_k(\mathbf x) \ | \ k \textup{ is a positive integer}\}. $$ \item use \emph{ \color{blue} $\mathcal U_G^{\ominus}(x_0)$} to denote the set $$ \mathcal U_G^{\ominus}(x_0)=\bigcup_{\mathbf x\in T_G^{-}(x_0)}\mathcal O_G^{\ominus}(\mathbf x). $$ \end{enumerate} \end{definition} \begin{definition} Let $(X,G)$ be a CR-dynamical system. We say that \begin{enumerate} \item $(X,G)$ is \emph{ \color{blue} $1^{\ominus}$-minimal} if for each $x\in X$, $T_G^{-}(x)\neq \emptyset$, and for each $\mathbf x\in \star_{i=1}^{\infty}G^{-1}$, $$ \Cl\Big(\mathcal{O}_G^{\ominus}(\mathbf x)\Big)=X. $$ \item $(X,G)$ is \emph{ \color{blue} $2^{\ominus}$-minimal }if for each $x\in X$ there is $\mathbf x\in T_G^{-}(x)$ such that $$ \Cl\Big(\mathcal{O}_G^{\ominus}(\mathbf x)\Big)=X. $$ \item $(X,G)$ is \emph{ \color{blue} $3^{\ominus}$-minimal} if for each $x\in X$, $$ \Cl\Big(\mathcal U_G^{\ominus}(x)\Big)=X.
$$ \end{enumerate} \end{definition} \begin{observation}\label{333} Let $(X,G)$ be a CR-dynamical system and let $k\in \{1,2,3\}$. Then the following holds. $$ (X,G) \textup{ is } k^{\ominus}\textup{-minimal } \Longleftrightarrow (X,G^{-1}) \textup{ is } k^{\oplus}\textup{-minimal}. $$ \end{observation} \begin{theorem}\label{main1b} Let $(X,G)$ be a CR-dynamical system. Then the following hold. \begin{enumerate} \item \label{1dvab} $(X,G)$ is $1$-backward minimal if and only if $(X,G)$ is $1^{\ominus}$-minimal. \item \label{2dvab} If $(X,G)$ is $1^{\ominus}$-minimal, then $(X,G)$ is $2^{\ominus}$-minimal. \item \label{3dvab} If $(X,G)$ is $2^{\ominus}$-minimal, then $(X,G)$ is $3^{\ominus}$-minimal. \item \label{4dvab} If $(X,G)$ is $3^{\ominus}$-minimal, then $(X,G)$ is $\infty$-backward minimal. \end{enumerate} \end{theorem} \begin{proof} The proof is analogous to the proof of Theorem \ref{main1}. We leave the details to the reader. \end{proof} Using Observations \ref{333} and \ref{333bbb}, one can easily conclude that the inverse of the CR-dynamical system from Example \ref{ex2} is a $2^{\ominus}$-minimal CR-dynamical system which is not $1^{\ominus}$-minimal, and that the inverse of the CR-dynamical system from Example \ref{ex22} is an $\infty$-backward minimal CR-dynamical system which is not $3^{\ominus}$-minimal. \begin{theorem}\label{surjektivnostb} Let $(X,G)$ be a CR-dynamical system. If $(X,G)$ is $1$-backward minimal, $\infty$-backward minimal or $k^{\ominus}$-minimal for some $k\in \{1,2,3\}$, then $$ p_1(G)=p_2(G)=X. $$ \end{theorem} \begin{proof} The theorem follows from Theorem \ref{surjektivnost} and Observations \ref{333bbb} and \ref{333}. \end{proof} \begin{theorem}\label{666} Let $(X,G)$ be a CR-dynamical system. Then the following statements hold. \begin{enumerate} \item \label{GIJR1} $(X,G)$ is $1^{\ominus}$-minimal if and only if $(X,G)$ is $1^{\oplus}$-minimal. \item \label{GIJR4} $(X,G)$ is $1$-backward minimal if and only if $(X,G)$ is $1$-minimal. \end{enumerate} \end{theorem} \begin{proof} First, we prove \ref{GIJR1}. Suppose that $(X,G)$ is $1^{\oplus}$-minimal. Then $p_2(G)=X$ and, therefore, for each $x\in X$, $T_G^{-}(x)\neq \emptyset$. Let $\mathbf x\in \star_{i=1}^{\infty}G^{-1}$. We show that $$ \Cl(\mathcal{O}_G^{\ominus}(\mathbf x))=X. $$ Let $A$ be the set of all limit points of the sequence $\mathbf x$. Then $A\neq \emptyset$, $A$ is closed in $X$, and $$ A\subseteq \Cl(\mathcal{O}_G^{\ominus}(\mathbf x)). $$ We show that $A$ is $1$-invariant in $(X,G)$. Let $x\in A\cap p_1(G)=A$ and let $(x_{i_n})$ be a subsequence of the sequence $\mathbf x$ such that $\displaystyle \lim_{n\to \infty} x_{i_n}=x$. Let $(s,t)$ be any limit point of the sequence $(x_{i_n},x_{i_n-1})$. Then $s=x$; let $y=t$. Since $G$ is closed in $X\times X$, $(x,y)\in G$ and, since $y$ is a limit point of the sequence $\mathbf x$, it follows that $y\in A$. We have just proved that $A$ is $1$-invariant in $(X,G)$. Since $(X,G)$ is $1^{\oplus}$-minimal, it is also $1$-minimal by Theorem \ref{main1}, and it follows that $A=X$. Therefore, $\Cl(\mathcal{O}_G^{\ominus}(\mathbf x))=X$. Next, suppose that $(X,G)$ is $1^{\ominus}$-minimal. To show that $(X,G)$ is $1^{\oplus}$-minimal, let $x\in X$ and let $\mathbf x\in \star_{i=1}^{\infty}G$. We show that $T_G^{+}(x)\neq \emptyset$ and that $\Cl\Big(\mathcal{O}_G^{\oplus}(\mathbf x)\Big)=X$. By Theorem \ref{surjektivnostb}, $p_1(G)=X$ and $T_G^{+}(x)\neq \emptyset$ follows. To show that $\Cl\Big(\mathcal{O}_G^{\oplus}(\mathbf x)\Big)=X$, let $A$ be the set of all limit points of the sequence $\mathbf x$.
Then $A\neq \emptyset$, $A$ is closed in $X$, and $$ A\subseteq \Cl(\mathcal{O}_G^{\oplus}(\mathbf x)). $$ We show that $A$ is $1$-backward invariant in $(X,G)$. Let $y\in A\cap p_2(G)=A$ and let $(x_{i_n})$ be a subsequence of the sequence $\mathbf x$ such that $\displaystyle \lim_{n\to \infty} x_{i_n}=y$. Let $(s,t)$ be any limit point of the sequence $(x_{i_n+1},x_{i_n})$. Then $t=y$; let $x=s$. Since $G$ is closed in $X\times X$, $(x,y)\in G$ and, since $x$ is a limit point of the sequence $\mathbf x$, it follows that $x\in A$. We have just proved that $A$ is $1$-backward invariant in $(X,G)$. Since $(X,G)$ is $1^{\ominus}$-minimal, it is also $1$-backward minimal by Theorem \ref{main1b}, and it follows that $A=X$. Therefore, $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))=X$. This completes the proof of \ref{GIJR1}. Note that this also proves \ref{GIJR4} since $(X,G)$ is $1$-minimal if and only if $(X,G)$ is $1^{\oplus}$-minimal by Theorem \ref{main1}, and since $(X,G)$ is $1$-backward minimal if and only if $(X,G)$ is $1^{\ominus}$-minimal by Theorem \ref{main1b}. \end{proof} \begin{observation} Note that in Theorem \ref{main1}, we have proved that $(X,G)$ is $1^{\oplus}$-minimal if and only if $(X,G)$ is $1$-minimal. It follows from Theorem \ref{666} that the following statements are equivalent. \begin{enumerate} \item $(X,G)$ is $1^{\oplus}$-minimal. \item $(X,G)$ is $1^{\ominus}$-minimal. \item $(X,G)$ is $1$-minimal. \item $(X,G)$ is $1$-backward minimal. \end{enumerate} \end{observation} Note that so far, we have not presented an example of a closed relation $G$ on $[0,1]$ such that $([0,1],G)$ is $1$-minimal. Also, note that all the closed relations $G$ on $[0,1]$ that are presented in our examples contain a vertical or a horizontal line. Example \ref{tistile} is an example of a closed relation $G$ on $[0,1]$ such that $([0,1],G)$ is $1$-minimal and $G$ does not contain a vertical or a horizontal line. We use Theorem \ref{tatale} in its construction. \begin{theorem}\label{tatale} Let $(X,G)$ be a CR-dynamical system such that $p_1(G)=p_2(G)=X$ and let $\sigma_G:\star_{i=1}^{\infty}G^{-1}\rightarrow \star_{i=1}^{\infty}G^{-1}$ be the shift map $$ \sigma_G(x_1,x_2,x_3,\ldots )=(x_2,x_3,\ldots) $$ for each $(x_1,x_2,x_3,\ldots)\in \star_{i=1}^{\infty}G^{-1}$. If $(\star_{i=1}^{\infty}G^{-1},\sigma_G)$ is minimal, then $(X,G)$ is $1$-minimal. \end{theorem} \begin{proof} We show that $(X,G)$ is $1$-backward minimal. Let $A$ be a non-empty closed subset of $X$ such that $A$ is $1$-backward invariant. Also, let $$ B=\Big(\prod_{i=1}^{\infty}A\Big)\cap \Big(\star_{i=1}^{\infty}G^{-1}\Big). $$ Since $p_2(G)=X$ and $A$ is $1$-backward invariant, $B$ is non-empty. Note that $B$ is also a closed subset of $\star_{i=1}^{\infty}G^{-1}$ such that $\sigma_G(B)\subseteq B$. Since $(\star_{i=1}^{\infty}G^{-1},\sigma_G)$ is minimal, it follows that $B=\star_{i=1}^{\infty}G^{-1}$. Therefore, $$ \star_{i=1}^{\infty}G^{-1}\subseteq \prod_{i=1}^{\infty}A. $$ Since $p_1(G)=p_2(G)=X$, it follows that $$ X=\pi_1(\star_{i=1}^{\infty}G^{-1})=\pi_1(B)\subseteq \pi_1(\prod_{i=1}^{\infty}A)=A. $$ Therefore, $(X,G)$ is $1$-backward minimal. By Theorem \ref{666}, $(X,G)$ is $1$-minimal. \end{proof} \begin{example}\label{tistile} Let $\lambda$ be an irrational number in $(0,1)$ and let $G$ be the union of the following line segments in $[0,1]\times [0,1]$: \begin{figure}[h!]
\centering \includegraphics[width=15em]{cross2.pdf} \caption{The relation $G$ from Example \ref{tistile}} \label{figgure} \end{figure} \begin{enumerate} \item the line segment from $(0,\lambda)$ to $(1-\lambda,1)$ and \item the line segment from $(1-\lambda,0)$ to $(1,\lambda)$, \end{enumerate} see Figure \ref{figgure}. Then $(\star_{i=1}^{\infty}G^{-1},\sigma_G)$ is minimal; this follows from the proof of \cite[Theorem 3.4, page 103]{KK}. By Theorem \ref{tatale}, $([0,1],G)$ is $1$-minimal. \end{example} In the following example, we demonstrate that there is a $2^{\oplus}$-minimal CR-dynamical system $(X,G)$ which is not $2^{\ominus}$-minimal. \begin{example}\label{Rene2} Let $X=[0,1]$ and let $G=A\cup B\cup C$, where $A$ is the line segment from $(0,\frac{1}{2})$ to $(1,\frac{1}{2})$, $B$ is the line segment from $(0,0)$ to $(1,1)$, and $C$ is defined as follows. Let $d_1=\frac{1}{2}$, let $d_{10}=\frac{1}{2^{2}}$ and $d_{11}=\frac{3}{2^{2}}$, and let $d_{100}=\frac{1}{2^{3}}$, $d_{101}=\frac{3}{2^{3}}$, $d_{110}=\frac{5}{2^{3}}$ and $d_{111}=\frac{7}{2^{3}}$. Let $n$ be a positive integer and suppose that for any $$ \mathbf s \in \{s_1s_2s_3\ldots s_n \ | \ s_1=1, s_2,s_3,s_4,\ldots ,s_n\in \{0,1\}\}, $$ we have already defined $d_{\mathbf s}$ to be $d_{\mathbf s}=\frac{k}{2^n}$ for some $k\in \{1,3,5,7,\ldots ,2^n-1\}$. Then we define $d_{\mathbf s0}$ and $d_{\mathbf s1}$ as follows. If $k=1$, then $d_{\mathbf s0}=\frac{1}{2^{n+1}}$ and $d_{\mathbf s1}=\frac{3}{2^{n+1}}$; if $k=3$, then $d_{\mathbf s0}=\frac{5}{2^{n+1}}$ and $d_{\mathbf s1}=\frac{7}{2^{n+1}}$; $\ldots$; and if $k=2^n-1$, then $d_{\mathbf s0}=\frac{2^{n+1}-3}{2^{n+1}}$ and $d_{\mathbf s1}=\frac{2^{n+1}-1}{2^{n+1}}$. For each positive integer $n$, let $\mathcal S_n=\{s_1s_2s_3\ldots s_n \ | \ s_1=1, s_2,s_3,s_4,\ldots ,s_n\in \{0,1\}\}$ and let $\mathcal S=\bigcup_{n=1}^{\infty}\mathcal S_n$. Then we define the set $C$ as $$ C=\bigcup_{\mathbf s\in \mathcal S} \Big(\{d_{\mathbf s}\}\times\{d_{\mathbf s0},d_{\mathbf s1}\}\Big), $$ see Figure \ref{Rene}, \begin{figure}[h!] \centering \includegraphics[width=15em]{Rene1.pdf} \caption{The construction of the set $C$ } \label{Rene} \end{figure} where the construction of the set $C$ is presented -- in particular, together with the sets $A$ and $B$, the set $\displaystyle \bigcup_{\mathbf s\in \mathcal S_1\cup \mathcal S_2\cup \mathcal S_3} \Big(\{d_{\mathbf s}\}\times\{d_{\mathbf s0},d_{\mathbf s1}\}\Big)$ is also pictured in the figure. Then $(X,G)$ is $2^{\oplus}$-minimal (since for any $x\in [0,1]$, $(x,\frac{1}{2})\in G$ and, therefore, there is $\mathbf x\in T_G^{+}(x)$ such that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))=X$) but it is not $2^{\ominus}$-minimal (note that $T_G^{-}(1)=\{(1,1,1,1,\ldots)\}$ and, therefore, for any $\mathbf x\in T_G^{-}(1)$, $\Cl(\mathcal O_G^{\ominus}(\mathbf x))\neq X$). \end{example} Note that \begin{enumerate} \item $(X,G)$ from Example \ref{Rene2} is also an example of a $3^{\oplus}$-minimal CR-dynamical system which is not $3^{\ominus}$-minimal, and \item if $(X,G)$ is the CR-dynamical system from Example \ref{Rene2}, then $(X,G^{-1})$ is an example of a $2^{\ominus}$-minimal ($3^{\ominus}$-minimal) CR-dynamical system which is not $2^{\oplus}$-minimal ($3^{\oplus}$-minimal). \end{enumerate} We conclude the section by stating the following open problem. \begin{problem} Is there an example of a $3^{\ominus}$-minimal CR-dynamical system which is not $2^{\ominus}$-minimal?
\end{problem} \section{Minimality and alpha limit sets}\label{s6} In this section, we define an alpha limit set and (using such a set) introduce new types of minimality of CR-dynamical systems, all of them generalizing minimal dynamical systems. \begin{definition} Let $(X,f)$ be a dynamical system and let $\mathbf x\in \star_{i=1}^{\infty}\Gamma(f)^{-1}$. The set $$ \alpha_f(\mathbf x)=\{x\in X \ | \ \textup{ there is a subsequence of the sequence } \mathbf x \textup{ with limit } x\} $$ is called \emph{ \color{blue} the alpha limit set of $\mathbf x$}. \end{definition} The following is a well-known result. \begin{theorem} Let $(X,f)$ be a dynamical system. The following statements are equivalent. \begin{enumerate} \item \label{tritri3} $(X,f)$ is minimal. \item \label{stiri4} For each $x\in X$, $T_{f}^{-}(x)\neq \emptyset$, and for each $\mathbf x\in \star_{i=1}^{\infty}\Gamma(f)^{-1}$, $$ \alpha_f(\mathbf x)=X. $$ \end{enumerate} \end{theorem} \begin{proof} The proof is analogous to the proof of Theorem \ref{omomg}. We leave the details to the reader. \end{proof} \begin{definition} Let $(X,G)$ be a CR-dynamical system, let $x_0\in X$ and let $\mathbf x\in T_G^{-}(x_0)$. The set $$ \alpha_G(\mathbf x)=\{x\in X \ | \ \textup{ there is a subsequence of the sequence } \mathbf x \textup{ with limit } x\} $$ is called \emph{ \color{blue} the alpha limit set of $\mathbf x$} and we use \emph{ \color{blue} $\beta_G(x_0)$} to denote the set $$ \beta_G(x_0)=\bigcup_{\mathbf x\in T_G^{-}(x_0)}\alpha_G(\mathbf x). $$ \end{definition} \begin{definition} Let $(X,G)$ be a CR-dynamical system. We say that \begin{enumerate} \item $(X,G)$ is \emph{ \color{blue} $1^{\alpha}$-minimal}, if for each $x\in X$, $T_G^{-}(x)\neq \emptyset$, and for each $\mathbf x\in \star_{i=1}^{\infty}G^{-1}$, $$ \alpha_G(\mathbf x)=X. $$ \item $(X,G)$ is \emph{ \color{blue} $2^{\alpha}$-minimal}, if for each $x\in X$ there is $\mathbf x\in T_G^{-}(x)$ such that $$ \alpha_G(\mathbf x)=X. $$ \item $(X,G)$ is \emph{ \color{blue} $3^{\alpha}$-minimal}, if for each $x\in X$, $$ \beta_G(x)=X. $$ \end{enumerate} \end{definition} \begin{observation} Let $(X,G)$ be a CR-dynamical system. Then the following hold. \begin{enumerate} \item If $(X,G)$ is $1^{\alpha}$-minimal, then $(X,G)$ is $2^{\alpha}$-minimal. \item If $(X,G)$ is $2^{\alpha}$-minimal, then $(X,G)$ is $3^{\alpha}$-minimal. \end{enumerate} \end{observation} Note that Example \ref{omegaPLUS} is also an example of a CR-dynamical system which is $2^{\alpha}$-minimal but is not $1^{\alpha}$-minimal. \begin{observation}\label{0666} Let $(X,G)$ be a CR-dynamical system. Then the following hold. \begin{enumerate} \item $(X,G)$ is $1^{\alpha}$-minimal if and only if $(X,G^{-1})$ is $1^{\omega}$-minimal. \item $(X,G)$ is $2^{\alpha}$-minimal if and only if $(X,G^{-1})$ is $2^{\omega}$-minimal. \item $(X,G)$ is $3^{\alpha}$-minimal if and only if $(X,G^{-1})$ is $3^{\omega}$-minimal. \end{enumerate} \end{observation} \begin{theorem}\label{main2bb} Let $(X,G)$ be a CR-dynamical system. Then the following hold. \begin{enumerate} \item \label{1trebb} $(X,G)$ is $1^{\alpha}$-minimal if and only if $(X,G)$ is $1^{\ominus}$-minimal. \item \label{2trebb} $(X,G)$ is $2^{\alpha}$-minimal if and only if $(X,G)$ is $2^{\ominus}$-minimal. \item \label{3trebb} If $(X,G)$ is $3^{\alpha}$-minimal, then $(X,G)$ is $3^{\ominus}$-minimal. \end{enumerate} \end{theorem} \begin{proof} First, we prove \ref{1trebb}.
By Observation \ref{0666}, $(X,G)$ is $1^{\alpha}$-minimal if and only if $(X,G^{-1})$ is $1^{\omega}$-minimal, and it follows from Theorem \ref{main666} that $(X,G^{-1})$ is $1^{\omega}$-minimal if and only if $(X,G^{-1})$ is $1^{\oplus}$-minimal. By Observation \ref{333}, $(X,G^{-1})$ is $1^{\oplus}$-minimal if and only if $(X,G)$ is $1^{\ominus}$-minimal. Next, we prove \ref{2trebb}. By Observation \ref{0666}, $(X,G)$ is $2^{\alpha}$-minimal if and only if $(X,G^{-1})$ is $2^{\omega}$-minimal, and it follows from Theorem \ref{main666} that $(X,G^{-1})$ is $2^{\omega}$-minimal if and only if $(X,G^{-1})$ is $2^{\oplus}$-minimal. By Observation \ref{333}, $(X,G^{-1})$ is $2^{\oplus}$-minimal if and only if $(X,G)$ is $2^{\ominus}$-minimal. Finally, to prove \ref{3trebb}, let $(X,G)$ be a $3^{\alpha}$-minimal CR-dynamical system. By Observation \ref{0666}, $(X,G^{-1})$ is $3^{\omega}$-minimal, and it then follows from Theorem \ref{main666} that $(X,G^{-1})$ is $3^{\oplus}$-minimal. By Observation \ref{333}, $(X,G^{-1})$ is $3^{\oplus}$-minimal if and only if $(X,G)$ is $3^{\ominus}$-minimal. \end{proof} \section{Preserving different types of minimality by topological conjugation}\label{s8} The main result of this section is obtained in Theorem \ref{CMain}, where it is proved that every kind of minimality of a CR-dynamical system is preserved under topological conjugation. \begin{definition} Let $X$ and $Y$ be metric spaces, and let $f:X\rightarrow X$ and $g:Y\rightarrow Y$ be functions. If there is a homeomorphism $\varphi:X\rightarrow Y$ such that $$ \varphi \circ f=g\circ \varphi, $$ then we say that \emph{ \color{blue} $f$ and $g$ are topological conjugates}. \end{definition} The following is a well-known result. \begin{theorem} Let $(X,f)$ and $(Y,g)$ be dynamical systems. If $f$ and $g$ are topological conjugates, then $$ (X,f) \textup{ is minimal } \Longleftrightarrow (Y,g) \textup{ is minimal }. $$ \end{theorem} \begin{proof} Let $\varphi:X\rightarrow Y$ be a homeomorphism such that $$ \varphi \circ f=g\circ \varphi. $$ Suppose that $(X,f)$ is minimal and let $A$ be a non-empty closed subset of $Y$ such that $g(A)\subseteq A$. Then $\varphi^{-1}(A)$ is a non-empty closed subset of $X$ such that $$ f(\varphi^{-1}(A))=\varphi^{-1}(g(A))\subseteq \varphi^{-1}(A). $$ Therefore, $\varphi^{-1}(A)=X$ and $A=Y$ follows. This proves that $(Y,g)$ is minimal. The proof of the other implication is analogous. \end{proof} The following definition generalizes the notion of topological conjugacy of continuous functions to the topological conjugacy of closed relations. See \cite{BEK} for details. \begin{definition} Let $(X,G)$ and $(Y,H)$ be CR-dynamical systems. We say that \emph{ \color{blue} $G$ and $H$ are topological conjugates} if there is a homeomorphism $\varphi:X\rightarrow Y$ such that for each $(x,y)\in X\times X$, the following holds: $$ (x,y)\in G \Longleftrightarrow (\varphi(x), \varphi(y))\in H. $$ \end{definition} In the rest of the paper, we use $p_1,p_2:X\times X\rightarrow X$ to denote the projections $p_1(x,y)=x$ and $p_2(x,y)=y$ for all $(x,y)\in X\times X$, and $q_1,q_2:Y\times Y\rightarrow Y$ to denote the projections $q_1(x,y)=x$ and $q_2(x,y)=y$ for all $(x,y)\in Y\times Y$. Theorem \ref{CMain} is the main result of this section. We use the following lemmas in its proof.
\begin{lemma}\label{CLemma1} Let $(X,G)$ and $(Y,H)$ be CR-dynamical systems, suppose that $G$ and $H$ are topological conjugates, and let $\varphi:X\rightarrow Y$ be a homeomorphism such that for each $(x,y)\in X\times X$, $$ (x,y)\in G \Longleftrightarrow (\varphi(x), \varphi(y))\in H. $$ Then the following hold. \begin{enumerate} \item $p_1(G)=X$ if and only if $q_1(H)=Y$. \item $p_2(G)=X$ if and only if $q_2(H)=Y$. \end{enumerate} \end{lemma} \begin{proof} Suppose that $p_1(G)=X$. To show that $q_1(H)=Y$, let $x\in Y$. Since $p_1(G)=X$, there is $z\in X$ such that $(\varphi^{-1}(x),z)\in G$; let $y=\varphi(z)$. Then $(x,y)\in H$ and $x\in q_1(H)$ follows. We have proved the implication from $p_1(G)=X$ to $q_1(H)=Y$. The proofs of the other three implications are analogous to the proof of this implication. We leave them to the reader. \end{proof} \begin{lemma}\label{CLemma2} Let $(X,G)$ and $(Y,H)$ be CR-dynamical systems, suppose that $G$ and $H$ are topological conjugates, let $\varphi:X\rightarrow Y$ be a homeomorphism such that for each $(x,y)\in X\times X$, $$ (x,y)\in G \Longleftrightarrow (\varphi(x), \varphi(y))\in H, $$ and let $A\subseteq X$. Then the following hold. \begin{enumerate} \item \label{last1} $A$ is $1$-invariant in $(X,G)$ if and only if $\varphi(A)$ is $1$-invariant in $(Y,H)$. \item\label{last2} $A$ is $\infty$-invariant in $(X,G)$ if and only if $\varphi(A)$ is $\infty$-invariant in $(Y,H)$. \item \label{last3} $A$ is $1$-backward invariant in $(X,G)$ if and only if $\varphi(A)$ is $1$-backward invariant in $(Y,H)$. \item \label{last4} $A$ is $\infty$-backward invariant in $(X,G)$ if and only if $\varphi(A)$ is $\infty$-backward invariant in $(Y,H)$. \end{enumerate} \end{lemma} \begin{proof} Suppose that $A$ is $1$-invariant in $(X,G)$. Obviously, since $\varphi:X\rightarrow Y$ is a homeomorphism and since $A$ is closed in $X$, the set $\varphi(A)$ is closed in $Y$. Let $x\in \varphi(A)\cap q_1(H)$ and let $z\in Y$ be such that $(x,z)\in H$. Then $(\varphi^{-1}(x), \varphi^{-1}(z))\in G$ and, therefore, $\varphi^{-1}(x)\in A\cap p_1(G)$. Since $A$ is $1$-invariant in $(X,G)$, there is $w\in A$ such that $(\varphi^{-1}(x),w)\in G$. Fix such an element $w$ and let $y=\varphi(w)$. Then $y\in \varphi(A)$ and $(x,y)\in H$. Therefore, $\varphi(A)$ is $1$-invariant in $(Y,H)$. Next, suppose that $\varphi(A)$ is $1$-invariant in $(Y,H)$. Obviously, since $\varphi:X\rightarrow Y$ is a homeomorphism and since $\varphi(A)$ is closed in $Y$, the set $A$ is closed in $X$. Let $x\in A\cap p_1(G)$ and let $z\in X$ be such that $(x,z)\in G$. Then $(\varphi(x), \varphi(z))\in H$ and, therefore, $\varphi(x)\in \varphi(A)\cap q_1(H)$. Since $ \varphi(A)$ is $1$-invariant in $(Y,H)$, there is $w\in \varphi(A)$ such that $(\varphi(x),w)\in H$. Fix such an element $w$ and let $y=\varphi^{-1}(w)$. Then $y\in A$ and $(x,y)\in G$. Therefore, $A$ is $1$-invariant in $(X,G)$. This proves \ref{last1}. The proofs of \ref{last2}, \ref{last3} and \ref{last4} are analogous to the proof of \ref{last1}. We leave the details to the reader. \end{proof} \begin{lemma}\label{CLemma3} Let $(X,G)$ and $(Y,H)$ be CR-dynamical systems, suppose that $G$ and $H$ are topological conjugates, and let $\varphi:X\rightarrow Y$ be a homeomorphism such that for each $(x,y)\in X\times X$, $$ (x,y)\in G \Longleftrightarrow (\varphi(x), \varphi(y))\in H. $$ Then the following hold.
\begin{enumerate} \item \label{null} For each $x\in X$, $T^{+}_G(x)\neq \emptyset $ if and only if $T^{+}_H(\varphi(x))\neq \emptyset$. \item \label{eins} For each $\mathbf x\in \star_{i=1}^{\infty}G$, $ \Cl(\mathcal{O}_G^{\oplus}(\mathbf x))=X $ if and only if $\Cl(\mathcal O_H^{\oplus}(\mathbf y))=Y $ where $$ \mathbf y=(\varphi(\pi_1(\mathbf x)),\varphi(\pi_2(\mathbf x)),\varphi(\pi_3(\mathbf x)),\ldots). $$ \item \label{zwei} For each $x\in X$, $ \Cl(\mathcal{U}_G^{\oplus}(x))=X $ if and only if $\Cl(\mathcal U_H^{\oplus}(\varphi(x)))=Y $. \item \label{drei} For each $\mathbf x\in \star_{i=1}^{\infty}G$, $ \omega_G(\mathbf x)=X $ if and only if $\omega_H(\mathbf y)=Y $ where $$ \mathbf y=(\varphi(\pi_1(\mathbf x)),\varphi(\pi_2(\mathbf x)),\varphi(\pi_3(\mathbf x)),\ldots). $$ \item \label{vier} For each $x\in X$, $ \psi_G(x)=X $ if and only if $\psi_H(\varphi(x))=Y $. \item \label{null1} For each $x\in X$, $ T^{-}_G(x)\neq \emptyset $ if and only if $T^{-}_H(\varphi(x))\neq \emptyset $. \item \label{eins1} For each $\mathbf x\in \star_{i=1}^{\infty}G^{-1}$, $ \Cl(\mathcal{O}_G^{\ominus}(\mathbf x))=X $ if and only if $\Cl(\mathcal O_H^{\ominus}(\mathbf y))=Y $ where $$ \mathbf y=(\varphi(\pi_1(\mathbf x)),\varphi(\pi_2(\mathbf x)),\varphi(\pi_3(\mathbf x)),\ldots). $$ \item \label{zwei1} For each $x\in X$, $ \Cl(\mathcal U^{\ominus}_G(x))=X $ if and only if $\Cl(\mathcal U_H^{\ominus}(\varphi(x)))=Y $. \item \label{drei1} For each $\mathbf x\in \star_{i=1}^{\infty}G^{-1}$, $ \alpha_G(\mathbf x)=X $ if and only if $\alpha_H(\mathbf y)=Y $ where $$ \mathbf y=(\varphi(\pi_1(\mathbf x)),\varphi(\pi_2(\mathbf x)),\varphi(\pi_3(\mathbf x)),\ldots). $$ \item \label{vier1} For each $x\in X$, $\beta_G(x)=X $ if and only if $\beta_H(\varphi(x))=Y$. \end{enumerate} \end{lemma} \begin{proof} First, we prove \ref{null}. Let $x\in X$ be such that $T^{+}_G(x)\neq \emptyset$, let $\mathbf x\in T^{+}_G(x)$, and let $\mathbf y=(\varphi(\pi_1(\mathbf x)),\varphi(\pi_2(\mathbf x)),\varphi(\pi_3(\mathbf x)),\ldots)$. Then $\mathbf y\in T^{+}_H(\varphi(x))$ and, therefore, $T^{+}_H(\varphi(x))\neq \emptyset$. To finish the proof of \ref{null}, let $x\in X$ be such that $T^{+}_H(\varphi(x))\neq \emptyset$, let $\mathbf y\in T^{+}_H(\varphi(x))$, and let $\mathbf x=(\varphi^{-1}(\pi_1(\mathbf y)),\varphi^{-1}(\pi_2(\mathbf y)),\varphi^{-1}(\pi_3(\mathbf y)),\ldots)$. Then $\mathbf x\in T^{+}_G(x)$ and, therefore, $T^{+}_G(x)\neq \emptyset$. This completes the proof of \ref{null}. Next, we prove \ref{eins}. Let $\mathbf x\in \star_{i=1}^{\infty}G$ and let $\mathbf y=(\varphi(\pi_1(\mathbf x)),\varphi(\pi_2(\mathbf x)),\varphi(\pi_3(\mathbf x)),\ldots)$. First, suppose that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))=X$. To show that $\Cl(\mathcal{O}_H^{\oplus}(\mathbf y))=Y$, let $y\in Y$. Then $\varphi^{-1}(y)\in \Cl(\mathcal{O}_G^{\oplus}(\mathbf x))$. Let $(x_n)$ be a sequence in $\mathcal{O}_G^{\oplus}(\mathbf x)$ such that $\displaystyle \lim_{n\to\infty}x_n=\varphi^{-1}(y)$. Then $(\varphi(x_n))$ is a sequence in $\mathcal{O}_H^{\oplus}(\mathbf y)$ such that $\displaystyle \lim_{n\to\infty}\varphi(x_n)=y$. Therefore, $y\in \Cl(\mathcal{O}_H^{\oplus}(\mathbf y))$. This proves the first implication of \ref{eins}. To prove the other implication of \ref{eins}, suppose that $\Cl(\mathcal{O}_H^{\oplus}(\mathbf y))=Y$. To show that $\Cl(\mathcal{O}_G^{\oplus}(\mathbf x))=X$, let $x\in X$. Then $\varphi(x)\in \Cl(\mathcal{O}_H^{\oplus}(\mathbf y))$. Let $(y_n)$ be a sequence in $\mathcal{O}_H^{\oplus}(\mathbf y)$ such that $\displaystyle \lim_{n\to\infty}y_n=\varphi(x)$.
Then $(\varphi^{-1}(y_n))$ is a sequence in $\mathcal{O}_G^{\oplus}(\mathbf x)$ such that $\displaystyle \lim_{n\to\infty}\varphi^{-1}(y_n)=x$. Therefore, $x\in \Cl(\mathcal{O}_G^{\oplus}(\mathbf x))$. This completes the proof of \ref{eins}. The proofs of \ref{zwei}, \ref{drei}, \ref{vier}, \ref{null1}, \ref{eins1}, \ref{zwei1}, \ref{drei1}, and \ref{vier1} are straightforward and analogous to the proofs of \ref{null} and \ref{eins}. We leave them to the reader. \end{proof} \begin{theorem}\label{CMain} Let $(X,G)$ and $(Y,H)$ be CR-dynamical systems and suppose that $G$ and $H$ are topological conjugates. Then the following hold. \begin{enumerate} \item Let $k\in \{1,\infty,1^{\oplus},2^{\oplus},3^{\oplus},1^{\ominus},2^{\ominus},3^{\ominus},1^{\omega},2^{\omega},3^{\omega},1^{\alpha},2^{\alpha},3^{\alpha}\}$. Then $$ (X,G) \textup{ is } k\textup{-minimal} \Longleftrightarrow (Y,H) \textup{ is } k\textup{-minimal}. $$ \item Let $k\in \{1,\infty\}$. Then $$ (X,G) \textup{ is } k\textup{-backward minimal} \Longleftrightarrow (Y,H) \textup{ is } k\textup{-backward minimal}. $$ \end{enumerate} \end{theorem} \begin{proof} Let $\varphi:X\rightarrow Y$ be a homeomorphism such that for each $(x,y)\in X\times X$, $$ (x,y)\in G \Longleftrightarrow (\varphi(x), \varphi(y))\in H. $$ We need to prove 16 statements. Their proofs are straightforward and all of them follow from Lemma \ref{CLemma2} or Lemma \ref{CLemma3}. We give just one of the proofs in detail and leave the rest to the reader. We prove that $$ (X,G) \textup{ is } 1\textup{-minimal} \Longleftrightarrow (Y,H) \textup{ is } 1\textup{-minimal}. $$ Suppose that $(X,G)$ is $1$-minimal. To prove that $(Y,H)$ is $1$-minimal, let $A$ be a non-empty closed subset of $Y$ which is $1$-invariant in $(Y,H)$. By Lemma \ref{CLemma2}, $\varphi^{-1}(A)$ is a non-empty closed subset of $X$ which is $1$-invariant in $(X,G)$. Since $(X,G)$ is $1$-minimal, it follows that $\varphi^{-1}(A)=X$. Therefore, $A=Y$. It follows that $(Y,H)$ is $1$-minimal. Next, suppose that $(Y,H)$ is $1$-minimal. To prove that $(X,G)$ is $1$-minimal, let $A$ be a non-empty closed subset of $X$ which is $1$-invariant in $(X,G)$. By Lemma \ref{CLemma2}, $\varphi(A)$ is a non-empty closed subset of $Y$ which is $1$-invariant in $(Y,H)$. Since $(Y,H)$ is $1$-minimal, it follows that $\varphi(A)=Y$. Therefore, $A=X$. It follows that $(X,G)$ is $1$-minimal. \end{proof}
\section{Introduction}\label{sec:intro} In high-intensity hadron storage rings, intrabeam scattering (IBS) and beam-beam effects will degrade the beam emittance over the length of the store, limiting machine luminosity. In particular, at the planned Electron-Ion Collider (EIC), the IBS times are expected to be on the timescale of a couple of hours, and so some form of strong hadron cooling is necessary to achieve the physics goals \cite{cite:eic_cdr}. The proposed method in this case is microbunched electron cooling (MBEC), a particular form of coherent electron cooling (CeC). This was first introduced in \cite{cite:ratner}, and the theory has since been developed extensively in \cite{cite:stupakov_initial, cite:stupakov_amplifier, cite:stupakov_transverse, cite:stupakov_fourth}. The premise of MBEC is that the hadron beam to be cooled will copropagate with an electron beam in a straight ``modulator'' section, during which time the hadrons will provide energy kicks to the electrons. The two beams are then separated, and the electrons pass through a series of amplifiers to change this initial energy modulation into a density modulation and amplify it. The hadrons travel through their own chicane before meeting the electrons again in a straight ``kicker'' section. Here, the amplified density modulations in the electron beam provide energy kicks to the hadrons. By tuning the hadron chicane so that the hadron delay in travelling from the modulator to the kicker is energy-dependent, we may arrange it so that the energy kick that the hadron receives in the kicker tends to correct initial energy offsets. If the chicane also gives the hadrons a delay dependent on their transverse phase-space coordinates and if there is non-zero dispersion in the kicker, then the transverse emittance of the hadron beam can also be cooled. In the current design of an MBEC Cooler for the EIC, the typical scale of the electron density modulations at the top energy will be $\sim1\mu$m \cite{cite:eic_mbec_design}. This corresponds to about 4 orders of magnitude higher bandwidth than can be achieved with microwave stochastic cooling \cite{cite:microwave_stochastic_cooling, cite:stochastic_cooling_rhic}, allowing the cooling of dense hadron bunches, but also making alignment a challenge. It is important that the hadron arrives in the kicker at the same time as the density perturbations which it had induced in the electron beam, or else it will receive entirely uncorrelated energy kicks \cite{cite:Seletskiy_2021zrn}. Comparing the $\sim100$m distance between modulator and kicker to the $\sim1\mu$m density perturbations in the electron beam, we see that the transit times of the electrons and hadrons must be maintained at a level of ten parts per billion. In order to commission and operate such a system, it is necessary to have some way to quickly measure the relative alignment of the electron and hadron beams at the sub-micron scale. Directly observing cooling would require waits on the timescale of hours, which would make commissioning painful and prevent any sort of fast feedback during operations. The method proposed here is to make use of ``signal modification,'' an extension of the well-known signal suppression of microwave stochastic cooling \cite{cite:sigsup_original, cite:sigsup_bisognano, cite:sigsup_ruggiero, cite:sigsup_sc} to the case of MBEC cooling.
The principle of this method is that after the hadron beam has received its cooling kicks, it will propagate to a ``detector'' where the power of the hadron beam at particular wavelengths may be measured. If the hadron beam is well-aligned with the electron beam, this will produce a predictable change in the spectral content of the hadron beam on a single-pass basis. As our model, we assume that the cooling section consists of a modulator, where the hadron beam imprints on the electrons; a kicker, where the electrons provide an energy kick back to the hadrons; and a detector, where we will observe the density spectrum in the hadron beam. See Fig. \ref{fig:layout}. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{sigsup_layout.png} \end{center} \caption{\label{fig:layout} A schematic of the MBEC cooling section, including the usual modulator and kicker necessary for cooling, as well as the diagnostic detector where signal modification may be observed.} \end{figure} In Section \ref{sec:thry}, we provide a theoretical derivation of signal modification. In Section \ref{sec:sim}, we discuss simulation tools to model this process, and find good agreement with the theoretical predictions. In Section \ref{sec:detection}, we discuss what will be needed to measure such a signal experimentally. We present our outlook in Section \ref{sec:conclude} and conclude. Note that, although this paper focuses on MBEC, the general form of these results will hold for coherent electron cooling with other amplification mechanisms. \section{Theory}\label{sec:thry} We derive here the theory of signal modification by directly propagating the particles themselves from the modulator to the detector with arbitrary 6-dimensional transfer matrices. Alternative derivations using the longitudinal Vlasov equation are presented in Appendix \ref{app:alternative}. In subsection \ref{subsec:thry_decohere}, we comment on decoherence of the signal when observing a range of frequencies. We use phase-space coordinates $x$, $x'$, $y$, $y'$, $z$, and $\eta$, where the first four are the transverse positions and angles, $z$ is the particle's longitudinal position in the bunch, with positive $z$ values corresponding to the head of the bunch, and $\eta$ is the fractional energy offset of the particle. In order to characterize the cooling process, we follow \cite{cite:stupakov_initial} and define a longitudinal wake function, such that the energy kick a hadron receives in the kicker is the convolution of this wake function with the longitudinal distribution of hadrons in the modulator. Explicitly, the fractional energy kick $\Delta \eta$ received by a hadron at longitudinal position $z$ within the bunch at the kicker is given by \begin{align}\label{eqtn:wake_def} \Delta\eta(z) = \frac{q^2}{E_0} \int_{-\infty}^{\infty}w(z+\Delta z-z')n(z') dz' \end{align} \noindent where $q$ is the hadron charge, $E_0$ is the nominal hadron energy, $n(z)$ is the longitudinal hadron density in the modulator, and $\Delta z$ is the difference in modulator-to-kicker longitudinal delay between the hadron and electron beams. We also identify a corresponding impedance \begin{align}\label{eqtn:imped_def} Z(k) = -\frac{1}{c}\int_{-\infty}^{\infty} w(z) e^{-ikz} dz \end{align} We focus our attention on the peak region of the electron and hadron beams and take the limit of a longitudinally infinite and uniform plasma. 
Since the typical wake wavelength is on the order of a few microns, and the typical bunch lengths are a few mm or longer, this is a reasonable approximation. Finally, we assume that the hadron beam enters the modulator with no correlated structures on the scale of the wake wavelength. Although such structures will be generated within the kicker, their characteristic size is on the micron scale, much less than the millimeter-scale longitudinal motion per turn, washing out any memory of the kick by the time the beam enters the modulator again. As illustrated in Fig. \ref{fig:layout}, we take the hadrons to have transfer matrix $M^{MK}$ between modulator and kicker and $M^{KD}$ between kicker and detector, with the transfer matrix between modulator and detector given by $M^{MD}$. We treat our particles as existing in a region of length $L$, much larger than any length scale associated with the wake function, and assume periodic boundary conditions, so that we may arbitrarily shift the limits of integration in our integrals. In this model, we also consider the full 6-dimensional evolution of the hadron beam and ignore any collective effects during beam transport except for the electron-hadron interactions characterized by the wake function, as discussed above. We write the evolution of a hadron's position between modulator and detector as \begin{align}\label{eqtn:delays} z_d^{(i)} = &z_m^{(i)} + M^{MD}_{5u}\vec{x}_u^{(i)}\\\nonumber + &M^{KD}_{56}\sum_j \frac{q^2}{E_0} w(z_m^{(i)} + M^{MK}_{5u}\vec{x}_u^{(i)} + \Delta z - z_m^{(j)}) \end{align} where $z_m^{(i)}$ is the longitudinal position of particle $i$ in the modulator, $z_d^{(i)}$ is its position within the detector, and $\vec{x}^{(i)}$ are the phase-space coordinates of particle $i$ in the modulator. We use the convention that the repeated ``$u$'' subscript refers to sums over the 5 phase-space coordinates \textit{excluding} the longitudinal position. The summation over $j$ is over all particles within the hadron beam. The first two terms of this equation describe the modulator-to-detector beam evolution by a simple transfer matrix, while the final term gives the additional delay due to the extra energy kick our particle receives in the kicker from the wakes of all particles in the beam, including itself. At the detector, the longitudinal density of the hadron beam is \begin{align}\label{eqtn:rho_d} n(z) = \sum_i \delta(z - z_d^{(i)}) \end{align} with corresponding density in Fourier space \begin{align}\label{eqtn:rho_tilde_d} \tilde{n}(k) &= \int_{-\infty}^{\infty}\sum_i e^{-ikz} \delta(z - z_d^{(i)}) dz\\\nonumber &=\sum_i e^{-ikz_d^{(i)}} \end{align} The power in the hadron spectrum at a given wavenumber is then given by \begin{align}\label{eqtn:pwr} |\tilde{n}(k)|^2 = &\sum_{i,a}e^{-ik\big[z_d^{(i)} - z_d^{(a)}\big]}\\\nonumber =N + &\sum_{i \neq a}e^{-ik\big[z_m^{(i)} - z_m^{(a)} + M^{MD}_{5u}\big(\vec{x}_u^{(i)} - \vec{x}_u^{(a)}\big)\big]}\\\nonumber &\times e^{-ikM^{KD}_{56}q^2/E_0\sum_j w\big(z_m^{(i)} + M^{MK}_{5u}\vec{x}_u^{(i)} + \Delta z - z_m^{(j)}\big)}\\\nonumber &\times e^{ikM^{KD}_{56}q^2/E_0\sum_j w\big(z_m^{(a)} + M^{MK}_{5u}\vec{x}_u^{(a)} + \Delta z - z_m^{(j)}\big)} \end{align} where we have substituted in the expression for $z_d^{(i)}$ given in Eqtn. \ref{eqtn:delays} and used the fact that the $i=a$ terms in the sum are all equal to $1$, giving us the $N$ out front, where $N$ is the number of particles in the length-$L$ section of the beam.
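As a quick numerical illustration of this shot-noise baseline, one can check that uncorrelated particle positions do give $\langle|\tilde{n}(k)|^2\rangle=N$ on average. The short Python snippet below is a minimal sketch of ours, with purely illustrative parameters (it is not one of the simulations of Section \ref{sec:sim}):
\begin{verbatim}
import numpy as np

# Monte-Carlo check of the shot-noise baseline <|n(k)|^2> = N.
# Illustrative parameters only; L is an integer number of
# wavelengths, so the coherent term vanishes exactly.
rng = np.random.default_rng(0)
N, L = 20000, 50.0e-6            # particles in a 50 um region
k = 2*np.pi/6.25e-6              # wavenumber (1/m); L = 8 wavelengths
powers = []
for _ in range(400):
    z = rng.uniform(0.0, L, N)   # no structure on the wake scale
    n_k = np.exp(-1j*k*z).sum()  # n(k) = sum_i exp(-i k z_i)
    powers.append(abs(n_k)**2)
print(np.mean(powers)/N)         # ~1.0, within ~5% statistical error
\end{verbatim}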
Typically, the kick from the wake is small, and so we may Taylor-expand the final two exponentials above to linear order. We thereby obtain \begin{align}\label{eqtn:pwr_taylor} |\tilde{n}(k)|^2 &= N + \sum_{i \neq a}e^{-ik\big[z_m^{(i)} - z_m^{(a)} + M^{MD}_{5u}\big(\vec{x}_u^{(i)} - \vec{x}_u^{(a)}\big)\big]}\\\nonumber \times \big[1 &- ikM^{KD}_{56}\frac{q^2}{E_0}\sum_j w\big(z_m^{(i)} + M^{MK}_{5u}\vec{x}_u^{(i)} + \Delta z - z_m^{(j)}\big)\\\nonumber &+ ikM^{KD}_{56}\frac{q^2}{E_0}\sum_j w\big(z_m^{(a)} + M^{MK}_{5u}\vec{x}_u^{(a)} + \Delta z - z_m^{(j)}\big)\big] \end{align} with the effect of second-order terms considered in Appendix \ref{app:second_order}. We now wish to take the expectation value of the beam power, requiring integrals over the 12 phase-space coordinates of particles $i$ and $a$ and the longitudinal position of particle $j$. However, note that the dependence on $z_m^{(j)}$ appears only in the argument of the wake functions. If the total integral of the wake is zero, then, whenever particle $j$ is distinct from both particles $i$ and $a$, this integral will evaluate to 0. We then need only consider the terms $j=i$ and $j=a$ in those sums. The beam power can then be written as \begin{align}\label{eqtn:pwr_taylor2} |\tilde{n}(k)|^2 &= N + \sum_{i \neq a}e^{-ik\big[z_{ia} + M^{MD}_{5u}\big(\vec{x}_u^{(i)} - \vec{x}_u^{(a)}\big)\big]}\\\nonumber \times [1 &- ikM^{KD}_{56}\frac{q^2}{E_0} w(M^{MK}_{5u}\vec{x}_u^{(i)} + \Delta z)\\\nonumber &+ ikM^{KD}_{56} \frac{q^2}{E_0}w(M^{MK}_{5u}\vec{x}_u^{(a)} + \Delta z)\\\nonumber &- ikM^{KD}_{56} \frac{q^2}{E_0}w(z_{ia} + M^{MK}_{5u}\vec{x}_u^{(i)} + \Delta z)\\\nonumber &+ ikM^{KD}_{56} \frac{q^2}{E_0}w(-z_{ia} + M^{MK}_{5u}\vec{x}_u^{(a)} + \Delta z)] \end{align} where we have made the definition $z_{ia} \equiv z_m^{(i)} - z_m^{(a)}$. Since we assume a homogeneous hadron bunch, $z_m^{(i)}$ and $z_m^{(a)}$ themselves are irrelevant, and, with the periodic boundary conditions, $z_{ia}$ is uniformly distributed just as they are. We note that in the above formula the first three terms have their only $z_{ia}$ dependence in the leading exponential, and so averaging over all $z_{ia}$ will give zero. We therefore need only focus on the 4th and 5th terms. Approximating the $N(N-1)$ terms in the above sum as $N^2$, the relevant integral for the 4th term is \begin{align}\label{eqtn:4th_term} &-N^2ikM_{56}^{KD} \int_{-L/2}^{L/2} dz_{ia}/L \int_{-\infty}^{\infty} d^5\vec{x}^{(i)} d^5\vec{x}^{(a)} \rho\big(\vec{x}^{(i)}\big) \rho\big(\vec{x}^{(a)}\big)\\\nonumber &\times \frac{q^2}{E_0}w(z_{ia} + M^{MK}_{5u}\vec{x}_u^{(i)} + \Delta z) e^{-ik\big[z_{ia} + M^{MD}_{5u}\big(\vec{x}_u^{(i)} - \vec{x}_u^{(a)}\big)\big]} \end{align} where $\rho\big(\vec{x}\big)$ is the hadron phase-space density in the modulator over the 5 phase-space coordinates excluding the longitudinal position. Approximating the longitudinal integral as extending from $-\infty$ to $\infty$, using the impedance of Eqtn. \ref{eqtn:imped_def}, and making an appropriate change of variables to $z' \equiv z_{ia} + M^{MK}_{5u}\vec{x}_u^{(i)} + \Delta z$, the longitudinal integral in Eqtn.
\ref{eqtn:4th_term} may be evaluated, leaving us with \begin{align}\label{eqtn:4th_term_2} &N^2ikM_{56}^{KD} \frac{q^2c}{E_0}\frac{1}{L} Z(k) e^{ik\Delta z}\\\nonumber &\times\int_{-\infty}^{\infty} d^5\vec{x}^{(i)} d^5\vec{x}^{(a)} \rho\big(\vec{x}^{(i)}\big) \rho\big(\vec{x}^{(a)}\big)\\\nonumber &\times e^{-ik\big[M^{MD}_{5u}\big(\vec{x}_u^{(i)} - \vec{x}_u^{(a)}\big) - M^{MK}_{5u}\vec{x}_u^{(i)}\big]} \end{align} To perform the remaining integrals, we write the evolution of the phase-space coordinates explicitly in terms of action-angle variables and Courant-Snyder parameters at the start of the transfer matrix, finding \begin{align}\label{eqtn:action_angle} M_{5u}\vec{x}_u &= M_{51}(\sqrt{2J_x\beta_x}\cos(\phi_x)+D_x\eta)\\\nonumber &+ M_{52}(-\sqrt{2J_x/\beta_x}[\sin(\phi_x)+\alpha_x\cos(\phi_x)]+D'_x\eta)\\\nonumber &+ M_{53}(\sqrt{2J_y\beta_y}\cos(\phi_y)+D_y\eta)\\\nonumber &+ M_{54}(-\sqrt{2J_y/\beta_y}[\sin(\phi_y)+\alpha_y\cos(\phi_y)]+D'_y\eta)\\\nonumber &+ M_{56}\eta\\\nonumber &\\\nonumber &= (M_{51}-\frac{\alpha_x}{\beta_x}M_{52})\sqrt{2J_x\beta_x}\cos(\phi_x)\\\nonumber &- M_{52}\sqrt{2J_x/\beta_x}\sin(\phi_x)\\\nonumber &+ (M_{53}-\frac{\alpha_y}{\beta_y}M_{54})\sqrt{2J_y\beta_y}\cos(\phi_y)\\\nonumber &- M_{54}\sqrt{2J_y/\beta_y}\sin(\phi_y)\\\nonumber &+ (D_xM_{51} + D'_xM_{52} + D_yM_{53} + D'_yM_{54} + M_{56})\eta\\\nonumber &\\\nonumber \equiv & \hat{M}_{51}\hat{x} + \hat{M}_{52}\hat{x}' + \hat{M}_{53}\hat{y} + \hat{M}_{54}\hat{y}' + \hat{M}_{56}\hat{\eta} \end{align} \noindent with \begin{align}\label{eqtn:m_tilde} &\hat{M}_{51} \equiv M_{51}-\frac{\alpha_x}{\beta_x}M_{52}\\\nonumber &\hat{M}_{52} \equiv M_{52}\\\nonumber &\hat{M}_{53} \equiv M_{53}-\frac{\alpha_y}{\beta_y}M_{54}\\\nonumber &\hat{M}_{54} \equiv M_{54}\\\nonumber &\hat{M}_{56} \equiv D_xM_{51} + D'_xM_{52} + D_yM_{53} + D'_yM_{54} + M_{56} \end{align} For a Gaussian beam, the $\hat{x}$, $\hat{x}'$, $\hat{y}$, $\hat{y}'$, and $\hat{\eta}$ are normally distributed with \begin{align}\label{eqtn:sigmas} &\sigma_{\hat{x}} = \sqrt{\epsilon_x\beta_x}\\\nonumber &\sigma_{\hat{x}'} = \sqrt{\epsilon_x/\beta_x}\\\nonumber &\sigma_{\hat{y}} = \sqrt{\epsilon_y\beta_y}\\\nonumber &\sigma_{\hat{y}'} = \sqrt{\epsilon_y/\beta_y}\\\nonumber &\sigma_{\hat{\eta}} = \sigma_{\eta} \end{align} In this case, the remaining ten integrals in Eqtn. \ref{eqtn:4th_term_2} may be performed, yielding \begin{align}\label{eqtn:4th_term_final} &ikM_{56}^{KD}\frac{q^2c}{E_0} \frac{N^2}{L} Z(k) e^{ik\Delta z}\\\nonumber &\times e^{-\frac{k^2}{2} \sum_{u\neq5} \sigma^2_{\hat{x}_u}\Big[\big(\hat{M}^{MD}_{5u}\big)^2 + \big(\hat{M}^{MD}_{5u} - \hat{M}^{MK}_{5u}\big)^2\Big]} \end{align} A similar procedure may be applied to the fifth term of Eqtn. \ref{eqtn:pwr_taylor2}, resulting in \begin{align}\label{eqtn:5th_term_final} &-ikM_{56}^{KD} \frac{q^2c}{E_0}\frac{N^2}{L} Z(-k) e^{-ik\Delta z}\\\nonumber &\times e^{-\frac{k^2}{2} \sum_{u\neq5} \sigma^2_{\hat{x}_u}\Big[\big(\hat{M}^{MD}_{5u}\big)^2 + \big(\hat{M}^{MD}_{5u} - \hat{M}^{MK}_{5u}\big)^2\Big]} \end{align} Making use of the fact that, for real wakes, $Z(-k) = Z^*(k)$, and defining $n_0 \equiv N/L$ as the mean linear density of the hadrons, we may sum Eqtns. \ref{eqtn:4th_term_final} and \ref{eqtn:5th_term_final}, and incorporate them back into Eqtn. 
\ref{eqtn:pwr_taylor2}, obtaining \begin{align}\label{eqtn:pwr_final} |\tilde{n}(k)|^2 &= N - 2Nn_0\frac{q^2c}{E_0}kM_{56}^{KD}\\\nonumber &\times[\Re(Z(k))\sin(k\Delta z) + \Im(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum_{u\neq5} \sigma^2_{\hat{x}_u}\Big[\big(\hat{M}^{MD}_{5u}\big)^2 + \big(\hat{M}^{MD}_{5u} - \hat{M}^{MK}_{5u}\big)^2\Big]} \end{align} Then, the fractional change in beam power as a function of electron/hadron misalignment is \begin{align}\label{eqtn:pwr_relative} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &= -2n_0\frac{q^2c}{E_0}kM_{56}^{KD}\\\nonumber &\times[\Re(Z(k))\sin(k\Delta z) + \Im(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum_{u\neq5} \sigma^2_{\hat{x}_u}\Big[\big(\hat{M}^{MD}_{5u}\big)^2 + \big(\hat{M}^{MD}_{5u} - \hat{M}^{MK}_{5u}\big)^2\Big]} \end{align} Note that if $\Re(Z(k)) = 0$ and $\Im(Z(k)) > 0$ (which is the case considered below, see Eqtn.~\ref{eqtn:imped_approx}), and if $M_{56}^{KD}>0$, then perfect alignment of the hadrons and electrons, i.e., $\Delta z = 0$, results in $\Delta |\tilde{n}(k)|^2 < 0,$ corresponding to noise suppression below the shot noise in the beam. Such noise suppression has been previously studied theoretically in \cite{PhysRevLett.102.154801,PhysRevSTAB.14.060710} and observed experimentally in \cite{PhysRevLett.109.034801,Gover:2012ts}. Our result Eqtn.~\ref{eqtn:pwr_relative} is in agreement with the theoretical analysis of \cite{PhysRevSTAB.14.060710}. \subsection{Decoherence}\label{subsec:thry_decohere} The above posits that the amount of signal modification has a purely sinusoidal dependence on the electron/hadron misalignment, i.e., \begin{align}\label{eqtn:decoherence_ideal} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} = A(k) \cos(k\Delta z + \theta_0) \end{align} where $A$ is some amplitude and $\theta_0$ is a phase, equal to $0$ for an antisymmetric wake. However, the above derivation assumes an observation at a single, pure frequency. If we take the more realistic case that we sample some range of frequencies with bandwidth $\Delta k$, the signal amplitude will be \begin{align}\label{eqtn:decoherence_true} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &\approx A(k)\frac{1}{\Delta k}\int_{k-\Delta k/2}^{k+\Delta k/2} \cos(k'\Delta z + \theta_0) dk'\\\nonumber &= A(k)\cos(k\Delta z + \theta_0) \frac{\sin(\Delta k\Delta z/2)}{\Delta k\Delta z/2} \end{align} so that the amplitude of the oscillations will decay over lengths of $\sim 1/\Delta k$. \section{Simulation}\label{sec:sim} In order to check the validity and limits of the above theory, we make use of simulation. We first examine the case of a perfectly linear simulation, where the fractional energy kick to a given hadron in the kicker is simply the convolution of the wake function with the longitudinal hadron distribution in the modulator. We then turn our attention to a more detailed simulation, where the electrons and hadrons are tracked with a particle-in-cell code in order to incorporate saturation effects. In this and future sections, we consider the cooling parameters currently planned for 275 GeV protons in the EIC, listed in Tab. \ref{tab:param}. The electron optics are assumed to be kept roughly constant within the modulator, kicker, and amplifiers through the use of focusing. Due to their much higher energy, the hadrons see the modulator and kicker as drifts, with the optics parameters specified at the center.
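Before turning to the simulations, we note how the ingredients of Eqtn. \ref{eqtn:pwr_relative} can be evaluated in practice. The Python sketch below is our own illustration (the function names and argument conventions are ours); it computes the reduced matrix elements of Eqtn. \ref{eqtn:m_tilde} and the Gaussian suppression factor of Eqtn. \ref{eqtn:pwr_relative} from the Courant-Snyder parameters, dispersions, and emittances at the modulator, such as those listed in Tab. \ref{tab:param}:
\begin{verbatim}
import numpy as np

def m_hat(M, alpha, beta, D, Dp):
    """Reduced fifth-row elements of Eqtn. (m_tilde).
    M = (M51, M52, M53, M54, M56); alpha, beta, D, Dp are dicts keyed
    by 'x' and 'y' with the Courant-Snyder parameters, dispersions,
    and dispersion derivatives at the start of the transfer matrix."""
    M51, M52, M53, M54, M56 = M
    return np.array([M51 - alpha['x']/beta['x']*M52,
                     M52,
                     M53 - alpha['y']/beta['y']*M54,
                     M54,
                     D['x']*M51 + Dp['x']*M52
                         + D['y']*M53 + Dp['y']*M54 + M56])

def suppression(k, mhat_MD, mhat_MK, eps, beta, sig_eta):
    """Gaussian factor of Eqtn. (pwr_relative):
    exp(-k^2/2 sum_u sigma_u^2 [(MD_u)^2 + (MD_u - MK_u)^2])."""
    sig2 = np.array([eps['x']*beta['x'], eps['x']/beta['x'],
                     eps['y']*beta['y'], eps['y']/beta['y'],
                     sig_eta**2])
    return np.exp(-0.5*k**2*np.sum(sig2*(mhat_MD**2
                                         + (mhat_MD - mhat_MK)**2)))
\end{verbatim}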
\begin{table*}[!hbt] \centering \caption{Parameters for Longitudinal and Transverse Cooling} \begin{tabular}{lc} \textit{\textbf{Geometry}} & \\ Modulator Length (m) & 45 \\ Kicker Length (m) & 45 \\ Number of Amplifier Straights & 2 \\ Amplifier Straight Lengths (m) & 37 \\ \textit{\textbf{Proton Parameters}} & \\ Energy (GeV) & 275 \\ Protons per Bunch & 6.9e10 \\ Average Current (A) & 1 \\ Proton Bunch Length (cm) & 6 \\ Proton Fractional Energy Spread & 6.8e-4 \\ Proton Emittance (x/y) (nm) & 11.3 / 1 \\ Horizontal/Vertical Proton Betas in Modulator (m) & 39 / 39 \\ Horizontal/Vertical Proton Dispersion in Modulator (m) & 1 / 0 \\ Horizontal/Vertical Proton Dispersion Derivative in Modulator (m) & -0.023 / 0 \\ Horizontal/Vertical Proton Betas in Kicker (m) & 39 / 39 \\ Horizontal/Vertical Proton Dispersion in Kicker (m) & 1 / 0 \\ Horizontal/Vertical Proton Dispersion Derivative in Kicker (m) & 0.023 / 0 \\ Proton Horizontal/Vertical Phase Advance (rad) & 4.79 / 4.79 \\ Proton R56 between Centers of Modulator and Kicker (mm) & -2.26 \\ \textit{\textbf{Electron Parameters}} & \\ Energy (MeV) & 150 \\ Electron Bunch Charge (nC) & 1 \\ Electron Bunch Length (mm) & 7 \\ Electron Peak Current (A) & 17 \\ Electron Fractional Slice Energy Spread & 1e-4 \\ Electron Normalized Emittance (x/y) (mm-mrad) & 2.8 / 2.8 \\ Horizontal/Vertical Electron Betas in Modulator (m) & 40 / 20 \\ Horizontal/Vertical Electron Betas in Kicker (m) & 4 / 4 \\ Horizontal/Vertical Electron Betas in Amplifiers (m) & 1 / 1 \\ R56 in First Electron Chicane (mm) & 5 \\ R56 in Second Electron Chicane (mm) & 5 \\ R56 in Third Electron Chicane (mm) & -11 \\ \textit{\textbf{Cooling Times}} & \\ Horizontal/Vertical/Longitudinal IBS Times (hours) & 2.0 / - / 2.9 \\ Horizontal/Vertical/Longitudinal Cooling Times (hours) & 1.8 / - / 2.8 \\ \end{tabular} \label{tab:param} \end{table*} \subsection{Linear Simulation}\label{subsec:sim_lin} We simulate a 50$\mu$m length of the hadron beam at peak electron and hadron beam currents. Since the bunch lengths of the hadron and electron beams are a few cm and mm, respectively, we may assume constant longitudinal beam densities in our region of interest, and take periodic boundary conditions. We start by seeding 1 million hadron macroparticles in the modulator, representing 23 million real hadrons. In order to match the noise statistics in the real beam, we perform the seeding with sub-Poisson noise, as follows. We arrange a 2-dimensional grid in the longitudinal phase space, analytically compute the number of hadrons expected in each bin, and add Poisson statistical noise, assuming a Gaussian energy distribution and a uniform longitudinal position distribution. We then assign a number of macroparticles to this bin equal to the resulting number of real particles divided by the number of real particles represented by each macroparticle, rounded to the nearest integer. Underflow/overflow is carried to the next bin. Each hadron macroparticle is also given random transverse actions and phases, from which its 4 transverse phase-space coordinates may be obtained using the modulator optics. We obtain the position of each hadron macroparticle in the kicker by using the modulator-to-kicker transfer matrix in the design optics, and save this pre-kicker distribution. We then apply a longitudinal shift to all hadrons equally, corresponding to a longitudinal electron-hadron misalignment.
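A compact sketch of the sub-Poisson seeding described above is given below. It is our schematic rendering of the procedure, not the actual code used here; the bin layout and the analytic expected counts per bin are left to the caller:
\begin{verbatim}
import numpy as np

def quiet_seed(expected, weight, rng):
    """Sub-Poisson seeding on a 2-D (z, eta) grid. `expected` holds
    the analytic number of real hadrons per bin, `weight` the number
    of real particles represented by one macroparticle. Returns the
    macroparticle count per bin, carrying the rounding underflow and
    overflow to the next bin."""
    real = rng.poisson(expected)            # real counts + shot noise
    macros = np.zeros(real.size, dtype=int)
    carry = 0.0
    for i, c in enumerate(real.ravel()):
        x = c/weight + carry
        macros[i] = int(np.floor(x + 0.5))  # round to nearest integer
        carry = x - macros[i]               # remainder to the next bin
    return macros.reshape(real.shape)
\end{verbatim}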
We assign an energy kick to each macroparticle by convolving the ideal wake function with the longitudinal hadron density distribution in the modulator. The ideal wake function is computed using the procedures of \cite{cite:stupakov_initial, cite:stupakov_amplifier, cite:stupakov_transverse} for the case of two amplification straights and unmatched Gaussian electron and hadron beams with arbitrary horizontal and vertical beam sizes. A plot of this idealized wake is shown in black in Fig. \ref{fig:wakes_both}. Note that this wake is antisymmetric, which means that $\Re(Z(k)) = 0$ and the first term in the square brackets on the right-hand side of Eqtn.~\ref{eqtn:pwr_relative} vanishes. We then translate the hadrons to the detector element using a kicker-to-detector transfer matrix equal to the inverse of the modulator-to-kicker transfer matrix. Next, we perform a fast Fourier transform (FFT) on the hadron linear density distribution to obtain the amplitude of the spectrum. Squaring these values gives us the spectral power, and we compute the fractional change from the initial spectral power of the hadrons. We repeat this process from the saved pre-kicker distribution, but apply different longitudinal misalignments. Finally, we repeat this whole procedure from the beginning for 100 different random noise seeds and take the average fractional change in spectral power for each delay value. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{wakes_both_corrected.png} \end{center} \caption{\label{fig:wakes_both} Wake functions both from linear theory (without saturation) and simulated by our PIC code (with saturation).} \end{figure} We plot these simulated results and the theory prediction in Fig. \ref{fig:sim_linear}, finding excellent agreement. Note that the theoretical curve has a constant vertical offset of $\sim 0.04$ due to higher-order terms, as discussed in Appendix \ref{app:second_order}. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{true_impedance_linear_correct.png} \end{center} \caption{\label{fig:sim_linear} Fractional change in the spectral power in the hadron beam at the $6.25\mu$m wavelength for the linear case. Excellent agreement between theory and simulation is observed.} \end{figure} \subsection{Nonlinear Simulation}\label{subsec:sim_nonlin} As a more realistic model, we explicitly track the hadrons and electrons step-by-step through the modulator, kicker, and amplifiers. Much of the process for the hadrons is the same as in the linear case, described in subsection \ref{subsec:sim_lin}. The main difference is that, rather than simply convolve the wake with the hadron density distribution, we explicitly include the electron beam and model its interactions with itself and the hadrons in a particle-in-cell (PIC) code, as described in \cite{cite:ipac2021_pic}. The electrons are initialized in the same manner as the hadrons, with 10 real electrons per macroparticle, but with only their longitudinal coordinates described. The hadrons are initialized at the center of the modulator using the modulator optics of Tab. \ref{tab:param}, then back-propagated to the start of the modulator so as to have the correct initial distribution. We track the electrons and hadrons through the modulator in 2 and 6 phase-space dimensions, respectively, modelling the inter-particle interactions using the disc model of \cite{cite:stupakov_initial, cite:stupakov_amplifier, cite:stupakov_transverse}.
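To fix ideas, one longitudinal interaction step of such a PIC code might be organized as in the sketch below. This is a schematic of ours only: the tabulated kernel stands in for the disc-model interaction of the references above, and none of the names correspond to the actual code of \cite{cite:ipac2021_pic}:
\begin{verbatim}
import numpy as np

def pic_kick(z, q, kernel, dz, L):
    """Schematic longitudinal PIC step on a periodic grid of cell
    size dz covering [0, L). `kernel` is an interaction kernel
    tabulated on the same grid (index 0 = zero separation, negative
    offsets wrapped around), standing in for the disc-model force."""
    n = int(round(L/dz))
    s = (z % L)/dz
    i0 = np.floor(s).astype(int) % n
    f = s - np.floor(s)              # linear (cloud-in-cell) weights
    rho = np.zeros(n)
    np.add.at(rho, i0, q*(1.0 - f))  # deposit macroparticle charge
    np.add.at(rho, (i0 + 1) % n, q*f)
    # periodic convolution of density with the interaction kernel
    field = np.fft.irfft(np.fft.rfft(rho)*np.fft.rfft(kernel), n)
    # gather the field back to the particles with the same weights
    return (1.0 - f)*field[i0] + f*field[(i0 + 1) % n]
\end{verbatim}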
In bringing the hadrons to the kicker, the modulator-to-kicker transfer matrix is multiplied by the inverse transfer matrices of half the modulator and kicker drifts, so as to have the correct transfer matrix between the element centers, while the electrons are explicitly tracked through the amplification section (3 chicanes and 2 straights), using the same PIC code. In the kicker, both species are again tracked with the PIC code to get accurate kicks to the hadrons. In bringing the hadrons to the detector, we multiply the kicker-to-detector transfer matrix by the inverse transfer matrix of half the kicker drift in order to have the correct transfer matrix from the kicker center to the detector. In comparing to theory, we reduce the wake amplitude to account for saturation in the electron beam, as described in \cite{cite:ipac2021_pic}, and shown in red in Fig. \ref{fig:wakes_both}. We incorporate the $\sim 0.01$ offset of Appendix \ref{app:second_order} into Eqtn. \ref{eqtn:pwr_relative} and compare with the simulation results, as shown in Fig. \ref{fig:sim_nonlinear}. Good agreement is observed. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{true_impedance_nonlinear_transverse_motion_in_mk.png} \end{center} \caption{\label{fig:sim_nonlinear} Fractional change in the spectral power in the hadron beam at the $6.25\mu$m wavelength for the fully nonlinear case. Good agreement between theory and simulation is observed.} \end{figure} \section{Detection of the Signal}\label{sec:detection} So far, we have focused on the derivation and validation of the signal modification theory. However, for this to be useful, we must be able to physically detect it. We therefore look for the optimal kicker-to-detector transfer matrix and wavelength at which to observe the signal, and then examine the possibility of detecting the hadron beam density modulation in the radiation of an EIC dipole. \subsection{Optimal Parameters}\label{subsec:optimal_obs} In order to perform the optimization, it is helpful to make use of a simplified, analytic form for the wake function. In \cite{cite:wake_approx}, it was proposed to approximate the wake using a simple fit function equal to a sine wave with Gaussian decay. Making slight changes to the parameters, we may write \begin{align}\label{eqtn:wake_approx} w(z) = A \sin(\kappa z)e^{-z^2/2\lambda^2} \end{align} where $A$, $\kappa$, and $\lambda$ are fit parameters roughly corresponding to the wake amplitude, wavenumber, and falloff distance, respectively. We find that this function fits our wake with saturation fairly well, as shown in Fig. \ref{fig:wake_fit_sat}, and we arrive at values of $A = 78.4$MV/pC, $\kappa = 1.031/\mu$m, and $\lambda = 2.768\mu$m. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{fit_wake_corrected.png} \end{center} \caption{\label{fig:wake_fit_sat} Fit of the simulated wake (with saturation) to a function of the form given in Eqtn. \ref{eqtn:wake_approx}: $w(z) = A \sin(\kappa z)e^{-z^2/2\lambda^2}$. We find fit parameters of $A = 78.4$MV/pC, $\kappa = 1.031/\mu$m, and $\lambda = 2.768\mu$m, and see that this matches the data fairly well.} \end{figure} The corresponding impedance of the simplified wake of Eqtn. \ref{eqtn:wake_approx} is given by \begin{align}\label{eqtn:imped_approx} Z(k) = -\frac{A\lambda i}{2c}\sqrt{2\pi}\Big[e^{-\lambda^2(k+\kappa)^2/2} - e^{-\lambda^2(k-\kappa)^2/2}\Big] \end{align} Putting the above into Eqtn.
\ref{eqtn:pwr_relative}, we obtain an expression for the signal modification amplitude: \begin{align}\label{eqtn:pwr_relative2} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &= \frac{q^2}{E_0}A\lambda n_0kM_{56}^{KD}\sqrt{2\pi}\cos(k\Delta z)\\\nonumber &\times \Big[e^{-\lambda^2(k+\kappa)^2/2} - e^{-\lambda^2(k-\kappa)^2/2}\Big]\\\nonumber &\times e^{-\frac{k^2}{2} \sum_{u\neq5} \sigma^2_{\tilde{x}_u}\bigg[\Big(\tilde{M}^{MD}_{5u}\Big)^2 + \Big(\tilde{M}^{MK}_{5u} - \tilde{M}^{MD}_{5u}\Big)^2\bigg]} \end{align} Examining the above, we see that once we have fixed the cooling parameters, including the transfer matrix from the modulator to the kicker, the only variables we can alter that have any impact on the signal modification are the wavenumber of the density modulation we wish to observe and the $M_{5u}$ transfer elements from the kicker to the detector. (Note that these parameters alone are also sufficient to specify the $M_{5u}$ transfer matrix elements from the modulator to the detector.) If we have no vertical dispersion, $M^{MK}_{53} = M^{MK}_{54} = 0$, and it is easy to see from Eqtn. \ref{eqtn:pwr_relative2} that the corresponding elements in the kicker-to-detector transfer matrix should also be equal to $0$. The magnitude of the signal amplitude in Eqtn. \ref{eqtn:pwr_relative2} may be easily maximized with respect to $k$, $M^{KD}_{51}$, $M^{KD}_{52}$, and $M^{KD}_{56}$, obtaining values of $M_{51}^{KD} = -8.44\times 10^{-4}$, $M_{52}^{KD} = -3.00\times 10^{-2}$m, $M_{56}^{KD} = 2.36\times10^{-3}$m, and $k = 1.04 \times10^{6}/$m, with Eqtn. \ref{eqtn:pwr_relative2} taking on the numerical value $\frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} = -0.22\cos(k\Delta z)$. (For comparison, the simulations in Section \ref{sec:sim}, setting $M^{KD}$ to the inverse of $M^{MK}$, were made with the near-optimal parameters $M_{51}^{KD} = -7.93\times 10^{-4}$, $M_{52}^{KD} = -2.86\times 10^{-2}$m, $M_{56}^{KD} = 2.26\times10^{-3}$m, and $k = 1.06 \times10^{6}/$m.) \subsection{Drift Plus Dipole}\label{subsec:dipole_obs} In the current design of the cooler, it is useful to have as much space as possible dedicated to the modulator, kicker, and amplifiers, making it difficult to also fit in an optimized transfer line between the kicker and detector to achieve optimal signal modification. We therefore consider the case where we simply have a drift after the kicker plus a main-arc dipole (field strength of 3.782T, bending radius of 243m). We assume a hard-edge dipole model and that the beam path is normal to the pole face, so that edge effects may be ignored. For convenience, we define the amplitude of signal modification so that positive values correspond to a reduction in intensity when the electrons and hadrons are aligned. We take an observation point at the start of the dipole and perform a scan of the amplitude of signal modification as a function of the $M_{56}$ value of a drift transfer matrix between the kicker center and the observation point for several wavelengths of hadron density perturbations, as shown in Fig. \ref{fig:scan_wavelength_discrete}. We find an optimal wavelength of $\sim 6\mu$m, but the 1.2mm $M_{56}$ value corresponds to over 100m of drift, which is too long for the available space. We therefore focus on more reasonable parameters and scan the signal modification amplitude as a function of moderate kicker-to-dipole drift lengths and observation wavelengths, as shown in Fig. \ref{fig:scan_wavelength}.
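Two of the numbers above are easy to check with a short sketch. For an ultrarelativistic drift, $M_{56} \approx L/\gamma^2$, which converts the optimal 1.2mm $M_{56}$ into the quoted drift length; and the bare impedance factor of Eqtn. \ref{eqtn:pwr_relative2} peaks somewhat above $\kappa$, with the final Gaussian suppression factor pulling the joint optimum down to the quoted $k$. The proton energy of 275 GeV is an assumption on our part.

\begin{verbatim}
import numpy as np

# Drift needed for the optimal M56 (M56 ~ L/gamma^2 for a drift):
gamma = 275e9/938.272e6              # assumed 275 GeV protons, ~293
print("required drift:", 1.2e-3*gamma**2, "m")   # ~103 m, "over 100m"

# k-dependent part of Eqtn. (pwr_relative2), without the final Gaussian
# suppression factor, using the wake-fit parameters quoted above:
lam, kap = 2.768e-6, 1.031e6
k = np.linspace(0.5e6, 2.0e6, 20001)
shape = k*(np.exp(-lam**2*(k - kap)**2/2) - np.exp(-lam**2*(k + kap)**2/2))
print("bare peak at k =", k[np.argmax(shape)], "/m")  # ~1.15e6/m
\end{verbatim}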
Note that these drifts are defined between the end of the kicker and the start of the dipole, and so an extra $M_{56}$ contribution from half the kicker length is also included in the transfer matrix. We find an optimal observation wavelength of $6\mu$m over a range of drift lengths. We then fix a $6\mu$m observation wavelength and scan the signal modification amplitude as a function of drift length and bend angle within the dipole, as shown in Fig. \ref{fig:scan_dipole}. We find that it is ideal to observe the radiation near the start of the dipole and that, within the region of interest, increased drift lengths result in increased signal modification, with amplitudes of a few percent. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{drift_scan_discrete.png} \end{center} \caption{\label{fig:scan_wavelength_discrete} Fractional signal modification as a function of $M_{56}$ between the kicker center and detection point for a variety of observation wavelengths. We find an optimal observation wavelength of $6\mu$m with an $M_{56}$ of 1.2mm. This would correspond to a total drift of over 100m, which would not be feasible in the available space.} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{drift_wavelength_scan.png} \end{center} \caption{\label{fig:scan_wavelength} Fractional signal modification expected at the start of the dipole if we observe the hadron density modulations of a specified wavelength after traversing a specified drift length after the kicker. We find an optimal observation wavelength of roughly $6\mu$m over a range of drift lengths, giving a few percent signal modification amplitude.} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{drift_dipole_scan.png} \end{center} \caption{\label{fig:scan_dipole} Fractional signal modification of $6\mu$m wavelength hadron density perturbations after traversing a drift of specified length after the kicker and a bend within a dipole of specified angle. We find that moderate drift lengths will produce signal modification amplitudes of a few percent at the start of the dipole.} \end{figure} Note that the inclusion of the $M_{51}$ and $M_{52}$ terms is vital to the correct understanding of signal modification. Fig. \ref{fig:scan_drift_negative} shows the result of scanning the amplitude of signal modification as a function of the $M_{56}$ term between the kicker center and observation point. It would appear that we could use a dipole to generate negative $M_{56}$ and see about half the signal modification we could obtain with a drift. However, the dipole will also generate non-trivial $M_{51}$ and $M_{52}$ terms. Performing a scan with the more realistic dipole generates the plot seen in Fig. \ref{fig:scan_dipole_pure}. Virtually no signal modification is observed, and even then only for the small positive $M_{56}$ values generated near the very start of the dipole. \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{drift_scan_discrete_negative.png} \end{center} \caption{\label{fig:scan_drift_negative} Fractional signal modification of various wavelengths as a function of $M_{56}$ between the kicker center and observation point, including negative values. 
We see signal modification with both signs of $M_{56}$.} \end{figure} \begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{pure_dipole_v_theta.png} \end{center} \caption{\label{fig:scan_dipole_pure} Fractional signal modification of various wavelengths as a function of bend angle in the dipole, assuming only a real dipole between the center of the kicker and the observation point. The corresponding range of $M_{56}$ values is between -4cm and +9$\mu$m. The inclusion of non-zero $M_{51}$ and $M_{52}$ terms from the dipole destroys the signal modification. Note the horizontal log scale.} \end{figure} \subsection{Radiation}\label{subsec:radiation} So far, we have focused entirely on the changes to the hadron density spectrum. However, this cannot be observed directly. Instead, we will monitor the radiation produced by the hadron beam as it moves through a dipole. In the far-field approximation, the electric field from an accelerated charge is given by \cite{cite:jackson} \begin{align}\label{eqtn:jackson} \vec{E}(\vec{x},t_0) = \frac{q}{4\pi\epsilon_0c}\Bigg[\frac{1}{|\vec{r}_{0} - \vec{r}|}\frac{\hat{n}\times\big[(\hat{n}-\vec{\beta})\times\dot{\vec{\beta}}\big]}{\big(1-\vec{\beta}\cdot\hat{n}\big)^3}\Bigg]_t \end{align} \noindent where the observation point is $\vec{r}_0$, the observation time is $t_0$, and the hadron position $\vec{r}$, relativistic beta, $\vec{\beta}$, and its derivative, $\dot{\vec{\beta}}$, are evaluated at the retarded time, $t$, with $t_0 = t + |\vec{r}_{0} - \vec{r}|/c$. Since we had seen in subsection \ref{subsec:dipole_obs} that it is useful to observe the signal modification immediately at the start of the dipole, we consider the edge radiation \cite{cite:edge_radiation}. We write the frequency-space electric field \begin{align}\label{eqtn:fft_E} \vec{E}(\vec{x},\omega) &= \int_{-\infty}^{\infty} \vec{E}(\vec{x},t_0) e^{i\omega t_0} dt_0\\\nonumber &= \frac{q}{4\pi\epsilon_0c}\int_{0}^{\infty}\Bigg[\frac{1}{|\vec{r}_{0} - \vec{r}|}\frac{\hat{n}\times\big[(\hat{n}-\vec{\beta})\times\dot{\vec{\beta}}\big]}{\big(1-\vec{\beta}\cdot\hat{n}\big)^2}\Bigg]_t e^{i\omega t_0} dt \end{align} \noindent where we take the hadron to enter the dipole at time $t=0$, and have simplified using the fact that $d t_0/dt = 1 - \vec{\beta}\cdot\hat{n}$. The intensity of the radiation is given by \cite{cite:edge_radiation} \begin{align}\label{eqtn:intensity} S = \alpha \frac{\Delta \omega}{\omega} \frac{I_h}{q} \bigg(\frac{2\epsilon_0c}{e}\bigg)^2\hbar\omega\Big|\vec{E}(\vec{x},\omega)\Big|^2 \end{align} \noindent where $\alpha$ is the fine-structure constant, $\Delta\omega$ is the bandwidth, and $I_h$ is the average hadron beam current (1A for the EIC). Since the 7mm electron bunch is much shorter than the 6cm hadron bunch, only a fraction of the hadrons will experience cooling each turn, and $I_h$ is reduced by the ratio of the electron and hadron bunch lengths under the assumption that a streak camera would enable the central portion of the bunch to be observed separately from the rest of the beam. Eqtn. \ref{eqtn:fft_E} may be integrated numerically over a transverse grid 10m downstream of the dipole entrance, where we might place a camera to detect the emitted radiation. We assume a 10\% bandwidth for this detector. The result of this is shown in Fig. \ref{fig:radiation_detection_scan}. We see that we may achieve intensities of $\sim110\mu W/m^2$. 
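A hedged sketch of this numerical integration is given below for a single proton and a single observation point. The beam energy, the observation point, and the integration window are illustrative assumptions; only the bending radius and the 6$\mu$m wavelength are taken from the text, and a longer window or a smooth taper would reduce truncation artifacts.

\begin{verbatim}
import numpy as np

c, eps0, q = 299792458.0, 8.8541878128e-12, 1.602176634e-19
gamma = 275e9/938.272e6                 # assumed 275 GeV protons
beta  = np.sqrt(1 - 1/gamma**2)
rho   = 243.0                           # bending radius [m], from the text
omega = 2*np.pi*c/6e-6                  # 6 um observation wavelength
r_obs = np.array([1e-2, 0.0, 10.0])     # 1 cm off-axis, 10 m downstream

t  = np.linspace(0.0, 1.5e-9, 150001)   # emission time after dipole entry
th = beta*c*t/rho                       # bend angle versus time
r  = np.stack([rho*(1 - np.cos(th)), 0*th, rho*np.sin(th)], axis=1)
b  = beta*np.stack([np.sin(th), 0*th, np.cos(th)], axis=1)
bd = (beta**2*c/rho)*np.stack([np.cos(th), 0*th, -np.sin(th)], axis=1)

R   = r_obs - r
Rn  = np.linalg.norm(R, axis=1)
n   = R/Rn[:, None]
dop = 1 - np.einsum('ij,ij->i', b, n)   # the (1 - beta.n) Doppler factor
vec = np.cross(n, np.cross(n - b, bd))
E_w = q/(4*np.pi*eps0*c)*np.trapz(
        vec*(np.exp(1j*omega*(t + Rn/c))/(Rn*dop**2))[:, None], t, axis=0)
print("|E(omega)|^2 =", float(np.sum(np.abs(E_w)**2)))
\end{verbatim}

Feeding $|\vec{E}(\vec{x},\omega)|^2$ computed this way into Eqtn. \ref{eqtn:intensity}, with the hadron current reduced by the bunch-length ratio, yields intensity maps analogous to Fig. \ref{fig:radiation_detection_scan}.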
\begin{figure}[!htbp] \begin{center} \includegraphics[width=1.0\columnwidth]{radiation_scan_6um_10m.png} \end{center} \caption{\label{fig:radiation_detection_scan} Intensity of radiation from core protons seen 10m downstream of the dipole entrance.} \end{figure} We must also contend with background thermal radiation, whose intensity is given by the well-known blackbody radiation formula: \begin{align}\label{eqtn:thermal} S = \frac{\hbar}{4\pi^2c^2}\frac{\omega^3}{e^{\hbar\omega/k_BT}-1} \Delta \omega \end{align} Within this same 10\% bandwidth at a temperature of 300K, we have a thermal intensity of $10W/m^2$, which would swamp our signal. It is therefore necessary to operate at liquid nitrogen temperature (77.29K), where the thermal background is only $1nW/m^2$. \section{Conclusion}\label{sec:conclude} We have derived an expression for signal modification for the case of a CEC cooler. We have also performed simulations of this process, finding good agreement with theoretical expectations. Finally, we have shown that such a signal may be experimentally observed at the few percent level at the EIC with a cryo-cooled streak camera set up to observe existing dipole radiation. As the design of the cooler matures, it will be necessary to incorporate the needs of this radiation diagnostic, at least ensuring the existence of a sufficient drift length between the end of the kicker and the first downstream dipole. \section{Acknowledgements}\label{sec:ack} This work was supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy, and by the Department of Energy, contract DE-AC03-76SF00515.
\section{Introduction} Different fields of relativistic plasma physics are under active development, driven by the description of various astrophysical scenarios and by the generation of relativistic plasmas in laboratories during the propagation of high-energy-density electromagnetic beams \cite{Shatashvili PoP 20}, \cite{Liu PPCF 21}, \cite{She 21}, \cite{Bhattacharjee PRD 19}, \cite{Comisso 19}, \cite{Darbha 21}, \cite{Chabanov 21}, \cite{Mahajan PoP 16}, \cite{Mahajan PoP 2011}, \cite{Comisso PRL 14}, \cite{Heyvaerts AA 12}, \cite{Mahajan PRL 03}, \cite{Munoz EPS 06}, \cite{Brunetti MNRAS 04}. Degenerate plasmas occupy a middle ground between classical and quantum plasmas. They can be considered as quasi-classical objects described by classical hydrodynamics or kinetics, where the equation of state is chosen as the Fermi pressure, or the equilibrium distribution function is chosen as the Fermi step (the zero-temperature limit of the Fermi--Dirac distribution function). The Fermi pressure and the Fermi step distribution function are caused by the Pauli blocking, which is a quantum effect. However, other quantum effects, like the quantum Bohm potential, are secondary in degenerate plasmas for the majority of physical scenarios. In this paper we focus on relativistic plasmas. The relativistic regime can be reached within different scenarios. Monoenergetic beams of electrons can propagate through the plasma with the beam velocity close to the speed of light. There is also the opposite limit of macroscopically motionless plasmas heated up to temperatures of the order of the rest energy of at least the lightest of the plasma species (here and below we measure temperature in energy units). This regime is considerably more complex in comparison with monoenergetic beams, since it requires a relation between the momentum density and the velocity field for a large number of particles moving with different velocities (if we use the Euler equation in the form of the momentum balance equation). The degenerate relativistic plasma is a low-temperature, high-density plasma, where the Fermi velocity is of the order of the speed of light. So, we have a distribution of particles over quantum states with different energies, and the model has the same complexity as for thermally distributed plasmas. A novel model for the description of relativistically hot plasmas is developed in Refs. \cite{Andreev 2021 05}, \cite{Andreev 2021 06}, \cite{Andreev 2021 07}, \cite{Andreev 2021 08}, \cite{Andreev 2021 09}, \cite{Andreev 2021 10}. This model is based on the derivation of hydrodynamic equations directly from the microscopic motion of particles, described by the corresponding relativistic form of Newton's second law as the equation of motion of each particle. This method can be considered as a reformulation of classical mechanics in the form of field equations, since no probabilistic method is used during the derivation. Originally this method was suggested for the derivation of the kinetic Vlasov equation for relativistic plasmas \cite{Kuz'menkov 91}. Next, its simplification was presented for the hydrodynamic modeling of nonrelativistic plasmas \cite{Drofa TMP 96}, \cite{Andreev PIERS 2012}. Finally, an original structure of the hydrodynamic model for relativistic plasmas was derived using this method \cite{Andreev 2021 05}, \cite{Andreev 2021 06}, \cite{Andreev 2021 07}, \cite{Andreev 2021 08}, \cite{Andreev 2021 09}, \cite{Andreev 2021 10}.
The model itself consists of equations for the evolution of four functions, two three-scalars and two three-vectors, which can be combined into two four-vectors. These functions are the concentration of particles, the velocity field obtained via the evolution of the current of particles, the average reverse relativistic gamma-factor, and the current of the average reverse relativistic gamma-factor. Basically, the average reverse relativistic gamma-factor is the arithmetic average of $\sqrt{1-\textbf{v}_{i}^{2}(t)/c^{2}}$ over all particles, where $\textbf{v}_{i}(t)$ is the velocity of the $i$-th particle as a function of time $t$ in accordance with the microscopic equation of motion, and $c$ is the speed of light. A natural question might appear: why do we use such unusual functions as ``the average reverse relativistic gamma-factor'' in our model? The answer is the following. We do not try to choose functions for the description of plasmas, but we follow the structure of the hydrodynamic equations as it appears during the derivation. The evolution of the concentration $n$ leads to the current of particles $\textbf{j}$. Therefore, we consider the evolution of the current of particles $\textbf{j}$, which has no direct relation to the momentum density, in contrast with the nonrelativistic regime, where these functions coincide. The current of particles $\textbf{j}$ allows us to introduce the velocity field $\textbf{v}\equiv \textbf{j}/n$. Hence, the equation for the evolution of the current of particles $\textbf{j}$ gives the equation for the evolution of the velocity field $\textbf{v}$. This equation contains four novel functions. One is the second rank tensor describing the flux of the current of particles. Three other functions appear when the interaction is presented in the self-consistent (mean-field) approximation. Two of them are mentioned above: the average reverse relativistic gamma-factor and the current of the average reverse relativistic gamma-factor. Another one is the second rank tensor of the flux of the current of the average reverse relativistic gamma-factor. Consequently, we use the average reverse relativistic gamma-factor and the current of the average reverse relativistic gamma-factor in order to continue the set of hydrodynamic equations and find an appropriate regime of truncation. The derivation shows that the evolution of the average reverse relativistic gamma-factor and the current of the average reverse relativistic gamma-factor mostly leads to the reappearance of the concentration and the current of particles. Hence, at this stage the mathematical structure of the model shows some tendency to close itself relative to the presented set of functions. Obviously, there is no complete closure of the set of equations without additional truncation, due to the large number of degrees of freedom in many-particle systems. Truncation is made via the application of equations of state for the second and higher rank tensors. In this paper we consider the degenerate electron gas. Therefore, we derive the corresponding equations of state for the zero-temperature regime described by the Fermi step distribution function. This paper is organized as follows. In Sec. II the relativistic hydrodynamic model is adopted for the degenerate electron gas. In Sec. III the linear and nonlinear analysis of small amplitude longitudinal waves is given. The accuracy of the suggested model is also analyzed via comparison of the Langmuir wave spectrum with the result obtained in the kinetic model.
The ion-acoustic solitons are considered by the reductive perturbation method. In Sec. IV a brief summary of the obtained results is presented. \section{Relativistic hydrodynamic model} A quasiclassical analysis of the degenerate electron gas can be based on the classical hydrodynamic model, where the equations of state are obtained within the distribution function in the form of the Fermi step (the zero-temperature limit of the Fermi--Dirac distribution function). The nonrelativistic classical hydrodynamics obtained in the self-consistent field approximation requires some equation of state for the pressure $P^{ab}$. The relativistic hydrodynamic model based on the momentum balance equation requires two equations of state. One equation of state is for the pressure. The second equation of state is necessary for the momentum density in order to make the transition to the velocity field, which appears in the continuity and Maxwell equations. The relativistic hydrodynamic model with the average reverse gamma factor evolution considered in this paper includes the evolution of the concentration, the velocity field, the average reverse relativistic gamma factor, and the flux of the reverse relativistic gamma factor. Therefore, it requires three equations of state. Two equations of state are for the second rank tensors describing the flux of the current of particles $p^{ab}$ and the current of the flux of the reverse relativistic gamma factor. One equation of state is for the fourth rank tensor $M^{abcd}$, which is the flux of the current of tensor $p^{ab}$. Here we follow Refs. \cite{Andreev 2021 05}, \cite{Andreev 2021 09} and adopt the model for relativistic degenerate plasmas. Hence, we have the Fermi velocity $v_{Fe}$ comparable with the speed of light $c$. The first equation in the presented model is the continuity equation \cite{Andreev 2021 05} \begin{equation}\label{RHD2021ClLM cont via v} \partial_{t}n+\nabla\cdot(n\textbf{v})=0.\end{equation} Next, the velocity field evolution equation is \cite{Andreev 2021 05}, \cite{Andreev 2021 09} $$n\partial_{t}v^{a}+n(\textbf{v}\cdot\nabla)v^{a} +\frac{1}{m}\partial^{a}\tilde{p}$$ $$=\frac{e}{m}\biggl(\Gamma -\frac{\tilde{t}}{c^{2}}\biggr)E^{a}+\frac{e}{mc}\varepsilon^{abc}(\Gamma v_{b}+t_{b})B_{c}$$ \begin{equation}\label{RHD2021ClLM Euler for v} -\frac{e}{mc^{2}}(\Gamma v^{a} v^{b}+v^{a}t^{b}+v^{b}t^{a})E_{b}, \end{equation} where tensor $p^{ab}=\tilde{p}\delta^{ab}$ is the flux of the thermal velocities, and tensor $t^{ab}=\tilde{t}\delta^{ab}$ is the flux of the average reverse gamma-factor. Parameter $\tilde{p}$ is used here instead of $p$ presented in earlier papers \cite{Andreev 2021 05} in order to distinguish it from the momentum. Parameters $m$ and $e$ are the mass and charge of the particle, $c$ is the speed of light, $\delta^{ab}$ is the three-dimensional Kronecker symbol, and $\varepsilon^{abc}$ is the three-dimensional Levi-Civita symbol. In equation (\ref{RHD2021ClLM Euler for v}) and below we assume summation over the repeated index, $v^{b}_{s}E_{b}=\sum_{b=x,y,z}v^{b}_{s}E_{b}$. Moreover, the metric tensor has the diagonal form corresponding to the Minkowski space, with the following signs: $g^{\alpha\beta}=\{-1, +1, +1, +1\}$. Hence, we can interchange covariant and contravariant three-vector indexes: $v^{b}_{s}=v_{b,s}$. The Latin indexes like $a$, $b$, $c$, etc. describe the three-vectors, while the Greek indexes are reserved for the four-vector notations. The Latin indexes can also refer to the species, $s=e$ for electrons or $s=i$ for ions.
The Latin indexes can likewise refer to the number of a particle, $j$, in the microscopic description. However, the indexes related to coordinates are chosen from the beginning of the alphabet, while other indexes are chosen in accordance with their physical meaning. The equation of evolution of the averaged reverse relativistic gamma factor includes the action of the electric field $$\partial_{t}\Gamma+\partial_{b}(\Gamma v^{b}+t^{b})$$ \begin{equation}\label{RHD2021ClLM eq for Gamma} =-\frac{e}{mc^{2}}n(\textbf{v}\cdot\textbf{E}) \biggl(1-\frac{1}{c^{2}}\biggl(\textbf{v}^{2}+\frac{5\tilde{p}}{n}\biggr)\biggr).\end{equation} Function $\Gamma$ is also called the hydrodynamic Gamma function \cite{Andreev 2021 05}. The fourth and final equation in this set of hydrodynamic equations is the equation of evolution for the thermal part of the current of the reverse relativistic gamma factor (the hydrodynamic Theta function): $$(\partial_{t}+\textbf{v}\cdot\nabla)t^{a} +\partial^{a}\tilde{t} +(\textbf{t}\cdot\nabla) v^{a}+t^{a} (\nabla\cdot \textbf{v}) +\Gamma(\partial_{t}+\textbf{v}\cdot\nabla)v^{a}$$ $$ =\frac{e}{m}nE^{a}\biggl[1-\frac{\textbf{v}^{2}}{c^{2}}-\frac{3\tilde{p}}{nc^{2}}\biggr] +\frac{e}{mc}\varepsilon^{abc}nv_{b}B_{c} \biggl[1-\frac{\textbf{v}^{2}}{c^{2}}-\frac{5\tilde{p}}{nc^{2}}\biggr]$$ \begin{equation}\label{RHD2021ClLM eq for t a} -\frac{2e}{mc^{2}}\Biggl[E^{a}\tilde{p}\biggl(1-\frac{\textbf{v}^{2}}{c^2}\biggr) +nv^{a}v^{b}E_{b}\biggl(1-\frac{\textbf{v}^{2}}{c^{2}}-\frac{9\tilde{p}}{nc^{2}}\biggr) -\frac{5M_{0}}{3c^{2}} E^{a}\Biggr].\end{equation} All hydrodynamic equations are obtained in the mean-field approximation (the self-consistent field approximation). The fourth rank tensor $M^{abcd}$ enters the equation for the evolution of the flux of the reverse gamma factor via its partial trace $M^{abcc}=M_{c}^{cab}$. In the isotropic limit, we construct tensor $M^{abcd}$ from the Kronecker symbols: $M^{abcd}=(M_{0}/3)(\delta^{ab}\delta^{cd}+\delta^{ac}\delta^{bd}+\delta^{ad}\delta^{bc})$. It gives $M^{xxxx}=M^{yyyy}=M^{zzzz}=M_{0}$. If we have two pairs of different projections we obtain $M^{xxyy}=M^{xxzz}=M_{0}/3$. Otherwise the element of tensor $M^{abcd}$ is equal to zero. So, for the partial trace $M_{c}^{cab}$ we find $M_{c}^{cab}=(5M_{0}/3)\delta^{ab}$. For the explicit definition of tensor $M^{abcd}$ see equation (17) of Ref. \cite{Andreev 2021 05}. The equations of the electromagnetic field have the traditional form presented in the three-dimensional notations \begin{equation}\label{RHD2021ClLM div B} \nabla \cdot\textbf{B}=0,\end{equation} \begin{equation}\label{RHD2021ClLM rot E} \nabla\times \textbf{E}=-\frac{1}{c}\partial_{t}\textbf{B},\end{equation} \begin{equation}\label{RHD2021ClLM div E with time} \nabla \cdot\textbf{E}=4\pi(en_{i}-en_{e}),\end{equation} and \begin{equation}\label{RHD2021ClLM rot B with time} \nabla\times \textbf{B}=\frac{1}{c}\partial_{t}\textbf{E}+\frac{4\pi q_{e}}{c}n_{e}\textbf{v}_{e},\end{equation} where the ions exist as the motionless background. \subsection{Equations of state} Equations (\ref{RHD2021ClLM cont via v})-(\ref{RHD2021ClLM eq for t a}) were originally obtained for relativistically hot plasmas. One of their features is the large temperature, so the effective thermal velocities are close to the speed of light. Let us mention that the zero temperature limit of these equations can also be considered to study the propagation of monoenergetic beams \cite{Andreev 2021 09}. However, the opposite limit is under consideration in this paper.
As mentioned above, we consider the degenerate electron gas of high concentration, so that the Fermi velocity $v_{Fe}=p_{Fe}/(m\sqrt{1+p_{Fe}^{2}/m^{2}c^{2}})$, where $p_{Fe}=(3\pi^{2}n)^{1/3}\hbar$, approaches the speed of light. In order to give a quasi-classical analysis of the high-density relativistic degenerate electron gas (or of all degenerate species of the plasma) we need to find the corresponding equations of state for functions $p^{ab}$, $t^{ab}$, and $M^{abcd}$. Degenerate electrons are described within the zero-temperature limit of the Fermi--Dirac distribution, which is given by the Fermi step distribution \begin{equation}\label{RHD2021ClLM Fermi step} f_{0}=\Biggl\{\begin{array}{cc} \frac{2}{(2\pi\hbar)^{3}} & \textrm{for } p\leq p_{Fe}, \\ 0 & \textrm{for } p> p_{Fe}. \end{array} \end{equation} Before considering the novel functions $p^{ab}$, $t^{ab}$, and $M^{abcd}$ we present the equation of state for the pressure as a point of reference. The concentration has the well-known form in terms of the distribution function \begin{equation}\label{RHD2021ClLM concentr via f} n=\int f_{0} d^{3}p. \end{equation} The pressure (the flux of the momentum density) can be written in the following forms: \begin{equation}\label{RHD2021ClLM Pressure def via f} P^{ab}=\int p^{a}v^{b}f_{0} d^{3}p=c\int p^{a}p^{b}f_{0} d^{3}p/p_{0}, \end{equation} where $p_{0}=\gamma mc=mc/\sqrt{1-v^{2}/c^{2}}$, $\textbf{p}=m\textbf{v}/\sqrt{1-\textbf{v}^{2}/c^{2}}$, $\gamma=1/\sqrt{1-v^{2}/c^{2}}$, and the second expression is shown in a more symmetric form including the covariant element of volume in the momentum space, $d^{3}p/p_{0}$. Calculation leads to $P^{ab}=P \delta^{ab}$, with \begin{equation}\label{RHD2021ClLM Pressure rel eq of state} P= \frac{m^{4}c^{5}}{24\pi^{2}\hbar^{3}}\biggl[\xi\sqrt{\xi^{2}+1}(2\xi^{2}-3)+3Arsinh\xi\biggr], \end{equation} where $\xi\equiv p_{Fe}/mc$, $Arsinh\,\xi=\ln\mid \xi+\sqrt{\xi^{2}+1}\mid$, and $\sinh(Arsinh\,\xi)=\xi$. At this step we are ready to go further and calculate the expressions for the novel functions. The first of them is the flux of the current of particles, which has the following representation in terms of the distribution function \begin{equation}\label{RHD2021ClLM p via f} p^{ab}=\int v^{a}v^{b} f_{0} d^{3}p. \end{equation} Next, we obtain the equation of state $p^{ab}=\tilde{p} \delta^{ab}$ with \begin{equation}\label{RHD2021ClLM p rel eq of state} \tilde{p}=\frac{m^{3}c^{5}}{3\pi^{2}\hbar^{3}}\biggl[\frac{1}{3}\xi^{3}-\xi+\arctan\xi\biggr], \end{equation} where $m^{3}c^{5}/3\pi^{2}\hbar^{3}=n_{0}c^{2}(m^{3}c^{3}/p_{Fe}^{3})$. The second of the required functions is the flux of the current of the average reverse gamma factor \begin{equation}\label{RHD2021ClLM t via f} t^{ab}=\int \biggl(\frac{v^{a}v^{b}}{\gamma}\biggr) f_{0} d^{3}p. \end{equation} Our calculation leads to $t^{ab}=\tilde{t} \delta^{ab}$ with \begin{equation}\label{RHD2021ClLM t rel eq of state} \tilde{t}=\frac{m^{3}c^{5}}{6\pi^{2}\hbar^{3}} \biggl[ \xi\sqrt{\xi^{2}+1}+\frac{2\xi}{\sqrt{\xi^{2}+1}} -3Arsinh\xi\biggr]. \end{equation} The fourth rank tensor should also be calculated using the Fermi step \begin{equation}\label{RHD2021ClLM M via f} M^{abcd}=\int v^{a}v^{b} v^{c}v^{d} f_{0} d^{3}p. \end{equation} It leads to the symmetric expression $M^{abcd}=(M_{0}/3)(\delta^{ab}\delta^{cd}+\delta^{ac}\delta^{bd}+\delta^{ad}\delta^{bc})$ with \begin{equation}\label{RHD2021ClLM M rel eq of state} M_{0}=\frac{m^{3}c^{7}}{30\pi^{2}\hbar^{3}} \biggl[ 2\xi(\xi^{2} -6) -\frac{3\xi}{\xi^{2}+1} +15\arctan\xi \biggr].
\end{equation} We also need to find the equilibrium expression for the average reverse gamma factor $\Gamma_{0}$ for the degenerate electron gas \begin{equation}\label{RHD2021ClLM Gamma via f} \Gamma=\int \frac{1}{\gamma} f_{0} d^{3}p. \end{equation} After the calculation for the degenerate electron gas we obtain \begin{equation}\label{RHD2021ClLM Gamma rel eq of state} \Gamma= \frac{m^{3}c^{3}}{2\pi^{2}\hbar^{3}} \biggl[ \xi\sqrt{\xi^{2}+1} -Arsinh\xi\biggr]. \end{equation} In this model the relativistic Gamma function $\Gamma$ is an independent function. Its evolution is described by equation (\ref{RHD2021ClLM eq for Gamma}). Equation (\ref{RHD2021ClLM Gamma rel eq of state}) is used as the equation of state for the equilibrium value of the relativistic Gamma function $\Gamma$. \section{Waves in the relativistic magnetized plasmas} \subsection{Equilibrium state and the linearized hydrodynamic equations} We focus on degenerate electron-ion plasmas, where both components are degenerate. Moreover, the concentration of both components, $n_{0e}=n_{0i}$, is high, up to values giving large Fermi velocities $v_{Fe}$ and $v_{Fi}$ close to the speed of light $c$. We consider small perturbations of the equilibrium state, which is characterized by the constant concentrations of electrons and ions $n_{0e}$, $n_{0i}$, constant values of the average reverse relativistic gamma factors $\Gamma_{0e}$, $\Gamma_{0i}$, zero values of the equilibrium velocity fields of both species, zero values of the currents of the average reverse relativistic gamma factors, and zero values of the electric and magnetic fields. So, the hydrodynamic functions are $n_{s}=n_{0s}+\delta n_{s}$, $v_{xs}=\delta v_{xs}$, $\Gamma_{s}=\Gamma_{0s}+\delta \Gamma_{s}$, $t_{xs}=\delta t_{xs}$, $E_{x}=\delta E_{x}$; perturbations of the magnetic field are not considered since we study the longitudinal waves. It is also assumed that the perturbations have the monochromatic form, for instance $\delta n_{s}=N_{s}e^{-\imath\omega t+\imath k x}$, where $N_{s}$ is the amplitude. The linearized continuity equation has the well-known form for macroscopically motionless fluids (no equilibrium velocity field) \begin{equation}\label{RHD2021ClLM continuity equation lin 1D} \partial_{t}\delta n_{s}+n_{0s}\partial_{x} \delta v_{xs}=0. \end{equation} The second equation appears from equation (\ref{RHD2021ClLM Euler for v}) in the following form \begin{equation}\label{RHD2021ClLM velocity field evolution equation lin 1D} n_{0s}\partial_{t}\delta v_{xs}+\frac{\delta \tilde{p}_{0s}}{\delta n_{0s}}\partial_{x}\delta n_{s} =\frac{q_{s}}{m_{s}}\Gamma_{0s} \delta E_{x}-\frac{q_{s}}{m_{s}c^{2}}\tilde{t}_{0s}\delta E_{x}, \end{equation} where the parameters $\delta \tilde{p}_{0s}/\delta n_{0s}$ and $\tilde{t}_{0s}$ appear from the equations of state presented above.
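These coefficients can be cross-checked independently. The short sketch below (our own, in units $m=c=\hbar=1$, with an assumed illustrative value of $\xi$) compares the closed-form equations of state quoted above against direct numerical moments of the Fermi step distribution.

\begin{verbatim}
import numpy as np
from scipy import integrate

xi = 1.3                      # assumed: relativistic regime, p_F ~ mc

def moment(g):                # (1/pi^2) int_0^xi g(x) x^2 dx  (Fermi step)
    val, _ = integrate.quad(lambda x: g(x)*x**2/np.pi**2, 0, xi)
    return val

gam = lambda x: np.sqrt(1 + x**2)         # gamma(p) for m = c = 1
v   = lambda x: x/gam(x)                  # velocity
ash = np.arcsinh(xi)

checks = {
 "P":     (moment(lambda x: x*v(x))/3,
           (xi*gam(xi)*(2*xi**2 - 3) + 3*ash)/(24*np.pi**2)),
 "p~":    (moment(lambda x: v(x)**2)/3,
           (xi**3/3 - xi + np.arctan(xi))/(3*np.pi**2)),
 "t~":    (moment(lambda x: v(x)**2/gam(x))/3,
           (xi*gam(xi) + 2*xi/gam(xi) - 3*ash)/(6*np.pi**2)),
 "M0":    (moment(lambda x: v(x)**4)/5,
           (2*xi*(xi**2 - 6) - 3*xi/(1 + xi**2)
            + 15*np.arctan(xi))/(30*np.pi**2)),
 "Gamma": (moment(lambda x: 1/gam(x)),
           (xi*gam(xi) - ash)/(2*np.pi**2)),
}
for name, (num, exact) in checks.items():
    print(f"{name:5s}: numeric {num:.6f}   closed form {exact:.6f}")
\end{verbatim}

The factors $1/3$ and $1/5$ in the moments come from the angular averages of $n_{x}^{2}$ and $n_{x}^{4}$ over the sphere, matching the isotropic forms $p^{ab}=\tilde{p}\delta^{ab}$, $t^{ab}=\tilde{t}\delta^{ab}$, and $M^{xxxx}=M_{0}$.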
We obtain the linearized equations for $\delta\Gamma$ and $\delta t_{x}$ from equations (\ref{RHD2021ClLM eq for Gamma}) and (\ref{RHD2021ClLM eq for t a}) in the following form \begin{equation}\label{RHD2021ClLM evolution of Gamma lin 1D} \partial_{t}\delta\Gamma_{s} +\Gamma_{0s}\partial_{x}\delta v_{xs}+\partial_{x}\delta t_{xs} =0, \end{equation} and $$\partial_{t}\delta t_{xs} +\partial_{x}\delta \tilde{t}_{s}-\frac{\Gamma_{0s}}{n_{0s}}\partial_{x}\delta \tilde{p}_{s} +\frac{q_{s}}{m_{s}}\frac{\Gamma_{0s}^{2}}{n_{0s}}\delta E_{x}$$ \begin{equation}\label{RHD2021ClLM evolution of Theta lin 1D} =\frac{q_{s}}{m_{s}}n_{0s}\delta E_{x} -\frac{5q_{s}}{m_{s}c^{2}}\tilde{p}_{0s}\delta E_{x} +\frac{10q_{s}}{3m_{s}c^{4}}M_{0s}\delta E_{x}, \end{equation} where $M_{0s}^{xxcc}=(5/3)M_{0s}$. The linearized Poisson equation has the well-known form \begin{equation}\label{RHD2021ClLM Poisson equation lin} \partial_{x}\delta E_{x}=4\pi (q_{e} \delta n_{e}+q_{i} \delta n_{i}). \end{equation} Analysis of the set of linearized equations obtained at zero external fields shows that it is enough to use equations (\ref{RHD2021ClLM continuity equation lin 1D}), (\ref{RHD2021ClLM velocity field evolution equation lin 1D}), (\ref{RHD2021ClLM Poisson equation lin}) to get a closed set of equations for the longitudinal perturbations. The linearized hydrodynamic equations (\ref{RHD2021ClLM continuity equation lin 1D})-(\ref{RHD2021ClLM Poisson equation lin}) contain $\Gamma_{0}$ and $\tilde{t}_{0}$, but function $\tilde{p}$ enters these equations in two forms. It appears as the equilibrium value of this function on the right-hand side of equation (\ref{RHD2021ClLM evolution of Theta lin 1D}). However, the Euler equation (\ref{RHD2021ClLM velocity field evolution equation lin 1D}) contains the perturbation of function $\tilde{p}$: $\delta \tilde{p}_{s}=(\delta p/\delta n)\delta n_{s}$, so $u_{ps}^{2}\equiv (\delta p_{s}/\delta n_{s})$. Therefore, we present the expression for $\delta p_{s}/\delta n_{s}$ obtained from equation (\ref{RHD2021ClLM p rel eq of state}) \begin{equation}\label{RHD2021ClLM d p on d n rel eq of state} \frac{\delta\tilde{p}}{\delta n}=\frac{1}{3}c^{2}\frac{\xi^{2}}{\xi^{2}+1}.\end{equation} It gives $v_{Fe}^{2}/3$ in the nonrelativistic limit, while the ultrarelativistic limit leads to $c^{2}/3$. Using the relativistic expression for the Fermi velocity we find that $\delta\tilde{p}/\delta n=v_{Fe}^{2}/3$ in all regimes. \subsection{Spectrum of the Langmuir waves: Hydrodynamic description} To give a simple illustration of the relativistic effects existing in the presented model we consider one of the fundamental wave effects in plasmas -- the Langmuir wave spectrum. The relativistic Langmuir waves are considered in Ref. \cite{Andreev 2021 05} for thermally distributed electrons with relativistic temperatures. Here we consider this problem for degenerate electrons. Equations (\ref{RHD2021ClLM continuity equation lin 1D})-(\ref{RHD2021ClLM Poisson equation lin}) allow us to find the following spectrum \begin{equation}\label{RHD2021ClLM Langmuir wave H} \omega^{2}=\biggl(\frac{\Gamma_{0}}{n_{0}}-\frac{u_{t}^{2}}{c^{2}}\biggr)\omega_{Le}^{2} +u_{p}^{2}k_{z}^{2}, \end{equation} where the parameters $\Gamma_{0}$, $u_{t}^{2}$, and $u_{p}^{2}$ should be obtained from equations (\ref{RHD2021ClLM p rel eq of state})-(\ref{RHD2021ClLM Gamma rel eq of state}). However, they enter in different ways.
As mentioned above, $u_{p}^{2}$ appears from the perturbation of the pressure (\ref{RHD2021ClLM p rel eq of state}), $p=p_{0}+\delta p$ with $\delta p=u_{p}^{2} \delta n$, while the parameter $u_{t}^{2}$ appears from the equilibrium value of the function: $u_{t}^{2}\equiv \tilde{t}_{0}/n_{0}$. The necessary substitution leads to the following result \begin{equation}\label{RHD2021ClLM Langmuir wave H2} \omega^{2}=\frac{\omega_{Le}^{2}}{\gamma_{Fe}} +\frac{1}{3}c^{2}\frac{p_{Fe}^{2}}{p_{Fe}^{2}+m^{2}c^{2}}k_{z}^{2}, \end{equation} where $\gamma_{Fe}=1/\sqrt{1-v_{Fe}^{2}/c^{2}}=\sqrt{1+p_{Fe}^{2}/m^{2}c^{2}}$ is the standard relativistic gamma factor considered at the Fermi velocity. Expression (\ref{RHD2021ClLM Langmuir wave H2}) can be represented via the Fermi velocity as follows: $\omega^{2}=\frac{\omega_{Le}^{2}}{\gamma_{Fe}} +\frac{1}{3}v_{Fe}^{2}k_{z}^{2}$. \subsection{Spectrum of the Langmuir waves in the relativistic kinetics for the degenerate electrons} It would be useful to give an analysis of the accuracy of the hydrodynamic model presented within equations (\ref{RHD2021ClLM cont via v})-(\ref{RHD2021ClLM eq for t a}). To this end we compare the spectrum of the Langmuir waves obtained above (\ref{RHD2021ClLM Langmuir wave H2}) with the result of the relativistic kinetics. Therefore, we present the Vlasov kinetic equation \begin{equation}\label{RHD2021ClLM Vlasov eq} \partial_{t}f_{e}+\textbf{v}\cdot\nabla f_{e} +q_{e}\biggl(\textbf{E}+\frac{1}{c}\textbf{v}\times\textbf{B}\biggr)\cdot\frac{\partial f_{e}}{\partial \textbf{p}}=0,\end{equation} with the corresponding form of the Poisson equation \begin{equation}\label{RHD2021ClLM div E kin} \nabla\cdot \textbf{E}=4\pi q_{e}\int f_{e}(\textbf{r},\textbf{p},t)d\textbf{p} +4\pi q_{i}n_{0i},\end{equation} for the analysis of the longitudinal waves. The equilibrium distribution function is given by equation (\ref{RHD2021ClLM Fermi step}). Here we consider the small perturbations of this distribution $f_{e}=f_{0}+\delta f$ with $\delta f=F e^{-\imath\omega t +\imath \textbf{k}\cdot \textbf{r}}$. Moreover, let us consider the small perturbations for the waves propagating in the plasma placed in an external uniform magnetic field, so we have $\textbf{E}=0+\delta \textbf{E}$ and $\textbf{B}=B_{0}\textbf{e}_{z}+\delta \textbf{B}$. Therefore, the Vlasov kinetic equation (\ref{RHD2021ClLM Vlasov eq}) transforms to the following form in the linear approximation \begin{equation}\label{RHD2021ClLM Vlasov eq lin} -\imath(\omega- \textbf{k}\cdot\textbf{v}) \delta f -m\frac{v_{\perp}}{p_{\perp}}\Omega_{e}\frac{\partial \delta f}{\partial \varphi_{p}} +q_{e}\delta\textbf{E}\cdot\frac{\partial f_{0}}{\partial \textbf{p}}=0,\end{equation} where we use $\Omega_{e}=q_{e}B_{0}/m_{e}c$, $[\textbf{v}\times\delta \textbf{B}]\cdot(\partial f_{0}(p)/\partial \textbf{p})=0$, $\textbf{v}=v_{\perp}\textbf{e}_{\perp}+v_{z}\textbf{e}_{z}$, and $v_{\perp}/p_{\perp}=\sqrt{1-v^{2}/c^{2}}=1/\gamma$. Equation (\ref{RHD2021ClLM Vlasov eq lin}) leads to the following solution $$\delta f=\int_{c}^{\varphi}d\varphi' \biggl[\biggl(\frac{q\gamma}{\Omega_{e}}\delta \textbf{E}\cdot\frac{\partial f_{0}}{\partial \textbf{p}}\biggr)_{\varphi'}\cdot $$ \begin{equation}\label{RHD2021ClLM delta f solution 1} \cdot\exp\biggl(\frac{\imath\gamma}{\Omega_{e}}\int_{\varphi}^{\varphi'}d\varphi'' (\omega-\textbf{k}\cdot\textbf{v})_{\varphi''}\biggr)\biggr].
\end{equation} Integration leads to $$\delta f=\frac{q_{e}\gamma}{\Omega_{e}}\frac{1}{p}\frac{\partial f_{0}}{\partial p} \exp\biggl(-\imath\frac{(\omega-k_{z}v_{z})\varphi -k_{x}v_{\perp}\sin\varphi}{\Omega_{e}/\gamma}\biggr)\cdot$$ \begin{equation}\label{RHD2021ClLM delta f solution 2} \cdot\int_{c}^{\varphi}d\varphi' (\delta \textbf{E}\cdot \textbf{p})_{\varphi'} \exp\biggl(\imath\frac{(\omega-k_{z}v_{z})\varphi' -k_{x}v_{\perp}\sin\varphi'}{\Omega_{e}/\gamma}\biggr). \end{equation} For the longitudinal waves $\delta \textbf{E}\parallel \textbf{k}$ propagating parallel to the external magnetic field $\textbf{k}\parallel \textbf{B}_{0}$ we obtain $$\delta f=\frac{q_{e}}{\Omega_{e}\sqrt{1-\frac{v^{2}}{c^{2}}}}\frac{1}{p}\frac{\partial f_{0}}{\partial p} p_{z} \delta E_{z}\cdot$$ \begin{equation}\label{RHD2021ClLM delta f solution 3} \cdot\exp\biggl(-\imath\frac{(\omega-k_{z}v_{z})\varphi}{\Omega_{e}\sqrt{1-\frac{v^{2}}{c^{2}}}}\biggr) \int_{c}^{\varphi}d\varphi' \exp\biggl(\imath\frac{(\omega-k_{z}v_{z})\varphi'}{\Omega_{e}\sqrt{1-\frac{v^{2}}{c^{2}}}}\biggr). \end{equation} Final integration leads to the following expression for the perturbation of the distribution function \begin{equation}\label{RHD2021ClLM delta f solution 4} \delta f=q_{e}\frac{p_{z}}{p}\frac{\partial f_{0}}{\partial p} \frac{\delta E_{z}}{\imath(\omega-k_{z}v_{z})}, \end{equation} where all relativistic effects are placed in $(\omega-k_{z}v_{z})$ via $v_{z}=p_{z}/m\gamma$. Let us use solution (\ref{RHD2021ClLM delta f solution 4}) for the calculation of the perturbation of the concentration \begin{equation}\label{RHD2021ClLM delta n via delta f} \delta n= \int d\textbf{p} \delta f(\textbf{r}, \textbf{p},t) \end{equation} and substitute it in the Poisson equation to get the dispersion equation \begin{equation}\label{RHD2021ClLM Disp eq from kin} 1+3\gamma_{Fe}\frac{\omega_{Le}^{2}}{p_{Fe}^{2}k_{z}^{2}/m^{2}} \biggl[1-\frac{m\gamma_{Fe}\omega}{2p_{Fe}k_{z}}\ln\biggl(\frac{m\gamma_{Fe}\omega+p_{Fe}k_{z}}{m\gamma_{Fe}\omega-p_{Fe}k_{z}}\biggr)\biggr]=0. \end{equation} In the limit $\omega\gg k_{z}v_{Fe}=p_{Fe}k_{z}/m\gamma_{Fe}$ the dispersion equation gives the following spectrum of the relativistic Langmuir waves in the degenerate electron gas \begin{equation}\label{RHD2021ClLM Langm spectrum from kin} \omega^{2}=\frac{\omega^{2}_{Le}}{\gamma_{Fe}} +\frac{3}{5}\frac{p_{Fe}^{2}k_{z}^{2}}{m^{2}\gamma_{Fe}^{2}}. \end{equation} It can also be represented in the following form: $\omega^{2}=\frac{\omega^{2}_{Le}}{\gamma_{Fe}} +\frac{3}{5}v_{Fe}^{2}k_{z}^{2}$. Comparison with the results of the hydrodynamic model presented above shows agreement up to the coefficient in front of $v_{Fe}^{2}$, which is the well-known difference existing in the nonrelativistic limit as well. There is a systematic way of solving this problem via the extension of the set of hydrodynamic equations to get complete agreement with the kinetic description (see for instance \cite{Tokatly PRB 99}, \cite{Tokatly PRB 00}, \cite{Andreev JPP 21}). The presence of the relativistic gamma factor taken at the Fermi velocity, $\gamma_{Fe}$, in the denominator of the first term of equation (\ref{RHD2021ClLM Langm spectrum from kin}) shows the decrease of the cut-off frequency of the Langmuir waves. Moreover, the presence of the square of the relativistic gamma factor in the denominator of the second term of equation (\ref{RHD2021ClLM Langm spectrum from kin}) shows a stronger decrease of the change of frequency with the growth of the wave vector. This also reveals itself in the decrease of the group velocity.
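The comparison can be made quantitative with a short numerical sketch (our own; units $\omega_{Le}=c=1$, with an assumed $\xi=1$): it solves the kinetic dispersion equation (\ref{RHD2021ClLM Disp eq from kin}) by root finding and compares the result with the hydrodynamic spectrum and with the expanded kinetic spectrum (\ref{RHD2021ClLM Langm spectrum from kin}).

\begin{verbatim}
import numpy as np
from scipy import optimize

xi   = 1.0                          # assumed p_F/mc for illustration
gamF = np.sqrt(1 + xi**2)
vF   = xi/gamF                      # Fermi velocity (c = 1)

def D_kin(w, k):                    # kinetic dispersion function
    x = w/(k*vF)
    return 1 + (3/(gamF*(k*vF)**2))*(1 - 0.5*x*np.log((x + 1)/(x - 1)))

print(" k    hydro   kinetic(expanded)  kinetic(exact root)")
for k in (0.2, 0.5, 1.0):
    w_h = np.sqrt(1/gamF + vF**2*k**2/3)
    w_k = np.sqrt(1/gamF + 3*vF**2*k**2/5)
    w_x = optimize.brentq(lambda w: D_kin(w, k), k*vF*(1 + 1e-9), 5.0)
    print(f"{k:.1f}  {w_h:.4f}     {w_k:.4f}             {w_x:.4f}")
\end{verbatim}

At small $k$ the three values nearly coincide; with growing $k$ the $1/3$ versus $3/5$ discrepancy between the hydrodynamic and kinetic coefficients becomes visible, as discussed above.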
\subsection{Spectrum of the ion-acoustic waves} From equations (\ref{RHD2021ClLM continuity equation lin 1D})-(\ref{RHD2021ClLM Poisson equation lin}) we find the following dispersion equation for the longitudinal waves in the electron-ion plasmas, in order to obtain the spectrum of the low frequency ion-acoustic waves \begin{equation}\label{RHD2021ClLM Disp eq e-i} 1=\biggl(\frac{\Gamma_{0e}}{n_{0e}}-\frac{u_{te}^{2}}{c^{2}}\biggr)\frac{\omega_{Le}^{2}}{\omega^{2}-u_{pe}^{2}k_{z}^{2}} +\biggl(\frac{\Gamma_{0i}}{n_{0i}}-\frac{u_{ti}^{2}}{c^{2}}\biggr)\frac{\omega_{Li}^{2}}{\omega^{2}-u_{pi}^{2}k_{z}^{2}}. \end{equation} As demonstrated above, we can simplify the coefficients in front of the Langmuir frequencies, expressing them via the relativistic gamma factors. Moreover, the characteristic velocities $u_{pe}$ and $u_{pi}$ have simple expressions via the Fermi velocities: $u_{pe}^{2}=v_{Fe}^{2}/3$ and $u_{pi}^{2}=v_{Fi}^{2}/3$. Therefore, we represent dispersion equation (\ref{RHD2021ClLM Disp eq e-i}) in the simplified form \begin{equation}\label{RHD2021ClLM Disp eq e-i simple} 1=\frac{\omega_{Le}^{2}}{\gamma_{Fe}(\omega^{2}-v_{Fe}^{2}k_{z}^{2}/3)} +\frac{\omega_{Li}^{2}}{\gamma_{Fi}(\omega^{2}-v_{Fi}^{2}k_{z}^{2}/3)}. \end{equation} For frequencies in the interval $v_{Fe}^{2}k_{z}^{2}/3\gg\omega^{2}\gg v_{Fi}^{2}k_{z}^{2}/3$ equation (\ref{RHD2021ClLM Disp eq e-i simple}) gives the following spectrum of the ion-acoustic waves \begin{equation}\label{RHD2021ClLM spectrum iaw} \omega^{2}=\frac{\omega_{Li}^{2}}{\gamma_{Fi}(1+\frac{\omega_{Le}^{2}}{\gamma_{Fe}v_{Fe}^{2}k_{z}^{2}/3})}, \end{equation} or, in the long wavelength limit, \begin{equation}\label{RHD2021ClLM spectrum iaw small k} \omega^{2} =\frac{m_{e}\gamma_{Fe}}{m_{i}\gamma_{Fi}}v_{Fe}^{2}k_{z}^{2}/3 =\frac{p_{Fe}^{2}k_{z}^{2}}{3m_{e}m_{i}\gamma_{Fe}\gamma_{Fi}}. \end{equation} The expression of the frequency square in terms of the Fermi velocity contains an additional factor equal to the ratio of the gamma factors for electrons and ions, $\gamma_{Fe}/\gamma_{Fi}$. However, the Fermi momentum is expressed via the concentration by the same formula as in the nonrelativistic case, $p_{Fe}=(3\pi^{2}n_{0e})^{1/3}\hbar$. Hence, the second expression in equation (\ref{RHD2021ClLM spectrum iaw small k}) gives a clearer physical picture, where the frequency square is proportional to the product of the reverse gamma factors, $(\gamma_{Fe}\gamma_{Fi})^{-1}$. The minimal frequency square of the Langmuir wave is decreased by the factor $\gamma_{Fe}^{-1}$ (see equation (\ref{RHD2021ClLM Langm spectrum from kin})), while the frequency square of the ion-acoustic waves shows a stronger decrease since it contains $(\gamma_{Fe}\gamma_{Fi})^{-1}$, where the additional factor $\gamma_{Fi}^{-1}<1$ is included. \subsection{Small amplitude ion-acoustic soliton} The ion-acoustic solitons are considered in the high-density low-temperature electron-ion plasmas. They are studied in the limit of small amplitude of the soliton. Therefore, we apply the reductive perturbation method (see for instance \cite{Andreev_Iqbal PoP 16}). This method includes the expansion of the hydrodynamic functions as series in the small amplitude, with a scaling controlled by the parameter $\varepsilon$.
In accordance with this method we introduce the following pair of variables with the necessary scaling \begin{equation}\label{RHD2021ClLM def of xi}\begin{array}{cc} \xi=\varepsilon^{\frac{1}{2}}(z-Ut), & \tau=\varepsilon^{\frac{3}{2}}t, \end{array} \end{equation} (note that in this subsection $\xi$ denotes the stretched coordinate, not the dimensionless Fermi momentum used above), where the variable $\tau$ is proportional to a higher power of the small parameter $\varepsilon$. The variable $\tau$ is called the slow time, while the faster dependence on time $t$ is included in the variable $\xi$. We introduce an expansion of the hydrodynamic functions in the small parameter $\varepsilon$: \begin{equation}\label{RHD2021ClLM expansion of n s} n_{s}=n_{0s}+\varepsilon n_{1s}+\varepsilon^{2} n_{2s},\end{equation} \begin{equation}\label{RHD2021ClLM expansion of v s} v_{sz}=0+\varepsilon v_{1sz}+\varepsilon^{2} v_{2sz},\end{equation} \begin{equation}\label{RHD2021ClLM expansion of Gamma} \Gamma_{s}=\Gamma_{0s}+\varepsilon \Gamma_{1s}+\varepsilon^{2} \Gamma_{2s},\end{equation} \begin{equation}\label{RHD2021ClLM expansion of t flux of G} t_{sz}=0+\varepsilon t_{1sz}+\varepsilon^{2} t_{2sz},\end{equation} and \begin{equation}\label{RHD2021ClLM expansion of phi} \phi=0+\varepsilon \phi_{1}+\varepsilon^{2} \phi_{2},\end{equation} where function $\Gamma_{0s}$ is given by equation (\ref{RHD2021ClLM Gamma rel eq of state}), and $\phi$ is the potential of the electric field, $\textbf{E}=-\nabla\phi$. Equations of state (\ref{RHD2021ClLM Pressure rel eq of state})-(\ref{RHD2021ClLM M rel eq of state}) for functions $P$, $p$, $t$, and $M$ allow us to get representations of these functions via different combinations of $n_{0s}$, $n_{1s}$, $n_{1s}^{2}$, and $n_{2s}$. We find the expressions presented below: \begin{equation}\label{RHD2021ClLM expansion of p on epsilon} \tilde{p}_{s}\approx \tilde{p}_{0s}+\varepsilon u_{ps}^{2} n_{1s} +\varepsilon^{2} u_{ps}^{2} n_{2s} +\varepsilon^{2} \frac{v_{Fs}^{2}}{9\gamma_{Fs}^{2}n_{0s}} n_{1s}^{2}, \end{equation} where $u_{ps}^{2}=v_{Fs}^{2}/3$, and \begin{equation}\label{RHD2021ClLM expansion of t on epsilon} \tilde{t}_{s}\approx \tilde{t}_{0s}+\varepsilon \frac{v_{Fs}^{2}}{3\gamma_{Fs}} n_{1s}. \end{equation} For function $M_{0s}$ we need the equilibrium expression only. The presented method of expansion leads to the continuity equation considered in the first (lowest) and the second orders of the expansion: \begin{equation}\label{RHD2021ClLM cont eq expansion order 1} n_{0s}\partial_{\xi}v_{sz1}=U\partial_{\xi}n_{s1}, \end{equation} and \begin{equation}\label{RHD2021ClLM cont eq expansion order 2} \partial_{\tau}n_{s1}-U\partial_{\xi}n_{s2}+\partial_{\xi}(n_{0s}v_{sz2}+n_{s1}v_{sz1})=0. \end{equation} More accurately speaking, the coefficients are $\varepsilon^{3/2}$ and $\varepsilon^{5/2}$ for the first and second orders of the expansion, correspondingly. We can integrate equation (\ref{RHD2021ClLM cont eq expansion order 1}) under the boundary conditions that the perturbations caused by the soliton go to zero at an infinite distance from its center: $v_{sz1}\rightarrow0$ and $n_{s1}\rightarrow0$ at $\xi\rightarrow\pm\infty$. We obtain \begin{equation}\label{RHD2021ClLM cont eq expansion order 1 integrated} n_{0s}v_{sz1}=Un_{s1}. \end{equation} Next, we consider the Poisson equation in the first and second orders: \begin{equation}\label{RHD2021ClLM Poisson equation I order} n_{e1}-n_{i1}=0 \end{equation} and \begin{equation}\label{RHD2021ClLM Poisson equation II order} -\partial_{\xi}^{2}\varphi_{1}=4\pi (q_{e}n_{e2}+q_{i}n_{i2}).
\end{equation} The necessary relation between the concentration, the velocity field, and the electric field is found from the Euler equation, which is also considered in the first and second orders of the expansion \begin{equation}\label{RHD2021ClLM Euler equation I order} -Un_{0s}\partial_{\xi}v_{sz1}+u_{sp}^{2}\partial_{\xi}n_{s1} =-\frac{q_{s}}{m_{s}}n_{0s}\biggl(\frac{\Gamma_{0s}}{n_{0s}}-\frac{u_{st}^{2}}{c^{2}}\biggr)\partial_{\xi}\varphi_{1}, \end{equation} and $$-Un_{0s}\partial_{\xi}v_{sz2}+n_{0s}\partial_{\tau}v_{sz1}+u_{sp}^{2}\partial_{\xi}n_{s2}+\frac{u_{sp}^{2}}{3\gamma_{Fs}^{2}n_{0s}}\partial_{\xi}n_{s1}^{2}$$ \begin{equation}\label{RHD2021ClLM Euler equation II order} =-\frac{q_{s}}{m_{s}}\biggl(\Gamma_{s1}-\frac{u_{st}^{2}}{c^{2}}n_{s1} \biggr)\partial_{\xi}\varphi_{1}.\end{equation} The Euler equation obtained in the first order (\ref{RHD2021ClLM Euler equation I order}) can be integrated, so the necessary relation between $v_{sz1}$, $n_{s1}$, and $\varphi_{1}$ is found. The relation between $v_{sz2}$, $n_{s2}$, $v_{sz1}$, and $\varphi_{1}$ presented by equation (\ref{RHD2021ClLM Euler equation II order}) includes the first order perturbation of the relativistic hydrodynamic gamma function, $\Gamma_{s1}$. Therefore, we consider equation (\ref{RHD2021ClLM eq for Gamma}) in the lowest order of the expansion \begin{equation}\label{RHD2021ClLM Gamma eq I order} -U\partial_{\xi}\Gamma_{s1}+\Gamma_{0s}\partial_{\xi}v_{sz1}+\partial_{\xi}t_{sz1}=0. \end{equation} Next, we also need the equation for the first order perturbation of the flux of the relativistic hydrodynamic gamma function, $t_{sz1}$: $$U\partial_{\xi}t_{sz1}-u_{st}^{2}\partial_{\xi}n_{s1}+\frac{\Gamma_{0s}}{n_{0s}}u_{sp}^{2}\partial_{\xi}n_{s1} +\frac{q_{s}}{m_{s}}\Gamma_{0s} \biggl(\frac{\Gamma_{0s}}{n_{0s}}-\frac{u_{st}^{2}}{c^{2}}\biggr)\partial_{\xi}\varphi_{1}$$ \begin{equation}\label{RHD2021ClLM t evol eq I order} =\frac{q_{s}}{m_{s}}n_{0s} \biggl(1-\frac{5u_{sp}^{2}}{c^{2}}+\frac{10}{3}\frac{u_{Ms}^{4}}{c^{4}}\biggr)\partial_{\xi}\varphi_{1}, \end{equation} where we introduce the characteristic velocity for function $M_{0s}$ as follows: $u_{Ms}^{4}\equiv M_{0s}/n_{0s}$. Let us point out that equations (\ref{RHD2021ClLM Gamma eq I order}) and (\ref{RHD2021ClLM t evol eq I order}) are used in the second order. In the first order we find the following expression for the concentration as a function of the potential of the electric field \begin{equation}\label{RHD2021ClLM n s1 via phi} n_{s1}=\frac{q_{s}}{m_{s}}\biggl(\frac{\Gamma_{0s}}{n_{0s}}-\frac{u_{st}^{2}}{c^{2}}\biggr) \frac{n_{0s}}{U^{2}-u_{sp}^{2}}\varphi_{1}. \end{equation} Let us repeat that $\frac{\Gamma_{0s}}{n_{0s}}-\frac{u_{st}^{2}}{c^{2}}=\frac{1}{\gamma_{Fs}}$. We substitute expression (\ref{RHD2021ClLM n s1 via phi}) in the Poisson equation (\ref{RHD2021ClLM Poisson equation I order}) and find the equation for the velocity of the perturbation, $U$: \begin{equation}\label{RHD2021ClLM eq for U} \frac{1}{m_{e}} \frac{\frac{\Gamma_{0e}}{n_{0e}}-\frac{u_{et}^{2}}{c^{2}}}{U^{2}-u_{ep}^{2}} +\frac{1}{m_{i}} \frac{\frac{\Gamma_{0i}}{n_{0i}}-\frac{u_{it}^{2}}{c^{2}}}{U^{2}-u_{ip}^{2}}=0.\end{equation} The expressions $\frac{\Gamma_{0s}}{n_{0s}}-\frac{u_{st}^{2}}{c^{2}}>0$ are positive. Hence equation (\ref{RHD2021ClLM eq for U}) has a solution under the condition $u_{ip}^{2}< U^{2}< u_{ep}^{2}$. Moreover, it is well-known that stable ion-acoustic waves exist at the stricter condition $u_{ip}^{2}\ll U^{2}\ll u_{ep}^{2}$.
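Since equation (\ref{RHD2021ClLM eq for U}) is linear in $U^{2}$, its exact root is available in closed form; the sketch below (our own, with an assumed illustrative $\xi_{e}=p_{Fe}/m_{e}c=1$ and equal electron and ion densities) compares it with the approximate solution obtained in the next step.

\begin{verbatim}
import numpy as np

me, mi = 1.0, 1836.15            # electron and proton masses (m_e = 1)
xe = 1.0                          # assumed p_F/(m_e c); same p_F for ions
xi_ = xe*me/mi
ge, gi = np.sqrt(1 + xe**2), np.sqrt(1 + xi_**2)
ue2 = (xe/ge)**2/3                # u_ep^2 = v_Fe^2/3  (c = 1)
ui2 = (xi_/gi)**2/3               # u_ip^2 = v_Fi^2/3
# Multiplying Eqtn. (eq for U) by its denominators gives the exact root:
U2_exact  = (ui2*mi*gi + ue2*me*ge)/(mi*gi + me*ge)
U2_approx = (me*ge/(mi*gi))*ue2   # the approximation used below
print(U2_exact, U2_approx)        # agree up to ~m_e/m_i corrections
\end{verbatim}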
Under this condition we obtain the corresponding approximate solution of equation (\ref{RHD2021ClLM eq for U}): \begin{equation}\label{RHD2021ClLM U in I order} U^{2}=\frac{m_{e}}{m_{i}}\frac{\gamma_{Fe}}{\gamma_{Fi}}u_{ep}^{2}. \end{equation} The second order leads to the nonlinear equation for the electric potential \begin{widetext} $$\partial_{\xi}^{3}\varphi_{1} +\sum_{s=e,i}\frac{U\omega_{Ls}^{2}}{\gamma_{Fs}(U^{2}-u_{sp}^{2})^{2}} \partial_{\tau}\varphi_{1} +\sum_{s=e,i}\frac{q_{s}}{m_{s}} \frac{2(U^{2}+\frac{u_{sp}^{2}}{3\gamma_{Fs}^{2}})\omega_{Ls}^{2}}{\gamma_{Fs}^{2}(U^{2}-u_{sp}^{2})^{3}}\varphi_{1}\partial_{\xi}\varphi_{1}$$ $$+\sum_{s=e,i}\frac{q_{s}}{m_{s}}\frac{\omega_{Ls}^{2}}{\gamma_{Fs}(U^{2}-u_{sp}^{2})^{2}} \biggl[\frac{\Gamma_{0s}}{n_{0s}}U\biggl(1-\frac{u_{sp}^{2}}{U^{2}}\biggr) +\frac{1}{U}\biggl(\frac{v_{Fs}^{2}}{3\gamma_{Fs}}-u_{st}^{2}\frac{U^{2}}{c^{2}}\biggr)\biggr]\varphi_{1}\partial_{\xi}\varphi_{1}$$ \begin{equation}\label{RHD2021ClLM KdV simm} +\sum_{s=e,i}\frac{q_{s}}{m_{s}}\frac{\omega_{Ls}^{2}}{U^{2}-u_{sp}^{2}} \biggl(1-5\frac{p_{0s}}{n_{0s}c^{2}}-\frac{\Gamma_{0s}}{\gamma_{Fs}n_{0s}}+\frac{10}{3}\frac{u_{sM}^{4}}{c^{4}}\biggr) \varphi_{1}\partial_{\xi}\varphi_{1}=0. \end{equation} Equation (\ref{RHD2021ClLM KdV simm}) is given in the general form, which is symmetric with respect to all species. Let us include the condition $u_{ip}^{2}\ll U^{2}\ll u_{ep}^{2}$ in equation (\ref{RHD2021ClLM KdV simm}), so that it transforms into the following form: $$\partial_{\xi}^{3}\varphi_{1} +\biggl(\frac{U\omega_{Le}^{2}}{\gamma_{Fe}u_{ep}^{4}} +\frac{\omega_{Li}^{2}}{\gamma_{Fi}U^{3}} \biggr) \partial_{\tau}\varphi_{1} +\biggl(-\frac{q_{e}}{m_{e}} \frac{2(U^{2}+\frac{u_{ep}^{2}}{3\gamma_{Fe}^{2}})\omega_{Le}^{2}}{\gamma_{Fe}^{2}u_{ep}^{6}} +\frac{q_{i}}{m_{i}} \frac{2(U^{2}+\frac{u_{ip}^{2}}{3\gamma_{Fi}^{2}})\omega_{Li}^{2}}{\gamma_{Fi}^{2}U^{6}} \biggr) \varphi_{1}\partial_{\xi}\varphi_{1}$$ $$+\Biggl(\frac{q_{e}}{m_{e}}\frac{\omega_{Le}^{2}U}{\gamma_{Fe}u_{ep}^{4}} \biggl[\frac{\Gamma_{0e}}{n_{0e}}\biggl(1-\frac{u_{ep}^{2}}{U^{2}}\biggr) +\biggl(\frac{v_{Fe}^{2}}{3\gamma_{Fe}U^{2}}-\frac{u_{et}^{2}}{c^{2}}\biggr)\biggr] +\frac{q_{i}}{m_{i}}\frac{\omega_{Li}^{2}}{\gamma_{Fi}U^{3}} \biggl[\frac{\Gamma_{0i}}{n_{0i}}\biggl(1-\frac{u_{ip}^{2}}{U^{2}}\biggr) +\biggl(\frac{v_{Fi}^{2}}{3\gamma_{Fi}U^{2}}-\frac{u_{it}^{2}}{c^{2}}\biggr)\biggr]\Biggr)\varphi_{1}\partial_{\xi}\varphi_{1}$$ \begin{equation}\label{RHD2021ClLM KdV ui U ue} +\biggl(-\frac{q_{e}}{m_{e}}\frac{\omega_{Le}^{2}}{u_{ep}^{2}} \biggl(1-5\frac{p_{0e}}{n_{0e}c^{2}}-\frac{\Gamma_{0e}}{\gamma_{Fe}n_{0e}}+\frac{10}{3}\frac{u_{eM}^{4}}{c^{4}}\biggr) +\frac{q_{i}}{m_{i}}\frac{\omega_{Li}^{2}}{U^{2}} \biggl(1-5\frac{p_{0i}}{n_{0i}c^{2}}-\frac{\Gamma_{0i}}{\gamma_{Fi}n_{0i}}+\frac{10}{3}\frac{u_{iM}^{4}}{c^{4}}\biggr)\biggr) \varphi_{1}\partial_{\xi}\varphi_{1}=0, \end{equation} \end{widetext} where we use $u_{ep}\sim c$, hence $U\ll c$. Since we have $u_{ip}^{2}\ll U^{2}$, we can expect that $u_{it}^{2}\ll U^{2}$ and $u_{iM}^{4}\ll c^{4}$. The condition $u_{ip}^{2}\ll U^{2}\ll u_{ep}^{2}$ together with the assumption $u_{ep}\sim c$ ($u_{ep}< c$, but of the same order) leads to $u_{ip}^{2}\ll U^{2}\ll c^{2}$. This allows us to assume $\gamma_{Fi}\approx1$. Moreover, the contribution of the relativistic effects to the properties of ions can be dropped. Let us consider the second term in equation (\ref{RHD2021ClLM KdV ui U ue}).
It is proportional to $\frac{U\omega_{Le}^{2}}{\gamma_{Fe}u_{ep}^{4}} +\frac{\omega_{Li}^{2}}{U^{3}}$, where we included $\gamma_{Fi}=1$. We have $\omega_{Le}^{2}\gg \omega_{Li}^{2}$, but the contribution of $\omega_{Le}^{2}$ is reduced by $\gamma_{Fe}u_{ep}^{4}$, which is large in comparison with $U^{4}$: $\sqrt{\gamma_{Fe}}u_{ep}^{2} > u_{ep}^{2}\gg U^{2}$. Hence, both terms are comparable under these conditions. Next, we consider the coefficient in the third term of equation (\ref{RHD2021ClLM KdV ui U ue}). We have $\gamma_{Fe}^{2}>1$, but we can also have $\gamma_{Fe}^{2}\gg1$. Therefore, the parameter $u_{ep}^{2}/3\gamma_{Fe}^{2}$ is reduced in comparison with $u_{ep}^{2}$; consequently, the parameter $u_{ep}^{2}/3\gamma_{Fe}^{2}$ is comparable with $U^{2}$. Here we have a stronger reduction of the electron Langmuir frequency square in comparison with the similar reduction in the second term of equation (\ref{RHD2021ClLM KdV ui U ue}). We have the following construction: $[(U^{2}+\frac{u_{ep}^{2}}{3\gamma_{Fe}^{2}})/u_{ep}^{2}]\omega_{Le}^{2}/\gamma_{Fe}^{2}u_{ep}^{4}$, where $[(U^{2}+\frac{u_{ep}^{2}}{3\gamma_{Fe}^{2}})/u_{ep}^{2}]\ll 1$ and $\gamma_{Fe}^{2}u_{ep}^{4}> u_{ep}^{4}\gg U^{4}$. We estimate the ion contribution in the coefficient of the third term of equation (\ref{RHD2021ClLM KdV ui U ue}), where we find $(U^{2}+\frac{u_{ip}^{2}}{3})\omega_{Li}^{2}/U^{6}\approx \omega_{Li}^{2}/U^{4}$. It shows that the contributions of electrons and ions can be comparable, or one of them can dominate. If both parts of the coefficient in the second term of equation (\ref{RHD2021ClLM KdV ui U ue}) are comparable, we can drop the contribution of electrons in the coefficient of the third term of equation (\ref{RHD2021ClLM KdV ui U ue}). In the opposite limit, if both parts of the coefficient in the third term of equation (\ref{RHD2021ClLM KdV ui U ue}) are comparable, we can drop the contribution of ions in the coefficient of the second term of equation (\ref{RHD2021ClLM KdV ui U ue}). In the general case, we keep all of them. We consider the electron contribution in the coefficient of the fourth term of equation (\ref{RHD2021ClLM KdV ui U ue}), where we drop $1$ in comparison with $u_{ep}^{2}/U^{2}$; we can drop $\frac{u_{et}^{2}}{c^{2}}<1$ in comparison with $\frac{v_{Fe}^{2}}{3\gamma_{Fe}U^{2}}\gg 1$ (if the relativistic effects are relatively small), but we have $\frac{v_{Fe}^{2}}{3\gamma_{Fe}U^{2}}\geq 1$ (comparable with $\frac{u_{et}^{2}}{c^{2}}\sim1$) for strong relativistic effects, $\gamma_{Fe}\gg 1$. We also have the product $\frac{\Gamma_{0e}}{n_{0e}}\frac{u_{ep}^{2}}{U^{2}}$ of the small parameter $\frac{\Gamma_{0e}}{n_{0e}}$ and the large parameter $\frac{u_{ep}^{2}}{U^{2}}$, which can be above or below $1$. We consider the contribution of ions in the coefficient of the fourth term of equation (\ref{RHD2021ClLM KdV ui U ue}). We include $u_{ip}^{2}/U^{2}\ll1$. We also use $\Gamma_{0i}\approx n_{0i}$. We see the combination of two terms, $\frac{v_{Fi}^{2}}{3U^{2}}\ll 1$ and $\frac{u_{it}^{2}}{c^{2}}\ll1$, which can be comparable to each other. So, we have $1+(\frac{v_{Fi}^{2}}{3U^{2}}-\frac{u_{it}^{2}}{c^{2}})$, where we can drop the last two terms in comparison with $1$. So, the contribution of ions simplifies down to $\frac{q_{i}}{m_{i}}\frac{\omega_{Li}^{2}}{U^{3}}$. In the general case it can be comparable with the contribution of electrons. Finally, we present an analysis of the last term in equation (\ref{RHD2021ClLM KdV ui U ue}).
No further simplification is found for the part representing the electrons. However, a strong simplification is found for the ions, where $u_{iM}^{4}\ll c^{4}$ and $p_{0i}/n_{0i}c^{2}\sim v_{Fi}^{2}/c^{2}\ll 1$. Hence, two terms remain, whose combination $1-\frac{\Gamma_{0i}}{n_{0i}}$ vanishes since $\Gamma_{0i}\approx n_{0i}$. So, there is no contribution of the ions to the last term. After the modifications described above, equation (\ref{RHD2021ClLM KdV ui U ue}) simplifies to: \begin{widetext} $$\partial_{\xi}^{3}\varphi_{1} +\biggl(\frac{U\omega_{Le}^{2}}{\gamma_{Fe}u_{ep}^{4}} +\frac{\omega_{Li}^{2}}{U^{3}} \biggr) \partial_{\tau}\varphi_{1} +\biggl(-\frac{q_{e}}{m_{e}} \frac{2(U^{2}+\frac{u_{ep}^{2}}{3\gamma_{Fe}^{2}})\omega_{Le}^{2}}{\gamma_{Fe}^{2}u_{ep}^{6}} +\frac{q_{i}}{m_{i}} \frac{2\omega_{Li}^{2}}{U^{4}} \biggr) \varphi_{1}\partial_{\xi}\varphi_{1}$$ $$+\Biggl(\frac{q_{e}}{m_{e}}\frac{\omega_{Le}^{2}U}{\gamma_{Fe}u_{ep}^{4}} \biggl[-\frac{\Gamma_{0e}}{n_{0e}}\frac{u_{ep}^{2}}{U^{2}} +\biggl(\frac{v_{Fe}^{2}}{3\gamma_{Fe}U^{2}}-\frac{u_{et}^{2}}{c^{2}}\biggr)\biggr] +\frac{q_{i}}{m_{i}}\frac{\omega_{Li}^{2}}{U^{3}} \Biggr)\varphi_{1}\partial_{\xi}\varphi_{1}$$ \begin{equation}\label{RHD2021ClLM KdV ui U ue 2} -\frac{q_{e}}{m_{e}}\frac{\omega_{Le}^{2}}{u_{ep}^{2}} \biggl(1-5\frac{p_{0e}}{n_{0e}c^{2}}-\frac{\Gamma_{0e}}{\gamma_{Fe}n_{0e}}+\frac{10}{3}\frac{u_{eM}^{4}}{c^{4}}\biggr) \varphi_{1}\partial_{\xi}\varphi_{1}=0. \end{equation} \end{widetext} The relativistic effects contribute to the properties of the ion-acoustic soliton described by equation (\ref{RHD2021ClLM KdV ui U ue 2}) via the electrons. The main indicator of these effects is the gamma factor evaluated at the Fermi velocity, $\gamma_{Fe}$. It appears in the denominators; hence, it reduces the contribution of the electrons compared with that of the ions. \section{Conclusion} The general derivation of the hydrodynamic model for relativistically hot plasmas has been presented in earlier papers \cite{Andreev 2021 05}, \cite{Andreev 2021 09}. This hydrodynamic model is based on the dynamics of four material fields: the concentration and the velocity field, \emph{and} the average reverse relativistic $\gamma$ factor and the flux of the reverse relativistic $\gamma$ factor. Here we have presented a further generalization of this model to degenerate species. So, we consider temperatures below the Fermi temperature of the chosen species. Moreover, the concentration of the species is large enough so that the Fermi velocity gets close to the speed of light. The necessary equations of state for the flux of the particle current $\tilde{p}$, the flux of the average reverse gamma factor $\tilde{t}$, and the function $M$, which is the flux of the flux of the function $\tilde{p}$, have been obtained in the paper. Moreover, the equilibrium average reverse gamma factor $\Gamma_{0}$ has been calculated for the degenerate fluid as well. The major application of the developed model has been made to the small-amplitude ion-acoustic soliton. However, in order to illustrate the role of the relativistic effects on a simple example, we have considered the spectrum of the Langmuir waves. Moreover, the Langmuir waves are considered in two ways. First, the suggested model has been applied to find the spectrum of the Langmuir waves. Second, the relativistic Vlasov kinetic equation has been used to consider the same problem in order to estimate the accuracy of the suggested model. The properties of the relativistic ion-acoustic solitons are studied analytically in terms of the suggested model.
\section{Acknowledgements} Work is supported by the Russian Foundation for Basic Research (grant no. 20-02-00476). \section{DATA AVAILABILITY} Data sharing is not applicable to this article as no new data were created or analyzed in this study, which is a purely theoretical one.
\section{Introduction} \label{sec:intro} Far-field WPT is considered a promising technique to exert a revolutionary impact on the powering systems of low-power devices and to be the enabler of 1G mobile power networks \cite{BrunoToward}. Nevertheless, boosting the efficiency of WPT remains a key challenge \cite{clerckx2021wireless}. For this purpose, early efforts in the RF community have focused on the design of efficient rectennas \cite{suh2002high,1556784}, while recent efforts in the communication community have emphasized the crucial benefits of efficient signal designs for WPT \cite{Clerckx2016Waveform}. Of notable importance is the work in \cite{Clerckx2016Waveform} that developed a systematic framework for the design and optimization of waveforms to maximize the harvested DC power at the output of the rectenna. Such waveform optimization was further extended to other scenarios such as limited feedback \cite{Huang1}, large-scale \cite{HuangLarge2017}, multi-user \cite{HuangLarge2017,abeywickrama2021refined}, opportunistic/fair-scheduling \cite{kim2020opportunistic,8476162}, multi-input-multi-output \cite{shen2020beamforming}, low-complexity \cite{ClerckxA}, prototyping and experimentation \cite{KimSignal}, wireless information and power transfer (WIPT) \cite{clerckx2017wireless} and wireless powered backscatter communications \cite{clerckx2017wirelessly}. Despite this progress, the above waveform optimization was performed without much consideration for the high power amplifier's (HPA) non-linearity at the transmitter. Indeed, it has been verified that HPA's non-linearity distorts the amplitude and phase of its input signal \cite{santella1998hybrid}, and results in unexpected performance degradation, particularly with multi-sine waveform transmission, where the amplitudes' high variations make the input signal more vulnerable to HPA's non-linearity \cite{park2020performance}. To combat HPA's non-linear effect, mainly two lines of methods have been put forward, namely designing signals less susceptible to HPA's non-linearity and applying digital pre-distortion (DPD). The former method decreases input signals' exposure to HPA's non-linear region by limiting their amplitude variations, e.g., through peak-to-average-power-ratio (PAPR) reduction \cite{kryszkiewicz2018amplifier}, distortion power reduction across the desired bandwidth \cite{kryszkiewicz2018amplifier} and leakage power reduction across the adjacent channel \cite{goutay2021end}. Indeed, PAPR reduction has been introduced as a transmit waveform constraint in WPT in \cite{Clerckx2016Waveform}. However, this class of methods might be less efficient in WPT because HPA's power efficiency is often higher in the non-linear region and also because the method is not adaptive to HPAs' characteristics. In contrast, DPD pre-distorts the desired input signal according to HPA's transfer characteristics to linearize the transfer function of the joint pre-distorter-and-HPA structure \cite{fu2014frequency}. Recent literature has revealed the performance gain of using DPD in simultaneous WIPT (SWIPT) systems, observing an improved rate-energy region \cite{2020WIPTNON}. However, those papers did not propose a waveform design strategy that accounts for HPA's non-linearity and the energy harvester's (EH) non-linearity simultaneously in WPT/SWIPT \cite{krikidis2020information}.
This letter proposes a practical WPT system model accounting for both HPA and rectenna non-linearity, and derives the optimal waveform solution in the non-linear system based on a non-linear solid-state power amplifier (SSPA) and the non-linear rectenna in \cite{Clerckx2016Waveform}. Simulations verify the benefit of the proposed waveform, which compensates for the power loss caused by HPA's non-linearity. The paper is organised as follows. Section \ref{section_WPT_system_model} models the non-linear WPT architecture. Section \ref{section_optimization} formulates the optimization problem and reformulates it into a tractable problem, which is solved by successive convex programming (SCP) combined with Barrier's method and the gradient descent (GD) method. Section \ref{section_simulations} presents simulation results, and Section \ref{section_conclusion} draws the conclusions. \begin{figure}[htb] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=9cm]{whole_structure1.pdf}} \end{minipage} \caption{The WPT structure with HPA and rectenna non-linearity.} \label{Fig_whole_structure} \end{figure} \section{WPT System Model} \label{section_WPT_system_model} Consider a system as depicted in Fig. \ref{Fig_whole_structure}. The transmitter consists of $M$ antennas, with each antenna transmitting over $N$ evenly frequency-spaced sub-carriers. At the transmitter, the RF signal is amplified and filtered before being transmitted. The complex input signal at the amplifier of the $m^{\text{th}}\:\:(m=1,2,...,M)$ antenna is written as: \begin{align} \label{eq_input_signal_complex} \widetilde{x}^{\text{in}}_m(t)&=\sum_{n=0}^{N-1}\widetilde{w}^{\text{in}}_{n,m}e^{j2\pi f_nt}, \end{align} where $\widetilde{w}^{\text{in}}_{n,m}$ denotes the complex weight of the $n^{\text{th}}\:\:(n=0,1,...,N-1)$ sub-carrier at the $m^{\text{th}}$ antenna, and $f_n=f_0+n\Delta_f$ denotes the frequency of the $n^{\text{th}}$ sub-carrier, with $f_0$ being the lowest sub-carrier frequency and $\Delta_f$ being the frequency spacing. Adopting an SSPA model \cite{rapp1991effects}, the complex signal at the output of the SSPA at the $m^{\text{th}}$ antenna becomes: \begin{equation} \label{eq_HPA_model} \widetilde{x}^{\text{HPA}}_m(t)=f_{\text{SSPA}}(\widetilde{x}^{\text{in}}_m(t))=\frac{G\widetilde{x}^{\text{in}}_m(t)}{[1+(\frac{Gx^{\text{in}}_m(t)}{A_s})^{2\beta}]^{\frac{1}{2\beta}}}, \end{equation} where $x^{\text{in}}_m(t)=|\widetilde{x}^{\text{in}}_m(t)|$ is the amplitude envelope of the complex input signal $\widetilde{x}^{\text{in}}_m(t)$, $G$ denotes the small-signal amplifier gain of the SSPA, $A_s$ denotes the saturation voltage of the SSPA, and $\beta$ denotes the smoothing parameter of the SSPA. $\widetilde{x}^{\text{HPA}}_m(t)$, after propagating through a band-pass filter (BPF), becomes the complex transmit signal $\widetilde{x}^{\text{tr}}_m(t)$. Denote by $\widetilde{w}^{\text{tr}}_{n,m}$ the complex weight of the $n^{\text{th}}$ sub-carrier at the $m^{\text{th}}$ antenna. We have: \begin{align} \label{eq_transmit_signal_complex} \widetilde{x}^{\text{tr}}_m(t)&=\sum_{n=0}^{N-1}\widetilde{w}^{\text{tr}}_{n,m}e^{j2\pi f_nt}.
\end{align} After propagating through the frequency-selective channel, the complex received signal at the receiver is: \begin{align} \label{eq_WPT_received_signal} \widetilde{y}(t)&=\sum_{m=1}^{M}\sum_{n=0}^{N-1}\widetilde{h}_{n,m}\widetilde{w}^{\text{tr}}_{n,m}e^{j2\pi f_nt}, \end{align} where $\widetilde{h}_{n,m}\sim \mathcal{CN}(0,1)$ denotes the complex channel of the $n^{\text{th}}$ sub-carrier of the signal from the $m^{\text{th}}$ transmit antenna. At the receiver, the wireless signal $\widetilde{y}(t)$ is picked up and converted into DC as a power supply via a rectenna. We model the non-linear rectenna based on \cite{Clerckx2016Waveform}; its output DC power is approximately proportional to the scaling term: \begin{align} \label{eq_scaling_term0} z_{DC}&=k_2R_{\text{ant}}\varepsilon\{y(t)^2\}+k_4R_{\text{ant}}^2\varepsilon\{y(t)^4\}\\ \label{eq_SSPA_poly_x_tr} \nonumber&=\frac{k_2R_{\text{ant}}}{2}(\sum_{m=1}^{M}\sum_{n=0}^{N-1}|\widetilde{w}^{\text{tr}}_{n,m}\widetilde{h}_{n,m}|^2)\\ \nonumber&\quad +\frac{3k_4R_{\text{ant}}^2}{8}(\sum_{\tiny{\begin{array}{c}m_0,m_1\\m_2,m_3\end{array}}}\sum_{\tiny{\begin{array}{c} n_0,n_1,n_2,n_3\\n_0+n_1=n_2+n_3\end{array}}} \widetilde{h}_{n_0,m_0}\widetilde{w}^{\text{tr}}_{n_0,m_0}\times\\ &\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\widetilde{h}_{n_1,m_1}\widetilde{w}^{\text{tr}}_{n_1,m_1}\widetilde{h}^*_{n_2,m_2}\widetilde{w}^{\text{tr}^*}_{n_2,m_2}\widetilde{h}^*_{n_3,m_3}\widetilde{w}^{\text{tr}^*}_{n_3,m_3}), \end{align} where $y(t)=\mathfrak{R}\{\widetilde{y}(t)\}$ is the real received signal, and $k_i=i_s/(i!(\eta_0 V_0)^i)$ with $i_s$ being the reverse bias saturation current, $\eta_0$ being the ideality factor, $V_0$ being the thermal voltage of the diode and $R_{\text{ant}}$ being the characteristic impedance of the receiving antenna. \section{Optimization Solutions} \label{section_optimization} Consequently, subject to a transmit power constraint and an input power constraint, the optimization problem to maximize the end-to-end harvested DC in WPT is written as: \begin{maxi!} {{\{\widetilde{w}^{\text{in}}_{n,m}\}}}{z_{DC}(\{\widetilde{w}^{\text{in}}_{n,m}\}),}{\label{eq_optimization_P1}}{\label{eq_optimization_P1_1}} \addConstraint{\frac{1}{2}\sum_m^M \sum_n^N |\widetilde{w}^{\text{in}}_{n,m}|^2 \leq P^{\max}_{\text{in}}}\label{eq_optimization_P1_2} \addConstraint{\frac{1}{2}\sum_m^M \sum_n^N |\widetilde{w}^{\text{tr}}_{n,m}(\{\widetilde{w}^{\text{in}}_{n,m}\})|^2 \leq P^{\max}_{\text{tr}},}\label{eq_optimization_P1_3} \end{maxi!} where $P^{\max}_{\text{in}}$ and $P^{\max}_{\text{tr}}$ are the input power constraint and the transmit power constraint respectively \footnote{Eq. \eqref{eq_optimization_P1_2} prevents the power of the SSPA's input signal from significantly exceeding the SSPA's saturation power (the maximal output power), thereby avoiding poor amplifier efficiency. Eq. \eqref{eq_optimization_P1_3} limits human RF exposure to the transmit signal.}. Unfortunately, the scaling term $z_{DC}$ as a function of $\{\widetilde{w}^{\text{in}}_{n,m}\}$ in Eq. \eqref{eq_optimization_P1_1} can hardly be specified explicitly, while $z_{DC}$ as a function of $\{\widetilde{w}^{\text{tr}}_{n,m}\}$ has been written explicitly in Eq. \eqref{eq_SSPA_poly_x_tr}. Thus, to solve problem \eqref{eq_optimization_P1}, we change the optimization variables in problem \eqref{eq_optimization_P1} from $\{\widetilde{w}^{\text{in}}_{n,m}\}$ into $\{\widetilde{w}^{\text{tr}}_{n,m}\}$ and express $\{\widetilde{w}^{\text{in}}_{n,m}\}$ in Eq.
\eqref{eq_optimization_P1_2} by using $\{\widetilde{w}^{\text{tr}}_{n,m}\}$. Consequently, an equivalent optimization problem is formed as: \begin{maxi!} {\substack{\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}}}{z_{DC}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}),}{\label{eq_optimization_P3}}{\label{eq_optimization_P3_1}} \addConstraint{{\sum_{m=1}^M\frac{1}{2T}\int_{T}\{\frac{x^{\text{tr}}_m(t)}{G}[\frac{1}{1-(\frac{x^{\text{tr}}_m(t)}{A_s})^{2\beta}}]^{\frac{1}{2\beta}}\}^2 dt}\nonumber\breakObjective{\leq P^{\max}_{\text{in}}}}\label{eq_optimization_P3_3} \addConstraint{\frac{1}{2}\sum_m^M \sum_n^N (\overline{w}^{\text{tr}^2}_{n,m}+\widehat{w}^{\text{tr}^2}_{n,m}) \leq P^{\max}_{\text{tr}},}\label{eq_optimization_P3_2} \end{maxi!} where $\{\overline{w}^{\text{tr}}_{n,m}\}$ and $\{\widehat{w}^{\text{tr}}_{n,m}\}$ are the real and imaginary parts of $\{\widetilde{w}^{\text{tr}}_{n,m}\}$ respectively, and $x^{\text{tr}}_m(t)$ in Eq. \eqref{eq_optimization_P3_3} is the amplitude of $\widetilde{x}^{\text{tr}}_m(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\},t)$. The objective function and the constraints in problem \eqref{eq_optimization_P3} can be proved to be convex. Problem \eqref{eq_optimization_P3} thus maximizes a convex objective function, which can be solved by SCP. In SCP, the objective term is linearly approximated by its first-order Taylor expansion at a fixed operating point, forming a new tractable optimization problem whose optimal solution is used as the operating point of the next iteration. The procedure is repeated until two successive solutions are close enough, at which point the result can be viewed as the solution of problem \eqref{eq_optimization_P3}. Assume $(\{\overline{w}^{\text{tr},(l-1)}_{n,m}\},\{\widehat{w}^{\text{tr},(l-1)}_{n,m}\})$ are the values of the operating point at the beginning of the $l^{\text{th}}$ iteration. Then, $z_{DC}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})$ at the $l^{\text{th}}$ iteration is linearly approximated as: \begin{align} \label{eq_first_order_Taylor} z_{DC}^{(l)}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})=\sum_{m=1}^M\sum_{n=0}^{N-1} \bigl(\overline{\alpha}^{(l)}_{n,m}\overline{w}^{\text{tr}}_{n,m}+\widehat{\alpha}^{(l)}_{n,m}\widehat{w}^{\text{tr}}_{n,m}\bigr), \end{align} where $(\{\overline{\alpha}^{(l)}_{n,m}\},\{\widehat{\alpha}^{(l)}_{n,m}\})$ are the first-order Taylor coefficients of $(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})$ respectively at the $l^{\text{th}}$ iteration. Hence, at the $l^{\text{th}}$ iteration, problem \eqref{eq_optimization_P3} is approximated as: \begin{maxi!} {\substack{\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}}}{z_{DC}^{(l)}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}),}{\label{eq_optimization_P4}}{\label{eq_optimization_P4_1}} \addConstraint{\text{Eq}. \eqref{eq_optimization_P3_2},\quad \text{Eq}. \eqref{eq_optimization_P3_3}.}{\label{eq_optimization_P4_2}} \end{maxi!} Problem \eqref{eq_optimization_P4} is solved by using Barrier's method, where the non-linear constraints in Eq.
\eqref{eq_optimization_P4_2} are absorbed into the objective by reformulating problem \eqref{eq_optimization_P4} as: \begin{align} \label{eq_optimization_P4_l} \nonumber\min_{\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}} \quad &-z_{DC}^{(l)}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})\\ &\quad +\sum_{i=1}^{2}I_-(f_{c,i}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})), \end{align} where \begin{align} \label{eq_interpratation} I_-(x)=&\begin{cases} 0,&x\leq 0,\\ \infty,&x> 0, \end{cases}\\\label{eq_interpratation1} f_{c,1}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})=&\frac{1}{2}\sum_m^M \sum_n^N (\overline{w}^{\text{tr}^2}_{n,m}+\widehat{w}^{\text{tr}^2}_{n,m}) - P^{\max}_{\text{tr}},\\ \nonumber f_{c,2}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})=&\sum_{m=1}^M\frac{1}{2T}\int_{T}\{\frac{x^{\text{tr}}_m(t)}{G}[\frac{1}{1-(\frac{x^{\text{tr}}_m(t)}{A_s})^{2\beta}}]^{\frac{1}{2\beta}}\}^2 dt\\ & - P^{\max}_{\text{in}}. \end{align} Further, to make problem \eqref{eq_optimization_P4_l} differentiable, $I_-(x)$ is approximated as: \begin{equation} \label{eq_I_-} \widehat{I}_-(x)=-\frac{1}{t}\log(-x), \end{equation} where $t$ is a parameter that sets the accuracy of the approximation. The larger $t$ is, the closer $\widehat{I}_-(x)$ is to ${I}_-(x)$. Consequently, for a specific $t$, the optimization problem \eqref{eq_optimization_P4_l} becomes: \begin{align} \label{eq_optimization_barrier_approx} \nonumber \min_{\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\}} \quad &-z_{DC}^{(l)}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})-\\&\frac{1}{t}\sum_{i=1}^{2}\log(-f_{c,i}(\{\overline{w}^{\text{tr}}_{n,m}\},\{\widehat{w}^{\text{tr}}_{n,m}\})), \end{align} which can be solved by descent methods such as Newton's method. In summary, the optimization problem \eqref{eq_optimization_P3} is solved in an iterative manner by adopting SCP. In each SCP round, the corresponding optimization problem \eqref{eq_optimization_P4} is solved by Barrier's method iteratively, with an exit condition of a sufficiently large $t$ so that problem \eqref{eq_optimization_barrier_approx} approximates problem \eqref{eq_optimization_P4} satisfactorily. The whole optimization process is described in Algorithm \ref{SCP}.
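For concreteness, the following minimal Python sketch illustrates one inner barrier solve of the form \eqref{eq_optimization_barrier_approx}. It is an illustration under simplifying assumptions rather than the implementation used for the simulations: only the transmit power constraint is kept, the function names, problem dimension and numerical values are our own choices, and plain gradient descent with backtracking stands in for Newton's method.
\begin{verbatim}
import numpy as np

def rapp_sspa(x, G=1.0, A_s=1.0, beta=1.0):
    # Rapp SSPA model: soft saturation of the input amplitude envelope.
    return G * x / (1.0 + (G * np.abs(x) / A_s)**(2*beta))**(1.0/(2*beta))

def barrier_subproblem(alpha, w0, P_max, t=10.0, iters=300, lr=0.1):
    # Minimize -alpha^T w - (1/t)*log(P_max - 0.5*||w||^2), i.e. the
    # barrier objective with the linearized z_DC and only the transmit
    # power constraint kept (a toy version of the barrier subproblem).
    def f(v):
        slack = P_max - 0.5 * np.dot(v, v)
        return np.inf if slack <= 0 else -np.dot(alpha, v) - np.log(slack)/t
    w = w0.copy()
    for _ in range(iters):
        slack = P_max - 0.5 * np.dot(w, w)
        grad = -alpha + w / (t * slack)
        step = lr
        # Armijo backtracking keeps the iterate strictly feasible.
        while f(w - step*grad) > f(w) - 0.5*step*np.dot(grad, grad):
            step *= 0.5
            if step < 1e-12:
                return w
        w = w - step*grad
    return w

# Toy usage: w is pulled toward alpha until the power constraint binds.
rng = np.random.default_rng(0)
alpha = rng.normal(size=4)   # stand-in for the linearized Taylor coefficients
w = barrier_subproblem(alpha, np.zeros(4), P_max=1.0)
print(0.5 * np.dot(w, w))    # slightly below P_max
\end{verbatim}
Increasing $t$ and re-solving from the previous iterate, as in Algorithm \ref{algorithm_barrier}, then tightens the barrier approximation.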
\begin{algorithm}[h] \SetAlgoLined $\textbf{Input}$: $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(0)},\epsilon_0>0,l\leftarrow 1$\; $\textbf{Output}$: $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{\star}$\; $\textbf{Repeat}$: \\ $\:\:\:\:\:\:1: \:$Compute $(\{\overline{\alpha}\},\{\widehat{\alpha}\})^{(l)}$ at the operating point $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l-1)}$ using Taylor expansion\; $\:\:\:\:\:\:2: \text{Compute } (\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}$ using Algorithm \ref{algorithm_barrier}\; $\:\:\:\:\:\:3: \:$Update $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{\star}\leftarrow (\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}$\; $\:\:\:\:\:\:4: \:$Quit if \\ $\:\:\:\:\:\:\:\:\:\:\:\:\:|{(\{\mathbf{\overline{w}}^{\text{tr}}_{n}\},\{\mathbf{\widehat{w}}^{\text{tr}}_{n}\})^{(l)}}-{(\{\mathbf{\overline{w}}^{\text{tr}}_{n}\},\{\mathbf{\widehat{w}}^{\text{tr}}_{n}\})^{(l-1)}}|< \epsilon_0$\; $\:\:\:\:\:\:\:5: \:l\leftarrow l+1$\; \caption{Successive convex programming (SCP)} \label{SCP} \end{algorithm} \begin{algorithm}[h] \SetAlgoLined $\textbf{Input}$: $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(B_0)}\leftarrow(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l-1)},\:t>0,$\\ $\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\:\mu_B>0,\epsilon_B>0$\; $\textbf{Output}$: $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}$; \\ $\textbf{Repeat}$: \\ $\:\:\:\:\:\:\:1:\:$Compute $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})$ by minimizing problem \eqref{eq_optimization_barrier_approx} using Newton's Method with initialised point $(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(B_0)}$\; $\:\:\:\:\:\:\:2:\text{Update }(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}\leftarrow(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})$\; $\:\:\:\:\:\:\:3: \:\text{Quit if } 2/t < \epsilon_B$\; $\:\:\:\:\:\:\:4: \:t\leftarrow\mu_Bt,\:(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(B_0)}\leftarrow(\{\overline{w}^{\text{tr}}_{n}\},\{\widehat{w}^{\text{tr}}_{n}\})^{(l)}$\; \caption{Barrier's method} \label{algorithm_barrier} \end{algorithm} \textit{Remark 1:} Current literature optimizes the WPT transmit waveform based on different optimization variables, such as the amplitude and phase of the weights\cite{2017Communications,shen2020beamforming}, the real and imaginary part of the weights\cite{abeywickrama2021refined}, and the complex weight vector\cite{HuangLarge2017}. This letter solves problem \eqref{eq_optimization_P3} by optimizing the real and imaginary part of the weights, because the non-linear SSPA constraint in Eq. \eqref{eq_optimization_P3_3} is only proved convex relative to the real and imaginary parts of the weights of the sub-carriers. \section{Simulations} \label{section_simulations} The power efficiency of the proposed waveform is evaluated under a Wi-Fi-like scenario with $f_0=5.18$ GHz. For the SSPA, set the smoothing parameter to $\beta=1$ and the small-signal gain to $G=1$; For the rectenna, set $i_s=5\:\mu$A, $\eta_0=1.05$, $V_0=25.86$ mV, and $R_{\text{ant}}=50\:\Omega$. Fig. 
\ref{fig_diff_P_tr} compares the energy harvesting performance of the proposed input waveform with that of the waveform accounting only for the rectenna's non-linearity, obtained by feeding the optimal transmit waveform in \cite{Clerckx2016Waveform} directly into the SSPA. The energy harvesting performance assuming an ideal linear HPA is plotted as a benchmark (black); the gap between it and the other curves demonstrates the power loss caused by HPA's non-linearity. The comparison with an ideal HPA also reveals that, although a larger transmit power gives larger harvested energy in practical WPT systems, it also leads to more severe power loss caused by HPA's non-linearity. When the transmit power constraint grows sufficiently large, the harvested energy is limited by the saturation power of the SSPA. Fig. \ref{fig_diff_P_tr} also verifies that, until the transmit power constraint reaches the SSPA's saturation power ($-35\:$dBW), the proposed waveform always outperforms all the other solutions, which are only optimized for the rectenna's non-linearity. The result highlights the significance of considering HPA's non-linearity for waveform design. Interestingly, Fig. \ref{fig_diff_P_tr} also shows that, although the non-linear HPA prefers low-PAPR input signals, using the transmit waveform with PAPR constraints in \cite{Clerckx2016Waveform} as the input waveform (PAPR=$20$) does not necessarily outperform using the transmit waveform without PAPR constraints in \cite{Clerckx2016Waveform} as the input waveform. This might originate from a trade-off between the HPA non-linearity and the rectenna non-linearity, since high-PAPR signals are preferred by the rectenna's non-linearity, whereas the opposite holds for the SSPA \cite{Clerckx2016Waveform}. The phenomenon indicates that adding a PAPR constraint alone is not sufficient to capture the HPA's non-linearity for optimal input waveform design, and thus highlights the significance of designing waveforms adaptive to the SSPA's transfer characteristics. However, the fact that the PAPR$=12$ curve outperforms the PAPR$=20$ curve still illustrates the SSPA's preference for low-PAPR signals. \begin{figure}[t] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{diff_P_tr-eps-converted-to.pdf}} \end{minipage} \caption{{Energy harvesting performance with $G=1, A_s=-35 \:$dBV,$\: P^{\max}_{\text{in}}=-20\:$dBW,$\: N=8$. `Ideal HPA' stands for using the optimal transmit waveform in \cite{Clerckx2016Waveform} at the input of an ideal HPA; `OPT' stands for the proposed optimal solution accounting for the SSPA's and the rectenna's non-linearity; `Decoupling' stands for using the optimal transmit waveform in \cite{Clerckx2016Waveform} at the input of the SSPA; `PAPR=12' and `PAPR=20' stand for the optimal transmit waveform in \cite{Clerckx2016Waveform} with different PAPR constraints.}} \label{fig_diff_P_tr} \end{figure} The effect of HPA's non-linearity on the energy harvesting performance is further verified in Fig. \ref{fig_diff_N}, where $z_{DC}$ is plotted as a function of the number of sub-carriers for different saturation voltages. Fig. \ref{fig_diff_N} shows that the harvested energy increases linearly with the number of sub-carriers when feeding the optimal transmit waveform in \cite{Clerckx2016Waveform} into an ideal amplifier (black). However, when adopting the same waveform as the black curve but with a non-linear SSPA (blue), the harvested energy tends to saturate as the number of sub-carriers keeps increasing, especially for a low SSPA saturation voltage.
This is because the PAPR of the optimal waveform in \cite{Clerckx2016Waveform} increases with the number of sub-carriers, yielding larger maximal signal amplitudes and exposing the signal more severely to the SSPA's non-linear regime, which results in more power loss. In contrast, using the proposed input waveform (red) can compensate for the SSPA's non-linear effect and guarantee the same harvested energy as using an ideal amplifier, as long as the input signal does not make the SSPA operate in a highly non-linear regime (as happens for $A_s=-24\:$dBV, $N=16$). \begin{figure}[t] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{diff_N-eps-converted-to.pdf}} \end{minipage} \caption{{$z_{DC}$ as a function of $N$ with different $A_s$, $G=1, \: P^{\max}_{\text{in}}=-20\:$dBW, $P^{\max}_{\text{tr}}=-40\:$dBW.}} \label{fig_diff_N} \end{figure} \section{Conclusions} \label{section_conclusion} This paper proposes an input waveform design strategy which maximizes the harvested energy in WPT, considering both HPA and rectenna non-linearity. The power loss caused by HPA's non-linearity is evaluated through simulations. The simulations also verify that the proposed input waveform achieves better energy harvesting performance than the waveform that only accounts for the rectenna's non-linearity, emphasizing the significance of considering the transmitter's non-linearity in the design of efficient wireless powered networks. \bibliographystyle{IEEEtran}
\section{introduction}\label{sec: introduction} The prevailing feeling among low-dimensional topologists is that ``most'' links $\mathcal{L}$ in $S^3$ are hyperbolic. That means that the open manifold $S^3 \smallsetminus \mathcal{L}$ can be endowed with a complete hyperbolic metric of sectional curvature $-1$. Being hyperbolic is a property of the manifold with far-reaching consequences. However, proving that a specific link $\mathcal{L}$ is hyperbolic turns out to be non-trivial. This is especially true if the link $\mathcal{L}$ is ``heavy duty'', i.e., has a very large crossing number. See for example \cite{minsky-moriah:surplus}. The question of when one can decide from a projection diagram whether the complement of a link in $S^3$ is a hyperbolic manifold has been of interest for a long time. Just to give three examples: The first result in this direction is by Hatcher and Thurston, who proved that complements of $2$-bridge knots which have at least two twist regions (i.e., which are not torus knots or links) are hyperbolic, see \cite{Hatcher-Thurston}. The second is Menasco's result \cite{Menasco} that a non-split prime alternating link which is not a torus link is hyperbolic. Later, Futer and Purcell proved in \cite{FuterPurcell}, among other results, that every link with a $6$-highly twisted irreducible diagram and with at least two twist regions is hyperbolic. Their result is obtained by applying Marc Lackenby's $6$-surgery theorem, see \cite{lackenby2000word}, to the corresponding fully augmented links. Our main theorem is: \begin{theorem}\label{thm: highly twisted plats are hyperbolic} Let $L$ be a $3$-highly twisted $2m$-plat, $m \geq 2$, with at least three twist regions. Then $L$ is hyperbolic. \end{theorem} Every link $\mathcal{L}$ in $S^3$ has a plat projection \cite{BuZi}. Assume that $\mathcal{L}$ has a $2m$-plat projection $D(\mathcal{L})=D_P(\mathcal{L})$ in some plane $P$, for some $m \geq 2$. Note that every $2m$-plat projection defines a knot or link projection with $m\geq b(\mathcal{L})$ bridges, and every $m$-bridge knot or link has a $2m$-plat projection (see \cite[p.~24]{BuZi}). Here, $b(\mathcal{L})$ denotes the bridge number of $\mathcal{L}$, as defined in the next section. It follows from Theorem \ref{thm: highly twisted plats are hyperbolic} that not all links have a $3$-highly twisted plat diagram. The subset of links that do is a ``large'' subset in a sense that can be made precise; see the discussion in \cite{lustig2012large}. In Theorem \ref{thm: highly twisted plats are hyperbolic} we weaken the conditions imposed in \cite{FuterPurcell} on $\mathcal{L}$ from $6$-highly twisted to $3$-highly twisted, at the price of requiring that the diagram of $\mathcal{L}$ be a plat. It seems that the techniques developed here using the Euler characteristic might, with some additional work, be adequate for more general diagrams. Thus, we would like to make the following conjecture: \begin{conjecture}\label{con: just highlt twisted} Let $L$ be a link in $S^3$ with a link diagram which is prime, twist-reduced, $3$-highly twisted and has at least two twist regions. Then the link $L$ is hyperbolic. \end{conjecture} \vskip15pt \section{preliminaries}\label{sec: preliminaries} \subsection{Bubbles and twist regions.}\label{subsec: bubbles and twist regions} \vskip10pt Given a projection of a link $\mathcal{L}$ onto a plane $P$, surround each crossing in the projection diagram by a small $3$-ball $B$. Denote the collection of these $3$-balls by $\mathcal{B}$.
Then $\mathcal{L}$ is isotopic to a link $L$ that is embedded in $P \cup{\partial{\mathcal{B}}}$. For a single crossing we refer to $\partial B$ as a \emph{bubble}. Note that $P$ divides each bubble into two hemispheres, denoted by $\partial B^+$ and $\partial B^-$. Denote the union of all the $\partial B^\pm$ by $\mathcal{B}^\pm$ respectively. Denote the two disjoint $2$-spheres $(P \smallsetminus \mathcal{B}) \cup \mathcal{B}^\pm$ by $P^\pm$ respectively. Each of $P^\pm$ bounds a $3$-ball $H^\pm$ in $S^3\smallsetminus L$. A \emph{twist region} $T$ in $L$ is a ``cube'' $D\times [-\varepsilon,+\varepsilon]$, where $D$ is a maximal disk in $P$ so that $(T,T\cap L)$ is a trivial integer $2$-tangle. For example, in Figure~\ref{fig:plat}, a box labeled $a_{i,j}$ indicates a \emph{twist region} with $a_{i,j}$ crossings, where $a_{i,j}$ can be positive, negative, or zero. In the example in the figure, $a_{1,1}=a_{2,2}=-3$, and all other $a_{i,j}=-4$. \begin{figure} \import{figures/}{TwistRegions.pdf_tex} \caption{A 6-plat projection of a $3$-bridge knot. } \label{fig:plat} \end{figure} \subsection{Plats.} Let $b$ be an element in the braid group $\mathcal{B}_{2m}$ on $2m-1$ generators $\{\sigma_1,\dots,\sigma_{2m-1}\}$. We require that $b$ is written as a concatenation of sub-words as follows: $$b=b_1\cdot b_2 \cdot \dots \cdot b_{n},$$ where $n$ is odd, and where $b_i$ has the following properties: \begin{enumerate} \item When $i$ is odd, $b_i$ is a product of all $\sigma_j$ with $j$ even. Namely: $$b_i = \sigma_2^{a_{i, 1}} \cdot \sigma_4^{a_{i, 2}} \cdot \dots \cdot \sigma_{2m-2}^{a_{i, m-1}}$$ \medskip \item When $i$ is even, $b_i$ is a product of all $\sigma_j$ with $j$ odd. Namely: $$b_i = \sigma_1^{a_{i, 1}} \cdot \sigma_3^{a_{i, 2}} \cdot \dots \cdot \sigma_{2m - 1}^{a_{i, m}}$$ \end{enumerate} Consider the geometric braid on $2m$ strings corresponding to the element $b$. At the top of $b_1$, connect each pair of strands $\{1, 2\}, \{3, 4\}, \dots \{2m - 1, 2m\}$ (ordered from left to right) by a small unknotted arc. Similarly, connect the same pairs of strands at the bottom of $b_n$. The knot or link obtained is a {\it $2m$-plat}. The number $m$ is the \emph{width} of the plat and $n$ is the \emph{length} of the plat. The braid $b$ will be called the {\it underlying braid} of the plat. For any knot or link $\mathcal{L}\subset S^3$ there is an $m\in\mathbb{N}$ so that $\mathcal{L}$ has a $2m$-plat projection on some projection plane $P$, as indicated in Figure~\ref{fig:plat}. This follows from the fact that any knot or link can be presented as a closed braid (proved by Alexander in 1923; see \cite[p.~23]{BuZi}), and strands of the braid closure can then be pulled across the braid diagram; for example, see \cite[p.~24]{BuZi}. The number $m$ is by no means unique. If $m = b(\mathcal{L})$, where $b(\mathcal{L})$ is the bridge number of $\mathcal{L}$ (see Schubert \cite{Schubert}), then the $2m$-plat is a \emph{minimal plat} for $\mathcal{L}$. Given a knot or link in a $2m$-plat projection, the twist regions corresponding to the powers $\sigma_{j}^{a_{i, j}}$ are called \emph{twist boxes}. The twist boxes are arranged in a configuration which is ``almost'' a matrix: There are $n \in \mathbb{N}$ rows indexed by $i \in \{1, \dots, n\}$. Rows with odd $i$ have $m-1$ columns, and rows with even $i$ have $m$ columns. Denote the crossing number in each twist box by $t_{i,j} = a_{i,j}$.
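To illustrate the indexing with a small example (the exponents $a_{i,j}$ here are arbitrary integers), for $m=3$ and $n=3$ the underlying braid word reads $$b=b_1\cdot b_2\cdot b_3=\bigl(\sigma_2^{a_{1,1}}\sigma_4^{a_{1,2}}\bigr)\cdot\bigl(\sigma_1^{a_{2,1}}\sigma_3^{a_{2,2}}\sigma_5^{a_{2,3}}\bigr)\cdot\bigl(\sigma_2^{a_{3,1}}\sigma_4^{a_{3,2}}\bigr),$$ so the odd rows $i=1,3$ contain $m-1=2$ twist boxes, the even row $i=2$ contains $m=3$ twist boxes, and the corresponding $6$-plat has $t_{i,j}=a_{i,j}$ crossings in each of its seven twist boxes.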
\begin{definition} A $2m$-plat will be called \emph{$c$-highly twisted} if $\abs{t_{i,j}}\geq c$ for all $i,j$. Similarly, a knot or link that admits a $c$-highly twisted plat projection will be called a \emph{$c$-highly twisted knot or link}. \end{definition} \subsection{Diagram regions}\label{subsec: diagram regions} Every knot or link diagram $D(L)$ in a projection plane $P \subset S^3$ defines a planar $4$-regular graph. The complementary regions of $P \smallsetminus D(L)$ can be colored black and white so that two regions adjacent to an edge are colored in different colors. A $2m$-plat diagram $D(L)$ determines a template diagram $\mathcal T$ in $P$ where each twist region is replaced by a rectangle. Thinking of these rectangles as vertices of valency $4$ in the graph determined by $\mathcal{T}$, one can color the complementary regions of the graph in a black and white checkerboard manner. \begin{definition}[Lackenby \cite{lackenby2004volume}]\label{def: twist reduced} A link diagram $D(L)$ is \emph{prime} if any simple closed curve in $P$ intersecting $D(L)$ in two points bounds a subdiagram with no crossings. A link diagram $D(L)$ is \emph{twist-reduced} if any simple closed curve in $P$ which intersects the edges of $\mathcal T$ transversally in four points, composed of two pairs each of which is adjacent to a crossing of $D(L)$, bounds a subdiagram which is the diagram of an integer $2$-tangle. \end{definition} \begin{remark}\label{rem: Plats are twist reduced} Let $L$ be a $3$-highly twisted $2m$-plat with $m \geq 3$. Then one can observe directly that the corresponding diagram $D(L)$ is prime and twist-reduced. \end{remark} \begin{remark}\label{rem: Intersect or bubble} An arc in $P$ connecting two regions of different colors must cross an edge of the graph or go through a rectangle. Hence an arc on $P^\pm$ in the $2m$-plat diagram connecting two regions of different colors must intersect $L$ or meet a bubble (by which we mean that the arc intersects $\partial B$ in an arc for some bubble $B$). \end{remark} \begin{definition}\label{def: Distance} Given two regions $A, B$ in $P \smallsetminus \mathcal{T}$, define the distance $d(A, B)$ between $A$ and $B$ to be the minimal number of color changes over all arcs between $A$ and $B$. Similarly, if $a,b$ are points in regions $A,B$ respectively, then define the distance $d(a,b)$ to be the distance $d(A,B)$. \end{definition} \begin{definition}\label{def: regions} The regions of $P\smallsetminus \mathcal{T}$ are composed of quadrilaterals, triangles, four bigons and a single unbounded region. The bigon regions are called the \emph{corners} of the template $\mathcal T$. \end{definition} \begin{definition} We say that an arc of $L$ connects two regions if its endpoints are contained in the closures of the two regions respectively (note that the same arc can connect different pairs of regions). \end{definition} \begin{definition}\label{def:BridgeNumber} Given a knot or link projection with the over-crossings and under-crossings indicated, a \emph {bridge}\footnote{Note that the standard definition of a bridge requires the arc to be maximal and contain at least one over-crossing.} does not contain any under-crossings, i.e., it is a subarc of $L\cap P^+$. Note that in a plat, a bridge can pass over at most two crossings.
\end{definition} \begin{observation}\label{obs: single arc} If $L$ is $2$-highly twisted and $\alpha$ is a bridge which connects two regions of the diagram $D(L)$, then: \begin{enumerate} \item \label{obs: single arc. two bubbles} If $\alpha$ passes over two crossings, then $\alpha$ is the unique bridge connecting the same regions that passes over two crossings. \item \label{obs: single arc. two bubbles and distance 2} If $\alpha$ passes over two crossings and the two regions that are connected by $\alpha$ are at distance at least 2, then $\alpha$ is the unique bridge connecting them. \end{enumerate} Note that the only case of an arc $\alpha$ which passes over two crossings and connects regions of distance less than $2$ occurs on the boundaries of the corner bigon regions of the diagram. In this case, there is a second arc which does not pass over any crossings and connects the same regions. \end{observation} \vskip20pt \section{Surfaces in link complements.} \vskip10pt \subsection{Normal position.}\label{subsec: normal position} We are interested in studying compact surfaces $S$ properly embedded in $S^3 \smallsetminus \mathcal{N}(L)$. If $\partial S \neq \emptyset$ we extend $S$ by shrinking the neighborhood $\mathcal{N}(L)$ radially. This determines a map $i:S\to S^3$, whose image we denote by $S$ as well, which is an embedding on $S\smallsetminus\partial S$ and $i(\partial S)\subseteq L$. \begin{lemma}\label{lem: normal form} Let $S\subset S^3 \smallsetminus \mathcal{N}(L)$ be a proper surface with no meridional boundary components, and let $(T,t)$ be a twist region. Then, up to isotopy, $S \cap T$ is a disjoint union of disks $D \subset (T,t)$ of one of the following three types: \begin{enumerate} \item[ \underline{Type 0}:] $D$ separates the two strings of $t$. \vskip7pt \item[ \underline{Type 1}:] $\partial D$ decomposes as the union of two arcs $\alpha \cup \beta$ such that $\alpha\subset t$ and $\beta\subset \partial T$. \vskip7pt \item[ \underline{Type 2}:] $\partial D$ decomposes as the union of four arcs $\alpha_1 \cup \beta_1 \cup \alpha_2 \cup \beta_2$ where $\alpha_i \subset t_i$ and $\beta_i \subset \partial T$. \end{enumerate} Moreover, the isotopy decreases the number of bubbles that $S$ meets, and if no component of $\partial S$ is a meridian, then we may further assume that $i|_{\partial S}:\partial S \to L$ is a covering map. \end{lemma} \begin{proof} If no component of $\partial S$ is a meridian, we may assume that up to isotopy $i|_{\partial S}:\partial S \to L$ is a covering map. The twist region $(T,t)$ is a trivial 2-tangle. The complement $T\smallsetminus \mathcal{N}(t)$ can be identified with $P\times [0,1]$ where $P$ is a twice-holed disk. Let $E$ be the disk $\alpha\times [0,1]$ where $\alpha$ is the simple arc connecting the two holes of $P$. Up to a small isotopy, we may assume that $S$ intersects $E$ transversely. Since the bubbles in $T$ are in some neighborhood of $E$, we may assume that $S$ meets a bubble only if it does so in $E$. The intersection $S\cap E$ consists of simple closed curves and arcs. All curves and arcs except those connecting $\alpha\times \{0\}$ to $\alpha\times \{1\}$ can be eliminated by an isotopy pushing $S$ off $T$. This isotopy decreases the number of bubbles $S$ meets. The number of bubbles the resulting surface meets equals the number of such arcs times the number of twists in the twist box. Up to isotopy, we may also assume that $S$ intersects $P\times \{ \tfrac{1}{2}\}$ transversely.
Hence, $S\cap (P\times \{ \tfrac{1}{2}\})$ is a collection of simple closed curves and arcs. By pushing $S$ outwards towards the boundary of the disk $P$, one can assume that each component of $S\cap (P\times \{ \tfrac{1}{2}\})$ is of the following form: \begin{enumerate} \setcounter{enumi}{-1} \item An arc connecting the boundary of the disk $P$ to itself, separating the holes, and intersecting $\alpha$ once. \item An arc connecting a hole to the boundary of the disk and not intersecting $\alpha$. \item An arc connecting the two holes and not intersecting $\alpha$. \end{enumerate} Thus, $S\cap (P\times [\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon])$ is a collection of disks of types (0), (1) or (2) as stated. By an ambient isotopy, we can stretch the slab $P\times [\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]$ to $P\times [0,1]=T$. The number of bubbles the resulting surface meets equals the number of arcs of type (0) times the number of twists in the twist box. The arcs of type (0) are in one-to-one correspondence with the arcs of $S\cap E$. The fact that $i:\partial S \to L$ is a covering map was not affected by the isotopies above. \end{proof} \begin{definition}\label{def: normal position} A surface $S\subset S^3 \smallsetminus \mathcal{N}(L)$ is \emph{in normal position} if its extension intersects each twist region as specified in Lemma \ref{lem: normal form} and $i:\partial S \to L$ is a covering map. In particular, $S$ has no meridional boundary components. \end{definition} \begin{corollary} Let $S \subset S^3 \smallsetminus \mathcal{N} (L)$ be a proper incompressible surface, and let $(T,t)$ be a twist region. Then, up to isotopy, each component of the intersection $S \cap T \cap P^\pm$ is as shown in Figure \ref{fig: three types of intersection}. \end{corollary} \begin{figure} \centering \begin{overpic}[height=4cm]{figures/Type0twist.pdf} \put(5,-10){Type 0} \end{overpic} \hskip 1cm \begin{overpic}[height=4cm]{figures/Type1twist.pdf} \put(5,-10){Type 1} \end{overpic} \hskip 1cm \begin{overpic}[height=4cm]{figures/Type2twist.pdf} \put(5,-10){Type 2} \put(75,76){$S\cap P^-$} \put(75,63){$S\cap P^+$} \end{overpic} \medskip \caption{The possible three types of intersection of $S$ with a twist box.} \label{fig: three types of intersection} \end{figure} \subsection{Curves of intersection.}\label{subsec: Curves of intersection} Let $S \subseteq S^3 \smallsetminus \mathcal{N}(L)$ be a surface in normal position. We would like to study the surface $S$ through its curves of intersection with the planes $P^\pm$. However, disks of Type (2) cause some technical complications. In order to simplify the situation, we consider the surface $\widehat{S}$ obtained by removing those disks from $S$. Explicitly: Let $\mathcal{T}$ be the union of all twist boxes of the plat $L$. Consider the collection of disks $\mathcal{D}$ of Type (2) which occur as intersections $S\cap \mathcal{T}$. We may assume that $\partial \mathcal{D} \subset P \cup L$, and that the subsurface $\widehat{S}=S \smallsetminus \mathcal{D}$ is transversal to $P^\pm$. Define $\mathcal{C}^+ = \partial i^{-1}(\widehat{S} \cap H^+)$ and $\mathcal{C}^- = \partial i^{-1}(\widehat{S} \cap H^-)$. Now define $\mathcal{C} =\mathcal{C}^+ \cup \mathcal{C}^-$. As each of $P^\pm$ is a $2$-sphere, $\widehat{S}\cap H^\pm$ is a collection of subsurfaces of $\widehat{S}$, the boundaries of which are simple closed curves $c \subset S$.
For $c\in \mathcal{C}^+$, denote by $S_c$ the component of $\widehat{S} \cap H^+$ so that $c\subset \partial S_c$, and respectively for $c\in \mathcal{C}^-$. Although formally defined as preimages under $i$, we think of curves in $\mathcal{C}^\pm$ as curves on $P^\pm$; as such, they are disjoint outside $L$. \begin{definition}\label{def: c passes}\label{def: numbers} For $c\in \mathcal{C}$, \begin{enumerate} \item We say that a curve $c \in \mathcal{C}$ \emph{passes through a bubble} $B$ if $c\cap (\partial B \smallsetminus L) \neq \emptyset$. \item An \emph{intersection point} on $c\in \mathcal{C}$ is an endpoint of the intervals in $c\smallsetminus L$. \item Denote by $\bbl{c}$ the number of bubbles (with multiplicities) through which $c$ passes. \item Denote by $\bdr{c}$ the number of intersection points along $c$. \item Define $\bbb{c}=\bbl{c}+\bdr{c}$. \item Define $\mathcal{C}_{i,j} = \{ c\in\mathcal{C} \mid \bbl{c}=i,\bdr{c}=j\}$. \end{enumerate} \end{definition} \subsection{Taut surfaces}\label{subsec: taut surfaces} \begin{definition}\label{def: complexity} Given an incompressible surface $S \subset S^3 \smallsetminus \mathcal{N}(L)$, we define a {\it lexicographic complexity} of $S$ as follows: \begin{equation} \text{Com}(S) = (\abs{\mathcal{C}},\, \sum_{c\in\mathcal{C}} \bbl{c},\, \sum_{c\in\mathcal{C}} \bdr{c}) \end{equation} \end{definition} Recall that a properly embedded surface $S$ in a $3$-manifold $M$ is called \emph{essential} if it is either a 2-sphere which does not bound a 3-ball, or it is incompressible, boundary incompressible and not boundary parallel. \begin{lemma}\label{lem: properties of curves} Let $S\subset S^3 \smallsetminus \mathcal{N}(L)$ be an essential surface in normal position. Assume that either \begin{enumerate}[label=(\roman*)] \item $S$ is an essential 2-sphere, and minimizes complexity among all essential 2-spheres, or \item $S$ is not a 2-sphere, the link $L$ is not split (i.e., $S^3 \smallsetminus L$ is irreducible), and $S$ minimizes complexity in its isotopy class. \end{enumerate} Then, for all $c \in \mathcal{C}$ we have: \vskip5pt \begin{enumerate} \item $S_c \cong \mathbb{D}^2$. \vskip5pt \item $\bdr{c}$ is even. \vskip5pt \item If $\bdr{c}\le 2$ then $\bbl{c} >0$. \vskip5pt \item If $\bdr{c}=0$ then $\bbl{c}$ is even. \vskip5pt \item The curve $c$ does not pass twice through the same bubble on the same side of $L$. \vskip5pt \item The curve $c$ does not contain an arc, as depicted in Figure \ref{reducing bubbles a}, that passes through exactly one bubble and has two intersection points with an edge of $L$ that emanates from the bubble, on both sides of the edge. \vskip5pt \item The curve $c$ does not contain an arc, as depicted in Figure \ref{reducing bubbles b}, that is contained in a twist box and passes through exactly one bubble and one intersection point. \vskip5pt \item There is no arc of $c\cap L$ such that a small extension of the arc along $c$ has endpoints in the same region. \end{enumerate} \end{lemma} \begin{proof} Let $S\subset S^3 \smallsetminus \mathcal{N}(L)$ be an essential surface satisfying (i) or (ii). Note that in both cases, compressing along a disk $D\subset S^3 \smallsetminus \mathcal{N}(L)$ with $D\cap S = \partial D$ results either in two essential spheres or in a surface in the same isotopy class as $S$. Thus, by the assumption on $S$, surfaces obtained by such a compression cannot have lower complexity.
\vskip5pt \noindent (1) Since $S$ is essential, each subsurface $S_c$ must be planar, as otherwise it contains a non-trivial compression disk. If $S_c$ has more than one boundary component, then compressing along a disk in $H^+$ or $H^-$ whose boundary separates boundary components of $S_c$ will result in a surface with fewer intersections with $P$, in contradiction to the choice of $S$. \vskip5pt \noindent (2) By definition, $\bdr{c}$ is the number of endpoints of arcs in $c\smallsetminus L$. Since each arc has two endpoints, $\bdr{c}$ is even. \vskip5pt \noindent (3) By (2), $\bdr{c}$ is either two or zero. If $\bdr{c} = 0$ and $\bbl{c} = 0$, then $c$ bounds a disk on $P \smallsetminus L$. Compressing $S$ along this disk reduces the number of intersections with $P$. If $\bdr{c} = 2$ and $\bbl{c} = 0$, then $c$ bounds a disk $D$ in $P$ such that $\partial D = \alpha \cup \beta$, where $\alpha$ is an arc in $L$ and $\beta$ is an arc in $P$. Since both $\alpha$ and $\beta$ do not pass through bubbles, $\partial D$ bounds a disk in $P\smallsetminus L$. By choosing an innermost such $D$ we may assume that $D \subset P \smallsetminus (L\cup S)$ and thus $D$ is a boundary compression disk. Compressing $S$ along $D$ we get an isotopic surface with fewer intersections with $P$, which is a contradiction. \vskip5pt \noindent (4) Consider the colors of the complementary regions of $P \smallsetminus \mathcal{T}$ which the curve $c$ intersects. If $\bdr{c}=0$, every change of colors of these regions along $c$ accounts for one bubble that $c$ meets. Since $c$ is a closed curve, the total number of color changes is even, and correspondingly $\bbl{c}$ is even. \vskip5pt \noindent (5) This claim follows directly from Lemma 1(ii) of \cite{Menasco}. \vskip5pt \noindent (6) In this case there is an ambient isotopy of the surface $S$ which pushes the disk bounded by the curve $c$ through the bubble, thus reducing the number of bubbles met by $S$ by one. The isotopy is indicated by the arrow in Figure \ref{reducing bubbles a}. The surface $S$ is assumed to be in normal form, by Definition \ref{def: normal position}. This contradicts the choice of $S$ as minimizing the complexity, as in Definition \ref{def: complexity}. \vskip5pt \noindent (7) The proof in this case is the same as in case (6), using the isotopy described by Figure \ref{reducing bubbles b}. Note that as a result of the isotopy we see two more intersection points with $L$ but one less bubble. \vskip5pt \noindent (8) If there were such an arc $\alpha \subseteq c\cap L$, then by pushing $S$ through $P$ in a neighborhood of $\alpha$ we would reduce the number of intersection points by 2, in contradiction to the minimal complexity of $S$. \end{proof} \begin{figure}[ht] \centering \begin{overpic}[width=3.5cm]{figures/isotopy1.pdf} \put(5,55){$\alpha$} \end{overpic} \includegraphics[width=1.5cm]{figures/rightarrow.pdf} \includegraphics[width=3.5cm]{figures/isotopy2.pdf} \caption{The isotopy in Case (6) of Lemma \ref{lem: properties of curves}} \label{reducing bubbles a} \end{figure} \begin{figure}[ht] \centering \begin{overpic}[width=3.5cm]{figures/isotopy3.pdf} \put(60,90){$\alpha$} \end{overpic} \includegraphics[width=1.5cm]{figures/rightarrow.pdf} \includegraphics[width=3.5cm]{figures/isotopy4.pdf} \caption{The isotopy in Case (7) of Lemma \ref{lem: properties of curves}} \label{reducing bubbles b} \end{figure} \begin{definition} A surface $S$ is \emph{taut} if it satisfies the conditions specified in Lemma~\ref{lem: properties of curves}.
\end{definition} \begin{remark} Note that if $S$ is taut then $\mathcal{C}_{0,0}=\mathcal{C}_{0,2}=\mathcal{C}_{i,2k+1}=\mathcal{C}_{2k+1,0}=\emptyset$ for all $i,k\in\mathbb{N}\cup \{0\}$. \end{remark} \begin{remark}\label{rem: Assume taut} From now on we assume that the surface $S$ is taut. \end{remark} \medskip \section{Euler Characteristic and curves of intersection} \label{sec: Curves and the Euler Characteristic} \vskip10pt \subsection{Distributing Euler characteristic among curves} For each curve $c\in \mathcal{C}$ we will define the \emph{contribution} of $c$, and show that the Euler characteristic of $S$ can be computed by summing up the contributions of curves $c\in\mathcal{C}$. \begin{definition} The \emph{contribution} $\chi_+(c)$ of a curve $c\in\mathcal{C}$ is defined by $$\chi_+(c) = \frac{\chi(S_c)}{|\partial S_c|} - \frac{1}{4}\bbb{c}.$$ \end{definition} Note that if $S$ is taut, then $S_c\cong D^2$, and therefore $\chi_+(c)=1 - \frac{1}{4}\bbb{c}$. \begin{lemma}\label{lem: Euler characteristic from Euler contributions} If $S \subset S^3 \smallsetminus \mathcal{N}(L)$ is taut then $\chi(S)=\sum_{c\in\mathcal{C}} \chi_+(c)$. \end{lemma} \begin{proof} The union of the collection of all the curves $c\in\mathcal{C}$ on $S$ is an embedded graph $X$ of $S$. Let $X^0$ and $X^1$ denote the vertex and the edge sets of $X$ respectively. The vertices of the graph are the points in $S$ corresponding to the points on $c$ which are on $P\cap \mathcal{B}$ (i.e., where $c$ meets a bubble) or $P\cap L$ (i.e., an intersection point of $c$). The graph $X$ partitions $S$ into disk regions of three types: \begin{enumerate} \item the subsurfaces $S_c \subseteq \widehat{S} \cap H^\pm$ for $c\in \mathcal{C}^\pm$, \smallskip \item the regions $R \subseteq \widehat{S}\cap B$ where $B\in\mathcal{B}$ is a 3-ball bounded by a bubble, or \smallskip \item regions $D \subset S$ corresponding to Type (2) disks. \end{enumerate} In case (3), the regions $D$ are disks whose boundary consists of two arcs on $L$ and two edges of $X$. By collapsing each such disk $D$ to one of the edges in $X$ we get a homotopic surface. By abuse of notation, we call it $S$, and call the corresponding graph $X$. Note that in the new surface, $\partial S \subset X$. It follows that \begin{align}\label{eq: chi S using the graph} \begin{split} \chi(S) &= \chi(X) + \sum_{S' \subseteq S \cap H^\pm} \chi(S') + \sum_{R \subseteq S\cap B} \chi(R)\\ &= |X^0| - |X^1| + \sum_{S' \subseteq S \cap H^\pm} \chi(S') + \sum_{R \subseteq S\cap B} \chi(R). \end{split} \end{align} We compute how each $c\in\mathcal{C}$ contributes to each of the summands in \eqref{eq: chi S using the graph}: \underline{The vertices of $X$.} Every curve $c\in\mathcal{C}$ passes through $2\bbl{c}$ vertices of $X^0$ in the interior of $S$ (because it goes in and out of a bubble). Further, it goes through $\bdr{c}$ vertices of $X^0$ in $\partial S$. Moreover, each of these vertices belongs to two curves $c\in \mathcal{C}$. Hence, \begin{equation} |X^0| = \sum_{c\in\mathcal{C}} (\bbl{c} + \tfrac{1}{2} \bdr{c}) \end{equation} \underline{The edges of $X$.} Every curve $c\in \mathcal{C}$ passes through $2\bbl{c} + \bdr{c}$ edges in $X^1$. Note that every edge in $X^1\cap \interior{S}\cap \mathcal{B}$ or in $X^1\cap \partial S$ belongs to exactly one curve in $\mathcal{C}$, while each edge in $X^1 \cap \interior{S} \smallsetminus \mathcal{B}$ belongs to two curves in $\mathcal{C}$. 
Thus, edges in $X^1\cap \interior{S}\cap \mathcal{B}$ are in one-to-one correspondence with the bubbles they meet. Hence, $$|X^1 \cap \interior{S} \cap \mathcal{B} | = \sum_{c\in \mathcal{C}} \bbl{c}.$$ Similarly, each edge in $X^1\cap \partial S$ accounts for two of the intersection points counted by $\bdr{c}$. Hence, $$|X^1 \cap \partial S | = \sum_{c\in \mathcal{C}} \tfrac{1}{2} \bdr{c}.$$ Each edge in $X^1 \cap \interior{S} \smallsetminus \mathcal{B}$ accounts for two vertices in $X^0$. So the number of these edges is equal to $$|X^1 \cap \interior{S} \smallsetminus \mathcal{B}| = \tfrac{1}{2} |X^0| = \tfrac{1}{2}(\sum_{c\in\mathcal{C}} (\bbl{c} + \tfrac{1}{2} \bdr{c})).$$ Adding these contributions together gives \begin{equation} |X^1| = \sum_{c\in\mathcal{C}}( \tfrac{3}{2}\bbl{c} + \tfrac{3}{4} \bdr{c}). \end{equation} \underline{Regions $S' \subset S\cap H^\pm$.} To every curve $c\in\mathcal{C}$ there is a surface $S_c \subseteq S\cap H^\pm$, and each such surface is associated to $|\partial S_c|$ curves $c\in\mathcal{C}$. Thus, \begin{equation} \sum_{S' \subseteq S \cap H^\pm} \chi(S') = \sum_{c\in\mathcal{C}} \frac{\chi(S_c)}{|\partial S_c|}. \end{equation} \underline{Regions $R\subset S\cap B$.} Each curve $c\in \mathcal{C}$ passes through the boundary of $\bbl{c}$ such regions. As each such region has four curves passing through its boundary, we have \begin{equation} \sum_{R \subseteq S\cap B} \chi(R) = \sum_{c\in C} \tfrac{1}{4}\bbl{c}. \end{equation} \medskip Summing over all of the above we get, \begin{align*} \chi(S) &= |X^0| - |X^1| + \sum_{S' \subseteq S \cap H^\pm} \chi(S') + \sum_{R \subseteq S\cap B} \chi(R)\\ &= \sum_{c\in \mathcal{C}} (\tfrac{\chi(S_c)}{|\partial S_c|} - \tfrac{1}{4}(\bbl{c} + \bdr{c}))\\ &= \sum_{c\in \mathcal{C}} \chi_+(c). \end{align*} \end{proof} \subsection{Redistributing positive Euler characteristic among curves} In order to prove that $L$ is hyperbolic, we will eventually be interested in surfaces $S$ with non-negative Euler characteristic, namely spheres, annuli and tori. It would thus be useful to distribute the Euler characteristic of $S$ in such a way that each summand is non-positive. This would rule out 2-spheres, and show that each summand must be zero for $S$ to be a torus or an annulus. Lemma \ref{lem: Euler characteristic from Euler contributions} shows that the Euler characteristic of $S$ is the sum of the contributions of the curves in $\mathcal{C}$. However, some curves might have positive contributions. Our strategy will be to ``pass'' the contributions of curves with $\chi_+> 0$ to ``neighbouring'' curves with $\chi_+<0$. This will be done by defining $\chi'(c)$ in Definition \ref{def: distributing}, and proving in Lemmas \ref{lem: summing chi' gives Euler char} and \ref{lem: non-positive contribution of chi'} respectively that $\chi(S)= \sum\chi'(c)$ and $\chi'(c)\le 0$. Our search for ``neighbouring'' curves will use the following definition: \begin{definition}\label{def: arcs and opposite} \hfill \begin{enumerate} \item Let $c\in \mathcal{C}$. Two subarcs $\alpha_1,\alpha_2$ of $c$ whose endpoints are not in $\mathcal{B} \cup L$ are \emph{equivalent} if they are isotopic in $c$ so that the endpoints of each arc in the isotopy are not in $\mathcal{B} \cup L$. Note that for equivalent arcs $\alpha_1,\alpha_2$ we have $\bbl{\alpha_1}=\bbl{\alpha_2}$ and $\bdr{\alpha_1}=\bdr{\alpha_2}$. We will use the term \emph{subarc of $c$} (or simply \emph{arc}) to indicate its equivalence class.
Two arcs $\alpha,\alpha'$ are \emph{disjoint} if they are subarcs of different curves or if they have disjoint representatives. Equivalently, two arcs $\alpha,\alpha'$ are non-disjoint if they are subarcs of the same curve and they overlap in a bubble or an intersection point. \item Two curves (or subarcs of curves) $c,c'\in\mathcal{C}$ are said to be \emph{opposite along an arc $\alpha$}, or simply \emph{opposite}, if $c\ne c'$ and $\alpha \subset c\cap c' \smallsetminus L$. Note that if $c,c'$ are opposite then one of them is in $P^+$ and the other in $P^-$. \end{enumerate} \end{definition} \begin{definition}\label{def: sets} Denote by $\mathcal{C}_{>0}$ (resp. $\mathcal{C}_{=0},\mathcal{C}_{\le 0},\mathcal{C}_{< 0}$) the collection of all $c\in \mathcal{C}$ such that $\chi_+(c)>0$ (resp. $=0,\le 0, <0$). \end{definition} Note that if $S$ is taut, then $\mathcal{C}_{>0} = \mathcal{C}_{2,0} \cup \mathcal{C}_{1,2}$ and $\mathcal{C}_{=0} = \mathcal{C}_{4,0}\cup \mathcal{C}_{2,2} \cup \mathcal{C}_{0,4}$: indeed, in this case $\chi_+(c)=1-\tfrac{1}{4}\bbb{c}$, which is positive exactly when $\bbb{c}\le 3$ and zero exactly when $\bbb{c}=4$. We begin by studying the curves in $\mathcal{C}_{>0}$: \medskip \begin{figure} \centering \subfigure[]{ \begin{overpic}[height=3.5cm]{figures/C20b.pdf} \put(48,90){$\kappa'$} \put(50,78){$\kappa''$} \put(-7,40){$c$} \end{overpic}} \hskip 2cm \subfigure[]{ \begin{overpic}[height=3.cm]{figures/C20a.pdf} \put(90,80){$\kappa'$} \put(90,60){$\kappa''$} \put(30,55){$c$} \end{overpic} } \caption{$c\in \mathcal{C}_{2,0}$ and the opposite arcs $\kappa'$ and $\kappa''$.}\label{fig:c2} \end{figure} \underline{Curves in $\mathcal{C}_{2,0}$.} Let $c\in \mathcal{C}_{2,0}$. The two possible configurations for $c$ are shown in Figure~\ref{fig:c2}. \smallskip In case (a), opposite to $c$ there are two arcs $\kappa'$ and $\kappa''$, shown in Figure \ref{fig:c2}, passing through two and four bubbles respectively. Let $\mathcal{K}_{4,0}$ denote the collection of all such arcs $\kappa''$ opposite to some $c\in \mathcal{C}_{2,0}$. (In this and the following discussion, the subscript $4,0$ indicates that the arcs in $\mathcal{K}_{4,0}$ pass through four bubbles and zero intersection points.) In case (b), opposite to $c \in \mathcal{C}_{2,0}$ there are two arcs, $\kappa'$ and $\kappa''$, shown in Figure \ref{fig:c2}. Each of the arcs $\kappa'$ and $\kappa''$ passes through three bubbles. Let $\mathcal{K}_{3,0}$ denote the collection of all such $\kappa',\kappa''$ which are opposite to some $c\in\mathcal{C}_{2,0}$ as described above. Each arc $\kappa'\in \mathcal{K}_{3,0}$ is part of some closed curve $c'\in \mathcal{C}$. We would like to distinguish those $\kappa'$ for which $c'\in \mathcal{C}_{4,0}$, as those have $\chi_+(c')=0$. We denote the subcollection of all $\kappa'\in\mathcal{K}_{3,0}$ for which $c'\notin \mathcal{C}_{4,0}$ by $\widehat\mathcal{K}_{3,0}\subset \mathcal{K}_{3,0}$. Let $\kappa'\in\mathcal{K}_{3,0} \smallsetminus \widehat\mathcal{K}_{3,0}$. Then $\kappa'$ is a subarc of a curve $c'$ with $\bbl{c'}=4$, which is opposite to some $c\in\mathcal{C}_{2,0}$. Note that this case can only occur in the ``corners'' of the plat, as shown in Figure \ref{fig: cases of kappa tilde} for the top left ``corner''. Let $\tild{\kappa}$ be the arc shown in Figure \ref{fig: cases of kappa tilde}. Let $\tild{\mathcal{K}}$ denote the collection of all $\tild{\kappa}$ constructed in this way. \medskip \underline{Curves in $\mathcal{C}_{1,2}$.} Schematically, a curve $c\in\mathcal{C}_{1,2}$ must be as shown in Figure~\ref{fig:c12}.
\begin{remark}\label{rem: c12 passes through top or bottom} As $c$ emanates from $c\cap L$ into regions of different colors, $c\cap L$ cannot pass over a crossing which is not the top or bottom in its twist box. \end{remark} \smallskip \begin{figure}[ht!] \centering \begin{overpic}[width=3.5cm]{figures/C20c.pdf} \put(65,75){$c$} \put(95,23){$\kappa$} \end{overpic} \caption{$c\in \mathcal{C}_{1,2}$ and the opposite arc $\kappa$.}\label{fig:c12} \end{figure} Similarly, opposite to a curve $c\in\mathcal{C}_{1,2}$ there is an arc $\kappa$ passing through two bubbles and one intersection point, shown in Figure \ref{fig:c12}. Let $\mathcal{K}_{2,1}$ be the collection of all such $\kappa$ opposite to some $c\in\mathcal{C}_{1,2}$. Define $\mathcal{K} =\widehat\mathcal{K}_{3,0}\cup \mathcal{K}_{4,0} \cup \tild{\mathcal{K}}\cup \mathcal{K}_{2,1}$. \begin{lemma}\label{lem: Kappaare disjoint} Arcs in $\mathcal{K}$ are pairwise disjoint. \end{lemma} \begin{proof} By definition (see Definition \ref{def: arcs and opposite}), two arcs are non-disjoint if they overlap in a bubble or an intersection point. One can check that the intersection of an arc $\kappa$ with a bubble in a twist box determines its intersection with the entire twist box, and that $\kappa$ must pass through either the top or bottom bubble in this twist box. Because the plat is 3-highly twisted, an arc $\kappa$ cannot pass through both the top and bottom bubbles of the twist box. The part of the arc leaving the twist box determines the curve $c\in \mathcal{C}_{2,0}\cup \mathcal{C}_{1,2} \cup \mathcal{C}_{4,0}$ opposite to it. Thus, two arcs $\kappa_1,\kappa_2$ which overlap in a bubble determine the same opposite curve $c$, and hence they are identical. Similarly, if two arcs $\kappa_1,\kappa_2$ overlap in an intersection point, they are opposite to the same curve $c\in \mathcal{C}_{1,2}$, and they are identical. \end{proof} Recall that our goal is to redistribute the Euler characteristic among curves so that each will contribute non-positively. The quantity $\chi'$ defined below is the sought-for redistribution. \begin{definition}\label{def: distributing} Let $c'\in \mathcal{C}_{\le 0}$. Let $n_3$ (resp. $\widehat{n}_3$; $n_4$; $\tild{n}$; $n_{2,1}$) be the number of subarcs $\kappa \in \mathcal{K}_{3,0}$ (resp. $\widehat\mathcal{K}_{3,0}$; $\mathcal{K}_{4,0}$; $\tild{\mathcal{K}}$; $ \mathcal{K}_{2,1}$) in $c'$. We associate to $c'$ the following quantity \[ \chi'(c') = \chi_+(c')+ \tfrac{1}{4}\widehat{n}_3 + \tfrac{1}{2}n_4 + \tfrac{1}{4}\tild{n} + \tfrac{1}{4} n_{2,1}. \] \end{definition} The next lemma shows that $\chi'$ is a redistribution of the Euler characteristic of $S$ among curves in $\mathcal{C}_{\le 0}$.
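To illustrate Definition \ref{def: distributing}, suppose a curve $c'\in\mathcal{C}_{3,2}$ contains exactly one subarc of $\widehat\mathcal{K}_{3,0}$ and no other subarc of $\mathcal{K}$. Then $\widehat{n}_3=1$ and $n_4=\tild{n}=n_{2,1}=0$, so that $$\chi'(c') = \chi_+(c')+\tfrac{1}{4} = (1-\tfrac{5}{4})+\tfrac{1}{4} = 0,$$ the added $\tfrac{1}{4}$ accounting for part of the positive contribution of the curve in $\mathcal{C}_{2,0}$ opposite to that subarc.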
\begin{lemma}\label{lem: summing chi' gives Euler char} $\chi(S) = \sum_{c'\in\mathcal{C}_{\le 0}} \chi'(c').$ \end{lemma} \begin{proof} Since by Lemma \ref{lem: Euler characteristic from Euler contributions}, $\chi(S) = \sum_{c\in \mathcal{C}} \chi_+ (c)$, it remains to prove that $$\sum_{c\in \mathcal{C}} \chi_+ (c) = \sum_{c'\in\mathcal{C}_{\le 0}} \chi'(c').$$ Subtracting $\sum_{c'\in\mathcal{C}_{\le 0}}\chi_+(c')$ from both sides and recalling that $\mathcal{C}_{\le 0} = \mathcal{C} \smallsetminus \mathcal{C}_{>0}$, we have to show $$\sum_{c\in\mathcal{C}_{>0}} \chi_+(c) = \sum_{c'\in \mathcal{C}_{\le 0}} (\chi'(c')-\chi_+(c')).$$ The left-hand side is simply $\tfrac{1}{2}|\mathcal{C}_{2,0}| + \tfrac{1}{4}|\mathcal{C}_{1,2}|$ since $\mathcal{C}_{>0} = \mathcal{C}_{2,0} \cup \mathcal{C}_{1,2}$ and $$\chi_+(c)=\begin{cases} \tfrac{1}{2} & \mbox{if }c\in\mathcal{C}_{2,0} \\ \tfrac{1}{4} & \mbox{if }c\in\mathcal{C}_{1,2} \end{cases}.$$ By the definition of $\chi'$, the right-hand side gives $\tfrac{1}{4}|\widehat\mathcal{K}_{3,0}| + \tfrac{1}{2}|\mathcal{K}_{4,0}| + \tfrac{1}{4}|\tild{\mathcal{K}}| + \tfrac{1}{4}|\mathcal{K}_{2,1}|$. Note that for every arc $\kappa\in\mathcal{K}_{3,0} \smallsetminus \widehat\mathcal{K}_{3,0}$ there exists a \emph{unique} arc $\tild{\kappa}\in\tild{\mathcal{K}}$. Hence we have $|\mathcal{K}_{3,0} \smallsetminus \widehat\mathcal{K}_{3,0}| =|\tild{\mathcal{K}}|$. Therefore, the sum becomes $\frac{1}{4}|\mathcal{K}_{3,0}| + \frac{1}{2}|\mathcal{K}_{4,0}|+\tfrac{1}{4}|\mathcal{K}_{2,1}|$. Since every curve in $\mathcal{C}_{2,0}$ is opposite to either two arcs in $\mathcal{K}_{3,0}$, or one arc in $\mathcal{K}_{4,0}$, we get $|\mathcal{C}_{2,0}| = \frac{1}{2}|\mathcal{K}_{3,0}| + |\mathcal{K}_{4,0}|$ which, after dividing by 2, gives \begin{equation}\label{eq: C2} \tfrac{1}{2}|\mathcal{C}_{2,0}| = \tfrac{1}{4}|\mathcal{K}_{3,0}| + \tfrac{1}{2}|\mathcal{K}_{4,0}|. \end{equation} Similarly, every curve in $\mathcal{C}_{1,2}$ is opposite to one arc in $\mathcal{K}_{2,1}$ and so $|\mathcal{C}_{1,2}|=|\mathcal{K}_{2,1}|$ which, after dividing by 4, gives \begin{equation}\label{eq: C12} \tfrac{1}{4}|\mathcal{C}_{1,2}|=\tfrac{1}{4}|\mathcal{K}_{2,1}|. \end{equation} Adding together \eqref{eq: C2} and \eqref{eq: C12} completes the proof. \end{proof} The next lemma shows that indeed $\chi'$ is non-positive. \begin{lemma}\label{lem: non-positive contribution of chi'} $\chi'(c')\le 0$ for all $c'\in\mathcal{C}_{\le 0}$. \end{lemma} \begin{proof} Let $c'\in\mathcal{C}_{\le 0}$ and let $\widehat{n}_3,n_4,\tild{n},n_{2,1}$ be as in Definition~\ref{def: distributing} of $\chi'(c')$. Then we have \begin{align}\label{eq: chi' inequality} \begin{split} \chi'(c')&=\chi_+(c')+\tfrac{1}{4}\widehat{n}_3 + \tfrac{1}{2}n_4 + \tfrac{1}{4}\tild{n}+\tfrac{1}{4}n_{2,1} \\ & \leq (1-\tfrac{1}{4}\bbb{c'}) +\tfrac{1}{4}\widehat{n}_3 + \tfrac{1}{2}n_4 + \tfrac{1}{4}\tild{n}+\tfrac{1}{4}n_{2,1} \\ & \le 1 + \sum_{c'\supset\kappa\in\widehat\mathcal{K}_{3,0}} (\tfrac{1}{4} - \tfrac{1}{4}\bbb{\kappa}) +\sum_{c'\supset\kappa\in\mathcal{K}_{4,0}} (\tfrac{1}{2} - \tfrac{1}{4}\bbb{\kappa})\\ &\quad\quad + \sum_{\substack{c'\supset\kappa\in\tild{\mathcal{K}}}} (\tfrac{1}{4} - \tfrac{1}{4}\bbb{\kappa}) + \sum_{c'\supset\kappa\in\mathcal{K}_{2,1}} (\tfrac{1}{4} - \tfrac{1}{4}\bbb{\kappa})\\ &= 1-\tfrac{1}{2}(\widehat n_3+n_4+\tild{n}+n_{2,1}), \end{split} \end{align} where the last equality follows since each one of the summands is $-\tfrac{1}{2}$.
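Explicitly, each arc $\kappa\in\widehat\mathcal{K}_{3,0}\cup\tild{\mathcal{K}}\cup\mathcal{K}_{2,1}$ satisfies $\bbb{\kappa}=3$ while each arc $\kappa\in\mathcal{K}_{4,0}$ satisfies $\bbb{\kappa}=4$, so the summands above evaluate to $$\tfrac{1}{4}-\tfrac{1}{4}\cdot 3 = -\tfrac{1}{2} \qquad \text{and} \qquad \tfrac{1}{2}-\tfrac{1}{4}\cdot 4 = -\tfrac{1}{2},$$ respectively.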
We divide into cases depending on the sum $n=\widehat n_3+n_4+\tild{n}+n_{2,1}$. \textbf{Case 0. $n=0$.} We have $\chi'(c')=\chi_+(c')$. But since $c'\notin \mathcal{C}_{>0}$ we have $\chi_+(c')\le 0$ and we are done. (Note that this case includes the case $\bbl{c'}=4$ and $n_3=1$; see Remark \ref{rem: special case of redistribution}.) \textbf{Case 1. $n=1$.} That is, $c'$ contains a single subarc $\kappa\in \widehat \mathcal{K}_{3,0} \cup \mathcal{K}_{4,0} \cup \tild\mathcal{K} \cup \mathcal{K}_{2,1}$. We further divide into sub-cases: \emph{Case 1.a.} $\kappa\in\widehat\mathcal{K}_{3,0}$. Clearly $\bbl{c'}\ge \bbl{\kappa}= 3$. By definition of $\widehat\mathcal{K}_{3,0}$, $\bbl{c'}\ne 4$. Then, either $\bbl{c'}\geq6$ or $c'$ has at least two intersection points with $L$. Therefore, $\bbb{c'}\ge 5$, which gives $$\chi'(c') = \chi_+(c')+\tfrac{1}{4}\le (1-\tfrac{5}{4}) + \tfrac{1}{4} = 0.$$ \emph{Case 1.b.} $\kappa\in\mathcal{K}_{4,0}$, so $\bbl{\kappa}= 4$. The distance between the endpoints of $\kappa$ is 2 in the dual graph, so the closed curve $c'$ that contains $\kappa$ must contain two additional bubbles or two additional intersection points. Thus, $\bbb{c'}\ge 6$, which gives $$\chi'(c') = \chi_+(c') + \tfrac{1}{2} \le (1-\tfrac{6}{4}) + \tfrac{1}{2} = 0 .$$ \emph{Case 1.c.} $\kappa\in\tild{\mathcal{K}}$. Here again $\bbl{\kappa}=3$. This happens in one of the cases described in Figure \ref{fig: cases of kappa tilde}. The endpoints of $\tild{\kappa}$ are at distance $\ge 2$. Hence, the closed curve $c'$ containing $\kappa$ must have two additional bubbles or two additional intersection points. It follows that $\bbb{c'}\ge 5$, which gives $$\chi'(c') = \chi_+(c') + \tfrac{1}{4} \le (1-\tfrac{5}{4}) + \tfrac{1}{4}= 0 .$$ \emph{Case 1.d.} $\kappa\in\mathcal{K}_{2,1}$, so $\bbl{\kappa}=2$ and $\bdr{\kappa}=1$. The curve $c'$ must contain an additional intersection point. Let $\alpha$ be the subarc of $c'$ between the two intersection points. Let $\kappa^*$ be a small continuation of $\kappa \cup \alpha$ along $c'$. \begin{claim*} The endpoints of $\kappa^*$ are at distance greater than or equal to 1. \end{claim*} \begin{proof} The arc $\alpha$ cannot contain more than one overpass: otherwise, $\alpha$ passes over two crossings which occur in different twist boxes. This implies that the curve $c\in\mathcal{C}_{1,2}$ opposite to $\kappa$ is such that $c\cap L$ is contained in one of the twist boxes, and connects the regions to its left and right, which must have the same color. This contradicts the fact that $c\smallsetminus (c\cap L)$ passes through exactly one bubble. If $\alpha$ does not contain an overpass, the regions containing the endpoints of $\kappa^*$ have different colors; thus the endpoints of $\kappa^*$ are at distance greater than or equal to 1. If $\alpha$ contains one overpass and the endpoints of $\kappa^*$ could be connected by an arc containing no intersection points or bubbles, then the union of $\kappa^*$ and this arc would bound a subdiagram of $D(L)$, which contradicts the assumption that $L$ is twist-reduced.
\end{proof} It follows from the claim that $\bbb{c'}\ge \bbb{\kappa^*}+1=4+1=5$, which gives $$\chi'(c') = \chi_+(c') + \tfrac{1}{4} \le (1-\tfrac{5}{4}) + \tfrac{1}{4}= 0.$$ \begin{figure} \centering \begin{overpic}[height=4cm]{figures/K21a.pdf} \put(61,72){$c$} \put(-9,55){$c'$} \end{overpic} \hskip 2cm \begin{overpic}[height=4cm]{figures/K21b.pdf} \put(-6,25){$c'$} \put(70,55){$c$} \end{overpic} \caption{Two possibilities of a curve containing an arc in $\mathcal{K}_{2,1}$.} \label{fig: K21} \end{figure} \vskip7pt \textbf{Case 2. $n\ge 2$.} In this case we are done by inequality \eqref{eq: chi' inequality}. \end{proof} \begin{remark}\label{rem: special case of redistribution} Note that, when $n_3=1 \mbox{ and } c'\in \mathcal{C}_{4,0}$, i.e., when $c'$ contains a subarc in $\mathcal{K}_{3,0} \smallsetminus \widehat\mathcal{K}_{3,0}$, then $\chi'(c')=\chi_+(c')=0$. In this case, the positive contribution of the unique $c\in\mathcal{C}_{2,0}$ opposite $c'$ is counted not by $\chi'(c')$ but by $\chi'(\tilde c)$, where $\tilde c$ is the curve opposite to $c'$ containing the corresponding arc $\tilde\kappa$. \end{remark} The proof above also gives the following lemma: \begin{lemma}\label{lem: classification of chi'=0} A curve $c'\in\mathcal{C}_{\le 0}$ satisfies $\chi'(c')=0$ exactly in the following cases: \vskip5pt \begin{enumerate}\setcounter{enumi}{-1} \item \label{lem: chi'=0 case no calK} $c'$ does not contain a subarc in $\mathcal{K}$ and $\bbb{c'}=4$. (It might have $n_3=1$.) \vskip5pt \item \label{lem: chi'=0 case one arc} $c'$ contains exactly one subarc $\kappa$ of $\mathcal{K}$ and satisfies either: \vskip5pt \begin{enumerate} \item \label{lem: chi'=0 case 3,0} $\bbb{c'}=5$ and $\kappa \in \widehat\mathcal{K}_{3,0}$. \vskip5pt \item \label{lem: chi'=0 case 4,0} $\bbb{c'}=6$ and $\kappa \in \mathcal{K}_{4,0}$. \vskip5pt \item \label{lem: chi'=0 case tilde} $\bbb{c'}=5$ and $\kappa \in \tild{\mathcal{K}}$. \vskip5pt \item \label{lem: chi'=0 case 2,1} $\bbb{c'}=5$ and $\kappa \in \mathcal{K}_{2,1}$. \vskip5pt \end{enumerate} In all cases, $\bbb{c'}=\bbb{\kappa}+2$. \vskip5pt \item \label{lem: chi'=0 case two arcs} $c'$ is the union of exactly two arcs of $\mathcal{K}$.\qed \end{enumerate} \end{lemma} As an immediate corollary to Lemma \ref{lem: non-positive contribution of chi'} we get the following: \begin{corollary}\label{cor: nonsplit} The link $L$ is non-split. \end{corollary} \begin{proof} Assume for contradiction that $S^3\smallsetminus L$ contains an essential sphere. Let $S$ be an essential sphere with least complexity among all essential spheres. By Lemma \ref{lem: properties of curves}, $S$ is taut. By Lemma~\ref{lem: summing chi' gives Euler char}, $\chi(S) = \sum_{c'\in\mathcal{C}_{\le 0}} \chi'(c')$. Thus it follows from Lemma~\ref{lem: non-positive contribution of chi'} that $\chi(S) \leq 0$, which is a contradiction. \end{proof} \vskip15pt \section{Analysing the curves}\label{sec: Analysing the curves} As stated in the introduction, our goal is to prove the following: \begin{theorem}\label{thm: boundary parallel} There are no essential tori or annuli in $ S^3 \smallsetminus \mathcal{N}(L)$. \end{theorem} In what follows, \textbf{assume that $S$ is a taut essential torus or annulus.} By Lemma~\ref{lem: summing chi' gives Euler char}, $\sum_{c'\in\mathcal{C}_{\le 0}} \chi'(c')=\chi(S) = 0$. By Lemma \ref{lem: non-positive contribution of chi'} each summand $\chi'(c')\le 0$, and thus each must be equal to $0$.
It follows that all the curves $c'\in\mathcal{C}_{\le 0}$ are as classified in Lemma \ref{lem: classification of chi'=0}. The proof of Theorem \ref{thm: boundary parallel} will proceed by analysing each case of Lemma \ref{lem: classification of chi'=0} separately, and showing that $S$ must be boundary parallel. Before we give the proof, we first need to define some notation which will be used below. \begin{definition} The projection of a twist region in the diagram is a rectangle called a twist box; we refer to its edges as top/bottom/left/right, as shown in Figure \ref{fig: turn and cross}. \end{definition} \begin{figure}[ht] \centering \begin{overpic}[height=5cm]{figures/box.pdf} \put(16,90){top} \put(12,-3){bottom} \put(-10,55){left} \put(44,55){right} \put(48,76){cross} \put(48,22){turn} \end{overpic} \caption{A crossing and a turning arc in a twist box.} \label{fig: turn and cross} \end{figure} \begin{lemma} \label{lem: no tilde} There are no arcs in $\mathcal{K}_{3,0} \smallsetminus \widehat\mathcal{K}_{3,0}$. In particular $\tild\mathcal{K}=\emptyset$. \end{lemma} \begin{proof} Assume for contradiction that there exists an arc $\kappa' \in \mathcal{K}_{3,0} \smallsetminus \widehat\mathcal{K}_{3,0}$. By assumption, the curve $c'$ containing $\kappa'$ is in $\mathcal{C}_{4,0}$. Let $c\in\mathcal{C}_{2,0}$ be the curve opposite to $\kappa'$, let $\tild{\kappa}$ be the corresponding arc in $\tild\mathcal{K}$ opposite to $c'$, and let $\tild c$ be the curve containing $\tild\kappa$. Without loss of generality, we may assume that $c\subset P^+$. This implies that $\kappa',c' \subset P^-$ and $\tild \kappa, \tild c\subset P^+$. The possible configurations of $c,c',\kappa',\tild{\kappa}$ are depicted in Figure \ref{fig: cases of kappa tilde}. Note that under the assumptions above, the sign of the twists in the figure must be as depicted. \begin{figure}[ht] \centering \subfigure[]{ \begin{overpic}[height=4cm]{figures/Ktilde1.pdf} \put(40,17){$c$} \put(102,70){$\tilde\kappa$} \put(43,88){$c'$} \end{overpic}} \hskip 1cm \subfigure[]{ \begin{overpic}[height=4cm]{figures/Ktilde2.pdf} \put(40,17){$c$} \put(102,58){$\tilde\kappa$} \put(43,88){$c'$} \end{overpic}}\\ \subfigure[]{ \begin{overpic}[height=4cm]{figures/Ktilde4.pdf} \put(23,55){$c$} \put(92,18){$\tilde\kappa$} \put(50,5){$c'$} \end{overpic}} \hskip 1cm \subfigure[]{ \begin{overpic}[height=4cm]{figures/Ktilde3.pdf} \put(3,80){$c$} \put(82,10){$\tilde\kappa$} \put(79,45){$c'$} \end{overpic}} \caption{The possible cases of $\tild{\kappa} \in \tild{\mathcal{K}}$.} \label{fig: cases of kappa tilde} \end{figure} The curve $\tild c$ must satisfy one of the cases of Lemma \ref{lem: classification of chi'=0}. Note that only Cases (\ref{lem: chi'=0 case tilde}) and (\ref{lem: chi'=0 case two arcs}) are applicable. \vskip7pt \emph{Case (\ref{lem: chi'=0 case tilde}).} In this case $\bbl{\tild c} = 3$ and $\bdr{\tild c}=2$. The complementary subarc $\beta=\tild c \smallsetminus \tild \kappa$ must have two intersection points and no bubbles. We rule out each of the cases. In Case (a) of Figure \ref{fig: cases of kappa tilde}, the arc $\beta$ meets $L$ and travels along it in a single arc in $P^+$, which cannot go through an underpass. As can be seen from Figure \ref{fig: cases of kappa tilde}, no such arc exists. In Case (b), the arc $\tild \kappa$ ends in a twist box, and so the complementary arc $\beta$ must pass through a bubble. Cases (c) and (d) are ruled out similarly to Cases (a) and (b) respectively.
\vskip7pt \emph{Case (\ref{lem: chi'=0 case two arcs}).} In this case the complementary arc $\beta$ is in $\mathcal{K}$. The arc $\beta$ cannot be in $\mathcal{K}_{4,0}$ or $\mathcal{K}_{2,1}$, as otherwise the curve $\tild c$ would be in $\mathcal{C}_{7,0}$ or $\mathcal{C}_{5,1}$. However, these sets are empty by Lemma \ref{lem: properties of curves}. \vskip7pt In Case (a), the endpoints of $\beta$ are the same as those of $\tild\kappa$. If $\beta\in\tild\mathcal{K}$ then it is one of the arcs depicted in Figure \ref{fig: cases of kappa tilde}. Hence, it must also be as in Case (a) and must run parallel to $\tild \kappa$. But this would imply that $\tild c$ passes through the same bubble twice on the same side of $L$, in contradiction to the tautness of $S$. If $\beta \in \mathcal{K}_{3,0}$, then there is a corresponding subarc $\alpha$ in $L\cap P^+$, depicted in Figure \ref{fig: subarc parallel to kappa'}, that connects the same regions as $\beta$. As Figure \ref{fig: cases of kappa tilde} is a precise depiction of the possibilities, one can readily check that there is no such arc. \begin{figure}[ht] \centering \begin{overpic}[height=3.5cm]{figures/alpha.pdf} \put(75,48){$\kappa$} \put(92,58){$b$} \put(5,15){$a$} \put(25,56){$\alpha$} \end{overpic} \caption{The arc $\alpha$ parallel to $\kappa\in\mathcal{K}_{3,0}$.} \label{fig: subarc parallel to kappa'} \end{figure} \vskip7pt In Case (b), $\tild\kappa$ ends in the ``middle'' of a twist box after passing through its top bubble. The arc $\beta = \tild c \smallsetminus \tild \kappa$ continuing $\tild{\kappa}$ must pass through the bubble immediately below. The part of $\beta$ intersecting this bubble cannot be contained in any arc in $\mathcal{K}$. Hence, this case is impossible. This finishes the proof. \end{proof} \begin{lemma}\label{lem: kappa of K_{4,0}} If $c'$ contains a subarc $\kappa$ of $\mathcal{K}_{4,0}$, then it is of the form shown in Figure~\ref{fig:C6a}. \end{lemma} \begin{figure}[ht!] \centering \begin{overpic}[height=3.5cm]{figures/C6a.pdf} \put(47,96){$a$} \put(46,2){$b$} \put(0,90){$\alpha$} \end{overpic} \caption{The unique form of a curve in $S\cap P^+$ containing an arc $\kappa$ in $\mathcal{K}_{4,0}$.}\label{fig:C6a} \label{fig: desired itersection 1} \end{figure} \begin{proof} Let $\kappa$ be the subarc of $c'$ in $\mathcal{K}_{4,0}$, with endpoints $a$ and $b$. As $c'$ must have $\chi'(c')=0$, it follows from Lemma~\ref{lem: classification of chi'=0} that the complementary arc of $\kappa$ in $c'$ is either in $\mathcal{K}_{4,0}$, or passes through exactly two bubbles, or through two intersection points with the link $L$. The distance between $a$ and $b$ is 2: it is clearly smaller than or equal to $2$; it must be even, as the regions have the same color; and if it were $0$, the diagram would not be twist-reduced. So, by Observation~\ref{obs: single arc}, the only subarc $\alpha\subset L\cap P^+$ connecting the region containing $a$ to the region containing $b$ is the bridge adjacent to $\kappa$. See Figure~\ref{fig:C6a}. The complement of $\kappa$ in $c'$ cannot be an arc of $\mathcal{K}_{4,0}$, since any arc of $\mathcal{K}_{4,0}$ connecting these regions must follow $\alpha$ from the same side, contradicting the assumption that $S$ is taut. The complement of $\kappa$ in $c'$ cannot contain exactly two intersection points, as the arc spanning them must be $\alpha$, which again results in a contradiction to the tautness of $S$. Thus, the complement of $\kappa$ in $c'$ must contain two bubbles.
The only possible such arc is an arc on the other side of $\alpha$, adjacent to $\kappa$, as in Figure~\ref{fig:C6a}. \end{proof} \begin{lemma}\label{lem: kappa in K_{3,0}} If $c'$ contains a subarc $\kappa\in\mathcal{K}_{3,0}$ then it is of the form shown in Figure~\ref{fig:C6}. \end{lemma} \begin{figure}[ht!] \centering \begin{overpic}[width=3.5cm]{figures/C6.pdf} \put(4,20){$a$} \put(17,63){$\alpha$} \put(5,35){$c'$} \put(78,67){$b$} \end{overpic} \caption{The unique form of a curve $c'$ containing an arc of $\mathcal{K}_{3,0}$.}\label{fig:C6} \label{fig: desired itersection 2} \end{figure} \begin{proof} Let $c'$ be a curve containing a subarc $\kappa\in \mathcal{K}_{3,0}$. Let $a,b$ be the endpoints of $\kappa$, as in Figure \ref{fig: subarc parallel to kappa'}. As $\chi'(c')=0$, it follows from Lemma~\ref{lem: non-positive contribution of chi'} that the complementary subarc $\beta = c'\smallsetminus \kappa$ is either in $\mathcal{K}_{3,0}$ (in which case $c'\in \mathcal{C}_{6,0}$), or contains exactly two intersections and no bubbles (in which case $c'\in\mathcal{C}_{3,2}$). If $\beta \in \mathcal{K}_{3,0}$, let $\alpha$ be the subarc of $L\cap P^+$ connecting the regions containing $a$ and $b$, adjacent to $\kappa$, as in Figure \ref{fig: subarc parallel to kappa'}. Similarly, let $\alpha'$ be the subarc of $L\cap P^+$ connecting the regions containing $a$ and $b$, adjacent to $\beta$. As each of $\alpha,\alpha'$ passes over two crossings, the uniqueness implied by Observation~\ref{obs: single arc} gives $\alpha=\alpha'$. Since $S$ is taut, $\kappa$ and $\beta$ must lie on two different sides of $\alpha$, resulting in the configuration depicted in Figure \ref{fig:C6}. Next suppose $\beta$ has two intersection points and no bubbles. Note that $\kappa$ must pass through two bubbles in one twist box and through a single bubble in another. Let $T$ be the second twist box. The arc $\beta$ starts from $b$ and first meets $L$ at some intersection point $p$. The two endpoints of $\beta$ must belong to regions of different colors, since its complement in $c'$ passes through three bubbles. In particular, $\beta$ cannot connect the two regions to the right and left of the twist box $T$. It follows that the point $p$ cannot be any of the points depicted by small empty squares in Figure \ref{fig:C32}(a), as otherwise the arc $c'\cap L\subseteq \beta$ would be one of the arcs in $P^+\cap L \cap T$ connecting its right and left sides. The point $p$ also cannot be the point depicted by a small empty circle, as otherwise $S$ would not be taut. \begin{figure}[ht!] \centering \subfigure[]{ \begin{overpic}[width=4cm]{figures/C32_a.pdf} \put(70,47){$b$} \put(32,80){$T$} \put(52,80){$\vdots$} \put(40,18){$c'$} \end{overpic} } \hskip1cm \subfigure[]{ \begin{overpic}[width=4cm]{figures/C32_b.pdf} \put(34,52){$q$} \put(52,80){$\vdots$} \put(70,47){$b$} \put(72,24){$p$} \put(40,57){$\tilde c$} \put(40,20){$c'$} \end{overpic} } \caption{A possible configuration of $c'\in \mathcal{C}_{3,2}$ on $P^+$ that has an arc of $\mathcal{K}_{3,0}$: (a) The side $P^+$, the twist box $T$ and the forbidden positions for the point $p$. (b) The side $P^-$, the opposite curve $\tilde{c}$, and one of the possible positions for the point $p$. }\label{fig:C32} \end{figure} Assume that $c'$ is a curve on $P^+$. Consider the curve $\tilde c$ opposite to $c'$ at $b$ on $P^-$. See Figure \ref{fig:C32}(b). The curve $\tilde c$ must pass through two bubbles in $T$ and through the intersection point $p$.
Denote by $q$ a point on $\tilde c$ just beyond the second of these bubbles (see Figure \ref{fig:C32}(b)). The points $b$ and $q$ are on opposite sides of the twist box $T$. The curve $\tilde c$ does not contain any arc $\eta$ of $\mathcal{K}$: if it did, then $\eta$ would not pass through the two bubbles or the intersection point specified above; thus $\bbb{\tilde c}\ge \bbb{\eta}+3$, in contradiction to Lemma \ref{lem: classification of chi'=0}. Therefore, as $\tilde c$ passes through at least two bubbles and two intersection points and contains no arc of $\mathcal{K}$, by Lemma \ref{lem: classification of chi'=0} it must be that $\tilde c\in \mathcal{C}_{2,2}$. Thus, $\tilde c\cap L$ is an arc of $L\cap P^-$ connecting the regions containing $b,q$ and ending at the point $p$. Since $\tilde c\cap L$ connects the two regions to the left and right of a twist box, it must be an arc that passes through the twist box. Since $\tilde{c}$ is opposite to $c'$, the point $p$ cannot be any of the points depicted by small empty squares and circles in Figure \ref{fig:C32}(a). This leaves only one possible such arc, depicted by a black dotted line in Figure \ref{fig:C32}(b). However, if $\tilde c$ contains such an arc, then $S$ is not taut, in contradiction to the assumption on $S$. \end{proof} \begin{lemma}\label{lem: no kappa in K_2,1} There is no curve $c'\in\mathcal{C}$ containing a subarc $\kappa$ in $\mathcal{K}_{2,1}$. \end{lemma} Note that, in general, if $c\in\mathcal{C}_{1,2}$ then the arc $\alpha=c\cap L$ is a bridge. If $\alpha$ contains no crossings then it corresponds to Figure \ref{reducing bubbles a}, which would imply that $S$ is not taut. Thus, $\alpha$ passes through either one or two crossings. \begin{proof} Assume, for contradiction, that $c'\in\mathcal{C}$ is a curve containing a subarc $\kappa\in\mathcal{K}_{2,1}$. By Lemma \ref{lem: classification of chi'=0} there are two cases: either $c'$ is a union of two arcs in $\mathcal{K}_{2,1}$, or $c'\in \mathcal{C}_{3,2}$ and it contains one arc in $\mathcal{K}_{2,1}$. Assume first that $c'$ is the union of exactly two arcs $\kappa_1,\kappa_2\in \mathcal{K}_{2,1}$. It follows that $c'\in \mathcal{C}_{4,2}$, and can be viewed as the union of three arcs $\alpha',\beta_1,\beta_2$, where $\alpha'=c'\cap L$ and $\beta_1,\beta_2$ are subarcs of $\kappa_1,\kappa_2$ respectively, each passing through two bubbles. Let $c_i$, $i=1,2$, be the curve in $\mathcal{C}_{1,2}$ opposite to $\kappa_i$, and let $\alpha_i=c_i \cap L$. Assume first that $\alpha'$ does not pass over any crossing. Thus, $\beta_1,\beta_2$ emanate from $\alpha'$ into adjacent regions with opposite colors. Each of them passes through two bubbles, and hence they cannot meet to form a closed curve. Next assume that $\alpha'$ passes over two crossings. Then $\alpha_1$ (and $\alpha_2$) passes over a crossing which is not the top or bottom in its twist box, contradicting Remark \ref{rem: c12 passes through top or bottom}. See Figure \ref{fig: two twist boxes and alpha}. \begin{figure}[ht!] \centering \begin{overpic}[width=3.5cm]{figures/3arcs.pdf} \put(50,45){$\alpha'$} \put(90,100){$\alpha_1$} \put(40,82){$c_1$} \put(-4,2){$\alpha_2$} \put(50,11){$c_2$} \end{overpic} \caption{The arc $\alpha_1\cup\alpha'\cup \alpha_2$.} \label{fig: two twist boxes and alpha} \end{figure} Finally, assume $\alpha'$ goes through a single crossing. By Remark \ref{rem: c12 passes through top or bottom}, $\alpha_1$ (and $\alpha_2$) must pass over the top or bottom crossing of a twist box.
The only possible configuration is that $\alpha'$ passes over the middle crossing of a twist box with three crossings. The fact that $c_1$ and $c_2$ pass through bubbles at the top or bottom of two other twist boxes determines the orientations of the two adjacent twist boxes, as depicted in Figure \ref{fig:C12a}. However, as can be readily checked, a plat diagram $L$ cannot contain a subdiagram as in the figure. \begin{figure}[ht!] \centering \begin{overpic}[width=7cm]{figures/C12a.pdf} \put(27,40){$\alpha'$} \put(53,56){$\alpha_1$} \put(39,27){$\alpha_2$} \put(46,72){$c_1$} \put(46,10){$c_2$} \end{overpic} \caption{The configuration forced when $\alpha'$ passes over a single crossing; no plat diagram contains such a subdiagram.}\label{fig:C12a} \end{figure} Now, assume that $c'\in \mathcal{C}_{3,2}$ and contains exactly one arc $\kappa\in\mathcal{K}_{2,1}$. Let $c\in \mathcal{C}_{1,2}$ be the curve opposite to $\kappa$. Assume first that $\alpha=c\cap L$ passes through two crossings. Since $c\smallsetminus\alpha$ passes through exactly one bubble, the regions containing the endpoints of $\alpha$ are at distance 1. Such a bridge $\alpha$ exists only in the corners of the diagram, in one of the configurations depicted in Figure \ref{fig:C12corners}. The corresponding curve $c'$ in each of the possible configurations contains an arc $\eta$, as in the figure, which prolongs $\kappa$. The endpoints of $\eta$ are at distance greater than one, and hence $c' \smallsetminus \eta$ cannot pass through only one bubble, in contradiction to the assumption. \begin{figure} \centering \begin{overpic}[height=4cm]{figures/c21corners1.pdf} \put(5,80){$c$} \put(102,46){$\eta$} \end{overpic} \hskip 1cm \begin{overpic}[height=4cm]{figures/c21corners2.pdf} \put(5,80){$c$} \put(96,14){$\eta$} \end{overpic} \caption{The two possibilities for a curve $c\in\mathcal{C}_{1,2}$ with a bridge passing through two crossings.}\label{fig:C12corners} \end{figure} Next assume that $\alpha$ passes through one crossing. Then let $\eta$ be a small prolongation of $\kappa \cup (c' \cap L)$ in $c'$. The arc $c' \cap L$ contains zero, one or two crossings. If it contains two crossings, then the arc $c\cap L$ passes over a crossing which is not the top or bottom of its twist box, in contradiction to Remark \ref{rem: c12 passes through top or bottom}, similarly to Figure \ref{fig: two twist boxes and alpha}. If $c'\cap L$ contains one crossing, then the curve $c$ bounds a disk $D$ in the plane, containing a segment of $L$ which connects two twist boxes. The five possibilities for such a curve $c$ are depicted in Figure \ref{fig: possibilities of K21 and one crossing}. The subarc $\eta$ of $c'$ is the arc depicted in blue in the figure. In cases (a) and (b), the endpoints of $\eta$ are in regions with the same color, contradicting the assumption that $c'\smallsetminus \eta$ passes through a single bubble. In cases (d) and (e), the arc $\kappa$ cannot be continued to an arc $\eta$ such that $\eta\cap L$ has one crossing, because of a conflict in orientation; hence (d) and (e) do not occur. We are left with case (c). In this case $\eta$ can be closed to form $c'$ as shown in the figure. However, the arc $\gamma$ opposite to $c'$, at the bottom of the figure, passes through two bubbles and two intersection points, none of which belong to an arc of $\mathcal{K}_{2,1}$. By Lemma \ref{lem: classification of chi'=0}, the curve containing $\gamma$ must close up with no further bubbles or intersection points, which is impossible.
\begin{figure}[ht] \subfigure[]{% \begin{overpic}[width=2.5cm]{figures/K21_1.pdf} \put(40,50){$c$} \put(45,3){$\eta$} \end{overpic} } \subfigure[]{% \begin{overpic}[width=2.5cm]{figures/K21_3.pdf} \put(70,52){$c$} \put(0,10){$\eta$} \end{overpic} } \subfigure[]{% \begin{overpic}[width=2.5cm]{figures/K21_4.pdf} \put(30,5){$\gamma$} \put(30,82){$c$} \put(8,40){$c'$} \end{overpic} } \subfigure[]{% \begin{overpic}[width=2.5cm]{figures/K21_5.pdf} \put(35,8){$c$} \put(73,60){$\kappa$} \end{overpic} } \subfigure[]{% \begin{overpic}[width=2.5cm]{figures/K21_6.pdf} \put(30,100){$c$} \put(80,64){$\kappa$} \end{overpic} } \caption{The five possibilities for a curve $c\in\mathcal{C}_{1,2}$ (in red) and the subarc $\eta$ of the curve $c'$ (in blue) opposite to $c$, such that $c\cap L$ passes over one crossing. Note that case (a) can occur also on the top boundary of the plat.} \label{fig: possibilities of K21 and one crossing} \end{figure} Thus, suppose $c' \cap L$ contains no crossings. In this case, the endpoints of $\eta$, shown in blue in Figure \ref{fig: C12 bridge}, are in the same two regions as the endpoints of the maximal bridge (dashed) passing above the curve $c$. As $c'$ contains one additional bubble, the endpoints of the bridge are at distance 1, and thus the bridge is at a corner of the plat. In each corner, there are two possible such bridges, and on each of these the curve $c$ can be oriented in two ways. Thus, altogether there are four possibilities, depicted in Figure \ref{fig:C12corners2}. \begin{figure}[ht] \centering \begin{overpic}[width=4cm]{figures/C12_bridge.pdf} \put(40,60){$c'$} \end{overpic} \caption{The bridge of a curve $c\in \mathcal{C}_{1,2}$ such that $c\cap L$ has one crossing.} \label{fig: C12 bridge} \end{figure} \begin{figure}[ht] \subfigure[]{% \begin{overpic}[width=3.3cm]{figures/C12corners1.pdf} \put(42,14){$c$} \put(-7,45){$c'$} \put(32,63){$\tau$} \end{overpic} } \subfigure[]{% \begin{overpic}[width=3.3cm]{figures/C12corners2.pdf} \put(4,27){$c$} \put(45,10){$c'$} \put(92,15){$\tau$} \end{overpic} } \subfigure[]{% \begin{overpic}[width=3.3cm]{figures/C12corners3.pdf} \put(30,46){$c$} \put(72,42){$c'$} \put(40,2){$\tau$} \end{overpic} } \subfigure[]{% \begin{overpic}[width=3.3cm]{figures/C12corners4.pdf} \put(30,47){$c$} \put(75,75){$c'$} \put(90,16){$\tau$} \end{overpic} } \caption{The four possibilities for a curve $c\in\mathcal{C}_{1,2}$ with a bridge passing through zero crossings.} \label{fig:C12corners2} \end{figure} In each of these possibilities, there is an arc $\tau$ opposite to $c'$, such that $\tau$ has three or four bubbles and its endpoints are at distance at least two. Thus, the closed curve containing $\tau$ cannot contain an arc of $\mathcal{K}$, and hence must have $\chi'<0$, a contradiction. \end{proof} \begin{definition}\label{def: cross ans turn} If a curve enters and exits a twist box through the left or right side edges then we say that it \emph{crosses} the twist box. Otherwise, if it enters through a side edge and exits through a top/bottom edge or vice versa, we say that it \emph{turns} at the twist box. \end{definition} \begin{lemma}\label{lem: No C22} There are no curves in $\mathcal{C}_{2,2}$. \end{lemma} \begin{proof} Let $c$ be any curve in $\mathcal{C}_{2,2}$, and consider the subarcs $\alpha=c\cap L$ and $\beta=c\smallsetminus \alpha$. The two endpoints of a small extension of $\alpha$ on $c$ must be contained in regions of the same color, as the rest of $c$ contains exactly two bubbles. As $S$ is taut, these points cannot be in the same region.
Thus the arc $\alpha$ passes over at least one crossing of the projection diagram. \begin{claim} If $\mathcal{C}_{2,2}$ is nonempty then there exists a curve $c\in\mathcal{C}_{2,2}$ such that $\alpha$ passes over one crossing and $\beta$ crosses one twist box. \end{claim} \begin{proof}[Proof of claim]\hfill \noindent \underline{Case 1.} Assume first that $\alpha$ passes over two crossings. Following $\beta$ from one of its endpoints, $p,q$, we meet a twist box either from its side or from its top/bottom. If, following $\beta$ from either endpoint, we meet a twist box from its top/bottom, then $\beta$ meets two different twist boxes. Consider the curve $c'$ opposite to $c$ sharing the arc of $\beta$ between the bubbles. The curve $c'$ crosses the same two twist boxes. The diagram is twist-reduced, and thus $c'\notin \mathcal{C}_{4,0}$, as otherwise it would bound a reducing subdiagram. Therefore, we must have $\chi'(c')<0$, a contradiction. Thus, following $\beta$ from one of its endpoints, say $p$, $\beta$ meets a twist box $T$ from the side. We claim that one of the two curves opposite to $c$ containing $p$ or $q$ must cross $T$: let $c'$ be a curve opposite to $c$ which contains $p$. If $c'$ crosses $T$, we are done. Thus assume that $c'$ turns at $T$ and exits through, say, its \emph{bottom}. This implies that $\beta$ must cross $T$ and pass through its two bottom bubbles. Therefore, the curve $c''$ opposite to $c$ containing $q$ crosses $T$ and passes through its second and third bubbles (counted from the bottom). Let $c'$ be a curve opposite to $c$ containing an endpoint of $\beta$ and crossing $T$, as in the previous paragraph. As $c'$ passes through at least two bubbles and has at least two intersection points, it must be in $\mathcal{C}_{2,2}$ by Lemma \ref{lem: classification of chi'=0} and Lemma \ref{lem: no kappa in K_2,1}. Since $c\cap L$ passes over two crossings, the arcs $c\cap L$ and $c'\cap L$ are positioned as the arcs $\alpha'$ and $\alpha_1$ in Figure \ref{fig: two twist boxes and alpha}. In particular, $c'\cap L$ passes over one crossing, and therefore $c'$ is the required curve. \medskip \noindent \underline{Case 2.} Now assume that $\alpha$ passes over one crossing. Repeating the argument above, we conclude that there is a curve $c'\in\mathcal{C}_{2,2}$ opposite to $c$ that crosses a twist box. If $\alpha'=c'\cap L$ passes over two crossings, we are back to Case 1. Otherwise, it passes over one crossing and we are done. \end{proof} Let $c$ be a curve as in the claim. A small pushout of $c$ bounds a subdiagram of $L$ as in the definition of a twist-reduced diagram (Definition \ref{def: twist reduced}). Since $L$ is twist-reduced, the subdiagram must be contained in a twist box $T$. Thus, both $\alpha$ and $\beta$ are contained in $T$. Since $S$ passes through all bubbles in $T$ and every other bridge of $T$, there exists an innermost such curve for which both $\alpha$ and $\beta$ pass through the same bubble. However, this implies that $S$ is not taut, a contradiction. \end{proof} \begin{lemma}\label{lem: c in C4} Every curve $c'\in\mathcal{C}_{4,0}$ is of one of the forms depicted in Figure \ref{fig: C4}.
\end{lemma} \begin{figure}[ht] \centering \begin{overpic}[height=3cm]{figures/C40a.pdf} \put(65,40){$c$} \end{overpic} \hskip 1cm \begin{overpic}[height=4cm]{figures/C40b.pdf} \put(40,30){$c$} \end{overpic} \hskip 1cm \begin{overpic}[height=4cm]{figures/C40c.pdf} \put(40,35){$c$} \end{overpic} \caption{The possible cases of a curve in $\mathcal{C}_{4,0}$.} \label{fig: C4} \end{figure} \begin{proof} Assume for contradiction that $c'\in \mathcal{C}_{4,0}$ is not of one of the forms in Figure \ref{fig: C4}. The curve $c'$ passes through four bubbles, which are divided between at most four twist boxes. Assume that $c'$ only turns at twist boxes. Consider one of the turns of $c'$ at a twist box $T$. Such a curve has a curve $c''$ opposite to it which crosses the twist box. The curve $c''$ must also be in $\mathcal{C}_{4,0}$, as it cannot be of any of the types formerly investigated: $\mathcal{C}_{2,2}$ is empty, and furthermore $c''$ cannot have an arc in $\mathcal{K}$, since $c''$ is opposite to a curve in $\mathcal{C}_{4,0}$ and not to a curve in $\mathcal{C}_{>0}$. In addition, $c''$ cannot be of one of the forms in Figure \ref{fig: C4}: otherwise one of the arcs of $c'\cap c''$ continues in $c'$ to cross a twist box, in contradiction to the assumption that $c'$ only turns. By replacing $c'$ by $c''$, if need be, we assume from now on that \begin{enumerate} \item[(i)] the curve $c'$ is a curve in $\mathcal{C}_{4,0}$ which crosses a twist box, and \item[(ii)] $c'$ is not of one of the forms in Figure \ref{fig: C4}. \end{enumerate} Let $c'$ be an innermost such curve, and let $D$ be the innermost disk bounded by $c'$. Consider the intersection of $D$ with the twist boxes meeting $c'$. Since the diagram is twist-reduced, the curve $c'$ cannot cross two different twist boxes. This leaves us with three cases: \vskip7pt \noindent \underline{Case 0.} The curve $c'$ passes twice through the same bubble. Then it is of the desired form, in contradiction to the assumption. \vskip7pt \noindent \underline{Case 1.} The intersection of one of these twist boxes with $D$ contains a (projection of a) bubble that is not met by $c'$. Let $B,B'$ be adjacent bubbles of a twist box so that $B$ is contained in $D$ and $c'$ passes through $B'$; see Figure \ref{fig: C4 case 1}. There must be a curve $c''$, on the same side of $P$ as $c'$, passing through the bubbles $B$ and $B'$, as follows from Figure \ref{fig: three types of intersection}. Since $c'$ is innermost, the curve $c''$ crosses the twist box. By the previous lemmas, namely Lemmas \ref{lem: classification of chi'=0}, \ref{lem: no tilde}, \ref{lem: kappa in K_{3,0}}, \ref{lem: kappa of K_{4,0}}, and \ref{lem: no kappa in K_2,1}, it is in one of the forms shown in Figures \ref{fig:C6a}, \ref{fig:C6} or \ref{fig: C4}. In all of the cases, $c''$ contains an arc outside the twist box, connecting $B$ and $B'$. There are two cases to consider: either $c'$ crosses the twist box, or it turns at the twist box; see Figure \ref{fig: C4 case 1}. If $c'$ crosses, the curve $c^*$ opposite to $c'$ (and $c''$), shown in Figure \ref{fig: C4 case 1}, passes through four bubbles; it must therefore close up without passing through any additional bubbles, which would imply that $c'$ passes through the same bubble $B'$ twice, in which case we are done by Case 0. If $c'$ turns, then the curve $c^*$ passes three times through two bubbles in the twist box and, in order to close up, must turn at another twist box at a bubble $B^*$; it follows that $c'$ passes through the bubble $B^*$ twice.
In both cases, we are done by Case 0. \begin{figure}[ht] \centering \begin{overpic}[height=4cm]{figures/C40proof1.pdf} \put(69,7){$c^*$} \put(10,7){$c'$} \put(63,84){$c''$} \put(5,36){$D$} \end{overpic} \hskip 1cm \begin{overpic}[height=4cm]{figures/C40proof2.pdf} \put(69,12){$c^*$} \put(69,3){$c'$} \put(63,87){$c''$} \put(5,75){$D$} \end{overpic} \caption{The curve $c'$ crosses the twist box on the left, and turns on the right. The minimal disk $D$ whose boundary is $c'$ is shown in gray.} \label{fig: C4 case 1} \end{figure} \vskip7pt We are left with the following case: \vskip7pt \noindent \underline{Case 2.} The disk $D$ does not contain a bubble in a twist box that $c'$ meets, and $c'$ does not pass twice through the same bubble. Therefore, $c'$ must cross some twist box once, which we denote by $U$, and turn at two other twist boxes, which we denote by $V$ and $W$ (see Figure~\ref{fig: special case of C4}). Let $B,B'$ be the two adjacent bubbles in $U$ through which $c'$ crosses. As we are not in Case 1, the innermost disk bounded by $c'$ contains no bubbles of $U$, $V$ or $W$. Thus, one of the bubbles $B,B'$, say $B$, is extremal in the twist box $U$, meaning it is the first bubble in $U$ counted from the top or bottom. Let $\tau$ be the subarc of $c'$ exiting $B'$ and connecting $U$ to one of $V,W$, say $V$. If $\tau$ enters $V$ from the side, then opposite to $\tau$ there is a curve $c^*$ crossing the twist boxes $U$ and $V$, which we have shown cannot exist because $L$ is twist-reduced. Thus, $\tau$ must enter $V$ from the bottom or the top. A similar argument shows that the subarc $\eta$ of $c'$ connecting $V$ to $W$ must enter $W$ from the top or the bottom. Thus, the subarc $\mu$ of $c'$ connecting $U$ and $W$ emerges from the side of $U$ and ends on the side of $W$, as depicted in Figure \ref{fig: special case of C4}. \begin{figure}[ht] \bigskip \centering \begin{overpic}[height=5cm]{figures/C40case2.pdf} \put(18,50){$W$} \put(50,50){$U$} \put(80,50){$V$} \put(38,42){$B'$} \put(58,11){$B$} \put(8,11){$B^*$} \put(70,7){$D$} \put(66,33){$\tau$} \put(94,11){$\eta$} \put(33,26){$\mu$} \put(58,22){$c^{**}$} \put(3,27){$c^{*}$} \put(37,11){$c^{***}$} \put(33,17.5){$\vdots$} \put(33,12){$\vdots$} \end{overpic} \caption{The only possible configuration in Case 2.} \label{fig: special case of C4} \end{figure} Let $c^*$ be the curve opposite to $c'$ along $\mu$. The curve $c^*$ passes through three bubbles, one in $U$ and two in $W$, and must close after an additional turn. By the checkerboard coloring, the additional turn must be at the bubble $B^*$ at the bottom of $W$, as shown in Figure \ref{fig: special case of C4}. Opposite to $c^*$ at $B^*$, there is another curve $c^{**}$ which passes through $B^*$ parallel to $c'$. By the previous lemmas, as in Case 1, $c^{**}$ is in $\mathcal{C}_{4,0}$. One can check that, in order to close up, $c^{**}$ must cross $U$ parallel to $c'$. Iterating this argument gives an infinite collection of nested curves intersecting $B$ and $B^*$, which is a contradiction. This finishes the proof of the lemma.
\end{proof} \vskip15pt \section{No tori and no annuli} \vskip8pt In this section we prove Theorem \ref{thm: boundary parallel} and Theorem \ref{thm: highly twisted plats are hyperbolic}. \vskip5pt \begin{lemma}\label{lem: desired implies prallel} If $S$ is a taut torus or annulus and all the curves in $\mathcal{C}_{\le 0}$ are of the forms depicted in Figures \ref{fig:C6a}, \ref{fig:C6} and \ref{fig: C4}, then $S$ is a boundary parallel torus. \end{lemma} \begin{proof} Let $S$ intersect $P^{\pm}$ only in curves as in Figures \ref{fig:C6a}, \ref{fig:C6} and \ref{fig: C4}. The surface $S$ is obtained by capping off each curve in $P^\pm$ by the disk it bounds in $H^\pm$ respectively. Thus, $S$ is a torus following one of the components of $L$. That is, $S$ is boundary parallel. \end{proof} \begin{corollary}\label{cor: atoroidal} The link complement $S^3 \smallsetminus \mathcal{N}(L)$ does not contain essential tori. In particular, the link $L$ is prime. \end{corollary} \begin{proof} Let $S$ be an essential torus in $S^3 \smallsetminus \mathcal{N}(L)$. By Lemma \ref{lem: properties of curves}, we may assume that $S$ is taut. By Lemmas \ref{lem: no tilde}, \ref{lem: kappa of K_{4,0}}, \ref{lem: kappa in K_{3,0}}, and \ref{lem: c in C4}, all the curves in $\mathcal{C}$ are of the forms depicted in Figures \ref{fig:C6a}, \ref{fig:C6} and \ref{fig: C4}. Hence, by Lemma \ref{lem: desired implies prallel}, the torus $S$ is boundary parallel, which contradicts the choice of $S$. \end{proof} \begin{proposition}\label{pro: unannular} The link complement $S^3 \smallsetminus \mathcal{N}(L)$ does not contain essential annuli. \end{proposition} \begin{proof} Let $S$ be an incompressible and boundary-incompressible annulus. Since $L$ is prime by Corollary \ref{cor: atoroidal}, we may assume that $\partial S$ has no meridional component. By Lemma \ref{lem: properties of curves}, we may assume that $S$ is taut. Assume first that $S$ passes through all twist boxes in disks of Type (0) or (2) only, as in Lemma \ref{lem: normal form}. Then, as $S$ is an annulus, there must be a twist box $T$ such that $S$ meets $T$ in a Type (2) disk. When $S$ emerges from $T$ it intersects $P$ in a curve $c\in\mathcal{C}$. By Lemmas \ref{lem: no kappa in K_2,1} and \ref{lem: No C22}, such a curve must be in $\mathcal{C}_{0,4}$. The curve $c$ cannot meet a twist box, as otherwise this twist box would meet $S$ in a disk of Type (1). Consider the disk $D$ in $P$ bounded by $c$. Either $\interior{D} \cap L = \emptyset$, in which case $S$ is not taut, or the diagram $D(L)$ is not prime, a contradiction. Therefore, there is a twist box $T$ which intersects $S$ in a disk of Type (1). Since $\partial S$ contains a string of $T$, there is a curve $c$ in $\mathcal{C}$ which passes through the ``middle'' of the twist box $T$ -- i.e., the intersection $c\cap L$ contains a bridge which does not meet the top and bottom of $T$. Let $\alpha$ be a small continuation of this bridge in $c$. Lemmas \ref{lem: no kappa in K_2,1} and \ref{lem: No C22} rule out curves containing arcs of $\mathcal{K}_{2,1}$ and curves in $\mathcal{C}_{2,2}$. Thus, $c$ must be in $\mathcal{C}_{0,4}$, and must contain another bridge. Let $\beta$ be a small continuation of that bridge in $c$. The arcs $\alpha$ and $\beta$ belong to different components of $\partial S$: otherwise, the arc on $c$ connecting $\alpha$ and $\beta$, together with $\partial S$, bounds a disk in $S$.
An innermost curve $c\in\mathcal{C}$ in this disk must have only two intersection points with $L$, which is a contradiction to Lemma \ref{lem: properties of curves}. The endpoints of $\alpha$ in $c$ belong to regions of the same color; hence the same holds for $\beta$. Note that a bridge in a plat passes over at most two crossings. We divide the proof into cases depending on the number of crossings $\beta$ passes over. \vskip7pt \noindent\underline{Case 0:} $\beta$ does not pass over any crossing. Then, as the endpoints of $\beta$ belong to regions of the same color, they must belong to the same region, contradicting the assumption that $S$ is taut. \vskip7pt \noindent\underline{Case 1:} $\beta$ passes over one crossing. A small pushout of the curve $c$ bounds a subdiagram of $L$ containing the two crossing points over which $\alpha$ and $\beta$ pass. Since we assumed that $L$ is twist-reduced, these two crossing points must belong to the same twist box $T$. Let $n$ be the number of bridges of $T$ in-between $\alpha$ and $\beta$. We further divide into sub-cases according to $n$. \vskip7pt \underline{Sub-case 1.0:} $n=0$. First, if $\alpha$ and $\beta$ meet the same bridge of $L$ then $c$ bounds a boundary compression disk for $S$, which is a contradiction. So, assume that $\alpha$ and $\beta$ meet adjacent bridges of $L$. The annulus $S$ must then spiral in-between the strands of $L\cap T$. Thus we obtain a disk of Type (2) (as in Lemma \ref{lem: normal form}), and hence, by the definition of $\mathcal{C}^+$ and $\mathcal{C}^-$ as in the beginning of \S\ref{subsec: Curves of intersection}, this curve does not appear in $\mathcal{C}$. \vskip7pt \underline{Sub-case 1.1:} $n=1$. The tangle $L\cap T$ has two components $\lambda_1,\lambda_2$. Let $l_1,l_2$ be the corresponding components of $L$ (possibly $l_1=l_2$). Because $n = 1$, the arcs $\alpha$ and $\beta$ meet the same string of $L\cap T$, say $\lambda_1$. Hence the two boundary components of $S$ are contained in the same component $l_1$ of $L$. If $l_1=l_2$, then there exists a curve $c'\in\mathcal{C}_{0,4}$ which meets the bridge in-between $\alpha$ and $\beta$, and this curve must be as in Sub-case 1.0. Thus, we may assume that $l_1\ne l_2$. Consider the disk $\Delta$ as depicted in Figure \ref{fig: C04 proof}(a). Its interior intersects $L$ in a single point in $l_2$, and its boundary is the union of an arc on $S$ and an arc on $l_1$. The manifold $\mathcal{N}(S)\cup \mathcal{N}(l_1)$ has two torus boundary components $U$ and $V$. See Figure \ref{fig: C04 proof}(b). Let $U$ be the torus that meets $\Delta$. Let $U_-$ be the component of $S^3\smallsetminus U$ containing $l_2$, and let $U_+$ be the other component. The torus $U$ is incompressible in $U_-$, as such a compression must be on $\Delta$ and $\Delta$ meets $l_2$ once. It is also incompressible in $U_+$: if such a compression disk existed then, since it cannot intersect $l_1$, it would give a compression of the annulus $S$, in contradiction to the incompressibility of $S$. \begin{figure}[ht] \centering \subfigure[]{ \begin{overpic}[height=4cm]{figures/C04proofDelta.pdf} \put(17,40){$\Delta$} \put(30,-3){$l_2$} \put(50,-3){$l_1$} \put(38,30){\Tiny$\beta$} \put(38,80){\Tiny$\alpha$} \end{overpic} } \hspace{2cm} \subfigure[]{ \begin{overpic}[height=5cm]{figures/C04proofDelta2.pdf} \put(60,88){$l_1$} \put(73,75){$V$} \put(85,65){$S$} \put(90,50){$U$} \put(42,30){$l_2$} \put(45,45){$U^-$} \end{overpic} } \caption{(a) The disk $\Delta$.
(b) A cross section of the twist box, the annulus $S$, and the tori $U,V$.} \label{fig: C04 proof} \end{figure} By Corollary \ref{cor: atoroidal}, $U$ must be parallel to either $\partial \mathcal{N}(l_2)$ or $\partial \mathcal{N}(l_1)$. If $U$ is parallel to $\partial \mathcal{N}(l_2)$, then, since $l_1$ is parallel to a curve in $U$ crossing $\Delta$ once, there exists an annulus $A\subset S^3 \smallsetminus L$ whose boundary is $l_1\cup l_2$. The annulus $A$ is incompressible, since otherwise $l_1 \cup l_2$ would be a 2-component unlink that is not linked with $L$, i.e., $L$ would be split, contradicting Corollary \ref{cor: nonsplit}. The annulus $A$ is trivially boundary-incompressible because the boundary components of $A$ are on two different components of $L$. If we run the argument for $A$ instead of $S$, Sub-case 1.1 cannot occur, because the boundary components of $A$ are on two different components of $L$. If $U$ is parallel to $\partial \mathcal{N}(l_1)$, the intersection $\Delta \cap U$ is a curve on $U$ which meets the meridian of $\mathcal{N}(l_1)$ exactly once: if it met it more than once, then the union $\mathcal{N}(\Delta) \cup \mathcal{N}(l_1)$ would determine a once-punctured non-trivial lens space contained in $S^3$, which is impossible. Thus, $\partial \Delta$, which is parallel to $\Delta\cap U$, is also parallel to $l_1$. Therefore, the arcs $\partial \Delta \smallsetminus l_1 \subset S$ and $l_1 \smallsetminus \partial \Delta \subset L$ bound a disk. Since the arc $\partial \Delta \smallsetminus l_1$ connects different components of $\partial S$, it is an essential arc, and the disk is a boundary compression for $S$, a contradiction. \vskip7pt \underline{Sub-case 1.2:} $n\ge 2$. As the boundary of the annulus $S$ must pass through every other bridge in $T$, there must be another curve of $\mathcal{C}_{0,4}$ in between $\alpha$ and $\beta$. By choosing an innermost such curve we are back in one of the previous cases. \vskip10pt \noindent\underline{Case 2:} If $\beta$ passes over two crossings then, since the endpoints of $\beta$ lie in regions of the same color, they are at distance 2. Therefore, we are in Case \eqref{obs: single arc. two bubbles and distance 2} of Observation \ref{obs: single arc}. By the uniqueness of such arcs, $\alpha = \beta$. However, since $\alpha$ is a bridge in the ``middle'' of a twist box, it passes over only one crossing, a contradiction. \end{proof} Theorem \ref{thm: boundary parallel} immediately follows from Corollary \ref{cor: atoroidal} and Proposition \ref{pro: unannular}. \begin{proof}[Proof of Theorem \ref{thm: highly twisted plats are hyperbolic}] If $m = 2$ then $L$ is a $2$-bridge knot/link which is not a torus knot/link, since there are at least two twist regions; hence it is atoroidal and unannular by \cite{Hatcher-Thurston}. If $m \geq 3$ then the theorem follows directly from Theorem \ref{thm: boundary parallel}. In both cases, apply Thurston's result (see \cite{thurston}) that a link complement which contains no essential annuli or tori is hyperbolic. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} The increasing sensitivity of gravitational-wave (GW) detectors~\cite{TheVirgo:2014hva,TheLIGOScientific:2014jea} and the associated compact binary detections~\cite{LIGOScientific:2020ibl} motivate work towards physically complete, precise and efficient gravitational-wave models. The effective-one-body (EOB) approach~\cite{Buonanno:1998gg,Buonanno:2000ef,Damour:2000we,Damour:2001tu,Damour:2015isa} is a way to deal with the general-relativistic two-body problem that, by construction, allows the inclusion of perturbative (post-Newtonian, black hole perturbations) and full numerical relativity (NR) results within a single theoretical framework. It currently represents a state-of-the-art approach for modeling waveforms from binary black holes, conceptually designed to describe the entire inspiral-merger-ringdown phenomenology of quasicircular binaries~\cite{Nagar:2018gnk,Nagar:2018zoe,Cotesta:2018fcv,Nagar:2019wds,Nagar:2020pcj,Ossokine:2020kjp,Schmidt:2020yuu} or even eccentric inspirals~\cite{Chiaramello:2020ehz,Nagar:2021gss,Nagar:2021xnh} and dynamical captures along hyperbolic orbits~\cite{Damour:2014afa,Nagar:2020xsk,Nagar:2021gss,Gamba:2021gap}. An alternative, though less flexible, approach to generate waveforms for detection and parameter estimation relies on phenomenological models, whose latest avatar is {\tt IMRPhenomX}~\cite{Pratten:2020fqn,Garcia-Quiros:2020qpx,Pratten:2020ceb}. Note however that this kind of waveform model {\it does rely} on the EOB approach to accurately describe the waveform during the long inspiral, until it is matched to (short) NR simulations. Currently, there are two families of NR-informed EOB waveform models: the {\tt SEOBNR} family~\cite{Cotesta:2018fcv,Ossokine:2020kjp} and the {\tt TEOBResumS}~\cite{Akcay:2020qrj, Gamba:2021ydi} family. Both models incorporate precession and tidal effects in some form, but \texttt{TEOBResumS}{} also has spin-aligned versions that can deal with eccentric inspirals and hyperbolic encounters~\cite{Nagar:2020xsk,Nagar:2021gss}. Although they are both EOB models, their building blocks are very different, starting from the choice of the underlying Hamiltonians and resummation strategies (see e.g.~\cite{Rettegno:2019tzh}). The quality of {\it any} waveform model (specifically, an EOB or a phenomenological one in the current context) is assessed by computing the unfaithfulness (or mismatch) between the waveforms generated by the model and the corresponding NR waveforms over the NR-covered portion of the binary parameter space. This is an obvious procedure since the waveform is the crucial observable that is needed for data analysis. While this is the {\it only} viable procedure for phenomenological models, for EOB models there are other quantities that might be worth considering. In particular, one has to remember that within the EOB one has access to the {\it full relative dynamics} of the binary and thus one can complement the waveform comparison with other, gauge-invariant, dynamical quantities. For example, one has access to the gauge-invariant relation between energy and angular momentum~\cite{Damour:2011fu,Nagar:2015xqa,Ossokine:2017dge}, to the periastron advance~\cite{LeTiec:2011bk, LeTiec:2013uey, Hinderer:2013uwa} or, for hyperbolic encounters, to the scattering angle~\cite{Damour:2014afa}. Together with the Hamiltonian and the waveform, the third building block of any EOB model is the radiation reaction, i.e. the flux of angular momentum and energy radiated via gravitational waves. 
Surprisingly, the only direct comparison between EOB and NR fluxes, namely Ref.~\cite{Boyle:2008ge}, dates back to more than a decade ago. The purpose of this paper is to update Ref.~\cite{Boyle:2008ge} focusing on spin-aligned BBHs. More specifically, it aims at presenting: (i) new calculations of the fluxes from (some of) the spin-aligned NR datasets of the Simulating eXtreme Spacetimes (SXS) catalog~\cite{Boyle:2019kee} and (ii) new EOB/NR comparisons between the fluxes that involve both the most recent version of \texttt{TEOBResumS}{}~\cite{Nagar:2019wds, Nagar:2020pcj} and \texttt{SEOBNRv4HM}{}~\cite{Cotesta:2018fcv,Ossokine:2020kjp}. From the EOB/NR flux comparisons with \texttt{TEOBResumS}{}, we learn the importance of including next-to-quasi-circular (NQC) corrections also in the flux modes beyond the dominant $\ell=m=2$ one in order to achieve a rather high level of consistency ($\lesssim 1\%$) between the EOB and NR fluxes up to merger. By contrast, the EOB/NR flux comparisons with \texttt{SEOBNRv4HM}{} reveal deficiencies of this model over the NR-covered portion of the parameter space. While including NQC factors in the radiation reaction in \texttt{TEOBResumS}{}, we eventually build an improved model, called \texttt{TEOBResumS\_NQC\_lm}{}, that aims at being more self-consistent and that differs from the standard \texttt{TEOBResumS}{} also in a more precise determination of the NR-informed spin-orbit dynamical parameter. By computing the unfaithfulness for the $\ell = m = 2$ mode over the sample of 534 nonprecessing, quasicircular simulations of the SXS catalog already considered in Ref.~\cite{Riemenschneider:2021ppj}, we find that both the standard model and the updated one are promising foundations in view of the requirements for third generation detectors~\cite{Reitze:2021gzo, Couvares:2021ajn, Punturo:2021ryo, Katsanevas:2021fzj, Kalogera:2021bya, McClelland:2021wqy}. The paper is organized as follows. In Sec.~\ref{sec:flux} we review the structure of the radiation reaction within the \texttt{TEOBResumS}{} model, provide a novel computation of the angular momentum flux from (a sample of) NR simulations and compare it with the \texttt{TEOBResumS}{} one. The outcome of these comparisons points to the fact that an improved EOB model would benefit from the inclusion in the flux of NQC corrections beyond the $\ell=m=2$ ones. This improved model is constructed in Sec.~\ref{sec:new}, notably by providing a new NR-informed fit of the next-to-next-to-next-to-leading-order (NNNLO) effective spin-orbit parameter $c_3$ previously introduced in~\cite{Damour:2014sva,Nagar:2015xqa}. In Sec.~\ref{sec:barF} we assess the accuracy of this NQC-improved model by computing the EOB/NR unfaithfulness using the PSD of Advanced LIGO~\cite{aLIGODesign_PSD}, of Einstein Telescope~\cite{Hild:2009ns, Hild:2010id} and of Cosmic Explorer~\cite{Evans:2021gyd}. Finally, Sec.~\ref{sec:seob} provides a comprehensive comparison between NR, \texttt{SEOBNRv4HM}{}~\cite{Bohe:2016gbl,Cotesta:2020qhw, Ossokine:2020kjp} and \texttt{TEOBResumS}{} in its native (i.e. non-NQC-improved) form. We gather our concluding remarks in Sec.~\ref{sec:conclusions}. Unless otherwise specified, we use natural units with $c=G=1$. Our notations are as follows: we denote by $(m_1,m_2)$ the individual masses, while the mass ratio is $q\equiv m_1/m_2\geq 1$. The total mass and symmetric mass ratio are then $M\equiv m_1+m_2$ and $\nu = m_1 m_2/M^2$. 
We also use the mass fractions $X_{1,2}\equiv m_{1,2}/M$ and $X_{12}\equiv X_1-X_2=\sqrt{1-4\nu}$. We denote by $(S_1,S_2)$ the individual, dimensionful, spin components along the direction of the orbital angular momentum. The dimensionless spin variables are denoted as $\chi_{1,2}\equiv S_{1,2}/(m_{1,2})^2$. We also use $\tilde{a}_{1,2}\equiv X_{1,2}\chi_{1,2}$, the effective spin $\tilde{a}_0=\tilde{a}_1+\tilde{a}_2$ and $\tilde{a}_{12}\equiv \tilde{a}_1-\tilde{a}_2$. \section{Gravitational wave fluxes} \label{sec:flux} \subsection{Angular momentum fluxes from Numerical Relativity simulations} \label{sec:cleaning} In the systematic analysis of fluxes of Ref.~\cite{Boyle:2008ge}, performed using NR data from the SXS collaboration, a lot of effort was devoted at the time to removing the spurious oscillations that are present when the flux is expressed in terms of some gauge-invariant frequency parameter. The quality of SXS simulations has hugely improved since Ref.~\cite{Boyle:2008ge}. Although SXS data has been used recently in the computation of the fluxes to obtain energy versus angular momentum curves (see e.g. Refs.~\cite{Damour:2011fu,Nagar:2015xqa}), an explicit calculation of the flux analogous to the one presented in Ref.~\cite{Boyle:2008ge} has not been attempted again. This is the purpose of this section. Let us start by fixing our notations and conventions. The strain waveform is decomposed in spin-weighted spherical harmonics as \be h_+ - i h_\times = \dfrac{1}{D_L}\sum_\ell \sum_{m=-\ell}^{\ell}h_{\ell m}{}_{-2}Y_{\ell m}(\iota,\phi) \end{equation} where $D_L$ indicates the luminosity distance. The angular momentum flux radiated at infinity reads\footnote{Along the $z$-axis orthogonal to the orbital plane. Since we are considering a nonprecessing system the components of the angular momentum along the $(x,y)$ directions are zero.} \begin{equation} \dot{J}_{\infty} = - \frac{1}{8\pi} \sum_{\ell=2}^{\ell_{\rm max}}\sum_{m=-\ell}^{\ell} m \Im(\dot{h}_{\ell m} h_{\ell m}^*). \end{equation} Here we will consider $\ell_{\rm max}=8$. For clarity, we work with the Newton-normalized angular momentum flux \be \frac{\dot{J}_{\infty}}{\dot{J}^{\rm circ}_{\rm Newt}} , \end{equation} where the circularized Newtonian flux formally reads \begin{equation} \dot{J}^{\rm circ}_{\rm Newt} = \frac{32}{5}\nu^2\left(\Omega_{\rm NR}\right)^{7/3}. \end{equation} Here we define the NR orbital frequency $\Omega_{\rm NR}$ simply as \begin{equation} \Omega_{\rm NR} \equiv \frac{\omega_{22}^{\rm NR}}{2} , \end{equation} where $\omega_{22}^{\rm NR}\equiv \dot{\phi}_{22}^{\rm NR}$ is the NR quadrupolar GW frequency and $\phi_{22}^{\rm NR}$ the phase defined from $h_{22}=A_{22}^{\rm NR} e^{- {\rm i} \phi_{22}^{\rm NR}}$. We compute the NR fluxes out of a certain sample of SXS datasets, and choose extrapolation order\footnote{For the time-domain phasing and unfaithfulness computations we use instead $N = 3$.} $N=4$ to avoid systematics during the inspiral. \begin{figure}[t] \includegraphics[width=0.45\textwidth]{fig01.pdf} \caption{\label{fig:cleaning_steps} Intermediate steps of the cleaning procedure. The $x$-domain is separated into three different parts, delimited by the vertical lines (top panel). No smoothing is applied to the third region, while the span of the moving average changes between the first and second region. The bottom panel focuses on the intersection points between the raw function and the smoothed one, which are finally fitted with a polynomial. 
The inset highlights the behavior around the interface between the first and second region.} \label{fig:cleaning_steps} \end{figure} When the so-computed fluxes are depicted in terms of the gauge-invariant frequency parameter \be \label{eq:xNR} x_{\rm NR} \equiv \left(\Omega_{\rm NR}\right)^{2/3} \end{equation} one finds spurious oscillations. These oscillations are due to residual eccentricity (or other effects related to the BMS symmetry being violated~\cite{Mitman:2021xkq}), and are additionally amplified when taking derivatives. The amplification might be large and make the raw flux totally useless for any meaningful comparison with the analogous, fully nonoscillatory, EOB quantity. We have developed an efficient method to completely remove this oscillating behavior and produce a rather clean and smooth representation of the flux versus $x$. The procedure is applied to the sample of SXS simulations reported in Table~\ref{tab:spinning_flux}, which is chosen so that the distribution of datasets is approximately uniform over the NR-covered portion of the parameter space. We cut each flux at the NR merger, defined as the peak of $|h_{22}|$. The procedure uses a {\tt MATLAB} function called \texttt{smooth}, i.e. a moving average whose span can be selected by the user\footnote{Namely, it is a lowpass filter with filter coefficients equal to the reciprocal of the span, meaning the higher the frequency of the oscillations to be removed, the higher the value of the chosen span.}. The $x$-domain on which the flux function is defined is separated into three parts: the first and the second ones get smoothed with different spans, as the frequency of the oscillations progressively decreases; the third part, which is already essentially nonoscillatory, is left untouched. The three regions are optimized manually for each dataset in Table~\ref{tab:spinning_flux}. \begin{table}[t] \caption{\label{tab:spinning_flux}Sample of SXS spin-aligned datasets for which we compute the angular momentum flux. 
From left to right, the columns display: the SXS ID; the binary parameters; the highest and second-highest levels of resolution; the average of the difference between the raw flux and the cleaned one.} \begin{center} \begin{ruledtabular} \begin{tabular}{c c c c r} ID & $(q, \chi_1, \chi_2)$ & $\text{Lev}_h$ & $\text{Lev}_l$ & $\langle\Delta \dot{J}^\infty_{\rm NR - NR_{\rm clean}}\rangle$ \\ \hline BBH:1155 & $(1,0,0)$ & 3 & 2 & $1\cdot 10^{-6}$ \\ BBH:1222 & $(2,0,0)$ & 4 & 3 & $5.4\cdot 10^{-5}$ \\ BBH:1179 & $(3,0,0)$ & 5 & 4 & $1.8\cdot 10^{-5}$ \\ BBH:0190 & $(4.499,0,0)$ & 3 & 2 & $1.5\cdot 10^{-5}$ \\ BBH:0192 & $(6.58,0,0)$ & 3 & 2 & $1.3\cdot 10^{-5}$ \\ BBH:1107 & $(10,0,0)$ & 4 & 3 & $7.2\cdot 10^{-5}$ \\ \hline BBH:1137 & $(1,-0.97,-0.97)$ & 4 & 2 & $6.3\cdot 10^{-5}$ \\ BBH:2084 & $(1,-0.90,0)$ & 4 & 3 & $-2\cdot 10^{-6}$ \\ BBH:2097 & $(1,+0.30,0)$ & 4 & 3 & $2.4\cdot 10^{-5}$ \\ BBH:2105 & $(1,+0.90,0)$ & 4 & 3 & $2.3\cdot 10^{-5}$ \\ BBH:1124 & $(1,+0.99,+0.99)$ & 3 & - & $2.6\cdot 10^{-5}$ \\ BBH:1146 & $(1.5,+0.95,+0.95)$ & 2 & 0 & $1.2\cdot 10^{-5}$ \\ BBH:2111 & $(2,-0.60,+0.60)$ & 4 & 3 & $-9\cdot 10^{-6}$ \\ BBH:2124 & $(2,+0.30,0)$ & 4 & 3 & $9\cdot 10^{-6}$ \\ BBH:2131 & $(2,+0.85,+0.85)$ & 4 & 3 & $2\cdot 10^{-5}$ \\ BBH:2132 & $(2,+0.87,0)$ & 4 & 3 & $1.3\cdot 10^{-5}$ \\ BBH:2133 & $(3,-0.73,+0.85)$ & 4 & 3 & $2.2\cdot 10^{-5}$ \\ BBH:2153 & $(3,+0.30,0)$ & 4 & 3 & $3.6\cdot 10^{-5}$ \\ BBH:2162 & $(3,+0.60,+0.40)$ & 4 & 3 & $1.7\cdot 10^{-5}$ \\ BBH:1446 & $(3.154,-0.80,+0.78)$ & 3 & 2 & $9\cdot 10^{-6}$ \\ BBH:1936 & $(4,-0.80,-0.80)$ & 3 & 2 & $-1.8\cdot 10^{-5}$ \\ BBH:2040 & $(4,-0.80,-0.40)$ & 3 & 2 & $7\cdot 10^{-6}$ \\ BBH:1911 & $(4,0,-0.80)$ & 3 & 2 & $7\cdot 10^{-6}$ \\ BBH:2014 & $(4,+0.80,+0.40)$ & 3 & - & $-1\cdot 10^{-6}$ \\ BBH:1434 & $(4.368,+0.80,+0.80)$ & 3 & - & $2.5\cdot 10^{-5}$ \\ BBH:1463 & $(4.978,+0.61,+0.24)$ & 3 & 2 & $1.5\cdot 10^{-5}$ \\ BBH:0208 & $(5,-0.90,0)$ & 3 & 2 & $9.2\cdot 10^{-5}$ \\ BBH:1428 & $(5.518,-0.80,-0.70)$ & 3 & 2 & $-2\cdot 10^{-6}$ \\ BBH:1437 & $(6.038,+0.80,+0.15)$ & 3 & 2 & $5\cdot 10^{-6}$ \\ BBH:1436 & $(6.281,+0.009,-0.80)$ & 3 & 2 & $1\cdot 10^{-6}$ \\ BBH:1435 & $(6.588,-0.79,+0.7)$ & 3 & 2 & $2\cdot 10^{-6}$ \\ BBH:1448 & $(6.944,-0.48,+0.52)$ & 3 & - & $2.1\cdot 10^{-5}$ \\ BBH:1375 & $(8,-0.90, 0)$ & 3 & - & $2.6\cdot 10^{-5}$ \\ BBH:1419 & $(8,-0.80,-0.80)$ & 3 & - & $-1.3\cdot 10^{-5}$ \\ BBH:1420 & $(8,-0.80,+0.80)$ & 3 & 2 & $2.2\cdot 10^{-5}$ \\ BBH:1455 & $(8,-0.40, 0)$ & 3 & 2 & $-3\cdot 10^{-6}$ \\ \end{tabular} \end{ruledtabular} \end{center} \end{table} \begin{figure}[t] \includegraphics[width=0.44\textwidth]{fig02.pdf} \caption{\label{fig:cleaning_final} The cleaned numerical angular momentum flux for the simulation SXS:BBH:1437 (dashed orange) is plotted against the original one (red). The inset in the upper panel shows how the final flux follows the original curve, averaging the oscillations. In the lower panel we display the difference between the cleaned and the raw flux, whose mean (dashed light blue line) is of order $10^{-5}$, hence proving the effectiveness of the procedure. Our cleaning method also allows us to estimate the numerical accuracy (red curve), which is evaluated by subtracting from the cleaned flux its equivalent computed from the second-highest available resolution. 
} \end{figure} The cleaning procedure can be summarized in three steps: (i) we first apply the moving average to reduce the amplitude of the oscillations (see inset in the upper panel of Fig.~\ref{fig:cleaning_steps}); (ii) we then find the intersection points between the raw flux and the smoothed one (see markers in the inset of Fig.~\ref{fig:cleaning_steps}); (iii) finally, the intersection points between the raw and the smoothed flux are fitted by a polynomial in $x$. For the datasets SXS:BBH:1155, SXS:BBH:1222, SXS:BBH:0190, SXS:BBH:0192 this is accomplished via a seventh-order polynomial, while a fifth-order one suffices for the others\footnote{Polynomials have been chosen after attempting different fitting functions, but they prove to be the simplest and most effective choice. We also found it more practical to apply a fit due to the large number of simulations taken into account.}. The outcome of the fit is finally joined to the third part that was left unmodified. The final result, after some additional smoothing at the junction point, is shown in Fig.~\ref{fig:cleaning_final}. Its reliability can be verified by computing the difference with the raw data and checking that it averages to zero. This is shown in the bottom panel of Fig.~\ref{fig:cleaning_final}, where the residual does not show any evident global trend, actually averaging to $\sim 5 \times 10^{-6}$. To obtain a conservative estimate of the NR uncertainty on the final fluxes, we apply the cleaning procedure to both the highest and second-highest available resolution and then take the difference. This is also shown in the bottom panel of Fig.~\ref{fig:cleaning_final}. The procedure is found to be efficient and reliable for all configurations of Table~\ref{tab:spinning_flux}, where the quality of the cleaning procedure is indicated by the average of the difference between the raw flux and the cleaned one (last column of the table). The final result is displayed in Fig.~\ref{fig:cleaned_fluxes}. The figure highlights how both the value of the flux at merger and its global behavior have a clear dependence on the mass ratio and the effective Kerr parameter. This reflects the fact that equal-mass binaries have a more adiabatic evolution, corresponding to slower plunges and a lower angular momentum loss. If the BHs have positive spins the plunge is even slower, owing to the well-known effect of spin-orbit coupling (or hang-up effect)~\cite{Damour:2001tu, Campanelli:2006uy}. Conversely, for high-mass-ratio binaries (nearer to the test-mass limit) and negative spins, the fact that the system is progressively more and more nonadiabatic implies larger angular momentum losses, and the evolution ends at lower frequencies. \begin{figure}[t] \includegraphics[width=0.43\textwidth]{fig03a.pdf} \\ \vspace{2mm} \includegraphics[width=0.43\textwidth]{fig03b.pdf} \caption{\label{fig:cleaned_fluxes} The top panel shows the final outcome of the Newton-normalized angular momentum flux calculation for the NR datasets of Table~\ref{tab:spinning_flux}, with $x$ given by Eq.~\eqref{eq:xNR}, shown up to merger. The color code is chosen depending on the final value of the flux, displayed in the lower panel. 
The merger values show a clear dependence on $\nu$ and $\tilde{a}_0$, mirroring whether the dynamics is more or less adiabatic: larger emissions correspond to faster plunges (with $\tilde{a}_0<0$).} \end{figure} \subsection{Angular momentum flux and radiation reaction within EOB} \label{sec:EOB_flux} Let us now turn to discuss EOB fluxes within \texttt{TEOBResumS}{}. To do so, we start by reviewing the analytical elements of \texttt{TEOBResumS}{} that will be useful for our discussion. We use mass-reduced phase-space variables $(r,\varphi,p_\varphi,p_{r_*})$, related to the physical ones by $r=R/M$ (relative separation), $p_{r_*}=P_{R_*}/\mu$ (radial momentum), $\varphi$ (orbital phase), $p_\varphi=P_\varphi/(\mu M)$ (angular momentum) and $t=T/M$ (time). The \virg{tortoise} radial momentum is $p_{r_*}\equiv (A/B)^{1/2}p_r$, where $A$ and $B$ are the EOB potentials (with included spin-spin interactions~\cite{Damour:2014sva}). Hamilton's equations for the relative dynamics read \begin{align} \dot{\varphi} &= \Omega = \partial_{p_\varphi} \hat{H}_{\rm EOB}, \\ \dot{r} &= \left( \frac{A}{B} \right)^{1/2} \partial_{p_{r_*}} \hat{H}_{\rm EOB}, \\ \dot{p}_\varphi &= \hat{{\cal F}}_\varphi , \\ \dot{p}_{r_*} &= - \left( \frac{A}{B} \right)^{1/2} \partial_{r} \hat{H}_{\rm EOB}, \end{align} where $\hat{H}_{\rm EOB}$ is the EOB Hamiltonian~\cite{Nagar:2018zoe}, $\Omega$ is the orbital frequency and $\hat{{\cal F}}_\varphi$ is the radiation reaction force accounting for mechanical angular momentum losses due to GW emission. Note that within this context we are assuming that the radial force $\hat{{\cal F}}_r=0$, which is equivalent to a gauge choice for circular orbits~\cite{Buonanno:2000ef}. By a balance argument, the angular momentum loss of the system should be equal to the sum of the GW flux emitted at infinity, $\dot{J}_{\infty}$, and absorbed by the event horizons of the two black holes, $\dot{J}_{\rm H_{1,2}}$, that is \begin{equation} \dot{J}_{\rm system} = \hat{{\cal F}}_\varphi = - \dot{J}_{\infty} - \dot{J}_{\rm H_1} - \dot{J}_{\rm H_2}. \end{equation} In general, within this equation there should be an additional term accounting for Schott contributions, which are due to the interactions between the radiation and the field. However, it is always possible to choose a gauge such that there is no Schott contribution to the angular momentum~\cite{Bini:2012ji} and this is the choice made here (on top of neglecting $\hat{\cal{F}}_r$). The azimuthal radiation reaction force is hence written as \begin{equation} \hat{{\cal F}}_\varphi = \hat{{\cal F}}_\varphi^\infty + \hat{{\cal F}}_\varphi^{\rm H}, \end{equation} where $\hat{{\cal F}}_\varphi^{\rm H}$ is the horizon flux contribution~\cite{Damour:2014sva}. The asymptotic term reads \begin{equation} \label{eq:RR} \hat{{\cal F}}_\varphi^\infty = -\frac{32}{5} \nu r_{\omega}^4 \Omega^5 \hat{f}^{\infty}(v_\varphi^2;\nu), \end{equation} where $\hat{f}^{\infty}(v_\varphi^2;\nu)$ is the reduced (i.e., Newton-normalized) flux function, $v_\varphi^2 \equiv (r_{\omega}\Omega)^2$ and $r_{\omega}$ is a modified radial separation defined in such a way that $1 = \Omega^2 r_{\omega}^3$ remains valid during the plunge, fulfilling a modified Kepler's law that accounts for non-circularity~\cite{Damour:2006tr,Damour:2007xr}. The reduced flux function is defined by normalizing the resummed circularized energy flux as $\hat{f} \equiv ({\cal F}_{22}^{\rm Newt})^{-1} \sum {\cal F}_{\ell m}$, with all multipoles (except $m = 0$ modes) up to $\ell = 8$. 
The Newtonian term reads ${\cal F}_{22}^{\rm Newt} = (32/5) \nu^2 x^5$ and the multipolar terms ${\cal F}_{\ell m}$ are factorized and resummed analogously to what is done for the waveform~\cite{Damour:2012ky}. Explicitly, building upon Ref.~\cite{Damour:2008gu}, the structure of each flux multipole is \begin{equation} \label{eq:flux_multipoles} {\cal F}_{\ell m} = {\cal F}_{\ell m}^{\rm Newt} |\hat{h}_{\ell m}|^2 {\cal F}_{\ell m}^{\rm NQC}. \end{equation} This is related to the correction entering the factorization of the waveform multipoles \be h_{\ell m} = h_{\ell m}^{\rm Newt} \, \hat{h}_{\ell m} \, \hat{h}_{\ell m}^{\rm NQC} \end{equation} where $h_{\ell m}^{\rm Newt}$ is the Newtonian prefactor\footnote{As pointed out in Ref.~\cite{Nagar:2019wds}, the standard Newtonian prefactors proportional to some power of $v_\varphi$ are replaced in some multipoles by suitable powers of $v_\varphi v_\Omega$, with $v_\Omega=\Omega^{1/3}$. This is a practical solution to ease the action of the NR-informed NQC amplitude corrections and allow them to correctly capture the peak amplitude of each multipole. When including NQC corrections also in the higher mode contribution to the flux, this choice will eventually yield a partial inconsistency between the waveform and the flux. In Appendix~\ref{sec:hlm_Newt} we show that by using the standard Newtonian prefactors in the waveform we generically improve the EOB/NR flux agreement for positive spins, but get inconsistent results for negative spins.}, $\hat{h}_{\ell m}$ is the resummed PN correction and $\hat{h}_{\ell m}^{\rm NQC}$ is the next-to-quasi-circular factor. The latter is described in more detail in Refs.~\cite{Damour:2014sva,Nagar:2017jdw,Nagar:2019wds,Riemenschneider:2021ppj} (see in particular Sec.~IIID of~\cite{Nagar:2019wds}). For each flux mode we have \begin{equation} {\cal F}_{\ell m}^{\rm NQC} = \left|\hat{h}_{\ell m}^{\rm NQC}\right|^2 = \left(1 + a_1^{\ell m} n_1^{\ell m} + a_2^{\ell m} n_2^{\ell m}\right)^2 \end{equation} where $(n_1^{\ell m}, n_2^{\ell m})$ are functions of the radial momentum and of the radial acceleration (and a priori depend on the mode); $(a_1^{\ell m}, a_2^{\ell m})$ are numerical coefficients that are informed by NR simulations~\cite{Damour:2014sva,Nagar:2017jdw} via an iterative procedure~\cite{Damour:2009kr}. NQC corrections can, and actually should, be applied to each waveform (and thus flux) mode since they complete the analytical waveform, which is quasicircular by construction. In practice, within \texttt{TEOBResumS}{} we add NQC corrections {\it only} in the $(2,2)$ flux mode, while the waveform is NQC-completed up to $\ell=m=5$~\cite{Nagar:2019wds}. Finally, we recall that \texttt{TEOBResumS}{} is NR-informed via two different parameters, $a_6^c(\nu)$ and $c_3(\nu, \tilde{a}_1, \tilde{a}_2)$, respectively tuning the $A$ potential and the spin-orbit sector of the model. Details on these functions can be found in Sec.~IIC of Ref.~\cite{Nagar:2020pcj}. For most of the analyses carried out in the following, we make use of the private {\tt MATLAB} version of \texttt{TEOBResumS}{}, in which we implement the changes for \texttt{TEOBResumS\_NQC\_lm}{}. The publicly available $C$ version is used in the unfaithfulness calculation for the standard \texttt{TEOBResumS}{}. 
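For illustration, the assembly of Eq.~\eqref{eq:RR} from the factorized and NQC-corrected multipoles of Eq.~\eqref{eq:flux_multipoles} can be sketched in a few lines of Python. The interface below is purely illustrative (it is {\it not} the actual {\tt MATLAB} or $C$ implementation of \texttt{TEOBResumS}{}): the circularized multipoles and the NQC basis functions are assumed to be supplied by the user.
\begin{verbatim}
import numpy as np

def rr_force_infinity(nu, Omega, r_omega, modes, nqc=None, n1=0.0, n2=0.0):
    """Sketch of the azimuthal radiation reaction F_phi^infty:
    F = -(32/5) nu r_omega^4 Omega^5 fhat, with
    fhat = sum_lm F_lm / F22_Newt and
    F_lm = F_lm^Newt |hhat_lm|^2 (1 + a1 n1 + a2 n2)^2.
    `modes` maps (l, m) -> (F_lm^Newt, |hhat_lm|), i.e. Newtonian
    prefactor and resummed PN correction evaluated along the dynamics
    (illustrative interface). `nqc` maps (l, m) -> (a1, a2); n1, n2
    are the NQC basis functions built from radial momentum and
    acceleration, here passed in as numbers."""
    x = Omega**(2.0 / 3.0)
    F22_newt = 32.0 / 5.0 * nu**2 * x**5
    fhat = 0.0
    for (l, m), (F_newt, habs) in modes.items():
        F_lm = F_newt * habs**2                  # factorized circular flux
        if nqc and (l, m) in nqc:                # NQC completion
            a1, a2 = nqc[(l, m)]
            F_lm *= (1.0 + a1 * n1 + a2 * n2)**2
        fhat += F_lm
    fhat /= F22_newt
    return -32.0 / 5.0 * nu * r_omega**4 * Omega**5 * fhat
\end{verbatim}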
\subsection{Comparing NR and EOB fluxes} \label{sec:comparison} \begin{figure}[t] \includegraphics[width=0.44\textwidth]{fig04.pdf} \caption{\label{fig:1436old} Comparing Newton-normalized total angular momentum fluxes summed up to $\ell_{\rm max}=8$. The upper panel shows: (i) the raw numerical flux (orange) and its cleaned version (dashed red); (ii) the EOB flux with $\ell=m=2$ NQC corrections (dash-dotted light blue) and without (dotted green); (iii) the 3.5PN flux (purple). From left to right, the vertical lines indicate the EOB LSO and the NR merger, respectively. Fractional differences are shown in the bottom panel, together with the NR uncertainty. NQC corrections are essential to reduce the gap between the EOB and NR curves.} \end{figure} \begin{figure}[t] \includegraphics[width=0.44\textwidth]{fig05a.pdf} \\ \vspace{1.5mm} \includegraphics[width=0.44\textwidth]{fig05b.pdf} \caption{\label{fig:1436old_22} {\it Top}: Comparing Newton-normalized $\ell = m = 2$ angular momentum fluxes, including again NR, the EOB fluxes with and without NQC corrections, and the 3.5PN result. Remarkably, the fractional difference with NR for the NQC-corrected EOB curve is of order $10^{-3}$ up to merger. The vertical lines indicate the LSO and the merger point. The $\ell = m = 2$ numerical flux has been cleaned separately from the total one, and the difference between the raw flux and the final fit averages to $-2 \cdot 10^{-5}$. {\it Bottom}: Fractional differences for the EOB/NR $\ell = m = 2$ fluxes at $x = 0.2$ for all configurations of Table~\ref{tab:spinning_flux}. The largest differences occur when $\tilde{a}_0 < 0$, where $x=0.2$ approximately corresponds to the plunge regime.} \end{figure} Let us now move to compare EOB and NR fluxes. The Newton-normalized EOB flux is expressed versus $x_{\rm EOB}=\Omega^{2/3}$, while the NR curve is expressed versus $x_{\rm NR}=\Omega_{\rm NR}^{2/3}$ as defined above. To simplify the notation, in the figure we will simply use $x$ for the horizontal axis, but it is understood that $x=x_{\rm NR}$ when dealing with the NR curve and $x=x_{\rm EOB}$ for the EOB curve. As an illustrative configuration we choose SXS:BBH:1436, corresponding to parameters $(q, \chi_1, \chi_2) = (6.281, 0.009, -0.8)$. The Newton-normalized, total, angular momentum flux, summed up to $\ell_{\rm max}=8$, is displayed in Fig.~\ref{fig:1436old}. In particular, the figure shows: (i) the raw and cleaned NR fluxes, which are effectively indistinguishable on this scale; (ii) two EOB fluxes, one with the $\ell=m=2$ NQC correction in the flux and another without it; (iii) the 3.5PN flux. The EOB fluxes prove both the power of resummation techniques and the effectiveness of NQC corrections in achieving a good agreement with the NR quantities. The upper panel in Figure~\ref{fig:1436old_22} is analogous to Fig.~\ref{fig:1436old}, but only focuses on the $\ell=m=2$ contribution. The most interesting fact inferred from the plot is that the NQC factor is crucial to yield a fractional difference $\sim 10^{-3}$ up to merger. The lower panel of the same figure shows the distribution of the EOB/NR fractional difference at $x = 0.2$ over the parameter space. This seems to point to a decreased agreement for configurations with negative $\tilde{a}_0$; note however that, as can be seen in Fig.~\ref{fig:cleaned_fluxes}, the fluxes for these datasets end at lower frequencies and hence $x = 0.2$ corresponds to the late plunge. 
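As a practical aside, the NR side of these comparisons only requires the mode time series: the flux of Sec.~\ref{sec:cleaning} and its Newton normalization can be assembled as in the following sketch (array and dictionary names are illustrative; the cleaning step is not included).
\begin{verbatim}
import numpy as np

def newton_normalized_flux(t, h, nu, lmax=8):
    """Sketch of the NR flux computation of Sec. II A:
    Jdot = -(1/8 pi) sum_{l,m} m Im(hdot_lm h_lm^*), normalized by
    the circularized Newtonian flux (32/5) nu^2 Omega_NR^(7/3), with
    Omega_NR = omega_22/2. `h` maps (l, m) -> complex h_lm(t)."""
    phi22 = -np.unwrap(np.angle(h[(2, 2)]))      # h22 = A22 exp(-i phi22)
    Omega = 0.5 * np.gradient(phi22, t)          # NR orbital frequency
    Jdot = np.zeros_like(t)
    for (l, m), hlm in h.items():
        if l > lmax or m == 0:                   # m = 0 modes are excluded
            continue
        hdot = np.gradient(hlm, t)
        Jdot += -m * np.imag(hdot * np.conj(hlm)) / (8.0 * np.pi)
    Jdot_newt = 32.0 / 5.0 * nu**2 * Omega**(7.0 / 3.0)
    return Omega**(2.0 / 3.0), Jdot / Jdot_newt  # x_NR, normalized flux
\end{verbatim}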
\begin{figure}[t] \center \includegraphics[width=0.45\textwidth]{fig06.pdf} \caption{\label{fig:1436_multipoles} Exploring the importance of higher modes using the SXS:BBH:1436 dataset. In each panel the flux is summed up to the indicated $(\ell,m)$ mode. NQC corrections are included either in the $\ell=m=2$ EOB mode only (dashed blue line) or in all $\ell=m$ modes up to $\ell=5$ (dotted purple line). The NR-informed NQC corrections in higher modes are essential to improve the EOB/NR agreement beyond plunge (the vertical line indicates the LSO frequency) and up to merger.} \end{figure} \begin{table}[t] \caption{\label{tab:c3}Binary configurations, first-guess values of $c_3$ used to inform the global interpolating fit given in Eq.~\eqref{eq:c3fit}, and the corresponding $c_3^{\rm fit}$ values.} \begin{center} \begin{ruledtabular} \begin{tabular}{lllc|cc} $\#$ & ID & $(q,\chi_1,\chi_2)$ & $\tilde{a}_0$ & $c_3^{\rm first\;guess}$ & $c_3^{\rm fit}$\\ \hline 1 & BBH:1137 & $(1, -0.97, -0.97)$ & $-0.97$ & 89.7 & 89.33 \\ 2 & BBH:0156 & $(1, -0.9498, -0.9498)$ & $-0.95$ & 88.5 & 88.33 \\ 3 & BBH:0159 & $(1, -0.90, -0.90)$ & $-0.90$ & 84.5 & 85.86 \\ 4 & BBH:2086 & $(1, -0.80, -0.80)$ & $-0.80$ & 82 & 80.93 \\ 5 & BBH:2089 & $(1, -0.60, -0.60)$ & $-0.60$ & 71 & 71.19 \\ 6 & BBH:0150 & $(1, +0.20, +0.20)$ & $+0.20$ & 35.5 & 35.73 \\ 7 & BBH:2102 & $(1, +0.60, +0.60)$ & $+0.60$ & 22.2 & 21.67 \\ 8 & BBH:2104 & $(1, +0.80, +0.80)$ & $+0.80$ & 15.9 & 16.31 \\ 9 & BBH:0153 & $(1, +0.85, +0.85)$ & $+0.85$ & 15.05 & 15.29 \\ 10 & BBH:0160 & $(1, +0.90, +0.90)$ & $+0.90$ & 14.7 & 14.5 \\ 11 & BBH:0157 & $(1, +0.95, +0.95)$ & $+0.95$ & 14.3 & 14.1 \\ 12 & BBH:0177 & $(1, +0.99, +0.99)$ & $+0.99$ & 14.2 & 14.29 \\ 13 & BBH:0004 & $(1, -0.50, 0.0)$ & $-0.25$ & 55.5 & 54.44 \\ 14 & BBH:0005 & $(1, +0.50, 0.0)$ & $+0.25$ & 35 & 34.17 \\ 15 & BBH:2105 & $(1, +0.90, 0.0)$ & $+0.45$ & 27.7 & 27.21 \\ 16 & BBH:2106 & $(1, +0.90, +0.50)$ & $+0.70$ & 19.1 & 19.09 \\ 17 & BBH:0016 & $(1.5, -0.50, 0.0)$ & $-0.30$ & 56.2 & 56.14 \\ 18 & BBH:1146 & $(1.5, +0.95, +0.95)$ & $+0.95$ & 14.35 & 13.98 \\ 19 & BBH:2129 & $(2, +0.60, 0.0)$ & $+0.40$ & 29.5 & 29.31 \\ 20 & BBH:2130 & $(2, +0.60, +0.60)$ & $+0.60$ & 23 & 22.41 \\ 21 & BBH:2131 & $(2, +0.85, +0.85)$ & $+0.85$ & 16.2 & 15.73 \\ 22 & BBH:2139 & $(3, -0.50, -0.50)$ & $-0.50$ & 65.3 & 62.45 \\ 23 & BBH:0036 & $(3, -0.50, 0.0)$ & $-0.38$ & 58.3 & 57.62 \\ 24 & BBH:0174 & $(3, +0.50, 0.0)$ & $+0.37$ & 28.5 & 30.87 \\ 25 & BBH:2158 & $(3, +0.50, +0.50)$ & $+0.50$ & 27.1 & 26.64 \\ 26 & BBH:2163 & $(3, +0.60, +0.60)$ & $+0.60$ & 24.3 & 23.56 \\ 27 & BBH:0293 & $(3, +0.85, +0.85)$ & $+0.85$ & 17.1 & 17.05 \\ 28 & BBH:1447 & $(3.16, +0.7398, +0.80)$ & $+0.75$ & 19.2 & 19.46 \\ 29 & BBH:2014 & $(4, +0.80, +0.40)$ & $+0.72$ & 21.5 & 21.52 \\ 30 & BBH:1434 & $(4.37, +0.7977, +0.7959)$ & $+0.80$ & 19.8 & 20.05 \\ 31 & BBH:0111 & $(5, -0.50, 0.0)$ & $-0.42$ & 54 & 57.18 \\ 32 & BBH:0110 & $(5, +0.50, 0.0)$ & $+0.42$ & 32 & 30.98 \\ 33 & BBH:1432 & $(5.84, +0.6577, +0.793)$ & $+0.68$ & 25 & 24.42 \\ 34 & BBH:1375 & $(8, -0.90, 0.0)$ & $-0.80$ & 64.5 & 65.12 \\ 35 & BBH:0114 & $(8, -0.50, 0.0)$ & $-0.44$ & 57 & 56.07 \\ 36 & BBH:0065 & $(8, +0.50, 0.0)$ & $+0.44$ & 29.5 & 31.78 \\ 37 & BBH:1426 & $(8, +0.4838, +0.7484)$ & $+0.51$ & 30.3 & 29.98 \\ \end{tabular} \end{ruledtabular} \end{center} \end{table} \looseness=-2 The cumulative importance of higher modes with respect to the $\ell=m=2$ one is studied in Fig.~\ref{fig:1436_multipoles} for the same SXS:BBH:1436 configuration. The figure contrasts the EOB flux with the NR one, where both functions incorporate modes summed up to the indicated $(\ell,m)$ value. The plot shows that for the standard $\texttt{TEOBResumS}{}$ the EOB/NR agreement progressively worsens during the late inspiral up to merger, due to the lack of the NR-informed NQC corrections beyond the $\ell=m=2$ ones. Including NQC corrections in the flux in all the $\ell=m$ modes up to $\ell=5$ yields a closer agreement between the analytical and numerical fluxes up to merger. The NQC parameters are determined with the usual iteration procedure, although we maintain the same values of the NR-informed parameters $(a_6^c,c_3)$ determined with the standard $\ell=m=2$ NQC correction. The effect is very evident for this specific dataset, but it is a feature that is always present, for the other configurations as well. This exercise indicates that, to increase the physical completeness and NR-consistency of \texttt{TEOBResumS}{}, it is necessary to include NQC corrections {\it at least} in the $\ell=m$ multipoles in the flux. Evidently, this operation will eventually imply the need to construct new NR-informed $(a_6^c,c_3)$ functions that are consistent with the new choice of radiation reaction\footnote{Note that part of the residual difference cannot be totally removed because the Newtonian prefactors in the waveform are not consistent with those in the flux for $\ell=m>2$, as pointed out above. See Appendix~\ref{sec:hlm_Newt} for other details.}. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{fig07.pdf} \caption{The first-guess $c_3$ values of Table~\ref{tab:c3} versus the spin variable $\tilde{a}_0$. The unequal-spin and unequal-mass points can be essentially seen as a correction to the equal-mass, equal-spin values. The latter are fitted to obtain the first part of the fit, $c_3^{\rm eq}$ (dashed red).} \label{fig:c3} \end{center} \end{figure} \section{Improving the consistency between waveform and flux of \texttt{TEOBResumS}{}} \label{sec:new} Let us construct a modified \texttt{TEOBResumS}{} model that incorporates iterated NQC corrections in all $\ell=m$ modes in the flux up to $\ell=5$. Since we are modifying the radiation reaction, this choice in principle calls for a new determination of both the $a_6^c$ and $c_3$ functions. However, we have verified that the improvements brought by a newly tuned $a_6^c(\nu)$ are marginal, so that, for the sake of simplicity, we keep its standard expression, which we quote here for completeness as \be a_6^c=n_0\dfrac{1+n_1\nu + n_2\nu^2+n_3\nu^3}{1+d_1\nu}, \end{equation} where \begin{align} n_0 &= \;\;\;5.9951,\\ n_1 &=-34.4844,\\ n_2 &=-79.2997,\\ n_3 &=\;\;\;713.4451,\\ d_1 &=-3.167. \end{align} By contrast, we look for a new NR-informed representation of $c_3$. We follow our usual procedure, which is described for example in Sec.~IIB.2 of Ref.~\cite{Nagar:2018zoe}. Typically, for each NR dataset one determines a value of $c_3$ so that the EOB/NR accumulated phase difference up to merger is within (or compatible with) the NR phase uncertainty at NR merger; a minimal sketch of this alignment-based phase comparison is given below. 
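The following Python sketch illustrates such a comparison: one determines constant time and phase shifts over a fixed window and inspects the residual phase difference. The window $[t_1,t_2]$, the bounds on the time shift and the interpolation details are illustrative choices, not those of the actual procedure.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def delta_phi(t, phi_eob, phi_nr, t1, t2):
    """Sketch: find constant shifts (tau, alpha) minimizing the l=m=2
    phase difference over [t1, t2], then return the residual
    Delta phi(t) = phi_EOB(t + tau) + alpha - phi_NR(t)."""
    mask = (t >= t1) & (t <= t2)

    def cost(tau):
        shifted = np.interp(t[mask], t - tau, phi_eob)  # phi_EOB(t + tau)
        alpha = np.mean(phi_nr[mask] - shifted)         # best phase shift
        return np.sum((shifted + alpha - phi_nr[mask])**2)

    tau = minimize_scalar(cost, bounds=(-100.0, 100.0), method="bounded").x
    alpha = np.mean(phi_nr[mask] - np.interp(t[mask], t - tau, phi_eob))
    return np.interp(t, t - tau, phi_eob) + alpha - phi_nr
\end{verbatim}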
This leaves a certain flexibility and arbitrariness in the choice of $c_3$ and, in previous attempts, we were typically accepting EOB/NR phase differences of the order of 0.1-0.2~rad at merger. Here, on the understanding that the NR phase uncertainty might be overestimated by taking the difference between the two highest resolutions, we aim for more, requiring that the EOB/NR phase difference is {\it as flat as possible} through inspiral, merger and ringdown when the two waveforms are aligned during the early inspiral. As a cross check, we also align the two waveforms during the late plunge, just before merger, to verify that the phase difference remains flat. This further proves that the $c_3$ determination, which mostly affects the plunge phase, is done robustly. To make the best use of current NR information, we consider a sample of 37 SXS configurations, most of which were already taken into account in the previous determinations of $c_3$. Here we replaced some datasets used in Ref.~\cite{Nagar:2018zoe} with newer ones with improved accuracy and included a few more simulations so as to cover the parameter space more efficiently. Table~\ref{tab:c3} reports the SXS configurations, the corresponding values of $\tilde{a}_0$, the first-guess values of $c_3$ obtained with the procedure explained above as well as the corresponding ones obtained after a global fit. Specifically, the $c_3^{\rm first-guess}$ data of Table~\ref{tab:c3} are fitted with a global function $c_3(\nu,\tilde{a}_0,\tilde{a}_{12})$ that reads \begin{align} \label{eq:c3fit} c_3(\nu,\tilde{a}_0,\tilde{a}_{12})=\, &p_0\dfrac{1 + n_1\tilde{a}_0 + n_2\tilde{a}_0^2 + n_3\tilde{a}_0^3 + n_4\tilde{a}_0^4}{1 + d_1\tilde{a}_0}\nonumber\\ + &p_1 \tilde{a}_0\sqrt{1-4\nu} + p_2\tilde{a}_0^2\sqrt{1-4\nu} \nonumber\\ + &p_3 \tilde{a}_0\nu\sqrt{1-4\nu} + p_4 \tilde{a}_{12}\nu^2, \end{align} where the fitted parameters are \begin{align} p_0&=\;\;\;43.872788,\\ n_1&=-1.849495,\\ n_2&=\;\;\;1.011208,\\ n_3&=-0.086453,\\ n_4&=-0.038378,\\ d_1&=-0.888154, \\ p_1&= \;\;\;26.553,\\ p_2&= -8.65836, \\ p_3&= -84.7473, \\ p_4&= \;\;\;24.0418 \ . \end{align} Figure~\ref{fig:c3} highlights that the span of the ``best'' (first-guess) values of $c_3$ is rather limited (especially for spins aligned with the orbital angular momentum) around the equal-mass, equal-spin case. As in previous work, the fitting procedure consists of two steps. First, one fits the equal-mass, equal-spin data with a quasi-linear function of $\tilde{a}_0=\tilde{a}_1+\tilde{a}_2$ with $\tilde{a}_1=\tilde{a}_2$. This delivers the six parameters $(p_0,n_1,n_2,n_3,n_4,d_1)$. The corresponding fit $c_3^{\rm eq}$ is shown as a dashed red curve in Fig.~\ref{fig:c3}. Note that the analytical structure of the fitting function was chosen in order to accurately capture the nonlinear behavior of $c_3$ for $\tilde{a}_0\to 1$. In the second step one subtracts this fit from the corresponding $c_3^{\rm first-guess}$ values and fits the residual. This determines the parameters $(p_1,p_2,p_3,p_4)$. The novelty with respect to previous work is that the functional form chosen for the unequal-mass, unequal-spin fit is more effective in capturing the first-guess values all over the SXS sample considered. To give a flavor of the improved EOB/NR agreement that can be obtained with the new $c_3$ and with the new radiation reaction, let us report a few examples. 
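Before turning to these examples, we note that Eq.~\eqref{eq:c3fit} is elementary to evaluate; the following Python sketch (the function name is illustrative) uses the coefficients listed above and reproduces, for instance, the $c_3^{\rm fit}$ entry of configuration 20 of Table~\ref{tab:c3}.
\begin{verbatim}
import numpy as np

def c3_fit(nu, a1t, a2t):
    """Sketch evaluating Eq. (c3fit); a1t = X1*chi1, a2t = X2*chi2."""
    a0t, a12t = a1t + a2t, a1t - a2t
    p0, n1, n2 = 43.872788, -1.849495, 1.011208
    n3, n4, d1 = -0.086453, -0.038378, -0.888154
    p1, p2, p3, p4 = 26.553, -8.65836, -84.7473, 24.0418
    X12 = np.sqrt(1.0 - 4.0 * nu)
    c3_eq = p0 * (1 + n1*a0t + n2*a0t**2 + n3*a0t**3 + n4*a0t**4) \
               / (1 + d1*a0t)
    return (c3_eq + p1*a0t*X12 + p2*a0t**2*X12
                  + p3*a0t*nu*X12 + p4*a12t*nu**2)

# entry 20 of the table: (q, chi1, chi2) = (2, +0.60, +0.60), nu = 2/9
print(c3_fit(2.0/9.0, (2.0/3.0)*0.6, (1.0/3.0)*0.6))  # ~22.41
\end{verbatim}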
From now on we will refer to the improved model as \texttt{TEOBResumS\_NQC\_lm}{}, to easily distinguish it from \texttt{TEOBResumS}{}. Figure~\ref{fig:new_multipoles} shows the updated flux comparison for SXS:BBH:1436, and also includes the dataset SXS:BBH:1437 with $(q, \chi_1, \chi_2) = (6.038, 0.8, 0.1476)$. The addition of NQC corrections to the $\ell = m$ modes up to $\ell=5$ of the radiation reaction is essential to improve the behavior of the analytic flux towards merger. For \texttt{TEOBResumS\_NQC\_lm}{} the fractional difference between EOB/NR total fluxes for the configuration SXS:BBH:1436 remains below $10^{-2}$ until $x \sim 0.26$. By contrast, in Fig.~\ref{fig:1436old}, the fractional difference for \texttt{TEOBResumS}{} already reached $10^{-2}$ at the LSO and kept growing until merger. We finally test the performance of the model over all datasets of Table~\ref{tab:spinning_flux} by computing the fractional difference between EOB and NR (total) fluxes at $x = 0.2$ for both \texttt{TEOBResumS}{} and \texttt{TEOBResumS\_NQC\_lm}{}, as shown respectively in the top and bottom panel of Fig.~\ref{fig:flux_diff}. Here one can see an evident improvement for larger mass ratios and negative values of the effective Kerr parameter. \begin{figure*}[t] \includegraphics[width=0.45\textwidth]{fig08a.pdf} \hspace{1.5mm} \includegraphics[width=0.45\textwidth]{fig08b.pdf} \caption{\label{fig:new_multipoles} {\it Left}: Analogue of Fig.~\ref{fig:1436_multipoles}, obtained using the new model, showing that the change in $c_3$ does not affect the behavior of the flux. When summing up to $\ell = 8$ as done in Fig.~\ref{fig:1436old}, the EOB/NR fractional difference for \texttt{TEOBResumS\_NQC\_lm}{} remains below $10^{-2}$ for most of the evolution, even beyond the LSO. {\it Right}: Contrasting the performance of \texttt{TEOBResumS}{} and \texttt{TEOBResumS\_NQC\_lm}{} for the dataset SXS:BBH:1437 with $(q, \chi_1, \chi_2) = (6.038, 0.8, 0.1476)$. The behavior of the flux up to $\ell = 8$ progressively becomes less robust and is discussed in Appendix~\ref{sec:NQCissues}.} \end{figure*} \begin{figure}[t] \includegraphics[width=0.43\textwidth]{fig09b.pdf} \\ \vspace{1mm} \includegraphics[width=0.43\textwidth]{fig09c.pdf} \caption{\label{fig:flux_diff} Fractional EOB/NR flux differences at \mbox{$x = 0.2$} for \texttt{TEOBResumS}{} (top) and \texttt{TEOBResumS\_NQC\_lm}{} (bottom) evaluated for the sample of SXS data of Table~\ref{tab:spinning_flux}. For \texttt{TEOBResumS\_NQC\_lm}{} we exclude two configurations, corresponding to datasets SXS:BBH:1419 and SXS:BBH:1375, where the contribution of modes with \mbox{$\ell_{\rm max}>5$} becomes important towards merger. These will be discussed in Appendix~\ref{sec:NQCissues}.} \end{figure} \begin{figure}[t] \includegraphics[width=0.23\textwidth]{fig10a.pdf} \includegraphics[width=0.23\textwidth]{fig10b.pdf} \includegraphics[width=0.23\textwidth]{fig10c.pdf} \includegraphics[width=0.23\textwidth]{fig10d.pdf} \caption{\label{fig:phasings} EOB/NR time-domain phasing for two illustrative datasets: SXS:BBH:1463 with $(q, \chi_1, \chi_2) = (4.978, +0.61, +0.24)$ (top panels) and SXS:BBH:1426 with $(q, \chi_1, \chi_2) = (8, +0.48, +0.75)$ (bottom panels), using \texttt{TEOBResumS}{} (left) and \texttt{TEOBResumS\_NQC\_lm}{} (right). 
Each plot shows: (i) the phase difference and the relative amplitude difference; (ii) the real parts of the EOB and NR waveforms; (iii) the instantaneous GW frequency together with twice the orbital frequency $\Omega$. Vertical dash-dotted lines indicate the alignment interval. The phase differences $\Delta \phi^{\rm EOBNR}_{22}$ at merger (vertical dashed blue line) are respectively $(-0.34,-0.70)$~rad for \texttt{TEOBResumS}{} and become $(-0.14,-0.11)$~rad for \texttt{TEOBResumS\_NQC\_lm}{}. Note that only SXS:BBH:1426 was used to inform $c_3$.} \end{figure} Another example is shown in Fig.~\ref{fig:phasings}, which focuses on time-domain phasings. We use here the Regge-Wheeler-Zerilli normalized waveform, defined as $\Psi_{\ell m} = h_{\ell m}/\sqrt{(\ell - 1) \ell (\ell + 1) (\ell + 2)}$. The EOB waveforms have been obtained by setting the spin values with 6-digit precision, considering the initial $\chi_1, \chi_2$ given in the metadata file for each simulation\footnote{We noticed a decreased phase difference at merger when using higher precision.}. The figure contrasts EOB/NR waveform phasings for the $\ell = m = 2$ multipole, considering datasets SXS:BBH:1463 (first row) and SXS:BBH:1426 (second row) using \texttt{TEOBResumS}{} (left) and \texttt{TEOBResumS\_NQC\_lm}{} (right). As usually done, in this case we are using $N=3$ extrapolation order for the SXS waveforms. In each figure, the top panels show the phase and amplitude difference, where $\Delta \phi^{\rm EOBNR}_{22} \equiv \phi^{\rm EOB}_{22}- \phi^{\rm NR}_{22}$. The EOB/NR phasing agreement is better for \texttt{TEOBResumS\_NQC\_lm}{} than for \texttt{TEOBResumS}{}, although the SXS:BBH:1426 dataset is among those used to inform the new expression of $c_3$. \section{EOB/NR $\ell=m=2$ unfaithfulness} \label{sec:barF} \begin{figure}[t] \center \includegraphics[width=0.45\textwidth]{fig11.pdf} \caption{\label{fig:noises} Sensitivity curves for the three detectors we take into consideration in computing the unfaithfulness for the two versions of our model: Advanced LIGO, Einstein Telescope (ET) and Cosmic Explorer (CE). Here ET-C is the sensitivity model described in Ref.~\cite{Hild:2009ns}, while ET-D is the latest version~\cite{Hild:2010id}.} \end{figure} \begin{figure*}[t] \center \includegraphics[width=0.32\textwidth]{fig12a.pdf} \includegraphics[width=0.32\textwidth]{fig12b.pdf} \includegraphics[width=0.32\textwidth]{fig12c.pdf} \\ \includegraphics[width=0.32\textwidth]{fig12d.pdf} \includegraphics[width=0.32\textwidth]{fig12e.pdf} \includegraphics[width=0.32\textwidth]{fig12f.pdf} \caption{\label{fig:barF} EOB/NR unfaithfulness for \texttt{TEOBResumS}{} (top panels) and \texttt{TEOBResumS\_NQC\_lm}{} (bottom panels) evaluated over the sample of 534 nonprecessing quasicircular datasets of the SXS catalog already considered in Ref.~\cite{Riemenschneider:2021ppj}, using: (i) the zero-detuned, high-power noise spectral density of Advanced LIGO (first column), (ii) the latest version of the expected noise for the Einstein Telescope (second column), (iii) the expected noise for Cosmic Explorer (third column). 
We observe here how the changes implemented in the new version of our model ensure a slight decrease in $\bar{F}_{\rm EOB/NR}$, whose average is between $10^{-3}$ and $10^{-4}$.} \end{figure*} \begin{figure}[t] \begin{center} \includegraphics[width=0.43\textwidth]{fig13a.pdf} \\ \includegraphics[width=0.43\textwidth]{fig13b.pdf} \caption{\label{fig:maxF}Contrasting $\bar{F}^{\rm max}_{\rm EOB/NR}$ for \texttt{TEOBResumS}{} and \texttt{TEOBResumS\_NQC\_lm}{} versus $\tilde{a}_0$ and $q$, using the PSD of Advanced LIGO. This complements the top panels of Fig.~\ref{fig:barF}.} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.23\textwidth]{fig14a.pdf} \includegraphics[width=0.23\textwidth]{fig14b.pdf} \\ \includegraphics[width=0.23\textwidth]{fig14c.pdf} \includegraphics[width=0.23\textwidth]{fig14d.pdf} \\ \includegraphics[width=0.23\textwidth]{fig14e.pdf} \includegraphics[width=0.23\textwidth]{fig14f.pdf} \caption{Distribution over the parameter space $(\nu, \tilde{a}_0)$ of those configurations whose $\bar{F}^{\rm max}_{\rm EOB/NR}$ exceeds $10^{-3}$, for aLIGO (first row), ET-D (second row) and CE (third row), both for \texttt{TEOBResumS}{} (left column) and its updated version (right column). Notably, the changes implemented in \texttt{TEOBResumS\_NQC\_lm}{} lower the maximum unfaithfulness, although we see that higher values of the spin remain the most challenging ones to be modeled, along with the comparable-mass case.} \label{fig:outliers} \end{center} \end{figure} A global view of the EOB/NR agreement is given by the computation of the EOB/NR unfaithfulness as a function of the total mass of the system. As done for the time-domain phasing, for the EOB spin values we take the initial $(\chi_1, \chi_2)$ given in the metadata file for each simulation with 6-digit precision. For simplicity, here we focus only on the $\ell=m=2$ mode. Considering two waveforms $(h_1,h_2)$ with the same mass ratio and spins, the unfaithfulness is a function of the total mass $M$ of the binary and is defined as \be \label{eq:barF} \bar{F}(M) \equiv 1-F=1 -\max_{t_0,\phi_0}\dfrac{\langle h_1,h_2\rangle}{||h_1||||h_2||}, \end{equation} where $(t_0,\phi_0)$ are the initial time and phase, $||h||\equiv \sqrt{\langle h,h\rangle}$, and the inner product between two waveforms is defined as $\langle h_1,h_2\rangle\equiv 4\Re \int_{f_{\rm min}^{\rm NR}(M)}^\infty \tilde{h}_1(f)\tilde{h}_2^*(f)/S_n(f)\, df$, where $\tilde{h}(f)$ denotes the Fourier transform of $h(t)$, $S_n(f)$ is the detector's power spectral density (PSD) and $f_{\rm min}^{\rm NR}(M)=\hat{f}^{\rm NR}_{\rm min}/M$ is the initial frequency of the NR waveform. In practice, the integral is done up to a maximal NR frequency $f_{\rm max}^{\rm NR}$, chosen as the frequency where the amplitude of $\tilde{h}_{\rm NR}$ is $10^{-3}$. Waveforms are tapered in the time domain at the beginning of the inspiral so as to reduce the presence of high-frequency oscillations in the corresponding Fourier transforms. As a step forward with respect to previous work, we here consider for this calculation not only the standard zero-detuned, high-power noise spectral density of Advanced LIGO~\cite{aLIGODesign_PSD}, but also the anticipated PSD of Einstein Telescope, considering its latest sensitivity model ET-D~\cite{Hild:2010id}, and of Cosmic Explorer~\cite{Evans:2021gyd}. The corresponding PSDs are shown in Fig.~\ref{fig:noises}, together with the less recent ET-C version of the PSD of Einstein Telescope~\cite{Hild:2009ns}. 
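In practice, Eq.~\eqref{eq:barF} amounts to maximizing a PSD-weighted overlap over $(t_0,\phi_0)$, which can be done for all time shifts at once with an inverse FFT; the following sketch assumes uniformly sampled, already tapered real strains and a callable PSD (all names are illustrative).
\begin{verbatim}
import numpy as np

def unfaithfulness(h1, h2, dt, psd, fmin, fmax):
    """Sketch of bar F = 1 - max_{t0,phi0} <h1,h2>/(||h1|| ||h2||), with
    <a,b> = 4 Re int a~(f) b~*(f)/S_n(f) df over the band [fmin, fmax].
    h1, h2: real strains on a common uniform time grid with spacing dt."""
    n = len(h1)
    f = np.fft.rfftfreq(n, dt)
    df = f[1] - f[0]
    H1, H2 = np.fft.rfft(h1) * dt, np.fft.rfft(h2) * dt
    band = (f >= fmin) & (f <= fmax)
    w = np.zeros_like(f)
    w[band] = 1.0 / psd(f[band])
    norm1 = np.sqrt(4.0 * df * np.sum(np.abs(H1)**2 * w))
    norm2 = np.sqrt(4.0 * df * np.sum(np.abs(H2)**2 * w))
    # one inverse FFT evaluates the complex overlap for every time
    # shift t0; its modulus maximizes over the constant phase phi0
    integrand = np.zeros(n, dtype=complex)
    integrand[: len(f)] = H1 * np.conj(H2) * w
    z = 4.0 * df * n * np.fft.ifft(integrand)
    return 1.0 - np.max(np.abs(z)) / (norm1 * norm2)
\end{verbatim}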
As a complementary analysis, we perform the unfaithfulness computation for the ET-C PSD in Appendix~\ref{sec:ET-C}. The outcome of the $\bar{F}(M)$ computation is shown in Fig.~\ref{fig:barF}, where we used Eq.~\eqref{eq:barF} with $h_1=h_{\rm EOB}$ and $h_2=h_{\rm NR}$. For each detector choice, the top panels of the figure display the results obtained with \texttt{TEOBResumS}{}, while the bottom ones those pertaining to \texttt{TEOBResumS\_NQC\_lm}{}. Concerning the aLIGO PSD, the first column of Fig.~\ref{fig:barF} highlights that $\bar{F}^{\rm max}_{\rm EOB/NR}$ comfortably stays well below the $10^{-2}$ threshold all over the parameter space. More precisely, one finds that for \texttt{TEOBResumS\_NQC\_lm}{} 18.4\% of the datasets fall in the range $10^{-3} < \bar{F}^{\rm max}_{\rm EOB/NR} < 10^{-2}$ (see Table~\ref{tab:maxFbar}), out of which 1.7\% have a maximum $\bar{F}_{\rm EOB/NR}$ value above $3\times 10^{-3}$, a percentage lower than the corresponding one for \texttt{TEOBResumS}{}. The largest unfaithfulness values obtained with \texttt{TEOBResumS\_NQC\_lm}{}, $\bar{F}^{\rm max}_{\rm EOB/NR}=(0.47, 0.49)\%$, correspond respectively to the extremely spinning configuration SXS:BBH:1124 with $(1,+0.998,+0.998)$ and to the configuration SXS:BBH:1434 with $(4.367, +0.798, +0.795)$. In general, as can be seen in Fig.~\ref{fig:outliers}, the largest values of $\bar{F}^{\rm max}_{\rm EOB/NR}$ are obtained for the datasets with individual spins large and positive, i.e. in a regime where we a priori expect the largest uncertainties in both the NR waveforms and in the model. Our result already represents nonnegligible quantitative progress with respect to Refs.~\cite{Nagar:2020pcj,Riemenschneider:2021ppj}. Still, there is room for improvement, since the NR error is estimated to be between $10^{-6}$ and $10^{-4}$, as shown in the right panel of Fig.~2 of Ref.~\cite{Nagar:2020pcj}. Concerning ET-D, the second column of Fig.~\ref{fig:barF} and Table~\ref{tab:maxFbar} highlight that $\bar{F}^{\rm max}_{\rm EOB/NR}$ mostly stays below $10^{-3}$. For \texttt{TEOBResumS\_NQC\_lm}{}, there are only 11 configurations with $\bar{F}^{\rm max}_{\rm EOB/NR} > 3 \times 10^{-3}$, and again the highest values correspond to SXS:BBH:1124 and SXS:BBH:1434. Moreover, 79.9\% of the total number of mismatches for \texttt{TEOBResumS\_NQC\_lm}{} are in the range $10^{-4} < \bar{F}_{\rm EOB/NR} < 10^{-3}$ and 3.9\% of the total mismatches are below $10^{-4}$ (see Table~\ref{tab:maxFbar}). Finally, regarding CE, only 1.3\% of the configurations have $\bar{F}^{\rm max}_{\rm EOB/NR} > 3 \cdot 10^{-3}$, and the percentage of those below $10^{-3}$ reaches 84.1\%. It is quite remarkable that for this detector $6.4\%$ of the total mismatches using \texttt{TEOBResumS\_NQC\_lm}{} are below $10^{-4}$. \begin{table*}[t] \caption{\label{tab:maxFbar} Quantifying the EOB/NR agreement. The central columns of the table contain the fraction of datasets whose maximum unfaithfulness $\bar{F}^{\rm max}_{\rm EOB/NR}$ is within the indicated limits for either \texttt{TEOBResumS}{} or \texttt{TEOBResumS\_NQC\_lm}{}. The last two columns display percentages out of \textit{all} the mismatch values. These are found independently of the single simulations, by considering how many points pertaining to the curves of Fig.~\ref{fig:barF} fall into a certain range of $\bar{F}$. The range in $M$ is $2.5M_{\odot}$. 
} \begin{center} \begin{ruledtabular} \begin{tabular}{l l | c c c | c c} & & $\bar{F}^{\rm max} < 10^{-3}$ & $10^{-3} < \bar{F}^{\rm max} < 10^{-2}$ & $\bar{F}^{\rm max}> 3\times 10^{-3}$ & $10^{-4} < \bar{F}< 10^{-3}$ & $\bar{F} < 10^{-4}$\\ \hline aLIGO &\texttt{TEOBResumS}{} & 83.1\% & 16.9\% & 2.1\% & 83.9\% & 3.1\% \\ &\texttt{TEOBResumS\_NQC\_lm}{} & 82.0\% & 18.4\% & 1.7\% & 81.5\% & 3.8\% \\ \hline ET-D &\texttt{TEOBResumS}{} & 83.5\% & 15.9\% & 2.6\% & 82.9\% & 3.2\% \\ &\texttt{TEOBResumS\_NQC\_lm}{} & 81.5\% & 18.5\% & 2.1\% & 79.9\% & 3.9\% \\ \hline CE &\texttt{TEOBResumS}{} & 85.6\% & 14.8\% & 1.7\% & 84.7\% & 5.2\%\\ &\texttt{TEOBResumS\_NQC\_lm}{} & 84.1\% & 16.7\% & 1.3\% & 82.8\% & 6.4\%\\ \end{tabular} \end{ruledtabular} \end{center} \end{table*} Concerning the two configurations displayed in Fig.~\ref{fig:phasings}, the lowered phase difference at merger for \texttt{TEOBResumS\_NQC\_lm}{} is reflected in a slightly lower value of $\bar{F}^{\rm max}_{\rm EOB/NR}$. Namely, for the dataset SXS:BBH:1463, the [\%] unfaithfulness changes from $(0.1437, 0.1736, 0.1323)$, respectively for aLIGO, ET-D and CE, to $(0.1434, 0.1703, 0.1323)$, while for the dataset SXS:BBH:1426 the values decrease from $(0.1671, 0.1985, 0.1546)$ to $(0.0675, 0.0731, 0.0613)$. Figures~\ref{fig:barF},~\ref{fig:maxF} and~\ref{fig:outliers} represent, to our knowledge, the first systematic assessment of the quality of a state-of-the-art waveform model in view of the 3G detector effort~\cite{Reitze:2021gzo, Couvares:2021ajn, Punturo:2021ryo, Katsanevas:2021fzj, Kalogera:2021bya, McClelland:2021wqy}. Our plots look somewhat more optimistic than the conclusions of Ref.~\cite{Purrer:2019jcp}, which assessed the quality of the phenomenological waveform model {\tt IMRPhenomPv2} for specific configurations and concluded that the accuracy of current waveform models needs to be improved by at least three orders of magnitude. While this is certainly true of {\tt IMRPhenomPv2}, it does not seem to be the case for the spin-aligned model that we are discussing here, as it already grazes the expected detector calibration uncertainty, $\sim 10^{-5}$, for masses up to $20M_\odot$. For larger values of $M$, where the detector is mostly sensitive to the ringdown, $\bar{F}_{\rm EOB/NR}$ goes up to $10^{-3}$ for several configurations. This should however be interpreted carefully, since it is related to two physical facts: (i) on the one hand, the late part of the NR ringdown might be more or less noisy depending on the configuration, thus affecting the unfaithfulness calculation; (ii) on the other hand, even if there were no relevant numerical noise, there are differences between the EOB modeled ringdown and the actual one. In particular, the absence of mode mixing between positive- and negative-frequency QNMs (a phenomenon that is present especially for spins anti-aligned with the angular momentum) can play a role in this context. In addition, one should also be aware of the fact that the NR-informed postmerger was constructed using SXS data extrapolated with $N=2$~\cite{Nagar:2020pcj}, since this reduces the amount of NR noise during this specific part of the waveform. However, the EOB/NR comparison is done using $(N=3)$-extrapolated waveform data, which gives a good compromise between the inspiral and the merger-ringdown part of the signal. This means that the differences that we see in Fig.~\ref{fig:barF} for large masses are {\it partly} coming from the NR simulations and not from the model. 
We thus expect that our EOB/NR comparisons will benefit from improved NR simulations that use Cauchy Characteristic Extraction~\cite{Moxon:2021gbv, Fischer:2021qbh, Zertuche:2021xkb}. On more general grounds, a precise assessment of the accuracy of the current version(s) of \texttt{TEOBResumS}{} for ET will require dedicated injection/recovery campaigns. Nonetheless, our analysis seems to indicate that both versions of \texttt{TEOBResumS}{}, either the standard or the NQC-improved one, already offer a reliable starting point to investigate parameter estimation with 3G detectors in mind. To obtain this result, it was crucial to improve the self-consistency of the model and to provide a new analytical representation of the $c_3$ function, carefully selecting a new sample of useful NR datasets. \section{Contrasting \texttt{TEOBResumS}{} and \texttt{SEOBNRv4HM}{} waveform models} \label{sec:seob} Now that we have explored the performance of \texttt{TEOBResumS}{} from a different point of view and shown how to improve it further, let us now compare it with \texttt{SEOBNRv4HM}{}~\cite{Bohe:2016gbl,Cotesta:2020qhw, Ossokine:2020kjp}. This model is another state-of-the-art EOB model informed by NR simulations and differs from \texttt{TEOBResumS}{} in several structural choices, which involve the structure of the Hamiltonian, the gauge, the analytic content and the resummation strategies. A comprehensive analysis of what distinguishes the Hamiltonians of \texttt{TEOBResumS}{} and of \texttt{SEOBNRv4HM}{} is presented in Ref.~\cite{Rettegno:2019tzh}. The {\tt SEOBNRv4} model was presented in 2016 and has never been structurally updated since, except for the addition of higher modes~\cite{Cotesta:2020qhw} (without any change to the dynamics) and precession~\cite{Ossokine:2020kjp}. The purpose of this section is to discuss more specific comparisons between the two models, especially focusing on frequencies and angular momentum fluxes. Moreover, even though \texttt{TEOBResumS}{} has been publicly available for many years~\cite{Nagar:2018zoe}, direct comparisons involving both EOB models and the full NR catalog do not seem to exist in the literature. Note however that \texttt{SEOBNRv4HM}{} was compared to the most recent generation of phenomenological models (see in particular Fig.~17 of Ref.~\cite{Pratten:2020fqn}). We aim at filling this gap by providing one-to-one comparisons between \texttt{SEOBNRv4HM}{} and \texttt{TEOBResumS}{} that involve the important observables discussed so far: (i) angular momentum fluxes; (ii) waveform amplitude and frequency and the consistency of the latter with the dynamics; (iii) EOB/NR unfaithfulness computations taking into account also 3G detectors. In this section we will use {\it only} the standard version of \texttt{TEOBResumS}{}. In addition, for the unfaithfulness calculation we will use the publicly available $C$ implementation\footnote{The same code is going to be released also via {\tt LALSimulation}.}, which employs fits for the $\ell=m=2$ NQC parameters entering the flux as well as the (iterated) post-adiabatic approximation~\cite{Nagar:2018gnk} to efficiently describe the inspiral, as detailed in Ref.~\cite{Riemenschneider:2021ppj}. \subsection{Angular momentum fluxes} Let us first discuss the flux of angular momentum. 
To begin with, one has to be aware that -- to the best of our knowledge -- the dynamical phase-space variables are not among the standard outputs of the \texttt{SEOBNRv4HM}{} implementation within {\tt LALSimulation}, so that some modifications of the code are needed\footnote{By contrast, let us recall that the standalone \texttt{TEOBResumS}{} $C$ code can optionally output several dynamical quantities.}. This was done and explicitly described already in Ref.~\cite{Nagar:2019wds}. The simplest way to compute the angular momentum flux for \texttt{SEOBNRv4HM}{} is to take (minus) the time derivative of the angular momentum $p_\varphi$, i.e.\ to use the relation \be \dot{J}_{\tt SEOB}= -\dot{p}_\varphi^{\tt SEOB} = -\hat{\cal F}_{\varphi}^{\tt SEOB}. \end{equation} Figure~\ref{fig:fluxes_seob_comparison} displays the corresponding fluxes for the configurations $(1.5, 0.95, 0.95)$, $(2, 0.85, 0.85)$, $(2, -0.6, 0.6)$ and $(5.52, -0.8, -0.7)$, corresponding to SXS datasets SXS:BBH:1146, SXS:BBH:2131, SXS:BBH:2111 and SXS:BBH:1428. Each panel compares four curves: (i) the NR flux; (ii) the standard \texttt{TEOBResumS}{} flux; (iii) the flux from \texttt{TEOBResumS}{} without the $\ell=m=2$ NQC corrections; (iv) the \texttt{SEOBNRv4HM}{} flux. Let us first focus on the two cases with the largest spins, top row of Fig.~\ref{fig:fluxes_seob_comparison}: the figure highlights the differences between the \texttt{SEOBNRv4HM}{} and NR fluxes. We believe this is related to the \texttt{SEOBNRv4HM}{} dynamics for these two configurations, as we will further point out in Sec.~\ref{sec:wave_amp_freq} below. By contrast, the \texttt{TEOBResumS}{} fluxes look consistent with the NR one. In particular, the agreement that can be reached between \texttt{TEOBResumS}{} and NR {\it without} the NQC correction factor is remarkable. However, this also shows that the NQC implementation should be revised for large spins, since it introduces non-negligible differences already during the inspiral\footnote{As already suggested in Ref.~\cite{Chiaramello:2020ehz}, it would be better to view the NQC corrections as an effective way of improving the EOB analytical waveform only very close to merger, and as such they should be progressively switched on only during the plunge.} (see also Appendix~\ref{sec:NQCissues}).
\begin{figure}[t] \center \includegraphics[width=0.23\textwidth]{fig15a.pdf} \includegraphics[width=0.23\textwidth]{fig15b.pdf} \\ \vspace{1mm} \includegraphics[width=0.23\textwidth]{fig15c.pdf} \includegraphics[width=0.23\textwidth]{fig15d.pdf} \caption{\label{fig:fluxes_seob_comparison} For the configurations corresponding to simulations SXS:BBH:1146, SXS:BBH:2131, SXS:BBH:2111, SXS:BBH:1428 we show several angular momentum fluxes: (i) the NR one (orange), (ii) the \texttt{TEOBResumS}{} one (dash-dotted light blue), (iii) the \texttt{TEOBResumS}{} one without the NQC correction in the $\ell=m=2$ mode (dash-dotted green), (iv) the corresponding flux from \texttt{SEOBNRv4HM}{} computed as $-\dot{p}_\varphi$ (dotted yellow). } \end{figure}
The differences between the \texttt{SEOBNRv4HM}{} and NR fluxes remain large also in the other two cases (bottom row of Fig.~\ref{fig:fluxes_seob_comparison}). Given the many structural differences between the \texttt{SEOBNRv4HM}{} and \texttt{TEOBResumS}{} models, it is difficult to precisely track which elements within \texttt{SEOBNRv4HM}{} are responsible for the flux behavior.
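As a practical aside, once the dynamics $(t, p_\varphi)$ has been extracted from the code, the flux computation described above reduces to a numerical derivative. A minimal sketch (ours; the spline-based derivative is just one convenient choice to tame finite-difference noise, and assumes a monotonically increasing time array):
\begin{verbatim}
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

def angular_momentum_flux(t, p_phi):
    # dot(J) = -dot(p_phi): differentiate the orbital angular
    # momentum along the dynamics
    spline = InterpolatedUnivariateSpline(t, p_phi, k=4)
    return -spline.derivative()(t)
\end{verbatim}
The \texttt{TEOBResumS}{} dynamical output can be processed in exactly the same way, so that all fluxes can be compared on equal footing.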
In any case, the lack of the NQC factor in the \texttt{SEOBNRv4HM}{} flux is seemingly not enough to explain the differences that appear in the bottom panels of Fig.~\ref{fig:fluxes_seob_comparison}, since the \texttt{SEOBNRv4HM}{} curve differs even from the NQC-free flux of \texttt{TEOBResumS}{}. Let us mention at least two other differences that may be relevant in the strong-field regime. First of all, although the \texttt{SEOBNRv4HM}{} flux shares the same formal functional form as the \texttt{TEOBResumS}{} one, the definition of $r_\omega$ is different (see e.g.~\cite{Cotesta:2018fcv}). In addition, the PN truncation and the resummation of each waveform multipole, including the quadrupole one, differ between the two models.
\subsection{Waveform amplitude and frequency}
\label{sec:wave_amp_freq}
\begin{figure}[t] \center \includegraphics[width=0.23\textwidth]{fig16a.pdf} \includegraphics[width=0.23\textwidth]{fig16b.pdf} \\ \includegraphics[width=0.23\textwidth]{fig16c.pdf} \includegraphics[width=0.23\textwidth]{fig16d.pdf} \\ \includegraphics[width=0.23\textwidth]{fig16e.pdf} \includegraphics[width=0.23\textwidth]{fig16f.pdf} \\ \includegraphics[width=0.23\textwidth]{fig16g.pdf} \includegraphics[width=0.23\textwidth]{fig16h.pdf} \caption{\label{fig:freq_comparison_seob}Contrasting \texttt{TEOBResumS}{} (left) and \texttt{SEOBNRv4HM}{} (right). For each configuration we show: (i) the waveform amplitude, (ii) the instantaneous gravitational wave frequency, (iii) twice the orbital frequency $\Omega$ and (iv) the pure orbital frequency $\Omega_{\rm orb}$ (i.e., without the spin-orbit contribution). Each binary is also labeled by its effective spin $\hat{S}\equiv (S_1+S_2)/M^2$. For every configuration \texttt{TEOBResumS}{} maintains an excellent consistency between (twice) the orbital frequency and the gravitational wave frequency. This is especially true, as a priori expected, in the highly adiabatic cases with large positive spins where NQC corrections have a very limited effect. By contrast, for \texttt{SEOBNRv4HM}{} this holds only for the configuration $(2, -0.6, 0.6)$. In the other cases, $\omega_{22}\neq 2\Omega$ and the correct behavior of the waveform frequency is guaranteed only by the action of NQC corrections. } \end{figure}
Let us now provide a direct comparison between \texttt{TEOBResumS}{}, \texttt{SEOBNRv4HM}{} and NR waveforms for the configurations considered above. We focus on the $\ell=m=2$ waveform amplitude and frequency. Figure~\ref{fig:freq_comparison_seob} contrasts the EOB/NR performance for \texttt{TEOBResumS}{} (left panels) and \texttt{SEOBNRv4HM}{} (right panels). The figure focuses around merger time and the waveforms are aligned in the late inspiral, just before merger. We recall that among the configurations presented in the figure, only the $(2,+0.85,+0.85)$ one was used to inform $c_3$ for \texttt{TEOBResumS}{}, and similarly only this one was used to calibrate the spin sector of \texttt{SEOBNRv4HM}{}~\cite{Bohe:2016gbl}. Both models deliver an excellent agreement with the NR waveform amplitude and frequency. However, there are relevant differences in the underlying dynamics, as suggested by the behavior of twice the orbital frequency, $2\Omega$, which is also displayed in the figure. In particular, one sees that while for \texttt{TEOBResumS}{} $\omega_{22}^{\rm EOB}\simeq 2\Omega$ is always true up to the merger point, for \texttt{SEOBNRv4HM}{} this is approximately true only for the $(2,-0.6,+0.6)$ configuration.
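This consistency check is straightforward to reproduce once the $\ell=m=2$ mode and the orbital frequency along the dynamics are available. A minimal sketch (ours), assuming the convention $h_{22}=A_{22}\,e^{-{\rm i}\phi_{22}}$, so that $\omega_{22}=\dot{\phi}_{22}$:
\begin{verbatim}
import numpy as np

def gw_frequency_22(t, h22):
    # omega_22 = d(phi_22)/dt, with h22 = A exp(-i phi_22)
    phi22 = -np.unwrap(np.angle(h22))
    return np.gradient(phi22, t)

# diagnostic: should remain close to 1 up to merger
# ratio = gw_frequency_22(t, h22) / (2.0 * Omega)
\end{verbatim}
Deviations of this ratio from unity well before merger flag a tension between waveform and dynamics of the kind discussed here.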
For the other \texttt{SEOBNRv4HM}{} cases, the dynamics seems to point to a delayed plunge, but the NR calibration of the model manages to keep the analytical waveform on top of the NR one. Let us recall, in fact, that Ref.~\cite{Bohe:2016gbl} also calibrates the time shift between the EOB orbital frequency and the peak of the EOB waveform, where NQC corrections are determined and the ringdown is attached. This feature is not needed in the \texttt{TEOBResumS}{} model, which uses the peak of the {\it pure} orbital frequency\footnote{In fact, we use $t_{\rm NQC}=t_{\Omega_{\rm orb}}^{\rm peak}-1$, see in particular Eqs.~(3.46)-(3.47) of Ref.~\cite{Nagar:2017jdw} and Eqs.~(102)-(105) and (108) of Ref.~\cite{Damour:2014sva}.} $\Omega_{\rm orb}$ (also shown in the figure) as the natural anchor point to determine NQC corrections; this quantity is obtained by subtracting the spin-orbit contribution from the total frequency. This structure is the effective generalization to the comparable-mass case of what is found in the test-mass limit~\cite{Damour:2014sva}, where the maximum of $\Omega_{\rm orb}$ is always very close to the peak of the $\ell=m=2$ waveform amplitude, as we also recall in Fig.~\ref{fig:testmass_freq} below.
\subsubsection{The large-mass-ratio limit}
Let us now consider the case of binary black hole coalescences in the large-mass-ratio limit and highlight the qualitative and quantitative features that \texttt{TEOBResumS}{} shares with it. Figure~\ref{fig:testmass_freq} shows amplitude and frequencies for a nonspinning test-particle (used to model the smaller black hole) inspiralling and plunging in the equatorial plane of a Kerr black hole. The analytical waveforms are generated using the test-mass limit version of {\texttt{TEOBResumS}} presented in Ref.~\cite{Albanesi:2021rby}, while the numerical waveforms are computed using the 2+1 time-domain code \texttt{Teukode}~\cite{Harms:2014dqa}, which solves the Teukolsky equation (see also Refs.~\cite{Barausse:2011kb,Taracchini:2013wfa} for an earlier EOB model in the test-particle limit). Note that, as usual, the dynamics generating the EOB and Teukolsky waveforms is the same. The analytical/numerical comparisons show that the condition $\omega_{22}\simeq 2\Omega$ is satisfied throughout the full evolution of the binary up to merger\footnote{For $\hat{a}<0$ we have mode-mixing in the ringdown waveform but this is not relevant for the discussion of this paper.}. Figure~\ref{fig:testmass_freq} collects a few, non-extremal, values of the dimensionless Kerr parameter $\hat{a}$ so as to have a global view of the waveform phenomenology. It is useful to draw a qualitative and semi-quantitative comparison with Fig.~\ref{fig:freq_comparison_seob}. First, one notices the qualitative similarities between \texttt{TEOBResumS}{} and Teukolsky waveforms and dynamics, in particular the location of the peak of $\Omega_{\rm orb}$. This is a feature that was included within \texttt{TEOBResumS}{} by construction and seems to be one of the key points that allows one to have robust and consistent waveforms all over the parameter space. It is suggestive that the agreement is also semi-quantitative for those cases that have $\hat{a}\simeq \hat{S}$. For example, the configuration with $\hat{S}=-0.2$ in Fig.~\ref{fig:freq_comparison_seob} shows a behavior of $\Omega$ and $\Omega_{\rm orb}$ that is qualitatively and quantitatively consistent with the $\hat{a}=-0.2$ case.
Similarly, the $\hat{a}=0.5$ configuration shows a behavior close to the ones with $\hat{S}=0.47$ and $\hat{S}=0.49$ (although the EOB frequency $\Omega$ does not display a local maximum), while the $\hat{a}=-0.6$ configuration is consistent with the $\hat{S}=-0.59$ one, with $\Omega$ becoming negative after merger. This similarity between test-mass and comparable-mass frequencies can be traced back to the quasi-universal behavior of $\omega_{22}$ at merger when plotted versus $\hat{S}$, already shown for NR data in Fig.~33 of Ref.~\cite{Nagar:2018zoe}. Although at the moment this is nothing more than a suggestive semi-quantitative analogy, if taken seriously it could be helpful to further improve the dynamics of \texttt{TEOBResumS}{} and increase its consistency with the test-mass one, especially for high spins. The most obvious feature that needs to be improved is the frequency behavior for the high-spin configurations shown, $(1.5,0.95,0.95)$ and $(2,+0.85,+0.85)$, where $\Omega$ keeps growing (until the evolution is stopped well after merger), which is in contrast with the local maximum present in the test-mass case for $\hat{a}=0.5$ (and $\hat{a}=0.7$ as well). This is related to the well-known problem of the absence of an LSO in \texttt{TEOBResumS}{} for large, positive spins, and it might be solved by using a different factorization and gauge for the spin-orbit sector~\cite{Rettegno:2019tzh}. Still, the current coherence between frequencies that is characteristic of \texttt{TEOBResumS}{} looks like an encouraging starting point for any future development.
\begin{figure}[t] \includegraphics[width=0.23\textwidth]{fig17a.pdf} \includegraphics[width=0.23\textwidth]{fig17b.pdf} \\ \includegraphics[width=0.23\textwidth]{fig17c.pdf} \includegraphics[width=0.23\textwidth]{fig17d.pdf}\\ \includegraphics[width=0.23\textwidth]{fig17e.pdf} \includegraphics[width=0.23\textwidth]{fig17f.pdf} \caption{\label{fig:testmass_freq} Comparing EOB and numerical amplitude and frequencies in the large-mass-ratio limit ($\nu=10^{-3}$) for different values of the Kerr dimensionless spin parameter $\hat{a}$. As can be seen, $\omega_{22}\simeq 2 \Omega$ throughout the whole evolution up to merger. This behavior is also qualitatively shared by \texttt{TEOBResumS}{}, as shown in Fig.~\ref{fig:freq_comparison_seob}. The overlap of the EOB amplitudes and frequencies with the numerical results also illustrates the reliability of the analytical prescription.} \end{figure}
\begin{figure*}[t] \begin{center} \includegraphics[width=0.32\textwidth]{fig12a.pdf} \includegraphics[width=0.32\textwidth]{fig12b.pdf} \includegraphics[width=0.32\textwidth]{fig12c.pdf} \\ \includegraphics[width=0.32\textwidth]{fig18d.pdf} \includegraphics[width=0.32\textwidth]{fig18e.pdf} \includegraphics[width=0.32\textwidth]{fig18f.pdf} \caption{\label{fig:SEOBvsTEOB}Direct EOB/NR unfaithfulness comparison using the standard implementation of \texttt{TEOBResumS}{} (top panels) and {\texttt{SEOBNRv4HM}} (bottom panels). Again, the unfaithfulness is evaluated for the sample of 534 nonprecessing quasicircular NR simulations of the SXS catalog (as in Fig.~\ref{fig:barF}) using: (i) the zero-detuned, high-power noise spectral density of Advanced LIGO (first column), (ii) the expected PSD for Einstein Telescope (second column), (iii) the expected PSD for Cosmic Explorer (third column). } \end{center} \end{figure*}
\subsection{Unfaithfulness}
\label{sec:barF_seobNRHM}
Let us finally move to the calculation of the EOB/NR unfaithfulness using the \texttt{SEOBNRv4HM}{} model.
This calculation is not new, since it was done for the first time in Ref.~\cite{Bohe:2016gbl} as a test of the {\tt SEOBNRv4} model. However, since Ref.~\cite{Bohe:2016gbl} several {\it new} NR simulations offering a better coverage of the parameter space have become available, and the original $\bar{F}$ calculation has not been updated since. In particular, updated comparisons do not seem to exist in Refs.~\cite{Cotesta:2018fcv,Ossokine:2020kjp}, nor in Ref.~\cite{Mihaylov:2021bpf}, which presents a faster version of the \texttt{SEOBNRv4HM}{} model based on the application of the post-adiabatic approximation developed in Ref.~\cite{Nagar:2018gnk} (and notably already applied to the \texttt{SEOBNRv4HM}{} Hamiltonian in Ref.~\cite{Rettegno:2019tzh}). To our knowledge, $\bar{F}_{\rm EOB/NR}$ has never been directly computed over all the 534 spin-aligned datasets currently available\footnote{The actual number of nonprecessing quasicircular datasets is larger, but we do not consider some problematic simulations.}. It should be mentioned, though, that there exists a comparison between \texttt{SEOBNRv4HM}{} and the NR surrogate~\cite{Pratten:2020fqn}. The purpose of this section is to complement the results of Ref.~\cite{Pratten:2020fqn} via a direct comparison with the SXS datasets. To put this analysis into the right context, we present these results by contrasting them with the corresponding ones obtained using the {\it standard}, publicly available, $C$ implementation of \texttt{TEOBResumS}{} already presented in Ref.~\cite{Riemenschneider:2021ppj}. Since this model relies on fits for the NQC corrections, as detailed in Ref.~\cite{Riemenschneider:2021ppj}, its performance is slightly worse than the one we would obtain using the (iterated) {\tt MATLAB} implementation, and similarly worse than what is theoretically achievable using \texttt{TEOBResumS\_NQC\_lm}{}. Figure~\ref{fig:SEOBvsTEOB} directly compares $\bar{F}_{\rm EOB/NR}(M)$ from \texttt{TEOBResumS}{} (top panels) with the one from \texttt{SEOBNRv4HM}{} (bottom panels). The calculation is done for Advanced LIGO (first column), ET-D (second column) and CE (third column). The bottom-left panel of Fig.~\ref{fig:SEOBvsTEOB} is analogous to Fig.~2 of Ref.~\cite{Bohe:2016gbl}, but includes the additional SXS data that were not available at the time; it highlights the very different behavior of the two models for low masses, where \texttt{SEOBNRv4HM}{} grazes the $10^{-2}$ level for many configurations. This mirrors intrinsic structural differences, probably connected to the completely different ways of deforming the Hamiltonian of a point particle around a Kerr black hole implemented in the two models~\cite{Rettegno:2019tzh}. While this may be acceptable for Advanced LIGO (although it shows that the \texttt{SEOBNRv4HM}{} implementation is not accurate enough), it is not acceptable for ET-D or CE. Concerning the requirements for third-generation detectors, Ref.~\cite{Purrer:2019jcp} concluded that current EOB models are not yet sufficiently accurate. Our analysis shows that things look better by at least one order of magnitude for \texttt{TEOBResumS}{} or \texttt{TEOBResumS\_NQC\_lm}{}, which thus represent more encouraging starting points for developing highly faithful waveform models.
Coming back to the Advanced LIGO design sensitivity curve, the results of the two left panels of Fig.~\ref{fig:SEOBvsTEOB} are further summarized in Fig.~\ref{fig:max_SEOB_TEOB}, which shows the corresponding $\bar{F}^{\rm max}_{\rm EOB/NR}$ versus either $\tilde{a}_0$ or $q$. Again, \texttt{TEOBResumS}{} is quite robust all over the parameter space, although its performance worsens when the effective spin is increased. This clearly indicates where the model needs to be improved further, consistently with the discussion in the sections above. By contrast, this structure is absent for the \texttt{SEOBNRv4HM}{} points, which look randomly distributed over the parameter space.
\begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{fig19a.pdf} \includegraphics[width=0.45\textwidth]{fig19b.pdf} \caption{\label{fig:max_SEOB_TEOB}Contrasting $\bar{F}^{\rm max}_{\rm EOB/NR}$ for \texttt{TEOBResumS}{} and \texttt{SEOBNRv4HM}{} versus $\tilde{a}_0$ and $\nu$. The values for \texttt{TEOBResumS}{} are smaller than those of \texttt{SEOBNRv4HM}{}, and also show a clear dependence on the effective spin, indicating where the model may need further improvements.} \end{center} \end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have presented an updated version of the spin-aligned waveform model \texttt{TEOBResumS}{} that differs from the previous ones in (i) a more careful procedure to inform the spin sector of the model, including new choices for NR simulations and a different functional form for the fit of the effective spin-orbit parameter $c_3$, and (ii) a specific effort to improve the behavior of the radiation reaction up to merger. In particular, our main achievement is to show that a careful inclusion of NQC corrections in the flux typically allows one to achieve an EOB/NR flux consistency below $1\%$ during the plunge. The consequent recalibration of the spin-orbit sector eventually yields a model with higher NR-faithfulness over the whole NR-covered parameter space. In addition, we have provided the first detailed comparison between \texttt{TEOBResumS}{} and \texttt{SEOBNRv4HM}{}. Our results can be summarized as follows.
\begin{enumerate}
\item[(i)] We have presented a novel computation of the angular momentum flux from a selected sample of 36 SXS datasets chosen so as to give a meaningful coverage of the full NR parameter space. Apparently, ours is the first computation of this kind since the early exploration of Ref.~\cite{Boyle:2008ge}. We have introduced an efficient procedure to remove low-frequency oscillations that are present in the raw fluxes obtained directly from the data. Such oscillations, if kept, would prevent us from performing quantitatively accurate EOB/NR comparisons when the fluxes are represented as functions of the frequency.
\item[(ii)] We have shown that the radiation reaction included in the standard implementation of \texttt{TEOBResumS}{}~\cite{Nagar:2020pcj,Riemenschneider:2021ppj} already exhibits an excellent consistency with the NR fluxes. However, this can be further improved by including NQC flux corrections in all $\ell=m$ modes up to $\ell=5$.
\item[(iii)] This modification to the radiation reaction effectively defined a {\it new} model, called \texttt{TEOBResumS\_NQC\_lm}{}, which also required us to update the determination of the NR-informed effective spin-orbit parameter $c_3$. We did so by choosing a {\it new} sample of SXS NR datasets, many of which have improved accuracy with respect to the ones used in previous work.
We evaluated the performance of this model over all the 534 spin-aligned SXS simulations available, using the Advanced LIGO PSD as well as those of ET and CE. To our knowledge, this is the first time an EOB model has been extensively tested for 3G detectors. We found that $\bar{F}_{\rm EOB/NR}$ is between $10^{-4}$ and $10^{-3}$ for more than $80\%$ of the considered binaries. The outliers always occur for configurations with large, positive spins, which are the most difficult to simulate numerically and to model analytically. Although we are still far from the expected 3G detector calibration error, between $\sim 10^{-4}$ and $\sim 10^{-5}$, our analysis shows that (any version of) \texttt{TEOBResumS}{} can already be used for 3G-related studies provided the spin parameters are not too extreme. In our opinion, the increase in accuracy needed for 3G detectors advocated in Ref.~\cite{Purrer:2019jcp} might be less dramatic than suggested.
\item[(iv)] By contrast, when the same analyses are performed on the {\tt SEOBNRv4HM} EOB waveform model, we find large differences between the analytical and numerical fluxes for a restricted sample of datasets for which, however, \texttt{TEOBResumS}{} is NR-consistent already in its native form. For the same configurations we also compared waveform amplitudes and frequencies, underlining how the dynamics of \texttt{TEOBResumS}{} is qualitatively consistent with the expectations coming from test-particle-limit calculations, while the dynamics of \texttt{SEOBNRv4HM}{} is not. We finally fill the apparent gap in the literature by computing the EOB/NR unfaithfulness for the $\ell=m=2$ mode over all the 534 spin-aligned SXS NR simulations available, for the Advanced LIGO, ET-D and CE detectors. The outcome of this calculation is directly contrasted with the corresponding one from the standard version of \texttt{TEOBResumS}{}, highlighting the different performance of the two models, especially during the inspiral. This is worth noticing because \texttt{TEOBResumS}{} and \texttt{SEOBNRv4HM}{} were built using similar strategies and the same original PN information\footnote{Actually, \texttt{SEOBNRv4HM}{} includes the exact spin-orbit sector of a spinning test-body~\cite{Barausse:2009aa,Barausse:2009xi,Barausse:2011ys}, while it is only approximated within \texttt{TEOBResumS}{}. It is however straightforward to build a \texttt{TEOBResumS}{}-like Hamiltonian with the exact spinning test-body limit included~\cite{Rettegno:2019tzh}.}.
\end{enumerate}
The most important take-away message of our work is that \texttt{TEOBResumS}{} can be improved (especially in the large-spin sector) by means of only minimal modifications to its structure and a more careful choice of the NR simulations used to inform the model. In this respect, it is worth mentioning that the available NR simulations could be better exploited to inform both $a_6^c$ and $c_3$. To maintain continuity with previous work, we did not change the function describing $a_6^c$, and we anchored the fit of $c_3$ to the equal-mass case, using 16 equal-mass SXS datasets, while only 20 additional ones are used to determine the function up to $q=8$. This was motivated by the fact that in the past the SXS collaboration mainly focused on producing equal-mass binaries.
Nowadays things have changed, and in particular there are many datasets available with $q=4$, since they were needed to construct the NR waveform surrogate {\tt NRSur7dq4}~\cite{Varma:2019csw}. Since we are using only 2 datasets with $q\simeq 4$, an improved model could be obtained simply by anchoring the $c_3$ fit to more $q=4$ simulations, possibly together with an improved choice of $a_6^c$ that more carefully exploits the nonspinning datasets. We expect that this will additionally improve the EOB/NR agreement, possibly pushing it below the $10^{-4}$ level for all binaries. This seems to be within reach given the simplicity and minimality of our procedures, and will be tackled in future work.
\begin{figure}[t] \includegraphics[width=0.45\textwidth]{fig20.pdf} \\ \caption{\label{fig:1437} Contrasting EOB/NR total fluxes summed up to $\ell = 8$ using either \texttt{TEOBResumS}{} or \texttt{TEOBResumS\_NQC\_lm}{} for the dataset SXS:BBH:1437, with $(q, \chi_1, \chi_2) = (6.038, 0.8, 0.1476)$. The addition of NQC corrections improves the EOB/NR agreement, though it is not sufficient to completely remove the growing behavior at the end of the evolution. As seen in Fig.~\ref{fig:new_multipoles}, for \texttt{TEOBResumS\_NQC\_lm}{} the multipoles up to $\ell = m = 5$ are consistent with the numerical flux, meaning that the improvement is only needed for modes with $\ell \ge 6$. } \end{figure}
\acknowledgements A.A. has been supported by the fellowship Lumina Quaeruntur No. LQ100032102 of the Czech Academy of Sciences. We are grateful to M.~Breschi for a careful reading of the manuscript, and to S.~Bernuzzi for daily discussions and for the music. The {\tt TEOBResumS} code is publicly available at \mbox{\url{https://bitbucket.org/eob_ihes/teobresums/}}. The {\tt v2} version of the code, which implements the PA approximation and higher modes, is fully documented in Refs.~\cite{Nagar:2018gnk, Nagar:2018plt, Nagar:2019wds, Nagar:2020pcj, Riemenschneider:2021ppj}. We recommend that \texttt{TEOBResumS}{} users cite the above references.
\section*{Introduction}
We study the critical behaviour of the $O(n)$-symmetric $\phi^{4}$ model with an antisymmetric tensor order parameter $\phi_{ik}=-\phi_{ki}$; $i$, $k=1$, \dots, $n$. The action of the model, \begin{equation} \label{action} S(\phi)=\frac{1}{2}\,\mathrm{tr}\left(\phi\left(-\partial^{2}+m^{2}_{0}\right)\phi\right) - \frac{g_{10}}{4!} \left(\mathrm{tr}\left(\phi^{2}\right)\right)^{2} - \frac{g_{20}}{4!}\, \mathrm{tr} \left(\phi^{4}\right), \end{equation} includes two independent $O(n)$-invariant quartic structures and, consequently, two independent coupling constants. Previously, this model was studied within the minimal subtraction (MS) scheme with the $\varepsilon$-expansion, and within the renormalization-group approach in fixed dimension with the pseudo-$\varepsilon$-expansion, up to four-loop order \cite{AKL1,KL1,KL2}. It was shown that the requirement of convergence of the functional integral with action (\ref{action}) imposes restrictions on the values the couplings can take: \begin{eqnarray} \label{ineq} &\text{even } n& \qquad 2 g_{10} + g_{20} >0, \quad n g_{10} + g_{20} >0, \\ &\text{odd } n &\qquad 2 g_{10} + g_{20} >0, \quad (n - 1) g_{10} + g_{20} >0, \nonumber \end{eqnarray} and the high-order asymptotics (HOA) of the coefficients of the perturbation series and of the corresponding $\varepsilon$-expansions were also found: \begin{equation} \label{hoag} \beta^{(N)}_{i}(g_{1},g_{2})= \text{Const} \cdot N!N^{b}(-a(g_{1},g_{2}))^{N}\left(1+O\Big(\frac{1}{N}\Big)\right) \end{equation} \begin{equation} \label{hoae} g^{(N)}_{1,2*} = \text{Const} \cdot N!N^{b+1}(-a(g^{(1)}_{1*},g^{(1)}_{2*}))^{N}\left(1+O\Big(\frac{1}{N}\Big)\right). \end{equation} Here $a(g_{1},g_{2})=\max_{k}\ [a_{k}(g_{1},g_{2})]=\max_{k}\,((2kg_{1} + g_{2})/4k)$, with $k=1$, \dots, $n/2$; $g^{(1)}_{1*}$ and $g^{(1)}_{2*}$ are the one-loop contributions to the coordinates of the fixed points, and the $a_{k}$ correspond to different instanton solutions. It is also known that in the cases $n=2,3$ the model reduces to the scalar and $O(3)$-vector models, respectively, and that for $n>4$ there are no IR-stable fixed points within perturbation theory. In the case $n=4$ there are 3 fixed points: point A, which is a saddle point at all orders, and points B and C, which at the 4-loop level are a saddle point and an IR-stable point, respectively. The coordinates of the latter point are known to be subject to the relation $g^{*}_{1} = -0.75 g^{*}_{2}$ at all orders. Recently, the $\varepsilon$-expansions were extended up to sixth order \cite{PB}, which allows us to refine our understanding of the critical properties of the model and provides a great sandbox to study the stability of resummation techniques based on the Borel-Leroy transformation in the case of a multi-charge model.
\label{sec:borel-leroy}
\section*{Borel-Leroy transformation}
For some quantity $f(z)$ the Borel-Leroy transform is defined as follows: \begin{equation} f(z) = \sum_{N \ge 0} f_{N} z^{N}; \quad \Rightarrow \quad B(t) = \sum_{N \ge 0} \frac{f_{N}}{\Gamma(N+b_{0}+1)}\ t^{N} = \sum_{N \ge 0} B_{N} t^{N}. \end{equation} If the original series is asymptotic, with factorially growing coefficients, then the series for the Borel image converges in a circle of radius $1/a$: \begin{equation} f_{N} \simeq \text{Const} \cdot N!N^{b}(-a)^{N} \quad \Rightarrow \quad B_{N} \simeq \text{Const} \cdot N^{b - b_{0}}(-a)^{N}.
\end{equation} Thus, in order to perform the inverse transform and obtain the resummed quantity \begin{equation} f^{\mathrm{res}}(z) = \int^{\infty}_{0} dt\ t^{b_{0}} e^{-t} B(tz), \end{equation} one should construct an analytical continuation of $B(t)$ outside of the convergence radius (for a more detailed discussion of Borel-Leroy-based techniques see \cite{K} and references therein).
\label{sec:Conformal-borel}
\section*{Conformal-Borel}
One possible way to perform the inverse transform is to map the integration contour inside the convergence circle of $B(t)$: \begin{equation} u(t) = \frac{\sqrt{1+at}-1}{\sqrt{1+at}+1} \quad \Leftrightarrow \quad t(u) = \frac{4u}{a(u-1)^{2}}, \label{mapping} \end{equation} where $a$ is the parameter of the HOA (\ref{hoag}), (\ref{hoae}), then to re-expand in terms of the new variable and perform the inverse transform. The choice of this mapping function, together with setting the free parameter $b_{0} = 3/2$, guarantees that the resummed series has the correct HOA. Values of the critical exponents obtained in this way are presented in Tables~\ref{tabconfpa}--\ref{confpc}.
\begin{table}[H] \caption{\label{tabconfpa}Values of the critical exponents for different numbers of loops taken into account, at fixed point A.} \centering \begin{tabular}{|c||c|c|c|c|c|c|} \hline \multirow{2}{*}{quantity} & \multicolumn{3}{c|}{$d=2$} & \multicolumn{3}{c|}{$d=3$} \\ \cline{2-7} & 4 loops & 5 loops & 6 loops & 4 loops & 5 loops & 6 loops\\ \hline \hline $g_{1}^{*}$ & 1.10 & 1.24 & 1.34 & 0.542 & 0.571 & 0.586 \\ \hline $g_{2}^{*}$ & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline $\omega_1$ & $-0.487$ & $-0.583$ & $-0.673$ & $-0.226$ & $-0.246$ & $-0.260$\\ \hline $\omega_2$ & 1.35 & 1.39 & 1.36 & 0.781 & 0.791 & 0.786\\ \hline $\eta$ & 0.0557 & 0.0820 & 0.106 & 0.0192 & 0.0245 & 0.0279\\ \hline \end{tabular}
\bigskip
\caption{\label{confpb}Values of the critical exponents for different numbers of loops taken into account, at fixed point B.} \centering \begin{tabular}{|c||c|c|c|c|c|c|} \hline \multirow{2}{*}{quantity} & \multicolumn{3}{c|}{$d=2$} & \multicolumn{3}{c|}{$d=3$} \\ \cline{2-7} & 4 loops & 5 loops & 6 loops & 4 loops & 5 loops & 6 loops\\ \hline \hline $g_{1}^{*}$ & 2.40 & 2.89 & 3.21 & 1.14 & 1.26 & 1.32 \\ \hline $g_{2}^{*}$ & $-3.95$ & $-5.21$ & $-6.32$ & $-1.75$ & $-2.04$ & $-2.24$ \\ \hline $\omega_1$ & $-0.198$ & $-0.413$ & $-0.683$ & $-0.0547$ & $-0.105$ & $-0.153$\\ \hline $\omega_2$ & 1.28 & 1.36 & 1.43 & 0.755 & 0.774 & 0.787\\ \hline $\eta$ & 0.0126 & 0.0267 & 0.0440 & 0.00407 & 0.00721 & 0.0101\\ \hline \end{tabular}
\bigskip
\caption{\label{confpc}Values of the critical exponents for different numbers of loops taken into account, at fixed point C.} \centering \begin{tabular}{|c||c|c|c|c|c|c|} \hline \multirow{2}{*}{quantity} & \multicolumn{3}{c|}{$d=2$} & \multicolumn{3}{c|}{$d=3$} \\ \cline{2-7} & 4 loops & 5 loops & 6 loops & 4 loops & 5 loops & 6 loops\\ \hline \hline $g_{1}^{*}$ & 2.02 & 2.32 & 2.54 & 1.03 & 1.10 & 1.14 \\ \hline $g_{2}^{*}$ & $-2.69$ & $-3.10$ & $-3.38$ & $-1.37$ & $-1.47$ & $-1.52$ \\ \hline $\omega_1$ & 0.245 & 0.377 & 0.500 & 0.0774 & 0.109 & 0.131\\ \hline $\omega_2$ & 1.30 & 1.38 & 1.41 & 0.762 & 0.781 & 0.787\\ \hline $\eta$ & 0.0477 & 0.0744 & 0.101 & 0.0177 & 0.0238 & 0.0284\\ \hline \end{tabular} \end{table}
\label{sec:pade-borel}
\section*{Pad\'e-Borel}
A simpler way to construct an analytical continuation of the Borel image is to build Pad\'e approximants that reproduce the initial terms of the series expansion of $B(t)$.
\begin{equation} f^{\mathrm{res}}_{[N, M]}(z) = \int^{\infty}_{0}\!\! dt\ e^{-t} t^{b_{0}} B_{[N,M]}(zt); \quad B_{[N, M]}(t) = \frac{P_{N}(t)}{Q_{M}(t)} = \frac{\sum_{i=0}^{N} \alpha_{i} t^{i}}{\sum_{j=0}^{M} \beta_{j}t^{j}}\,. \end{equation} Values of the critical exponents corresponding to the IR-stable fixed point at $d=3$ obtained in this way are presented in Tables~\ref{tabeps}--\ref{tabeps4}.
\begin{table}[H] \caption{\label{tabeps}Coordinate $g^{*}_{1}$.} \centering \begin{tabular}{|c||c c c c c c } \hline \backslashbox{N}{M} & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline 1 & 0.818 & 2.93 & 0.772 & 1.44 & 0.471 & 0.0260 \\ \cline{1-1} 2 & 1.28 & 1.12 & 1.23 & 1.12 & 1.32& \\ \cline{1-1} 3 & 1.03 & 1.19 & 1.17 & 1.18 & & \\ \cline{1-1} 4 & 1.51 & 1.16 & 1.18 & & & \\ \cline{1-1} 5 & 0.222 & 1.20 & & & &\\ \cline{1-1} 6 & 4.48 & & & & & \\ \cline{1-1} \end{tabular} \end{table}
\begin{table}[H] \caption{\label{tabeps2}Eigenvalue $\omega_{1}$.} \centering \begin{tabular}{|c||c c c c c c } \hline \backslashbox{N}{M} & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline 1 & $-0.0909$ & $-0.0217$ & $-0.00783$ & $-0.00313$ & $-0.00143$ & $-0.00072$ \\ \cline{1-1} 2 & 0.221 & 0.0912 & 0.0337 & 0.0530 & $-0.0377$ & \\ \cline{1-1} 3 & $-0.00926$ & 0.153 & 0.156 & 0.210 & & \\ \cline{1-1} 4 & 0.578 & 0.156 & 0.153 & & & \\ \cline{1-1} 5 & $-1.00$ & 0.199 & & & &\\ \cline{1-1} 6 & 4.28 & & & & & \\ \cline{1-1} \end{tabular} \end{table}
\begin{table}[H] \caption{\label{tabeps3}Eigenvalue $\omega_{2}$.} \centering \begin{tabular}{|c||c c c c c c } \hline \backslashbox{N}{M} & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline 1 & 1.00 & 0.645 & 1.18 & 0.434 & $-0.129$ & 0.201 \\ \cline{1-1} 2 & 0.430 & 0.817 & 0.776 & 0.818 & 0.755 & \\ \cline{1-1} 3 & 1.71 & 0.769 & 0.797 & 0.791 & & \\ \cline{1-1} 4 & $-2.07$ & 0.831 & 0.790 & & & \\ \cline{1-1} 5 & 11.1 & 0.708 & & & &\\ \cline{1-1} 6 & $-41.1$ & & & & & \\ \cline{1-1} \end{tabular} \end{table}
\begin{table}[H] \caption{\label{tabeps4}Exponent $\eta$.} \centering \begin{tabular}{|c||c c c c c } \hline \backslashbox{N}{M} & 0 & 1 & 2 & 3 & 4 \\ \hline 2 & 0.0206 & 0.0802 & 0.0170 & $-0.00749$ & 0.00832 \\ \cline{1-1} 3 & 0.0391 & 0.0339 & 0.0429 & 0.0339 & \\ \cline{1-1} 4 & 0.0316 & 0.0370 & 0.0372 & & \\ \cline{1-1} 5 & 0.0520 & 0.0372 & & &\\ \cline{1-1} 6 & $-0.00503$ & & & & \\ \cline{1-1} \end{tabular} \end{table}
\label{sec:Proximity}
\section*{Proximity of the resummed series to the exact results}
If we construct the analytical continuation of $B(t)$ from the first $l$ known coefficients, we can expand it back in powers of $t$ to find that the expansion not only reproduces the first $l$ coefficients but also adds a sub-series that we are effectively summing up: \begin{equation} B_{\mathrm{continued}}(t) = \sum_{N \leq l} B_{N} t^{N} + \sum_{N > l} B^{r}_{N} t^{N}. \end{equation} In order to estimate how close this reconstructed sub-series is to the unknown exact coefficients, one can try to reconstruct the last known coefficient $B_{l}$ taking into account fewer of the known contributions, and then estimate the proximity to the exact answer and the convergence rate by calculating the relative discrepancy from the exact value: \begin{equation} \xi_{l} = \frac{f_{l} - f^{r}_{l}}{f_{l}}\,. \end{equation} The estimates of the value of $\xi_{6}$ for the $\varepsilon$-expansions at fixed point C obtained from the conformal mapping are presented in Table~\ref{predictA}, and those for the Pad\'e approximation in Tables~\ref{tabeps5}--\ref{tabeps6}.
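As a concrete illustration of the resummation machinery described in the last two sections, the following minimal sketch (ours; applied here to a toy series rather than to the actual $\varepsilon$-expansions) implements the Borel-Leroy transform, the conformal re-expansion based on the mapping (\ref{mapping}), and the numerical inverse transform:
\begin{verbatim}
import numpy as np
from math import gamma, comb
from scipy.integrate import quad

def conformal_borel(f_coeffs, a, b0=1.5):
    # return a callable z -> f_res(z) for the truncated series
    L = len(f_coeffs)
    B = [fN / gamma(N + b0 + 1.0) for N, fN in enumerate(f_coeffs)]
    # re-expand B(t) in powers of u, using
    # t(u)^N = (4/a)^N u^N (1-u)^(-2N)
    C = np.zeros(L)
    for N, BN in enumerate(B):
        if N == 0:
            C[0] += BN
            continue
        for k in range(L - N):
            C[N + k] += BN * (4.0 / a)**N * comb(2*N + k - 1, k)

    def B_cont(t):  # analytically continued Borel image
        u = (np.sqrt(1.0 + a*t) - 1.0) / (np.sqrt(1.0 + a*t) + 1.0)
        return sum(CN * u**N for N, CN in enumerate(C))

    def f_res(z):
        val, _ = quad(lambda t: np.exp(-t) * t**b0 * B_cont(z*t),
                      0.0, np.inf, limit=200)
        return val

    return f_res

# toy check on the Euler series f_N = (-1)^N N!, for which a = 1
# and the exact Borel sum at z = 1 is e*E_1(1) = 0.596347...
f6 = [(-1)**N * gamma(N + 1.0) for N in range(7)]
print(conformal_borel(f6, a=1.0)(1.0))
\end{verbatim}
In the actual calculation the parameter $a$ is taken from the HOA given in the Introduction and the procedure is applied to each $\varepsilon$-series separately; the Pad\'e-Borel variant simply replaces the conformal re-expansion by a rational approximant of $B(t)$.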
\begin{table}[H] \caption{\label{predictA}$\xi_{6}$ values for the $\varepsilon$-expansions at the IR-attractive fixed point.} \centering \begin{tabular}{|c|c|c|c|} \hline quantity & 3 loops & 4 loops & 5 loops \\ \hline $g^{*}_{1}$ & $-41.9$ & 18.2 & $-3.15$ \\ \hline $\omega_{1}$ & $-9.47$ & 5.93 & $-1.47$ \\ \hline $\omega_{2}$ & 0.996 & 0.424 & 0.0389 \\ \hline $\eta$ & 100 & $-80.7$ & 22.7 \\ \hline \end{tabular} \end{table}
\begin{table}[H] \caption{\label{tabeps5}$\xi_{6}$ values for the $\varepsilon$-expansions of $g^{*}_{1}$ (left) and $\eta$ (right) at the IR-attractive fixed point.} \begin{tabular}{|c||c c c c c } \hline \backslashbox{N}{M} & 1 & 2 & 3 & 4 \\ \hline 1 & 0.973 & 0.889 & 0.998 & 1.97 \\ \cline{1-1} 2 & 0.983 & 0.700 & 0.270 & \\ \cline{1-1} 3 & 0.500 & 0.113 & & \\ \cline{1-1} 4 & 0.136 & & & \\ \cline{1-1} \end{tabular} \quad \begin{tabular}{|c||c c c c } \hline \backslashbox{N}{M} & 1 & 2 & 3 \\ \hline 2 & 1.38 & 0.525 & 2.74 \\ \cline{1-1} 3 & 0.973 & 0.469 & \\ \cline{1-1} 4 & $-0.0563$ & & \\ \cline{1-1} \multicolumn{1}{c}{} & & & \end{tabular} \end{table}
\begin{table}[H] \caption{\label{tabeps6}$\xi_{6}$ values for the $\varepsilon$-expansions of $\omega_{2}$ at the IR-attractive fixed point.} \centering \begin{tabular}{|c||c c c c c } \hline \backslashbox{N}{M} & 1 & 2 & 3 & 4 \\ \hline 1 & 0.997 & 0.915 & 0.708 & 0.495 \\ \cline{1-1} 2 & 0.550 & 0.194 & 0.0432 & \\ \cline{1-1} 3 & 0.212 & 0.0197 & & \\ \cline{1-1} 4 & 0.0550 & & & \\ \cline{1-1} \end{tabular} \end{table}
\label{sec:large eps}
\section*{Large $\varepsilon$ behaviour}
Since the mapping function (\ref{mapping}) tends to unity at large values of $t$, it is possible to modify the conformal analytical continuation in a way that allows one to control not only the HOA but also the large-$z$ behaviour of the resummed series: \begin{equation} \widetilde{B}(u(t)) = \bigg[\frac{t}{u(t)}\bigg]^{\nu} \sum_{N \leq l} B_{N} u(t)^{N}. \end{equation} It is shown in \cite{K} that if the exact series has a power-like asymptotic, then setting the parameter $\nu$ close to its actual value speeds up the rate of order-by-order convergence. Moreover, in the case of the scalar $\phi^{4}$ model, the relative discrepancy $\xi_{l}$ tends to have a minimal absolute value at the actual value of $\nu$. However, from Figure~\ref{fig1} it can be seen that in our case there seems to be no universal value of $\nu$, even for each $\varepsilon$-series taken separately, in whose vicinity $\xi_{6}$ minimizes its absolute value.
\begin{figure}[H] \centering \begin{subfigure}[t]{65mm} \includegraphics[width=65mm]{omega1.pdf} \caption{} \end{subfigure} \hfill \begin{subfigure}[t]{65mm} \includegraphics[width=65mm]{omega2.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{65mm} \includegraphics[width=65mm]{g1.pdf} \caption{} \end{subfigure} \hfill \begin{subfigure}[t]{65mm} \includegraphics[width=65mm]{eta.pdf} \caption{} \end{subfigure} \caption{\label{fig1} Dependence of the relative discrepancy $\xi_{6}$ on the value of $\nu$ for $\omega_{1}$ (a), $\omega_{2}$ (b), $g^{*}_{1}$ (c), $\eta$ (d) at the IR-stable fixed point, with different numbers of loops taken into account.} \end{figure}
\label{sec:beta res}
\section*{Resummation of $\beta$-functions}
Besides the resummation of $\varepsilon$-expansions, we can also directly resum the series for the $\beta$-functions in powers of the couplings.
For that purpose we rescale the couplings: \begin{equation} \beta(g_{1}, g_{2}) = \sum_{i,j} \beta_{i,j} \ g_{1}^{i} g_{2}^{j} \quad \Rightarrow \quad \beta(z) = \beta(z g_{1}, z g_{2}) = \sum_{i} \beta_{i}(g_{1}, g_{2})z^{i} \end{equation} so that $\beta(z)|_{z=1}=\beta(g_{1}, g_{2})$, and then resum the series in powers of $z$ with coefficients depending on the couplings: \begin{equation} B(t) = \sum_{N} \frac{\beta_{N}(g_{1}, g_{2})}{\Gamma(N+b_{0}+1)}\ t^{N} = \sum_{N} B_{N}(g_{1}, g_{2}) t^{N}. \end{equation} At each point of the coupling plane the most relevant instanton should be used, so that the value of the parameter $a$ in (\ref{mapping}) now depends on the particular coordinates in the plane of invariant couplings. The resummed $\beta$-functions are given by: \begin{equation} \beta^{\mathrm{res}}(g_{1}, g_{2}) = \beta^{\mathrm{res}}(z=1) = \int^{\infty}_{0} dt\ e^{-t} t^{b_{0}} B(t). \end{equation} Finally, we solve numerically the system of equations for the invariant couplings: \begin{equation} s\partial_{s}\bar g(s,g) = \beta^{\mathrm{res}}_{g} (\bar g), \quad \bar g(1,g) = g, \quad s = p/\mu. \end{equation}
\begin{figure}[H] \begin{subfigure}[t]{65mm} \includegraphics[width=70mm]{n4d3.png} \caption{$d=3$} \end{subfigure} \hfill \begin{subfigure}[t]{65mm} \includegraphics[width=70mm]{n4d2.png} \caption{$d=2$} \end{subfigure} \caption{Resummed RG flows in the plane of invariant couplings. The gray area is the unphysical region according to (\ref{ineq}); the lower dashed line separates the region where the instanton solution governing the asymptotics of the perturbative series formally breaks down, making the resummation procedure meaningless. The black dots mark the fixed-point positions according to Table~\ref{confpc}.}\label{fig2} \end{figure}
As can be seen from Fig.~\ref{fig2}(a), at $d=3$ we have only qualitative agreement between the fixed-point positions obtained by resumming the $\varepsilon$-expansions and the positions of the roots of the directly resummed $\beta$-functions. At the same time, Fig.~\ref{fig2}(b) shows that at $d=2$ the situation seems to be much worse, because the resummed $\beta$-functions lack the two nontrivial fixed points altogether. The latter indicates that at $d=2$ six orders of perturbation theory may not be sufficient to draw even consistent qualitative conclusions about the asymptotic regimes of the model under consideration based on resummation techniques.
\label{sec:Conclusions}
\section*{Conclusions}
We have obtained the coordinates of the fixed points and established their IR stability properties at the 6-loop level by resumming the corresponding $\varepsilon$-expansions using two different procedures based on the Borel-Leroy transform. For the point that appeared to be IR attractive we also calculated the anomalous dimension of the pair correlation function. These results are in qualitative agreement with each other, but numerically show discrepancies already at the second significant digit. We obtained estimates of how close the first unknown coefficients of the resummed series are to their exact values. The best predictions are obtained for the $\omega_{2}$ eigenvalue, whose original $\varepsilon$-series is the most divergent, while the highest discrepancy is obtained for the $\eta$ series, which still shows apparent convergence even at the 6-loop level. We have shown that accounting for a possible power-like asymptotic of the $\varepsilon$-expansions does not help to optimize the resummation procedure.
Finally, we have resummed the expansions for the $\beta$-functions directly and shown that at $d=3$ they qualitatively agree with the results of the $\varepsilon$-expansion resummation, while at $d=2$ there is not even qualitative agreement. The precise reason for such a discrepancy could be the subject of a separate study.
\label{sec:Acknowledgments}
\section*{Acknowledgments}
The author is grateful to A.F.~Pikelner and G.A.~Kalagov for useful discussions and comments, and to the FFK2021 Organizing Committee for support and hospitality. The reported study was funded by the Russian Foundation for Basic Research, project number 20-32-70139.
\section{Introduction}
\label{sec:introduction}
The advent of \ac{DL} made it possible to raise the accuracy bar of \ac{ML} models for countless tasks and domains. Riding the wave of enthusiasm around such stunning results, \ac{DL} models have been deployed even in high-stakes decision-making environments, not without criticism~\cite{Rudin2019-bj}. These kinds of environments require not only high predictive accuracy but also an \emph{explanation} of why a given prediction was made. The need for explanations initiated the discussion around the explainability of \ac{DL} models, which are known to be ``black boxes''. In other words, their inner workings are hard for humans to understand. Who should be accountable for a model-based decision and how a model came to a certain prediction are just some of the questions that drive research on explaining \ac{ML} models. With the first attempts of the legislative machinery to make explanations for automatic decisions a user’s right~\cite{Goodman2017-iw}, the pressure to generate explanations for \ac{ML} models' behavior rose even further. Despite the endeavor of the \ac{XAI} community to develop both models that are explainable by design~\cite{Chen2018-eq,Zhang2017-by,Hou2020-zf} and methods to explain existing black-box models~\cite{Ribeiro2016-uy,Lundberg2017-ar,Bahdanau2014-fv}, the road to \ac{DL} explainability is paved with results that are mostly preliminary and anecdotal in nature (\textit{e.g.},~\cite{Jain2019-oe,Wiegreffe2019-ht,Serrano2019-tm}). Most notably, it is hard to relate different pieces of research due to a lack of common theoretical grounds capable of supporting and guiding the discussion. In particular, we detect a gap in the literature on foundational issues such as a shared definition of the term ``explanation'' and the users' role in the design and deployment of explainability for complex \ac{ML} models. The XAI community suffers from a paucity of common terminology, with only a few attempts at establishing one, focusing more on the distinction among the terms ``interpretable'', ``explainable'', and ``transparent'' than on the inner structure and meaning of an explanation (\textit{e.g.}, \cite{Graziani,clinciu-hastie-2019-survey,murdoch_definitions_2019}). Similarly, the lack of an outline of the main theoretical components of the discussion around explainability disperses research efforts, while the current literature finds it hard to provide the involved stakeholders with principled analytical tools to operate on black-box models. In this work, we propose a simple but effective theoretical framework that outlines the core components of the explainability machinery and lays out the grounds for a more coherent debate on how to explain the decisions of \ac{ML} models. Such a framework is not meant to be set in stone but rather to be used as a common reference among researchers and iteratively improved to fit more and more sophisticated explainability methods and strategies. Thus, we hope to provide shared jargon and formal definitions to inform and standardize the discussion around crucial topics of \ac{XAI}. The core of the proposed theoretical framework is a novel definition of explanation, which draws from the existing literature in sociology and philosophy but, at the same time, is easy to operationalize when analyzing a specific approach to explaining the predictions made by a model. We conceive an explanation as the interaction of two decoupled components, namely \emph{evidence} and its \emph{interpretation}.
Evidence is any sort of information stemming from a \ac{ML} model, while an interpretation is some semantic meaning that human stakeholders attribute to the evidence to make sense of the model’s inner workings. We relate these definitions to crucial properties of explanations, especially \emph{faithfulness} and \emph{plausibility}. Jacovi \& Goldberg define faithfulness as ``the accurate representation of the causal chain of decision-making processes in a model''~\cite{Jacovi2020-ec}. We argue that faithfulness relates in different ways to the elements of the proposed theoretical framework; namely, it ensures that the interpretation of the evidence is true to how the model actually uses that evidence within its inner reasoning. A property orthogonal to faithfulness is plausibility, namely ``the degree to which some explanation is aligned with the user’s understanding of the model's decision process'' \cite{Jacovi2020-ec}. A follow-up work by Jacovi \& Goldberg addresses plausibility as the ``property of an explanation of being convincing towards the model prediction, regardless of whether the model was correct or whether the interpretation is faithful''~\cite{Jacovi2021-pi}. We relate plausibility to faithfulness, highlighting the need for faithfulness to be embedded in explainability methods and strategies, and arguing that plausibility is an important (yet not indispensable) property of the same. As case studies, we zoom in on the evaluation of the faithfulness of some popular \ac{DL} explanation tools and strategies, such as ``attention''~\cite{Bahdanau2014-fv,Vaswani2017-kq}, \ac{Grad-CAM}~\cite{Selvaraju2016-hw}, and \ac{SHAP}~\cite{Lundberg2017-ar}. In addition, we look at the faithfulness of models traditionally considered intrinsically interpretable (a notion we distance ourselves from), such as \emph{linear regressors} and models based on \emph{fuzzy logic}.
\section{Designing Explainability}
\label{sec:designing-explainability}
Research in \ac{XAI} tackles the problem of explaining models for decision-making from multiple perspectives. First of all, we observe that most of the existing literature uses the terms ``interpretable'' and ``explainable'' interchangeably, while some have highlighted the semantic nuance that distinguishes the two words~\cite{Mittelstadt2019-jk}. We argue that the term \emph{explainable} (and, by extension, \emph{explainability}) is better suited than the term \emph{interpretable} (and, by extension, \emph{interpretability}) to describe the property of a model for which effort is made to generate human-understandable clarifications of its decision-making process. The definition of \emph{explanation} is thus crucial and will be discussed extensively in \autoref{sec:defining-explanations}. Our claim follows two rationales: \emph{(i)} the term \emph{interpretation} is used within our proposed framework with a precise meaning that deviates from the current literature and that we deem more accurate (see \autoref{sec:interpretation}); \emph{(ii)} we argue against grouping models into \emph{inherently interpretable} and \emph{post-hoc explainable}. Recently, Molnar has defined ``intrinsic interpretability'' as a property of \ac{ML} models that are considered fully understandable due to their simple structure (\textit{e.g.}, short decision trees or sparse linear models), and ``post hoc explainability'' as the need to apply interpretation methods to some models after training~\cite{Molnar2022-kh}.
Although this distinction is principled, we drop it, claiming instead that all models embed a certain degree of explainability. Even though, to the best of our knowledge, no metric can quantify explainability yet, we can assert that it depends on multiple factors. In particular, a model is only as explainable as the explanations proposed to the user to justify a certain prediction are effective. Thus, bringing the human into the explainability design loop is key to deploying models that are actually explainable. Consequently, there are models for which it is easier to design explanations (\textit{i.e.}, the so-called \emph{white-box} models, \textit{e.g.}, linear regression, decision trees, rule-based systems, etc.) and models for which the same process is more difficult (\textit{i.e.}, the so-called \emph{black-box} models, \textit{e.g.}, artificial neural networks). The notion of difficulty here is defined by the inner complexity of the model, which relates to the amount of cognitive load the user can sustain. We highlight that the degree of explainability moves along a gradient from black-box to white-box models, without clear-cut thresholds. Nevertheless, in \autoref{sec:case-studies}, we show that explanations for both white-box and black-box models fit our proposed framework. Thus, both can be structured homogeneously and understood more deeply by leveraging theoretical tools. Most importantly, we advocate for explainability design as a crucial part of \ac{AI} software development. We endorse Chazette et al.\ in claiming that explainability should be considered a non-functional requirement in the software design process~\cite{Chazette2020-zz}. Thus, explanations for any \ac{ML} model (and, especially, for \ac{DL} models) should be accounted for within the initial design of an \ac{AI}-powered application. Even the most accurate black-box model should not be deployed without an explanation mechanism backing it up, as we cannot be sure whether it learned to discriminate based on meaningful or spurious features. A classic example is a dog image classification model learning to detect huskies because of the snowy setting instead of the features of the animal itself, involuntarily deceiving the users~\cite{Ribeiro2016-uy}. A design-oriented approach to \ac{AI} development should involve bringing humans into the loop, thus fostering human-centered \ac{AI}, which is more intelligible by design and is expected to increase end-users' trust~\cite{Li2022-xt}.
\section{Characterising the Inference Process of a Machine Learning Model}
In this section, we provide a formal characterization of the inference process of a general \ac{ML} model, without any constraint on the task. Such a characterization will be used to introduce the terminology which substantiates the main components of our proposed framework of explainability, whose details are provided in \autoref{sec:defining-explanations}. To this end, we define a \ac{ML} model $M$ as an arbitrarily complex function mapping a \emph{model input} to a \emph{model conclusion} through a sequential composition of \emph{transformation steps}. The whole characterization is exemplified in~\autoref{fig:transformation-functions}.
\subsection{Elements of the Characterization}
\noindent \textbf{Model Input.} The model input consists of a set of features, either coming from an observation or synthetically generated.
\noindent \textbf{Model Conclusion.} The model conclusion is the final output of the model, which is the outcome of the last link in the chain of transformations over the model input.
\noindent \textbf{Transformation steps.} Overall, the decision-making process of $M$ can be represented as a chain of $N > 0$ transformations of the original model input, which are causally related. This causal chain is enforced by model design (\textit{e.g.}, the sequence of layers in the architecture of a neural network or the depth of a decision tree). We call each stage of this causal chain a ``transformation step'', and we denote it with $s_i$, for $i\in [1, N]$. The transformation steps advance the computation from the model input to the model output through \emph{transformation functions}.
\noindent \textbf{Transformation functions.} Each transformation step $s_i$ relates to a set of $n_i$ ``transformation functions'' $f_{i,m_i}$, where $m_i\in[1,n_i]$ indicates one of the possible learnable functions at $s_i$. Note that, in general, the number of such functions would be infinite, but we discretize it by assuming that we are working in a real scenario using some computational machine. The transformation functions are mappings from a feature set $x_{i-1,j}$ to a feature set $x_{i,z}$, with $j\in [1, k_{i-1}]$, $z\in [1, k_i]$ (\textit{i.e.}, the arrows enclosed in the ellipses in \autoref{fig:transformation-functions}). The number $k_i$ denotes the cardinality of the set of all possible feature sets generated by all possible learnable transformation functions at step $s_i$. These transformation functions are generally opaque to the user in the context of the so-called black-box models. At every step in the chain of transformation steps, the model learns one of the possible transformation functions (\textit{i.e.}, the optimal function according to some learning scheme, highlighted with a solid line in \autoref{fig:transformation-functions}). That is, the model learns the function $\hat{f}_{i,m_i}$ such that $\hat{f}=\hat{f}_{N,m_N} \circ \ldots \circ \hat{f}_{i,m_i} \circ \ldots \circ \hat{f}_{1,m_1}$ is the overall approximation of the true mapping from the model input to the model conclusion. According to the notation above, we denote the model input as $x_{0,0}$ (or simply $x$) and the model conclusion as $\hat{y}_{N,j}$, with $j\in [1, k_N]$ (or simply $\hat{y}$).
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{img/transformation-functions.png} \caption{Example of transformation functions for two steps $s_i$.} \label{fig:transformation-functions} \end{figure}
\subsection{Observations}
\noindent We asserted that, at each transformation step $s_i$, the model picks one function $\hat{f}_{i, m_i}$ among $n_i$ such that $\hat{f}_{i, m_i}(x_{i-1,j})=x_{i,z}$. This raises issues that increase model opacity. At step $s_i$ the chosen function $\hat{f}_{i, m_i}$ can map different intermediate transformations $x_{i-1,j}$ of the feature set at the previous transformation step into the same transformation $x_{i,z}$ one step further in the chain. This means that the same outcome in the transformation chain, be it intermediate or conclusive, can be achieved through different rationales, and it could be difficult for a human user to understand which of them is the one the model has actually learned. This can be a result of a high dimensionality of the set of transformation functions, as well as a high complexity of the transformed feature set.
For example, pictures of zebras and salmon can be discriminated on the basis of either their anatomy (\textit{e.g.}, zebras have stripes, salmon have gills) or their environment/habitat (\textit{e.g.}, zebras live in savannas and salmon in rivers). If we consider a relatively complex model such as a \ac{CNN}, where a transformation step coincides with a layer within the network architecture, it is generally difficult to understand which kind of transformation $f_{i,m_i}$ each layer represents, if any that is human-understandable. Thus, how do we make sense of which of the $n_i$ possible alternative mappings of $x_{i-1,j}$ led to $x_{i,z}$? This remains an open question, with major implications for the discussion around faithfulness, which we will expand on in \autoref{sec:faithfulness-vs-plausibility}.
\section{Defining explanations}
\label{sec:defining-explanations}
Recent work on \ac{ML} interpretability produced multiple definitions for the term ``explanation''. According to Lipton, ``explanation refers to numerous ways of exchanging information about a phenomenon, in this case, the functionality of a model or the rationale and criteria for a decision, to different stakeholders''~\cite{Lipton2016-ba}. Similarly, for Guidotti et al., ``an explanation is an `interface' between humans and a decision-maker that is at the same time both an accurate proxy of the decision-maker and comprehensible to humans''~\cite{Guidotti2018-ti}. Murdoch et al. add to how the explanation is delivered to the user, stating that ``an explanation is some relevant knowledge extracted from a machine-learning model concerning relationships either contained in data or learned by the model. [...] They can be produced in formats such as visualizations, natural language, or mathematical equations, depending on the context and audience''~\cite{Murdoch2019-wk}. On a more general note, Mueller et al. state that ``the property of `being an explanation' is not a property of the text, statements, narratives, diagrams, or other forms of material. It is an interaction of (i) the offered explanation, (ii) the learner's knowledge and beliefs, (iii) the context or situation and its immediate demands, and (iv) the learner's goals or purposes in that context''~\cite{Mueller2019-jr}. Finally, Miller tackles the challenge of defining explanations from a sociological perspective. The author highlights a wide taxonomy of explanations but focuses on those which are an answer to a ``why-question''~\cite{Miller2017-wj}.
The definitions mentioned above offer a well-rounded perspective on what constitutes an explanation. However, they fail to highlight its atomic components and to characterize their relationships. We synthesize our proposed definition of explanation based on complementary aspects of the existing definitions. The result is a concise definition that is easy to operationalize for supporting the analysis of multiple approaches to explainability. Our full proposed framework is reported in the scheme in \autoref{fig:framework}, whose components will be discussed in the following sections.
\begin{figure}[b]
    \centering
    \includegraphics[width=0.49\textwidth]{img/framework.png}
    \caption{Overview of the theoretical framework of explainability.}
    \label{fig:framework}
\end{figure}
\subsection{Explanation}
\label{sec:explanation}
Given a model $M$ which takes an input $x$ and returns a prediction $\hat{y}$, we define an \emph{explanation} as the output of an \emph{interpretation function} applied to some \emph{evidence}, providing the answer to a ``why question'' posed by the user.
\subsection{Evidence}
\label{sec:evidence}
\emph{Evidence} ($e$) is any kind of objective information stemming from the model that we wish to provide an explanation for and that can reveal insights into its inner workings and its rationale for a prediction (\textit{e.g.}, attention weights, model parameters, gradients, etc.).
\subsubsection{Evidence Extractor}
\label{sec:evidence-extractor}
An \emph{evidence extractor} ($\xi$) is a method fetching some relevant information about either $M$, $x$, $\hat{y}$, or a combination of the three. Then: $e = \xi(x, \hat{y}, M)$. Examples of evidence extractors are encoder-plus-attention layers, gradient back-propagation, and random-tree approximation, with the corresponding extracted evidence being attention weights, gradient values, and a random tree mimicking the original model, respectively. In the peculiar case of a white-box approach, that is, \ac{ML} models designed to be \emph{easily interpretable} by the user (\textit{e.g.}, linear regression, fuzzy rule-based systems), the extraction of evidence is straightforward, since all components of the model directly present a piece of semantic information in a human-comprehensible format.
\subsubsection{Explanatory Potential}
\label{sec:explanatory-potential}
We define the \emph{explanatory potential} ($\epsilon(e)$) of some evidence as the extent to which the evidence influences the causal chain of transformation steps of a model. Intuitively, the explanatory potential indicates ``how much'' of a model the selected type of evidence can explain. It can be computed either by counting how many transformation steps are impacted by the evidence (\textit{i.e.}, \textit{breadth}), or how much of each single transformation step is impacted by the evidence (\textit{i.e.}, \textit{depth}).
\subsection{Interpretation}
\label{sec:interpretation}
An \emph{interpretation} is a function $g$ associating semantic meaning to some evidence and mapping its instances into explanations, either for a given prediction or for the whole model. Then an explanation can be defined as either $E=g(e, x, \hat{y}, M)$ or $E=g(e, M)$, respectively.
\subsubsection{Local vs. Global Interpretations}
\label{sec:local-vs-global}
In accordance with the existing literature, we relate ``evidence'' and ``interpretation'' to the concepts of \textit{locality} and \textit{globality}. Both evidence and interpretations can either be local or global. Local evidence (\textit{e.g.}, attention weights, gradients, etc.) relates relevant model information to a particular model input $x$ and the corresponding prediction $\hat{y}$. Global evidence (\textit{e.g.}, the full set of model parameters) is generally independent of specific inputs and might explain higher-level functioning (providing deeper or wider information) of the model or some of its sub-components. Similarly, interpretations can provide either a local or a global semantic of the evidence. A local interpretation of attention could be, \textit{e.g.}, ``attention weights are descriptive of input components' importance to the model output''. On the other hand, a global interpretation of the same evidence may aggregate all the attention weights' heatmaps for a whole dataset and highlight specific patterns. For example, in a dog vs. cat classification problem, a global interpretation of attention may be represented by clusters of similar parts of the animal's body (\textit{e.g.}, groups of ears, tails, etc.) highlighted by the attention activations.
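To make the relationship among evidence, interpretation, and explanation concrete, the following minimal Python sketch renders these definitions as code. It is only an illustration of our terminology: all names (\texttt{Evidence}, \texttt{explain\_local}, etc.) are hypothetical and do not refer to any existing library.
\begin{verbatim}
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class Evidence:
    """Objective information stemming from the model,
    e.g., attention weights or gradient values."""
    payload: Any
    steps_covered: Sequence[int]  # indices of the steps s_i it touches

# An evidence extractor xi fetches evidence from (x, y_hat, M).
EvidenceExtractor = Callable[[Any, Any, Any], Evidence]

# An interpretation g attaches semantic meaning to evidence and
# maps it into a human-readable explanation.
Interpretation = Callable[..., str]

def explanatory_potential(e: Evidence, n_steps: int) -> float:
    """Breadth-based potential: fraction of the N transformation
    steps that the evidence impacts."""
    return len(set(e.steps_covered)) / n_steps

def explain_local(xi: EvidenceExtractor, g: Interpretation,
                  x: Any, y_hat: Any, model: Any) -> str:
    """E = g(e, x, y_hat, M): a local explanation."""
    e = xi(x, y_hat, model)
    return g(e, x, y_hat, model)

def explain_global(xi: EvidenceExtractor, g: Interpretation,
                   model: Any) -> str:
    """E = g(e, M): a global explanation."""
    e = xi(None, None, model)
    return g(e, model)
\end{verbatim}
Under this reading, a concrete method (\textit{e.g.}, an attention-based explainer) supplies its own $\xi$ and $g$, while the surrounding machinery, and the properties discussed next, stay the same.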
\subsubsection{Generating Interpretations}
\label{sec:generating-interpretations}
Given some evidence involved in one or more steps $s_i$ of $M$, we hypothesize how this evidence is involved in the opaque input-to-output transformations by formulating an interpretation $g$ of some extent of the decision-making process of the model. At a low level, we generate a candidate $g$ that encapsulates the approximations $f_{i,m_i}^* \approx \hat{f}_{i,m_i}$ of the behavior of certain functions learned by $M$ at some steps $s_i$. On an abstract level, interpretations can be seen as hypotheses about the role of evidence in the explanation-generation process. Like a good experimental hypothesis, a good interpretation satisfies two core properties: (i) it is testable, and (ii) it clearly defines its dependent and independent variables.
Interpretations can be formulated using different forms of reasoning (\textit{e.g.}, deductive, inductive, abductive, etc.). In particular, the survey on explanations and social sciences by Miller reports that people usually make assumptions (\textit{i.e.}, in our context, choose an interpretation) via social attribution of intent (to the evidence)~\cite{Miller2017-wj}. Social attribution is concerned with how people attribute or explain the behavior of others, and not with the real causes of the behavior. Social attribution is generally expressed through folk psychology, which is the attribution of intentional behavior using everyday terms such as beliefs, desires, intentions, emotions, and personality traits. Such concepts may not truly be the cause of the described behavior but are indeed those humans leverage to model and predict each other's behaviors. This may generate misalignment between a hypothesized interpretation of some evidence and its actual role within the inference process of the model. In other words, reasoning on evidence through folk psychology might generate interpretations that are \emph{plausible} but not necessarily \emph{faithful} to the inference process of the model (such terms will be further explored in \autoref{sec:faithfulness-vs-plausibility}).
\subsection{Explanation Interface}
\label{sec:xui}
Explanations are meant to be delivered to some target users. We define the \ac{XUI} as the format in which some explanation is presented to the end user. This could be, for example, in the form of text, plots, infographics, etc. We argue that an \ac{XUI} is characterized by three main properties: (i) human understandability, (ii) informativeness, and (iii) completeness. The \emph{human understandability} is the degree to which users can understand the answer to their ``why'' question via the \ac{XUI}. This property depends on user cognition, bias, expertise, goals, etc., and is influenced by the complexity of the selected interpretation function. The \emph{informativeness} (\textit{i.e.}, depth) of an explanation is a measure of the effectiveness of an \ac{XUI} in answering the why-question posed by the user; that is, the depth of the information provided for the steps $s_i$ of greatest interest in the \ac{XUI}. The \emph{completeness} (\textit{i.e.}, width) of an explanation is the accuracy of an \ac{XUI} in describing the overall model's workings, and the degree to which it allows for anticipating predictions; that is, the width in terms of the number of steps $s_i$ the \ac{XUI} spans.
Note that both informativeness and completeness are bound by the explanatory potential of the evidence (\textit{e.g.}, attention weights do not explain the full model, just some transformation steps, while the full set of model parameters does).
\section{Concerning Faithfulness and Plausibility}
\label{sec:faithfulness-vs-plausibility}
In \autoref{sec:generating-interpretations}, we observed that social attribution is a double-edged sword for the interpretation-generation process, as it may propel plausibility without accounting for faithfulness. This issue was highlighted by Jacovi \& Goldberg, who introduced a property of explanations called \emph{aligned faithfulness}~\cite{Jacovi2021-pi}. In the words of the authors, an explanation satisfies this property if ``it is faithful and aligned to the social attribution of the intent behind the causal chain of decision-making processes''. Our proposed framework allows us to go a step further in the characterization of this property. We note that the property of aligned faithfulness pertains only to interpretations, not evidence. The latter has no inherent meaning by itself; its semantics is defined by some interpretation that may or may not involve the social attribution of intent to the causal chain of inference processes.
\begin{figure}[t]
    \centering
    \includegraphics[width=0.45\textwidth]{img/plausibility-vs-faithfulness.png}
    \caption{Overview of the outcome on the user of the interaction between faithfulness and plausibility.}
    \label{fig:plausibility-vs-faithfulness}
\end{figure}
\subsection{Faithfulness}
\label{sec:faithfulness}
Given an interpretation function $g$ describing some transformation steps $s_i$ within a model $M$'s inference process, we want to be able to prove that $g$ is faithful (at least to some extent) to the actual transformations made by $M$ on an input $x$ to get a prediction $\hat{y}$. Namely, we define the \emph{faithfulness of an interpretation}, $\phi_i(g, e)$, as the extent to which an interpretation $g$ accurately describes the behavior of the transformation function $f_{i,m_i}$ that the model learned in order to map an output $x_{i-1,j}$ at $s_{i-1}$ into $x_{i,z}$ at $s_i$, making use of some instance of evidence $e$. Given some evidence $e$ and its interpretation function $g$, we say that a related explanation is faithful to some transformation steps if the following conditions hold: (i) the evidence $e$ has explanatory potential $\epsilon_i>0$, and (ii) the interpretation $g$ has faithfulness $\phi_i>0$. Then we can define the \emph{faithfulness of an explanation} ($\Phi$) as a function of the faithfulness of the interpretation at each step involved and the related explanatory potential. For example, we could define $\Phi = \sum_{i \in I} \epsilon_i \phi_i$, where $I \subseteq [1, N]$ is the set of indices of the transformation steps $s_i$ that involve the evidence $e$. Thus, the faithfulness of an explanation is the sum of the faithfulness scores of its components, \textit{i.e.}, the faithfulness of the interpretations of the evidence involved in the generation of the explanation. Moreover, the related explanatory potential weights the faithfulness of each interpretation, following the intuition that evidence with higher $\epsilon$ should have a larger impact on the overall faithfulness score of the explanation.
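As a toy example (with made-up numbers, purely for illustration), suppose some evidence $e$ impacts two transformation steps, with per-step explanatory potentials $\epsilon_1 = 0.3$ and $\epsilon_2 = 0.1$, and that its interpretation is assessed to have faithfulness $\phi_1 = 0.8$ and $\phi_2 = 0.5$ on those steps. Then
\begin{equation*}
    \Phi = \epsilon_1 \phi_1 + \epsilon_2 \phi_2 = 0.3 \cdot 0.8 + 0.1 \cdot 0.5 = 0.29,
\end{equation*}
a rather low overall score, dominated by the step on which the evidence has the larger explanatory potential.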
We can have various measures of faithfulness that are associated with different explanation types, in the same way as we have different metrics to evaluate the ability of an \ac{ML} model to complete a task. Thus, $\phi_i$ is implicitly bounded by the chosen measure. When designing a faithful explanatory method, we can opt for two approaches. We can achieve faithfulness ``structurally'', by enforcing this property on pre-selected interpretations in model design (\textit{e.g.}, imposing constraints on transformation steps that limit the range of learnable functions). This direction has been recently explored by Jain et al.~\cite{Jain2020-rj} and Jacovi \& Goldberg~\cite{Jacovi2021-pi}. An alternative, naive strategy is trial and error: formulating interpretations and assessing their faithfulness via formal proofs or requirements-based testing on proxy tasks. While formal proofs are still missing in the current literature, a number of tests for faithfulness have been recently proposed~\cite{Adebayo2018-yp,Jain2019-oe,Wiegreffe2019-ht,Serrano2019-tm}.
\subsection{Plausibility}
\label{sec:plausibility}
The combined value of the three above-mentioned properties of \acp{XUI} (\textit{i.e.}, human understandability, informativeness, and completeness) drives the plausibility of an explanation. More specifically, we define \emph{plausibility} as the degree to which an explanation is aligned with the user's understanding of the model's partial or overall inner workings. Plausibility is a user-dependent property and, as such, it is subject to the user's knowledge, biases, etc. Unlike faithfulness, the plausibility of explanations can be assessed via user studies. Note that a plausible explanation is not necessarily faithful, just like a faithful explanation is not necessarily plausible. It is desirable for both properties to be satisfied in the design of some explanation. Interestingly, an unfaithful but plausible explanation may deceive a user into believing that a model behaves according to a rationale when it is actually not the case. This raises ethical concerns around the possibility that poorly designed explanations could spread inaccurate or false knowledge among the end-users. \autoref{fig:plausibility-vs-faithfulness} provides a simplified overview of the problem.
\section{Case Studies}
\label{sec:case-studies}
\subsection{Attention}
\label{sec:attention}
The introduction of attention mechanisms has been one of the most notable breakthroughs in \ac{DL} research in recent years. Originally proposed for empowering neural machine translation tasks~\cite{Bahdanau2014-fv}, it is currently employed in many state-of-the-art approaches for numerous cognitive tasks. The chain of transformations in the simplest neural model making use of self-attention is a three-step causal process: (i) encoding, (ii) weighting of the encodings by attention scores, and (iii) decoding into the model output. Then we can define the function learned by the model as the composition $\hat{f} = f_{3, m_3} \circ f_{2, m_2} \circ f_{1, m_1}$, where each $f_{i, m_i}$, for $i\in [1,3]$, corresponds to the respective transformation function in the causal chain.
\noindent \textbf{Evidence.} For an input $x$ split into $t$ sequentially related tokens, let $f_{1, m_1}$ be an encoder function such that $f_{1, m_1}(x)=\bar{X}$ is the vector of the encoded model input tokens. Then, $f_{2, m_2}(\bar{X})=\sum_{j=1}^t \alpha_j \bar{x}_j$, for all model input tokens $\bar{x}_j \in \bar{X}$, is the linear combination of the encodings weighted by their corresponding attention scores.
Then $e_{att} = \{\alpha_j\}_{j=1}^t$. That is, the evidence $e_{att}$ related to a model input is the set of weights $\alpha_j$ produced by the attention layer. The explanatory potential $\epsilon(e_{att})$ is the ratio between the number of parameters involved in the analyzed attention layer and the total number of parameters of the model.
\noindent \textbf{Interpretation.} The interpretation of the evidence is a function $g_{att}(e_{att}(x,\hat{y}))$ that describes the function $f_{3, m_3}$, \textit{i.e.}, how the weighted encodings are decoded into the model conclusion.
\noindent \textbf{Faithfulness.} Note that we do not know the faithful interpretation function, so we hypothesize its behavior by formulating a candidate interpretation, a process that is usually guided by the researcher's intuition. In the case of attention, an interpretation generally shared among researchers is that ``the value of each attention weight describes the importance of the corresponding token in the original input to the model output''. Unfortunately, albeit plausible, research in this field disproved such an interpretation of attention weights~\cite{Jain2019-oe,Wiegreffe2019-ht,Serrano2019-tm}, leaving the role of attention for explainability (if any) still unclear.
\subsection{Grad-CAM}
\label{sec:gradient}
A popular explanation method, \ac{Grad-CAM}~\cite{Selvaraju2016-hw}, explains a prediction made by an image classifier using the information encompassed in the back-propagated gradient of that prediction. In short, \ac{Grad-CAM} uses the gradient computed at the last convolutional layer of a \ac{CNN} for a given input $x$ to assign an importance score to each input feature.
\noindent \textbf{Evidence.} The \ac{Grad-CAM} evidence extractor $\xi_{grad}$ uses the feature activation maps of a convolutional layer for a given input $x$ to compute the neurons' importance weights $\alpha_{i}$. The explanatory potential $\epsilon(e_{grad})$ is related, as for the attention mechanism, to the number of parameters analyzed w.r.t.\ the total number of parameters of the model.
\noindent \textbf{Interpretation.} The authors of \ac{Grad-CAM} claim that the computed neuron importance weights $\alpha_i$ correspond to the parts of the input features that influence the final prediction the most.
\noindent \textbf{Faithfulness.} The authors measure the faithfulness of the explanation using image occlusion. That is, they patch some parts of the model input and measure the correlation with the resulting difference in the final output. With this faithfulness metric, a high correlation means a highly faithful explanation.
\subsection{SHAP}
\label{sec:shap}
In 2017, Lundberg \& Lee proposed \ac{SHAP}~\cite{Lundberg2017-ar}, a method to assign an importance value to each feature used by an opaque model $M$ to explain a single prediction $\hat{y}$. \ac{SHAP} has been presented as a generalization of other well-known explanation methods, such as \ac{LIME}~\cite{Ribeiro2016-wr}, DeepLIFT~\cite{Shrikumar2017-si}, Layer-wise Relevance Propagation~\cite{Bach2015-np}, and classic Shapley value estimation~\cite{Lundberg2017-ar}.
The \ac{SHAP} values are defined as:
\begin{equation}
    \label{eq:shap_equation}
    h\left(z^{\prime}\right)=\beta_0+\sum_{i=1}^M \beta_i z_i^{\prime}
\end{equation}
\noindent where $z^{\prime} \in \{0,1\}^M$ is a simplified version of the input $x$, $M$ is the number of features used in the explanation, and $\beta_i \in \mathbb{R}$ is a coefficient that represents the effect that the $i$-th feature has on the output.
\noindent \textbf{Evidence.} The only evidence $e_{shap} = \xi_{shap}(M, x)$ used by \ac{SHAP} is the set of predictions made by the classifier in a neighborhood of $x$. To compute the explanatory potential $\epsilon\left({e_{shap}}\right)$, we can use the ratio of the predictions employed to compute the \ac{SHAP} values w.r.t.\ the total number of possible samples in the countable (and possibly infinite) neighborhood of $x$. Thus, the greater the number of predictions we have, the higher the explanatory potential of the method.
\noindent \textbf{Interpretation.} The interpretation $g_{shap}$ of the evidence proposed by \ac{SHAP} is that, given $e_{shap}$, we can locally reproduce the behavior of a complex unknown model with a simple additive model $h\left( \cdot \right)$, and by analyzing $h\left( \cdot \right)$ we can get a local explanation $E_{shap}$ of the behavior of the initial model. That is, the proposed interpretation of the evidence results from fitting the additive model in~\autoref{eq:shap_equation}.
\noindent \textbf{Faithfulness.} Even though the authors do not directly present a measure of the faithfulness of the explanation, they provide three desirable properties, namely \emph{(i)} local accuracy, \emph{(ii)} missingness, and \emph{(iii)} consistency. The authors showed that their method is the only one that satisfies all these properties, assessing a requirements-based form of faithfulness as described in \autoref{sec:faithfulness-vs-plausibility}.
\subsection{Linear regression models}
\label{sec:linear_regression}
Linear regression models are not an explanation method but are normally considered \emph{intrinsically interpretable}. Following our proposed framework, we claim that defining them, among other models, as \emph{intrinsically interpretable} is inaccurate and often misleading. In fact, what is simple for humans to interpret is not well defined, and we can enumerate various examples of models that are easy for a practitioner to interpret but are almost black boxes for non-expert users. A linear regressor $\hat{f}_{lin}(\cdot)$ is typically formulated as:
\begin{equation}
    \label{eq:linear_reg}
    \hat{f}_{lin}(x)=\beta_0+\sum_{i=1}^N \beta_i x_i
\end{equation}
where $\beta_i$ are the learned feature weights and $N$ is the dimension of the feature space.
\noindent \textbf{Evidence.} The implicit assumption, when claiming that a linear model is intrinsically interpretable, is that the weights $\{\beta_i\}_{i=1}^{N}$ are a good explanation for the model. Thus $e_{lin} = \{\beta_i\}_{i=1}^N$. With a linear model, we have the maximum explanatory potential $\epsilon(e_{lin})$, because with $e_{lin}$ we can fully describe the model.
\noindent \textbf{Interpretation.} Assuming a normalization of the features, we can say that the higher the (absolute) value of $\beta_i$, the higher the contribution of the feature $x_i$ to the model prediction.
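As a concrete illustration of this interpretation, the following sketch fits a linear model on synthetic, normalized data and reads the coefficients as feature importances. It is a minimal example under the normalization assumption stated above; the data and variable names are invented for illustration and do not come from any specific library or dataset.
\begin{verbatim}
import numpy as np

# Toy data: 100 samples, 3 features (synthetic, illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] \
    + rng.normal(scale=0.1, size=100)

# Normalize the features so that the coefficients are comparable.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)

# Fit f_lin(x) = beta_0 + sum_i beta_i * x_i by ordinary least squares.
A = np.column_stack([np.ones(len(Xn)), Xn])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evidence e_lin = {beta_i}; interpretation: the larger |beta_i|,
# the larger the contribution of feature x_i to the prediction.
importance = np.abs(beta[1:])
print("coefficients:", beta[1:])
print("feature ranking (most to least important):",
      np.argsort(-importance))
\end{verbatim}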
\noindent \textbf{Faithfulness.} There are no doubts about the faithfulness of the interpretation of the predictions given the normalization assumption, and in fact, a linear model is normally considered an intrinsically interpretable method. However, in a real scenario, its plausibility to a non-expert user is not guaranteed.
\subsection{Fuzzy models}
\label{sec:fis}
Fuzzy models, especially in the form of \acp{FRBS}, represent effective tools for the modeling of complex systems by using a human-comprehensible linguistic approach. Thanks to these characteristics, they are generally considered white or gray boxes and are often mentioned as good options for interpretable \ac{AI}~\cite{fuchs2022impact}. \acp{FRBS} perform their inference (\textit{i.e.}, calculate a conclusion) by exploiting a knowledge base composed of linguistic terms and rules.
\noindent A fuzzy rule is usually expressed as a sentence in the form:
\begin{equation}
    \texttt{IF <antecedent> THEN <consequent>}
\end{equation}
\noindent where \texttt{antecedent} is a logic formula created by concatenating clauses like \texttt{`X IS a'} with some logical operators, where \texttt{X} is a linguistic variable (associated with one input feature) and \texttt{a} is a linguistic term. Thanks to this representation, the antecedent of each rule gives an intuitive and human-understandable characterization of some class/group. The form of \texttt{consequent} varies according to the type of model and fuzzy reasoner used, but it can be seen as a function that calculates the model conclusion such that the more a sample satisfies the antecedent, the higher the weight of that rule in the final calculation of the model conclusion. Note that, due to the fuzziness of the model, all rules can be applied simultaneously, although with different weights.
\noindent \textbf{Evidence.} The rules are good evidence for a large part of the model: they characterize the feature space by using a self-explanatory formalism that can be read and validated by human operators.
\noindent \textbf{Interpretation.} The fuzzy sets, which are used to create the fuzzy terms and to evaluate the satisfaction of the antecedents, have self-explanatory interpretations: they define how much a value belongs to a given set by means of membership functions. The fuzzy rules are also self-explanatory. The only part that requires a proper interpretation is the output calculation function. In the case of Sugeno reasoning, such functions can be seen as linear regression models, hence all considerations discussed in \autoref{sec:linear_regression} remain valid also in the case of fuzzy models.
\noindent \textbf{Faithfulness.} Similarly to the case of linear regression models, there are no doubts about the faithfulness of the interpretation of the predictions given a normalization step.
However, in the case of special transformations (\textit{e.g.}, log-transformations), some of the intrinsic interpretability might be lost in favor of a better fit to the training data~\cite{fuchs2022impact}.
\section{Future Work}
\label{sec:future-work}
The novel framework proposed in this work makes it clear that the \ac{XAI} community must still face many challenges before claiming the explainability of a model. First, we observe an abundance of evidence for explaining black-box models. Yet, generating faithful interpretations is hard, and it is equally easy to be deceived by plausible yet unfaithful interpretations. Secondly, even if an interpretation is correct (\textit{i.e.}, faithful), it still has to be presented in such a way that it is easy for the stakeholders to digest. These implications point to working on explainability from a novel perspective. That is, explainability should be part of the software design and engage the stakeholders at the earliest stages of the development of an \ac{AI}-based tool. Future research will need to address the problem of generating faithful interpretations, possibly envisioned through a top-down model design that accounts for explainability as much as it accounts for accuracy. Moreover, user studies should be integrated into the explainability design to understand how to deliver explanations that are faithful, possibly plausible, and certainly human-understandable. Going a step further, multiple explanation designs should be tested for their effectiveness in enabling the users to perform their tasks in an informed manner, which is the ultimate goal of explainability.
\section{Conclusions}
\label{sec:conclusions}
In this work, we propose a novel theoretical framework that brings order and opportunities for a better design of explanations to the \ac{XAI} community by introducing formal terminology. The framework allows dissecting explanations into evidence (factual data coming from the model) and interpretation (a hypothesized function that describes how the model uses the evidence). The explanation is the product of the application of the interpretation to the evidence and is presented to the target user via some form of explanation interface. These components allow for designing more principled explanations by defining the atomic components and the properties that enable them. There are three core properties: \emph{(i)} the explanatory potential of the evidence (\textit{i.e.}, how much of the model the evidence can tell about); \emph{(ii)} the faithfulness of the interpretation (\textit{i.e.}, whether the interpretation is actually true to the decision-making of the model); \emph{(iii)} the plausibility of the explanation interface (\textit{i.e.}, how much the explanation makes sense to the user and is intelligible). We show that the theoretical framework can be applied to explanations coming from a variety of methods, which fit the atomic components we propose. The lesson learned from analyzing explanations through the lens of our proposed framework is that humans (both stakeholders and researchers) should be involved in the design of explainability as early as possible in the \ac{AI}-powered software design process. This allows for a proper filling of each component in the theoretical framework of explainability and informs model design. The top-down approach that is established this way propels the human understanding of how \ac{AI} (and \ac{ML} in particular) works, possibly fostering user trust in the system.
Thus explanations for any \ac{ML} models (and, especially, for \ac{DL} models) should be accounted for within the initial design of an \ac{AI}-powered application. Even the most accurate black-box model should not be deployed without an explanation mechanism backing it up, as we cannot be sure whether it learned to discriminate over meaningful or wrong features. A classic example is a dog image classification model learning to detect huskies because of the snowy setting instead of the features of the animal itself, involuntarily deceiving the users~\cite{Ribeiro2016-uy}. A design-oriented approach to \ac{AI} development should involve taking humans into the loop, thus fostering a human-centered \ac{AI} which is more intelligible by design and is expected to increase trust in the end-users~\cite{Li2022-xt}. \section{Case Studies} \label{sec:case-studies} \subsection{Attention} \label{sec:attention} The introduction of attention mechanisms has been one of the most notable breakthroughs in \ac{DL} research in recent years. Originally proposed for empowering neural machine translation tasks~\cite{Bahdanau2014-fv}, it is currently employed in many state-of-the-art approaches for numerous cognitive tasks. The chain of transformations in the simplest neural model making use of self-attention is a three-step causal process: (i) encoding, (ii) weight encodings by attention scores, and (iii) decoding into model output. Then we can define the function learned by the model as the composition $\hat{f} = f_{3, m_3} \circ f_{2, m_2} \circ f_{1, m_1}$, where each $f_i$ for $i\in [1,3]$ corresponds to the respective transformation function in the causal chain. \noindent \textbf{Evidence.} For an input $x$ split into $t$ sequentially related tokens, let $f_{1, m_1}$ be an encoder function such that $f_{1, m_1}(x)=\bar{X}$ is the vector of the encoded model input tokens. Then, $f_{2, m_2}(\bar{X})=\sum_{j=1}^t \alpha_j \bar{x}_j$, for all model input tokens $\bar{x}_j \in \bar{X}$, is the linear combination of the encodings weighted by their corresponding attention scores. Then $e_{att} = \{\alpha_j\}_1^t$. \iffalse \begin{equation} e_{att} = \{\alpha_j ~| ~f_{2, m_2}(\bar{X}) = \sum_{j=1}^t \alpha_j \bar{x}_j\} \end{equation} \fi That is, the evidence $e_{att}$ related to a model input is the set of weights $\alpha_j$ produced by the attention layer. The explanatory potential $\epsilon(e_{att})$ is the ratio between the number of parameters involved in the analyzed attention layer with respect to the total number of parameters of the model. \noindent \textbf{Interpretation.} The interpretation of the evidence is a function $g_{att}(e(x,\hat{y}))$ that describes function $f_{3, m_3}$, \textit{i.e.}, how the weighted encodings are decoded into the model conclusion. \noindent \textbf{Faithfulness.} Note that we do not know the faithful interpretation function, so we hypothesize its behavior by formulating a candidate interpretation, a process that is usually guided by the researcher's intuition. In the case of attention, an interpretation generally shared among researchers is that ``the value of each attention weight describes the importance of the corresponding token in the original input to the model output". Unfortunately, albeit plausible, research in this field disproved such an interpretation of attention weights \cite{Jain2019-oe,Wiegreffe2019-ht,Serrano2019-tm}, leaving the role of attention for explainability (if any) still unclear. 
\subsection{Grad-CAM} \label{sec:gradient} A popular explanation called \ac{Grad-CAM}~\cite{Selvaraju2016-hw} presents a method to explain a prediction made by an image classifier using the information encompassed in the back-propagated gradient of a prediction. In short, \ac{Grad-CAM} uses the information about the gradient computed at the last convolutional layer of a \ac{CNN} given a certain input $x$ to assign a feature importance score for each input feature. \noindent \textbf{Evidence.} The \ac{Grad-CAM} evidence-extraction $\xi_{grad}$ method consisted of using the feature activation map of a convolutional layer from a given input $x$ to compute the neurons' importance weights $\alpha_{i}$. The explanatory potential $\epsilon(e_{grad})$ is related, as for the attention mechanism, to the number of parameters analyzed w.r.t the total number of parameters of the method. \noindent \textbf{Interpretation.} \ac{Grad-CAM} claim that the computed neuron's weights $\alpha_i$ corresponds to the part of the input features that influence the final prediction the most. \noindent \textbf{Faithfulness.} The authors measure the faithfulness of the model using image occlusion. That is, they patched some part of the input to the model, and they measured the correlation with the difference in the final output. With this faithfulness metric, a high correlation means a high faithfulness in the explanation. \subsection{SHAP} \label{sec:shap} Lundberg \& Lee in 2017 proposed \ac{SHAP}~\cite{Lundberg2017-ar}, a method to assign an importance value to each feature used by an opaque model $M$ to explain a single prediction $\hat{y}$. \ac{SHAP} has been presented as a generalization of other well-known explanation methods, such as \ac{LIME}~\cite{Ribeiro2016-wr}, DeepLIFT~\cite{Shrikumar2017-si}, Layer-wise relevance propagation~\cite{Bach2015-np}, and classic Shapley value estimation~\cite{Lundberg2017-ar}. The \ac{SHAP} values are defined as: \begin{equation} \label{eq:shap_equation} h\left(z^{\prime}\right)=\beta_0+\sum_{i=1}^M \beta_i z_i^{\prime} \end{equation} \noindent where $z_i^{\prime} \in \{0,1\}^M$ is a simplified version of the input $x$, $M$ is the number of features used in the explanation, and $\beta_i \in \mathbb{R}$ is a coefficient that represents the effect that the $i-th$ feature has on the output. \noindent \textbf{Evidence.} The only evidence $e_{shape} = \xi_{shap}(M, x)$ used by \ac{SHAP} is the set of predictions made by the classifier in a neighborhood of $x$. To compute the explanatory potential $\epsilon\left({e_{shap}}\right)$, we can use the ratio of predictions employed to compute the \ac{SHAP} values w.r.t the total number of possible samples in the countable (and possibly infinite) neighborhood of $x$. Thus, the greater the number of predictions we have, the higher the exploratory potential of the method. \noindent \textbf{Interpretation.} The interpretation $g_{shap}$ of the evidence proposed by \ac{SHAP} is that, given $e_{shap}$, we can locally reproduce the behavior of a complex unknown model with a simple additive model $h\left( \cdot \right)$, and analyzing $h\left( \cdot \right)$ we can get a local explanation $E_{shap}$ of the behavior of the initial model. That is, the proposed interpretation of the evidence results from the optimization problem in~\autoref{eq:shap_equation}. 
\noindent \textbf{Faithfulness.} Even though the authors do not present a measure of the faithfulness of the explanation directly, they provide three desirable properties that are \emph{i)} local accuracy, \emph{ii)} missingness, \emph{iii)} consistency. The authors showed that their method is the only one that satisfies all these properties, assessing a requirements-based form of faithfulness as described in §~\ref{sec:faithfulness-vs-plausibility}. \subsection{Linear regression models} \label{sec:linear_regression} Linear regression models are not an explanation method but are normally considered \emph{intrinsically interpretable}. Following our proposed framework, we claim that defining them, among other models, as \emph{intrinsically interpretable} is inaccurate and often misleading. In fact, the definition of what is simple to be interpreted by humans is not well-defined, and we can enumerate various examples of models that are easy to be interpreted by a practitioner but are almost black-boxed for non-expert users. A linear regressor $\hat{f}_{lin}(\cdot)$ is typically formulated as: \begin{equation} \label{eq:linear_reg} \hat{f}_{lin}(x)=\beta_0+\sum_{i=1}^N \beta_i x_i^{\prime} \end{equation} where $\beta_i$ are the weights of the learned features, and $N$ is the feature space dimension. \noindent \textbf{Evidence.} The implicit assumption, claiming that a linear model is intrinsically interpretable, is that the weights ${\beta_i, 1 \leq i \leq M}$ are a good explanation for the model. Thus $e_{lin} = \{\beta_i\}_1^N$. With a linear model, we have the maximum explanatory potential $\xi_{lin}$ because with $e_{lin}$ we can fully describe the model. \noindent \textbf{Interpretation.} Assuming a normalization of the features, we can say that the higher the value of $\beta_i$, the higher the contribution of the feature $x_i$ to the model prediction. \noindent \textbf{Faithfulness.} There are no doubts about the faithfulness of the interpretation of the predictions given the normalization assumption, and in fact, a linear model is normally considered an intrinsically interpretable method. However, in a real scenario, its plausibility to a non-expert user is not guaranteed. \subsection{Fuzzy models} \label{sec:fis} Fuzzy models, especially in the form of \ac{FRBS}, represent effective tools for the modeling of complex systems by using a human-comprehensible linguistic approach. Thanks to these characteristics, they are generally considered white or gray boxes and are often mentioned as good options for interpretable AI~\cite{fuchs2022impact}. \iffalse Although a detailed description of fuzzy modeling goes beyond the scope of this paper, it is important to specify that \fi \acp{FRBS} perform their inference (i.e., calculate a conclusion) by exploiting a knowledge base composed of linguistic terms and rules. \iffalse Thanks to this linguistic approach, and the fact that fuzzy set theory can naturally embed uncertainty and vague concepts, \ac{FRBS} is generally considered \emph{intrinsically interpretable} models. \fi \noindent A fuzzy rule is usually expressed as a sentence in the form: \begin{equation} \texttt{IF <antecedent> THEN <consequent>} \end{equation} \noindent where \texttt{antecedent} is a logic formula created by concatenating clauses like \texttt{`X IS a'} with some logical operators, where \texttt{T} is a linguistic variable (associated with one input feature) and $a$ is a linguistic term. 
\subsection{Fuzzy models} \label{sec:fis} Fuzzy models, especially in the form of \ac{FRBS}, represent effective tools for the modeling of complex systems by using a human-comprehensible linguistic approach. Thanks to these characteristics, they are generally considered white or gray boxes and are often mentioned as good options for interpretable AI~\cite{fuchs2022impact}. \acp{FRBS} perform their inference (i.e., calculate a conclusion) by exploiting a knowledge base composed of linguistic terms and rules.

\noindent A fuzzy rule is usually expressed as a sentence of the form:
\begin{equation} \texttt{IF <antecedent> THEN <consequent>} \end{equation}
\noindent where \texttt{antecedent} is a logic formula created by concatenating clauses like \texttt{`X IS a'} with logical operators, in which \texttt{X} is a linguistic variable (associated with one input feature) and \texttt{a} is a linguistic term. Thanks to this representation, the antecedent of each rule gives an intuitive, human-understandable characterization of some class or group. The form of the \texttt{consequent} varies according to the type of model and fuzzy reasoner used, but it can be seen as a function that computes the model conclusion, such that the more a sample satisfies the antecedent, the higher the weight of that rule in the final calculation of the model conclusion. Note that, due to the fuzziness of the model, all rules can be applied simultaneously, although with different weights.

\noindent \textbf{Evidence.} The rules are good evidence for a large part of the model: they characterize the feature space by using a self-explanatory formalism that can be read and validated by human operators.

\noindent \textbf{Interpretation.} The fuzzy sets, which are used to create the fuzzy terms and to evaluate the satisfaction of the antecedents, have self-explanatory interpretations: they define how much a value belongs to a given set by means of membership functions. The fuzzy rules are also self-explanatory. The only part that requires a proper interpretation is the output calculation function. In the case of Sugeno reasoning, such functions can be seen as linear regression models; hence, all considerations discussed in \autoref{sec:linear_regression} remain valid for fuzzy models as well.

\noindent \textbf{Faithfulness.} Similarly to the case of linear regression models, there is no doubt about the faithfulness of the interpretation of the predictions, given a normalization step. However, in the case of special transformations (e.g., log-transformation), some of the intrinsic interpretability might be lost in favor of a better fit to the training data~\cite{fuchs2022impact}.
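The following toy sketch (Python with NumPy; the rules, membership functions, and constants are invented for illustration) shows a zero-order Sugeno-style inference with two rules: each rule fires with a degree given by a triangular membership function, and the conclusion is the firing-strength-weighted average of the rule outputs, as described above.
\begin{verbatim}
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Two illustrative rules on a single feature `temperature`:
#   IF temperature IS low  THEN output = 0.2
#   IF temperature IS high THEN output = 0.9
def infer(t):
    w_low  = tri(t, -10.0, 0.0, 20.0)    # firing strength of rule 1
    w_high = tri(t,  10.0, 30.0, 45.0)   # firing strength of rule 2
    w = np.array([w_low, w_high])
    z = np.array([0.2, 0.9])             # zero-order Sugeno consequents
    return float(np.dot(w, z) / w.sum()) # weighted average of conclusions

print(infer(15.0))  # both rules fire partially; output between 0.2 and 0.9
\end{verbatim}
The membership degrees are directly readable, which is why the antecedents are usually regarded as self-explanatory; the output aggregation is the part that, as argued above, still requires an interpretation.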
\section{Characterising the Inference Process of a Machine Learning Model} In this section, we provide a formal characterization of the inference process of a general \ac{ML} model, without any constraint on the task. This characterization will be used to introduce the terminology that substantiates the main components of our proposed framework of explainability, whose details are provided in \autoref{sec:defining-explanations}. To this end, we define a \ac{ML} model $M$ as an arbitrarily complex function mapping a \emph{model input} to a \emph{model conclusion} through a sequential composition of \emph{transformation steps}. The whole characterization is exemplified in~\autoref{fig:transformation-functions}.

\subsection{Elements of the Characterization}

\noindent \textbf{Model Input.} The model input consists of a set of features, either coming from an observation or synthetically generated.

\noindent \textbf{Model Conclusion.} The model conclusion is the final output of the model, which is the outcome of the last link in the chain of transformations of the model input.

\noindent \textbf{Transformation steps.} Overall, the decision-making process of $M$ can be represented as a chain of $N > 0$ causally related transformations of the original model input. This causal chain is enforced by model design (\textit{e.g.}, the sequence of layers in the architecture of a neural network, or the depth of a decision tree). We call each stage of this causal chain a ``transformation step'', and we denote it by $s_i$, for $i\in [1, N]$. The transformation steps advance the computation from the model input to the model output through \emph{transformation functions}.

\noindent \textbf{Transformation functions.} Each transformation step $s_i$ relates to a set of $n_i$ ``transformation functions'' $f_{i,m_i}$, where $m_i\in[1,n_i]$ indicates one of the possible learnable functions at $s_i$. Note that, in general, the number of such functions would be infinite, but we may treat it as finite under the assumption that we work in a real scenario on a computational machine with finite precision. The transformation functions are mappings from a feature set $x_{i-1,j}$ to a feature set $x_{i,z}$, with $j\in [1, k_{i-1}]$, $z\in [1, k_i]$ (\textit{i.e.}, the arrows enclosed in the ellipses in \autoref{fig:transformation-functions}). The number $k_i$ denotes the cardinality of the set of all possible feature sets generated by all possible learnable transformation functions at step $s_i$. These transformation functions are generally opaque to the user in the context of the so-called black-box models. At every step of the chain, the model learns one of the possible transformation functions (\textit{i.e.}, the optimal function according to some learning scheme, highlighted with a solid line in \autoref{fig:transformation-functions}). That is, the model learns the functions $\hat{f}_{i,m_i}$ such that $\hat{f}=\hat{f}_{N,m_N} \circ \cdots \circ \hat{f}_{i,m_i} \circ \cdots \circ \hat{f}_{1,m_1}$ is the overall approximation of the true mapping from the model input to the model conclusion. According to the notation above, we denote the model input by $x_{0,0}$ (or simply $x$) and the model conclusion by $\hat{y}_{N,j}$, with $j\in [1, k_N]$ (or simply $\hat{y}$).

\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{img/transformation-functions.png} \caption{Example of transformation functions for two steps $s_i$.} \label{fig:transformation-functions} \end{figure}
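As a minimal, architecture-agnostic illustration of this characterization (Python with NumPy; the shapes and nonlinearity below are arbitrary assumptions), a model can be coded directly as a sequential composition of transformation functions:
\begin{verbatim}
import numpy as np

# A "model" as a causal chain of learned transformation functions
# f_1, ..., f_N composed in order, mirroring f_hat = f_N o ... o f_1.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(2, 8))

f1 = lambda x: np.tanh(W1 @ x)   # step s_1: x_{0,0} -> x_{1,z}
f2 = lambda x: W2 @ x            # step s_2: x_{1,z} -> model conclusion

def model(x, steps=(f1, f2)):
    for f in steps:              # sequential composition f2(f1(x))
        x = f(x)
    return x

print(model(np.ones(4)))         # y_hat = (f2 o f1)(x)
\end{verbatim}
Each element of the tuple \texttt{steps} corresponds to one learned $\hat{f}_{i,m_i}$; the opacity discussed next arises because, in a real model, the individual $\hat{f}_{i,m_i}$ need not admit any human-understandable description.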
\subsection{Observations}

\noindent We asserted that, at each transformation step $s_i$, the model picks one function $\hat{f}_{i, m_i}$ among $n_i$ such that $\hat{f}_{i, m_i}(x_{i-1,j})=x_{i,z}$. This raises issues that increase model opacity. At step $s_i$, the chosen function $\hat{f}_{i, m_i}$ can map different intermediate transformations $x_{i-1,j}$ of the feature set at the previous transformation step into the same transformation $x_{i,z}$ one step further along the chain. This means that the same outcome in the transformation chain, be it intermediate or conclusive, can be achieved through different rationales, and it can be difficult for a human user to understand which of them the model has actually learned. This can be a result of the high dimensionality of the set of transformation functions, as well as of the high complexity of the transformed feature set. For example, pictures of zebras and salmon can be discriminated on the basis of either their anatomy (e.g., zebras have stripes, salmon have gills) or their environment/habitat (e.g., zebras live in savannas and salmon in rivers). If we consider a relatively complex model such as a \ac{CNN}, where a transformation step coincides with a layer of the network architecture, it is generally difficult to understand which kind of transformation $f_{i,m_i}$ a layer represents, if any that is human-understandable at all. Thus, how do we make sense of which of the $n_i$ possible alternative mappings of $x_{i-1,j}$ led to $x_{i,z}$? This remains an open question, with major implications for the discussion around faithfulness, which we will expand on in the next section.

\section{Defining explanations} \label{sec:defining-explanations} Recent work on \ac{ML} interpretability has produced multiple definitions of the term ``explanation''. According to Lipton, ``explanation refers to numerous ways of exchanging information about a phenomenon, in this case, the functionality of a model or the rationale and criteria for a decision, to different stakeholders''~\cite{Lipton2016-ba}. Similarly, for Guidotti et al., ``an explanation is an `interface' between humans and a decision-maker that is at the same time both an accurate proxy of the decision-maker and comprehensible to humans''~\cite{Guidotti2018-ti}. Murdoch et al. add a focus on how the explanation is delivered to the user, stating that ``an explanation is some relevant knowledge extracted from a machine-learning model concerning relationships either contained in data or learned by the model. [...] They can be produced in formats such as visualizations, natural language, or mathematical equations, depending on the context and audience''~\cite{Murdoch2019-wk}. On a more general note, Mueller et al. state that ``the property of `being an explanation' is not a property of the text, statements, narratives, diagrams, or other forms of material. It is an interaction of (i) the offered explanation, (ii) the learner's knowledge and beliefs, (iii) the context or situation and its immediate demands, and (iv) the learner's goals or purposes in that context''~\cite{Mueller2019-jr}. Finally, Miller tackles the challenge of defining explanations from a sociological perspective. The author highlights a wide taxonomy of explanations but focuses on those which are an answer to a ``why-question''~\cite{Miller2017-wj}. The definitions mentioned above offer a well-rounded perspective on what constitutes an explanation. However, they fail to highlight its atomic components and to characterize their relationships. We synthesize our proposed definition of explanation based on complementary aspects of the existing definitions. The result is a concise definition that is easy to operationalize for supporting the analysis of multiple approaches to explainability. Our full proposed framework is reported in the scheme in \autoref{fig:framework}, whose components are discussed in the following sections.

\begin{figure}[b] \centering \includegraphics[width=0.49\textwidth]{img/framework.png} \caption{Overview of the theoretical framework of explainability.} \label{fig:framework} \end{figure}

\subsection{Explanation} \label{sec:explanation} Given a model $M$ which takes an input $x$ and returns a prediction $\hat{y}$, we define an \emph{explanation} as the output of an \emph{interpretation function} applied to some \emph{evidence}, providing the answer to a ``why question'' posed by the user.

\subsection{Evidence} \label{sec:evidence} \emph{Evidence} ($e$) is any kind of objective information stemming from the model we wish to provide an explanation for, which can reveal insights into the model's inner workings and rationale for a prediction (\textit{e.g.}, attention weights, model parameters, gradients, etc.).
\subsubsection{Evidence Extractor} \label{sec:evidence-extractor} An \emph{evidence extractor} ($\xi$) is a method fetching some relevant information about $M$, $x$, $\hat{y}$, or a combination of the three. Then: $ e = \xi(x, \hat{y}, M)$. Examples of evidence extractors are encoders plus attention layers, gradient back-propagation, and random tree approximation, with the corresponding extracted evidence being attention weights, gradient values, and a random tree mimicking the original model, respectively. In the peculiar case of a white-box approach, that is, \ac{ML} models designed to be \emph{easily interpretable} by the user (e.g., linear regression, fuzzy rule-based systems), the extraction of evidence is straightforward, since every component of the model directly presents a piece of semantic information in a human-comprehensible format.

\subsubsection{Explanatory Potential} \label{sec:explanatory-potential} We define the \emph{explanatory potential} ($\epsilon(e)$) of some evidence as the extent to which the evidence influences the causal chain of transformation steps of a model. Intuitively, the explanatory potential indicates ``how much'' of a model the selected type of evidence can explain. It can be computed either by counting how many transformation steps are impacted by the evidence (i.e., its \textit{breadth}), or by measuring how much of each single transformation step is impacted by the evidence (i.e., its \textit{width}).

\subsection{Interpretation} \label{sec:interpretation} An \emph{interpretation} is a function $g$ associating semantic meaning to some evidence and mapping its instances into explanations, either for a given prediction or for the whole model. An explanation can thus be defined as either $E=g(e, x, \hat{y}, M)$ or $E=g(e, M)$, respectively.

\subsubsection{Local vs. Global Interpretations} \label{sec:local-vs-global} In accordance with the existing literature, we relate ``evidence'' and ``interpretation'' to the concepts of \textit{locality} and \textit{globality}. Both evidence and interpretations can be either local or global. Local evidence (e.g., attention weights, gradients, etc.) relates relevant model information to a particular model input $x$ and the corresponding prediction $\hat{y}$. Global evidence (e.g., the full set of model parameters) is generally independent of specific inputs and might explain higher-level functioning (providing deeper or wider information) of the model or of some of its sub-components. Similarly, interpretations can provide either a local or a global semantics of the evidence. A local interpretation of attention could be, \textit{e.g.}, ``attention weights are descriptive of the input components' importance to the model output''. On the other hand, a global interpretation of the same evidence may aggregate the attention heatmaps over a whole dataset and highlight specific patterns. For example, in a dog vs. cat classification problem, a global interpretation of attention may be represented by clusters of similar parts of the animals' bodies (\textit{e.g.}, groups of ears, tails, etc.) highlighted by the attention activations.
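As a toy illustration of the breadth-style computation of the explanatory potential from \autoref{sec:explanatory-potential} (all step and evidence names below are invented), one can simply count the fraction of transformation steps a given kind of evidence impacts:
\begin{verbatim}
# Breadth-style explanatory potential: the fraction of transformation
# steps a given kind of evidence can speak about (names are made up).
steps = ["embedding", "encoder", "attention", "classifier"]
impacted = {
    "attention_weights": {"attention"},
    "model_parameters": set(steps),   # global evidence touches every step
}

def explanatory_potential(evidence: str) -> float:
    return len(impacted[evidence]) / len(steps)

print(explanatory_potential("attention_weights"))  # 0.25
print(explanatory_potential("model_parameters"))   # 1.0
\end{verbatim}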
\subsubsection{Generating Interpretations} \label{sec:generating-interpretations} Given some evidence involved in one or more steps $s_i$ of $M$, we guess how this evidence is involved in the opaque input-to-output transformations by formulating an interpretation $g$ covering some extent of the decision-making process of the model. At a low level, we generate a candidate $g$ that encapsulates approximations $f_{i,m_i}^* \approx \hat{f}_{i,m_i}$ of the behavior of certain functions learned by $M$ at some steps $s_i$. On an abstract level, interpretations can be seen as hypotheses about the role of the evidence in the explanation-generation process. Like a good experimental hypothesis, a good interpretation satisfies two core properties: (i) it is testable, and (ii) it clearly defines dependent and independent variables. Interpretations can be formulated using different forms of reasoning (\textit{e.g.}, deductive, inductive, abductive, etc.). In particular, the survey on explanations and social sciences by Miller reports that people usually make assumptions (\textit{i.e.}, in our context, choose an interpretation) via social attribution of intent (to the evidence)~\cite{Miller2017-wj}. Social attribution is concerned with how people attribute or explain the behavior of others, not with the real causes of that behavior. Social attribution is generally expressed through folk psychology, which is the attribution of intentional behavior using everyday terms such as beliefs, desires, intentions, emotions, and personality traits. Such concepts may not truly be the cause of the described behavior, but they are indeed those humans leverage to model and predict each other's behaviors. This may generate a misalignment between a hypothesized interpretation of some evidence and its actual role within the inference process of the model. In other words, reasoning on evidence through folk psychology might generate interpretations that are \emph{plausible} but not necessarily \emph{faithful} to the inference process of the model (these terms will be further explored in \autoref{sec:faithfulness-vs-plausibility}).

\subsection{Explanation Interface} \label{sec:xui} Explanations are meant to be delivered to some target users. We define an \ac{XUI} as the format in which some explanation is presented to the end user. This could be, for example, text, plots, infographics, etc. We argue that an \ac{XUI} is characterized by three main properties: (i) human understandability, (ii) informativeness, and (iii) completeness. The \emph{human-understandability} is the degree to which users can understand the answer to their ``why'' question via the \ac{XUI}. This property depends on user cognition, bias, expertise, goals, etc., and is influenced by the complexity of the selected interpretation function. The \emph{informativeness} (i.e., depth) of an explanation is a measure of the effectiveness of an \ac{XUI} in answering the why-question posed by the user, that is, the depth of the information provided about the steps $s_i$ of greatest interest in the \ac{XUI}. The \emph{completeness} (i.e., width) of an explanation is the accuracy of an \ac{XUI} in describing the overall model's workings, and the degree to which it allows for anticipating predictions, that is, the width in terms of the number of steps $s_i$ the \ac{XUI} spans. Note that both informativeness and completeness are bounded by the explanatory potential of the evidence (\textit{e.g.}, attention weights do not explain the full model, just some transformation steps, while the full set of model parameters does).
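Summarizing the framework operationally, the sketch below (Python; the weights-as-evidence extractor and the top-$k$ interpretation are illustrative stand-ins, not prescribed choices) wires together the evidence extractor $\xi$, the interpretation $g$, and the resulting explanation $E$:
\begin{verbatim}
from typing import Any, Callable, NamedTuple

# An explanation E is produced by an interpretation g applied to
# evidence e extracted by xi, as in E = g(xi(x, y_hat, M), x, y_hat, M).
class Explanation(NamedTuple):
    content: Any
    scope: str  # "local" or "global"

def explain(model: Callable, x, y_hat,
            xi: Callable, g: Callable) -> Explanation:
    e = xi(x, y_hat, model)          # evidence:       e = xi(x, y_hat, M)
    return g(e, x, y_hat, model)     # interpretation: E = g(e, x, y_hat, M)

# toy instantiation on a linear "model" with weights w
w = [3.0, 0.0, -2.0]
model = lambda v: sum(wi * vi for wi, vi in zip(w, v))
xi = lambda x, y, m: w                                    # weights as evidence
g = lambda e, x, y, m: Explanation(
    sorted(range(len(e)), key=lambda i: -abs(e[i]))[:2],  # top-2 features
    "local")

print(explain(model, [1.0, 1.0, 1.0], 1.0, xi, g))        # features [0, 2]
\end{verbatim}
The \ac{XUI} would then render \texttt{Explanation.content} in a format (text, plot, etc.) suited to the target user.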
\section{Some properties of successive minima}

\subsection{Lattice basis and successive minima} Recall that for a positive integer $d$, a lattice $\Lambda \subset \mathbb R^d$, and each $j = 1,\dots, d$, the $j$-th successive minimum of $\Lambda$, denoted $\lambda_j (\Lambda)$, is the infimum of all $\lambda>0$ such that the set $\{r \in \Lambda : \|r\| \le \lambda \}$ contains $j$ linearly independent vectors (with respect to the $l^2$ norm on $\mathbb R^d$). A natural question is: can the successive minima always be attained by a basis of the rank $d$ lattice $\Lambda$? In other words, does there exist a basis $\{v_1,\dots,v_d\}$ of $\Lambda$ such that
\begin{equation*} \|v_j\|=\lambda_j, ~\text{for}~ j\in \{1,2,\dots,d\}? \end{equation*}
The answer is positive if and only if $d\le 4$, as shown in the following theorem and example.

\begin{theorem}\label{thm:A.1} Let $\Lambda$ be a lattice in $\mathbb R^d$. Assume that $d\le 4$. Then there exists a basis $\{v_1,\dots,v_d\}$ of $\Lambda$ such that $$\|v_j\|=\lambda_j, ~\text{for}~ j\in \{1,2,\dots,d\}.$$ \end{theorem}

\vspace{1cm}

The case $d=1$ is trivial. To prove this theorem in the cases $d=2,3$, we need the following lemma from Euclidean geometry.

\begin{lemma}~\\ (1) The minimal distance from any point in the interior of a parallelogram in $\mathbb R^2$ to its vertices is always strictly less than the maximal length of the edges of the parallelogram.~\\ (2) The minimal distance from a point in the interior of a parallelepiped in $\mathbb R^3$ to its vertices is always strictly less than the maximal length of three linearly independent vectors formed by the vertices, at least two of which are edges of the parallelepiped. In particular, these three vectors span the three-dimensional lattice spanned by the parallelepiped. \end{lemma}

\begin{proof}~\\ For part (1), observe that a parallelogram $ABCB'$ can be divided into the two triangles $ABC$ and $AB'C$, and any point $D$ in the interior of $ABCB'$ must fall in either the triangle $ABC$ or the triangle $AB'C$.

\begin{figure} \begin{center} \begin{tikzpicture}[scale=1.2] \tkzDefPoint(-4,0){A} \tkzDefPoint(2,3){B} \tkzDefPoint(-2,-3){B'} \tkzDefPoint(4,0){C} \tkzDefPoint(1,1){D} \tkzDefPoint(0.5,0){E} \tkzDrawSegment[color=blue, thick](A,B) \tkzDrawSegment[color=blue, thick](C,B) \tkzDrawSegment[color=blue, thick](A,B') \tkzDrawSegment[color=blue, thick](C,B') \tkzDrawSegment[color=blue, thick](A,C) \tkzDrawSegment[color=red, thick](B,D) \tkzDrawSegment[color=red, dotted](E,D) \tkzDefPointBy[projection=onto A--C](B) \tkzGetPoint{F} \tkzDrawSegment[green](B,F) \tkzMarkRightAngle[,size=0.2,color=green](B,F,C) \tkzDrawPoints(A,B,B',C,D,E,F) \tkzLabelPoints[below](C,D,E,F) \tkzLabelPoints[above, right](B) \tkzLabelPoints[below](B') \tkzLabelPoints[below, left](A) \end{tikzpicture} \end{center} \caption{The parallelogram case} \end{figure}

By drawing the line through the point $B$ perpendicular to the line $AC$, we easily see that
\begin{equation*} |BD|\le |BE| < \max\{|AB|,|BC|\}. \end{equation*}
For part (2), first observe that a parallelepiped can be divided into six tetrahedra, and any point $X$ in the interior of the parallelepiped, say $ABCDEFGH$, must fall into one of the six.
\tikzstyle{place}=[circle,draw=black,fill=black,inner sep=1, outer sep=0]

\begin{figure} \begin{center} \begin{tikzpicture}[scale=1] \coordinate (a) at (0,0,0); \coordinate (b) at (6,0,0); \coordinate (c) at (6 + 1,4,1); \coordinate (d) at (1,4,1); \coordinate (e) at (0,0,5); \coordinate (f) at (6,0,5); \coordinate (g) at (6 + 1,4,5 + 1); \coordinate (h) at (1,4,5 + 1); \coordinate (x) at (0.2*6,0.2*4,0.8*5); \draw [dotted] (a) -- (f) -- (h) -- (a); \draw [dotted] (b) -- (g) -- (d) -- (b); \draw [dotted] (f) -- (d); \draw[fill=blue!30,opacity=0.2] (a) -- (b) -- (c) -- (d) -- (a); \draw[fill=blue!30,opacity=0.2] (a) -- (e) -- (f) -- (b) -- (a); \draw[fill=blue!30,opacity=0.2] (a) -- (d) -- (h) -- (e) -- (a); \draw[fill=blue!30,opacity=0.05] (d) -- (c) -- (g) -- (h) -- (d); \draw[fill=blue!30,opacity=0.05] (e) -- (f) -- (g) -- (h) -- (e); \draw[fill=blue!30,opacity=0.05] (b) -- (c) -- (g) -- (f) -- (b); \node[place, label=below right:{$A$}] at (a) {}; \node[place, label=right:{$B$}] at (b) {}; \node[place, label=above:{$C$}] at (c) {}; \node[place, label=above:{$D$}] at (d) {}; \node[place, label=below:{$E$}] at (e) {}; \node[place, label=below:{$F$}] at (f) {}; \node[place, opacity=0.5, label={above left, opacity=0.5:$G$}] at (g) {}; \node[place, label=left:{$H$}] at (h) {}; \node[circle, fill=red, opacity=0.5, inner sep=1pt, label={right, font=\small, text= red, opacity=0.5:$X$}] at (x) {}; \end{tikzpicture} \end{center} \caption{$X$ must fall into one of six tetrahedra.} \end{figure}

\begin{figure} \begin{center} \begin{tikzpicture}[scale=1.5] \coordinate (a) at (0,0,0); \coordinate (b) at (6,0,0); \coordinate (c) at (6 + 1,4,1); \coordinate (d) at (1,4,1); \coordinate (e) at (0,0,5); \coordinate (f) at (6,0,5); \coordinate (g) at (6 + 1,4,5 + 1); \coordinate (h) at (1,4,5 + 1); \coordinate (x) at (0.2*6,0.2*4,0.8*5); \coordinate (y) at (0.296*6,0.7,5); \coordinate (z) at (0.281*6,0,5); \draw[fill=blue!30, opacity=0.3] (a) -- (f) -- (h) -- (a); \draw[fill=blue!30, opacity=0.3] (a) -- (e) -- (f) -- (a); \draw[fill=blue!30, opacity=0.3] (a) -- (e) -- (h) -- (a); \draw[fill=blue!30, opacity=0.1] (e) -- (f) -- (h) -- (e); \draw[dotted] (h) -- (z); \node[place, label=below right:{$A$}] at (a) {}; \node[place, label=below:{$H$}] at (e) {}; \node[place, label=below:{$F$}] at (f) {}; \node[place, label=left:{$E$}] at (h) {}; \node[circle, fill=red, opacity=0.5, inner sep=1pt, label={left, font=\small, text= red, opacity=0.5:$X$}] at (x) {}; \node[circle, fill=red, inner sep=1pt, label={below right, text= red:$Y$}] at (y) {}; \node[circle, fill=red, inner sep=1pt, label={below right, text= red:$Z$}] at (z) {}; \draw [dotted] (h) -- ($(h)!3.14cm!(x)$); \draw (a) -- ($(a)!1.95cm!(y)$); \end{tikzpicture} \end{center} \caption{Construction of points $Y,Z$ in the proof.\label{construction}} \end{figure}

If $X$ falls in the tetrahedron $AFEH$, it follows from part (1) of this lemma that
\begin{equation*} |EX|\le |EY| \le \max\{|EA|,|EZ|\} < \max\{|EH|,|EF|,|EA|\}. \end{equation*}
If $X$ falls in the tetrahedron $DHAF$, it follows from part (1) of this lemma that
\begin{equation*} |DX|\le |DY| \le \max\{|DA|,|DZ|\} < \max\{|DA|,|DH|,|DF|\}. \end{equation*}
If $X$ falls in the tetrahedron $BAFD$,
it follows from part (1) of this lemma that
\begin{equation*} |BX|\le |BY| \le \max\{|BA|,|BZ|\} < \max\{|BA|,|BF|,|BD|\}, \end{equation*}
where the auxiliary points and segments are constructed as illustrated in Figure~\ref{construction} above. \end{proof}

\begin{proof}[Proof of the theorem in the cases $d=2,3$:]~\\ We treat $d=3$; the case $d=2$ is only simpler. Let $v_1, v_2, v_3$ be any linearly independent vectors in $\Lambda$ such that
\begin{equation*} \|v_j\|=\lambda_j, ~\text{for}~ j\in\{1,2,3\}. \end{equation*}
Let $\Lambda_0$ be the lattice spanned by these three vectors. Consider the fundamental domain $$F:=\{t_1 v_1+t_2 v_2+t_3 v_3: t_i \in [0,1)\}$$ of $\Lambda_0$. The closure of $F$ is the parallelepiped spanned by the vectors $v_1,v_2,v_3$ at the origin of $\mathbb R^3$, and it follows that $\mathbb R^3=F+\Lambda_0$. Suppose on the contrary that $\Lambda_0$ is a proper sublattice of $\Lambda$; then there exists a vector $x \in \Lambda - \Lambda_0$. By translating $x$ by a vector of $\Lambda_0$, we may assume without loss of generality that $$x=t_1 v_1+t_2 v_2+t_3 v_3,$$ where $t_i \in (0,1)$ (not all $t_i$ can vanish, since $x \notin \Lambda_0$; and if some $t_i=0$, then $x$ lies in a lower-dimensional face of the parallelepiped and the same argument applies within that face). So $x$ is in the interior of the parallelepiped. By the lemma above, noticing that the length of each edge of the parallelepiped equals $\|v_1\|$, $\|v_2\|$ or $\|v_3\|$, there exists a vertex $w$ of the parallelepiped spanned by the vectors $v_1,v_2,v_3$ at the origin such that $$\|x-w\| < \max\{ \|v_j\|:j=1,2,3\}=\|v_3\|.$$ Translating the vector $x-w$ to the origin, it follows that $x-w, v_1, v_2$ are still linearly independent, and this leads to a contradiction with the assumption that $\|v_3\| = \lambda_3(\Lambda)$. Therefore we must have $$\Lambda_0 = \Lambda,$$ namely $v_1,v_2,v_3$ form a basis of $\Lambda$. \end{proof}

We need the following lemma for the proof of the case $d=4$:

\begin{lemma} For $k\ge 1$ and $y_i\in \mathbb{R}$, $1\le i\le k$, we have the identity $$S_k:=\sum x_1x_2\cdots x_k = 1,$$ where the sum is over all $2^k$ possible choices of $x_i=y_i$ or $x_i=1-y_i$. \end{lemma}

\begin{proof} We perform induction on $k$. When $k=1$, the sum is simply $1-y_1+y_1=1$. Assume $S_{k-1}=1$. For general $k$, grouping the terms according to the choice of $x_k$, we observe that $S_k=y_kS_{k-1}+(1-y_k)S_{k-1}=S_{k-1}=1$. \end{proof}

\begin{proof}[Proof of the theorem in the case $d=4$:]~\\ Let $v_1, v_2, v_3,v_4$ be any linearly independent vectors in $\Lambda$ such that
\begin{equation*} \|v_j\|=\lambda_j(\Lambda), ~\text{for}~ j\in\{1,2,3,4\}. \end{equation*}
As in the proof of the cases $d=2,3$, we let $\Lambda_0$ denote the lattice spanned by $v_1, v_2, v_3,v_4$. If $\Lambda_0$ is a proper sublattice of $\Lambda$, then there exists $v \in \Lambda - \Lambda_0$ with $$v=\sum_{i=1}^4 t_i v_i, \quad t_i\in \mathbb R,$$ and without loss of generality we may assume that $t_i\in [0,1)$ for every $i=1,2,3,4$. Namely, $v$ lives in $$\mathscr{P}:=\left \{\sum_{i=1}^4 t_i v_i : t_i\in [0,1) \right \}.$$

\vspace{1cm}

\begin{claim} For any $v_0 \in \Lambda_0$ and any $v\in \Lambda -\Lambda_0$, $$\|v-v_0\| \ge \|v_i\|,~\forall i=1,2,3,4.$$ \end{claim}

\begin{proof}[Proof of Claim] \renewcommand\qedsymbol{\#} If $\|v-v_0\|<\|v_{i_0}\|$ for some $i_0 \in \{1,2,3,4\}$, then the vectors $v_i$, $i\in \{1,2,3,4\} \setminus \{i_0\}$, together with $v-v_0$, would still be linearly independent (since $v\in \Lambda-\Lambda_0$) and would form a new system of successive minima with a strictly smaller $\lambda_{i_0}$, a contradiction.
\end{proof}

\begin{claim} For any $v \in \mathscr P$ we have
\begin{equation*} \min_{v_0 \in \Lambda_0} \|v-v_0\|^2 \le \frac{1}{4}\sum_{i=1}^4 \|v_i\|^2. \end{equation*}
\end{claim}

\begin{proof}[Proof of Claim] \renewcommand\qedsymbol{\#}~\\ The vertices of $\mathscr{P}$ form the set $$\mathscr{V}:= \left \{\sum_{i=1}^{4} n_i v_i: n_i \in \{0,1\} \right\}.$$ In view of the preceding lemma, the idea is to find a weighted sum of the squared distances from $v$ to the vertices in $\mathscr{V}$. For $v=t_1v_1+t_2v_2+t_3v_3+t_4v_4$, we associate a weight $w(v_0)$ to each $\|v-v_0\|^2$, $v_0\in \mathscr{V}$: if $v_0=\sum_{i=1}^{4} n_i v_i$, $n_i \in \{0,1\}$, then $w(v_0):=\prod_{i=1}^4 \big((2t_i-1)n_i+(1-t_i)\big)$; that is, the $i$-th factor is $t_i$ if $n_i=1$ and $1-t_i$ if $n_i=0$. For example, if $v_0=v_2+v_3$, then $w(v_0)=(1-t_1)t_2t_3(1-t_4)$. It follows immediately from the preceding lemma that $$\sum_{v_0\in \mathscr{V}} w(v_0)=\sum_{x_i=t_i\text{ or }1-t_i} x_1x_2x_3x_4 =1.$$ The claim then follows from the following subclaim: $$\sum_{v_0\in \mathscr{V}}w(v_0)\|v-v_0\|^2=\sum_{i=1}^4 t_i(1-t_i)\|v_i\|^2 \le \frac{1}{4}\sum_{i=1}^4 \|v_i\|^2.$$ Indeed, if we write $\|v-v_0\|^2=\big\langle \sum_i(t_i-n_i)v_i, \sum_i(t_i-n_i)v_i \big\rangle$, then in view of the lemma for the case $k=3$, the coefficient of each $\langle v_i,v_i \rangle$ is $$(1-t_i)t_i^2+t_i (1-t_i)^2=(1-t_i)t_i,$$ and in view of the lemma for the case $k=2$, the coefficient of each $\langle v_i,v_j \rangle$ with $i\ne j$ is $$2(1-t_i)(1-t_j)t_it_j+2(1-t_i)t_jt_i(t_j-1)+2t_i(1-t_j)(t_i-1)t_j+2t_it_j(t_i-1)(t_j-1)=0.$$ \end{proof}

Next, we locate the minimum of
\begin{equation*} \sum_{v_0 \in \mathscr{V}} \|v-v_0\|^2 = \sum_{n_i\in \{0,1\}} \Big \langle \sum_{i=1}^4 (t_i-n_i)v_i, \sum_{i=1}^4 (t_i-n_i) v_i \Big \rangle \end{equation*}
for $v=t_1v_1+\cdots+t_4v_4 \in \mathscr{P}$, $t_1, t_2, t_3, t_4 \in [0,1)$. For the moment, we allow $(t_1,t_2,t_3,t_4)$ to take any value in $\mathbb R^4$, so that this becomes a standard unconstrained optimization problem. Taking the partial derivative with respect to $t_j$, for $j=1,2,3,4$, we obtain
\begin{equation*} \frac{\partial}{\partial t_j}\sum_{v_0 \in \mathscr{V}} \|v-v_0\|^2 = \sum_{n_i\in \{0,1\}} 2\Big \langle v_j, \sum_{i=1}^4 (t_i-n_i) v_i \Big \rangle = 16\sum_{i=1}^4 (2t_i-1)\langle v_j, v_i\rangle. \end{equation*}

\vspace{0.6cm}

Setting these derivatives to zero, we may write the resulting system of linear equations for the critical points in matrix form as
\begin{equation*} \begin{bmatrix} \left \langle v_1, v_1 \right \rangle & \left \langle v_1, v_2 \right \rangle & \left \langle v_1, v_3 \right \rangle & \left \langle v_1, v_4 \right \rangle \\ \left \langle v_2, v_1 \right \rangle & \left \langle v_2, v_2 \right \rangle & \left \langle v_2, v_3 \right \rangle & \left \langle v_2, v_4 \right \rangle \\ \left \langle v_3, v_1 \right \rangle & \left \langle v_3, v_2 \right \rangle & \left \langle v_3, v_3 \right \rangle & \left \langle v_3, v_4 \right \rangle \\ \left \langle v_4, v_1 \right \rangle & \left \langle v_4, v_2 \right \rangle & \left \langle v_4, v_3 \right \rangle & \left \langle v_4, v_4 \right \rangle \end{bmatrix} \begin{bmatrix} 2t_1-1 \\ 2t_2-1 \\ 2t_3-1 \\ 2t_4-1 \end{bmatrix}= \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}.
\end{equation*}

\vspace{1cm}

Since $\{v_1,v_2,v_3,v_4\}$ are linearly independent, the coefficient matrix, being a Gram matrix, is nondegenerate, and the unique solution to this system is $$t_1=t_2=t_3=t_4=\frac{1}{2}.$$ The second derivative test immediately gives that this is a local, and thus global, minimum of the function, and its minimum value is
\begin{equation*} \begin{split} \min_{v\in \mathbb{R}^4}\sum_{v_0 \in \mathscr{V}} \|v-v_0\|^2 & = \sum_{n_i\in \{0,1\}} \Big \langle \sum_{i=1}^4 \big(\tfrac{1}{2}-n_i\big)v_i, \sum_{i=1}^4 \big(\tfrac{1}{2}-n_i\big) v_i \Big \rangle \\ & = 4 \sum_{i=1}^4 \|v_i\|^2, \end{split} \end{equation*}
where the last equality follows from the cancellation of the cross terms $\left \langle v_i,v_j \right \rangle$, $i \ne j$, when summing over all $(n_1,\dots,n_4)\in\{0,1\}^4$.

Now let $v\in \mathscr{P}\cap(\Lambda-\Lambda_0)$. Combining the two claims above, and using that the weights $w(v_0)$ are nonnegative and sum to $1$ (so that the minimum over $v_0\in\mathscr{V}$ is at most the weighted average), we obtain
\begin{align*} \max\{\|v_j\|^2:j=1,2,3,4\} &\le \min_{v_0 \in \Lambda_0}\|v-v_0\|^2 \\ & \le \sum_{v_0\in\mathscr{V}} w(v_0)\,\|v-v_0\|^2 \\ & = \sum_{i=1}^4 t_i(1-t_i)\|v_i\|^2 \\ & \le \frac{1}{4} \sum_{i=1}^4 \|v_i\|^2 \\ & \le \max\{\|v_j\|^2:j=1,2,3,4\}, \end{align*}
so that equality holds throughout. In particular,
\begin{equation} \max\{\|v_j\|^2:j=1,2,3,4\}= \frac{1}{4} \sum_{i=1}^4 \|v_i\|^2, \end{equation}
and thus $$\|v_1\|=\|v_2\|=\|v_3\|=\|v_4\|.$$ Moreover, $t_i(1-t_i)=\frac{1}{4}$, i.e.\ $t_i=\frac{1}{2}$ for every $i$, which matches the unconstrained minimizer found above.

\begin{claim} $\left \langle v_i,v_j \right \rangle = 0$ for all $i\ne j$. \end{claim}

\begin{proof}[Proof of Claim] \renewcommand\qedsymbol{\#} Let us summarize what we have obtained so far: if $v_1,v_2,v_3,v_4$ are linearly independent and $\|v_j\|=\lambda_j$, $j=1,2,3,4$, then any vector $v \in \mathscr{P}\cap (\Lambda -\Lambda_0)$ must be of the form $$\frac{1}{2}(v_1+v_2+v_3+v_4).$$ Since $\mathscr{P}$ is a fundamental domain of $\Lambda_0$, it follows that $$\Lambda\subseteq\Lambda_0\cup\Big(\frac{1}{2}(v_1+v_2+v_3+v_4)+\Lambda_0\Big).$$ On the other hand, from the inequality $$\|v_i\|^2 = \lambda_i(\Lambda)^2 \le \left\|\frac{1}{2}(\pm v_1\pm v_2\pm v_3 \pm v_4) \right\|^2,$$ valid for every choice of signs $\epsilon\in\{\pm 1\}^4$ by the first claim, we have $$\sum_{1\le i< j \le 4}\epsilon_i\epsilon_j \left \langle v_i,v_j \right\rangle \ge 0.$$ Summing these inequalities over all $\epsilon \in \{\pm 1\}^4$ gives $0$ on the left-hand side, so each of them must in fact be an equality:
\begin{equation} \sum_{1\le i< j \le 4}\epsilon_i\epsilon_j \left \langle v_i,v_j \right\rangle = 0 \quad \text{for every } \epsilon\in\{\pm 1\}^4. \end{equation}

\vspace{1cm}

If we view these equations as a linear system in the ${4 \choose 2}=6$ variables $\left \langle v_i,v_j \right\rangle$ (ordered as $\langle v_1,v_2\rangle$, $\langle v_1,v_3\rangle$, $\langle v_1,v_4\rangle$, $\langle v_2,v_3\rangle$, $\langle v_2,v_4\rangle$, $\langle v_3,v_4\rangle$), then choosing for instance $\epsilon_1=1$ and $(\epsilon_2,\epsilon_3,\epsilon_4)=(1,1,1)$, $(1,1,-1)$, $(1,-1,1)$, $(-1,1,1)$, $(1,-1,-1)$, $(-1,1,-1)$, the coefficient matrix
$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 & 1 & -1 \\ -1 & 1 & 1 & -1 & -1 & 1 \\ 1 & -1 & -1 & -1 & -1 & 1 \\ -1 & 1 & -1 & -1 & 1 & -1 \end{bmatrix}$$

\vspace{0.5cm}

is of rank $6$, which forces $$\left \langle v_i,v_j \right\rangle = 0,$$ for all $i\ne j$. \end{proof}

Hence either $$\Lambda=\Lambda_0=\text{Span}_{\mathbb Z}\{v_1,v_2,v_3,v_4\},$$ or the $v_i$'s are of equal length and mutually orthogonal and $$\Lambda=\text{Span}_{\mathbb Z}\Big\{v_1,v_2,v_3,\frac{1}{2}(v_1+v_2+v_3+v_4)\Big\} \supsetneq \Lambda_0.$$ In either case, it is possible to find a basis of $\Lambda$ realizing the four successive minima of the lattice, as desired. This completes the proof of the case $d=4$. \end{proof}
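Although not needed for the proof, the definition of the successive minima is easy to explore numerically. The following sketch (Python with NumPy; the brute-force enumeration radius is a heuristic that suffices only for small, well-conditioned bases) computes $\lambda_1,\dots,\lambda_d$ for a sample lattice:
\begin{verbatim}
import numpy as np
from itertools import product

def successive_minima(B, coeff_bound=4):
    """Brute-force lambda_1..lambda_d for a lattice with basis matrix B
    (columns are basis vectors). Enumerates integer coefficient vectors
    with entries in [-coeff_bound, coeff_bound]; adequate only for
    small, well-conditioned examples."""
    d = B.shape[1]
    vecs = []
    for c in product(range(-coeff_bound, coeff_bound + 1), repeat=d):
        if any(c):
            vecs.append(B @ np.array(c))
    vecs.sort(key=np.linalg.norm)
    minima, chosen = [], []
    for v in vecs:
        if np.linalg.matrix_rank(np.column_stack(chosen + [v])) > len(chosen):
            chosen.append(v)
            minima.append(np.linalg.norm(v))
            if len(chosen) == d:
                break
    return minima

B = np.array([[1.0, 0.5], [0.0, 1.0]])  # a sample 2D lattice basis
print(successive_minima(B))             # lambda_1 = 1, lambda_2 = sqrt(1.25)
\end{verbatim}
For $d\le 4$, the greedily chosen vectors in \texttt{chosen} can in fact be taken to be a basis, which is precisely the content of the theorem just proved.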
The following example shows that the theorem above fails for $d\ge 5$.

\begin{example} Let $d \ge 5$ and consider the lattice $\Lambda$ spanned by $$e_1,e_2, \dots, e_{d-1},\frac{1}{2}(e_1+\cdots+ e_d),$$ where $e_i$ is the canonical basis vector of $\mathbb R^d$ whose $i$-th component is $1$ and whose other components are zero. It is easy to see that $\Lambda$ contains $\mathbb Z^d$, since $$e_d=2\cdot \frac{1}{2}(e_1+\cdots+ e_d)-e_1-\cdots-e_{d-1} \in \Lambda.$$ Observe that every vector of $\Lambda$ lies either in $\mathbb{Z}^d$ or in $\frac{1}{2}(e_1+\cdots+e_d)+\mathbb{Z}^d$; in the latter case, all $d$ coordinates are half-odd integers, so the sum of the squares of the coordinates is at least $\frac{d}{4}\ge \frac{5}{4}>1$. Consequently $\lambda_i(\Lambda)=1$ for all $i=1,2,\dots, d$, since the closed unit ball at the origin contains exactly the $d$ linearly independent vectors $\pm e_1,\cdots,\pm e_d$ of length $1$. On the other hand, we cannot find a basis $v_1,\cdots, v_d$ of $\Lambda$ satisfying $$\|v_i\|=1,~\forall i=1,2,\cdots,d.$$ Indeed, since $\Lambda \supsetneq \mathbb Z^d$, a basis of $\Lambda$ cannot consist only of vectors of $\mathbb{Z}^d$, while, as computed above, every vector of the other kind has length at least $\sqrt{5}/2>1$, contradicting $\|v_i\|=1$ for all $i$. Therefore, for $d\ge 5$, it is not true that the successive minima of a lattice can always be realized by a basis of the lattice.\qed \end{example}

However, with a compromise, we can still choose a basis whose lengths are comparable to the successive minima of the lattice. To this end, we need the following lemma:

\begin{lemma}\label{lma4} Let $\Lambda$ be a lattice in $\mathbb R^d$ and let $v_1 \in \Lambda$ be a vector with $\|v_1\|=\lambda_1(\Lambda)$; in other words, a nonzero vector in $\Lambda$ of shortest length. Let $\pi_1$ be the projection of $\mathbb R^d$ onto $v_1^{\perp}$, the hyperplane in $\mathbb R^d$ orthogonal to $v_1$. Then we have the following statements:
\begin{enumerate}
\item $\|\pi_1(v)\| \ge \frac{\sqrt{3}}{2}\|v_1\|$ for every $v \in \Lambda$ with $\pi_1(v)\neq 0$;
\item $\pi_1(\Lambda)$ is a lattice \footnote{The projections of lattices are not always lattices of the corresponding subspaces. For example, the projection of the standard lattice $\mathbb{Z}^2$ onto the irrational line $y=\sqrt{2}x$ is no longer a lattice with respect to that line, which can be deduced from Dirichlet's simultaneous Diophantine approximation theorem.} in $v_1^{\perp}$ with covolume $\frac{\text{covol}(\Lambda)}{\|v_1\|}$.
\end{enumerate}
\end{lemma}

\vspace{0.5cm}

\begin{proof} For (1), we suppose on the contrary that there is a vector $v\in \Lambda$ with $\pi_1(v)\neq 0$ such that $$\|\pi_1(v)\| < \frac{\sqrt{3}}{2} \|v_1\|.$$ The orthogonal decomposition of $\mathbb{R}^d$ gives $$v=\pi_1(v)+tv_1,$$ for some $t\in \mathbb R$. Since $\pi_1(v+nv_1)=\pi_1(v)$ for all $n\in \mathbb{Z}$, by replacing $v$ with $v+nv_1$ for a suitable $n$ we may assume $t\in [-\frac{1}{2},\frac{1}{2})$. Since $v_1$ is perpendicular to $\pi_1(v)$, by the Pythagorean theorem
\begin{align*} \|v\|^2 &= \|\pi_1(v)\|^2+t^2\|v_1\|^2 \\ &< \frac{3}{4}\|v_1\|^2+\frac{1}{4}\|v_1\|^2 \\ &=\|v_1\|^2, \end{align*}
and $v\neq 0$ because $\pi_1(v)\neq 0$; this contradicts the choice of $v_1$ as a nonzero vector of $\Lambda$ of minimal length. This proves (1). To see (2), we first observe from (1) that all nonzero vectors in $\pi_1(\Lambda)$ are bounded $\frac{\sqrt 3}{2} \|v_1\|$ away from zero (this gives the discreteness). On the other hand, $\pi_1(\Lambda)$ clearly contains $d-1$ linearly independent vectors.
So, by definition, $\pi_1(\Lambda)$ is a lattice in $v_1^{\perp}$, and it makes sense from now on to talk about its fundamental domain, covolume and successive minima. We shall first study the relation between the fundamental domains of $\Lambda$ and of $\pi_1(\Lambda)$. Let $F_1$ be a fundamental domain of $\pi_1(\Lambda)$.

\begin{claim} $F:=F_1+[0,1)v_1$ is a fundamental domain of $\Lambda$. \end{claim}

\begin{proof}[Proof of Claim] \renewcommand\qedsymbol{\#} For any $x\in \mathbb{R}^d$, we have $\pi_1(x)\in v_1^{\perp}$. By the definition of a fundamental domain of $\pi_1(\Lambda)$, there exists a vector $v\in \Lambda$ such that $$\pi_1(x-v)\in F_1.$$ It follows that $x-v-\pi_1(x-v) \in \mathbb{R}v_1$, so there exist $n\in \mathbb{Z}$ and $t\in[0,1)$ such that $$x-v-\pi_1(x-v) = nv_1+tv_1.$$ Therefore, $x-v-nv_1=tv_1+\pi_1(x-v) \in [0,1)v_1+F_1$. Namely, any vector $x$ in $\mathbb{R}^d$ can be translated by a vector of $\Lambda$ into $[0,1)v_1+F_1 =:F$.

\vspace{5mm}

On the other hand, if $x-v'$ and $x-v''$ are both in $F$ with $v',v'' \in \Lambda$, we would like to see that $v'=v''$. Suppose:
\begin{equation*} \begin{cases} x-v'=t'v_1+y'\\ x-v''=t''v_1+y'' \end{cases}\,, \end{equation*}
where $t',t'' \in [0,1)$ and $y',y''\in F_1$. Applying $\pi_1$ to both sides, we get
\begin{equation*} \begin{cases} y'=\pi_1(x)-\pi_1(v')\\ y''=\pi_1(x)-\pi_1(v'') \end{cases}\,. \end{equation*}
Since $F_1$ is a fundamental domain for $v_1^{\perp} / \pi_1(\Lambda)$, the translation is unique, and so $\pi_1(v')=\pi_1(v'')$. Hence $v'-v'' \in \mathbb Z v_1$. But $v'-v''=(x-v'')-(x-v') \in [0,1)v_1+F_1 - \big ([0,1)v_1+F_1 \big) = (-1,1)v_1+(F_1-F_1)$, so the $v_1$-component of $v'-v''$ is an integer lying in $(-1,1)$; it therefore vanishes, forcing $v'=v''$. \end{proof}

Now, since $v_1$ is orthogonal to all vectors in $F_1$, it follows that
\begin{align*} \infty>\text{covol}(\Lambda) &=m(F)\\ &=m([0,1)v_1 + F_1)\\ &=\|v_1\| \cdot m(F_1)\\ &=\|v_1\| \cdot \text{covol}(\pi_1(\Lambda)). \end{align*}
This proves (2). \end{proof}
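Part (2) of the lemma is easy to check numerically: for a basis matrix $B$ whose first column is $v_1$, the identity $\text{covol}(\Lambda)=\|v_1\|\cdot\text{covol}(\pi_1(\Lambda))$ follows from Gram determinants. A small sketch (Python with NumPy; the basis below is an arbitrary illustrative example):
\begin{verbatim}
import numpy as np

# Check: project v2, ..., vd onto the hyperplane orthogonal to v1 and
# compare covolumes computed from Gram determinants.
B = np.array([[1.0, 0.3, 0.1],
              [0.0, 1.1, 0.2],
              [0.0, 0.0, 1.2]])           # columns v1, v2, v3
v1 = B[:, 0]

def covol(M):
    return np.sqrt(np.linalg.det(M.T @ M))  # covolume from the Gram matrix

proj = lambda v: v - (v @ v1) / (v1 @ v1) * v1   # pi_1 onto v1-perp
P = np.column_stack([proj(B[:, j]) for j in (1, 2)])

print(covol(B), np.linalg.norm(v1) * covol(P))   # the two values agree
\end{verbatim}
The determinant identity behind this check holds for any basis vector $v_1$; the hypothesis $\|v_1\|=\lambda_1(\Lambda)$ is needed only for the discreteness bound in part (1).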
\begin{theorem}\label{thmA5} Let $\Lambda$ be a lattice in $\mathbb R^d$. Then there exists a basis $v_1,v_2,\dots,v_d$ of $\Lambda$ such that $$\|v_1\|=\lambda_1(\Lambda),\ \|v_2\| \asymp_d \lambda_2(\Lambda),\ \dots,\ \|v_d\| \asymp_d \lambda_d(\Lambda).$$ Here $A \asymp_d B$ means that there exist positive constants $c_d,C_d$ depending only on $d$ such that $$c_d|A|\le |B| \le C_d|A|.$$ \end{theorem}

\begin{proof} We shall prove this by induction on $d$. The case $d=1$ is obvious. Assume the statement holds for lattices of rank at most $d-1$. For a rank $d$ lattice $\Lambda$ in $\mathbb{R}^d$, let $v_1$ be any nonzero vector in $\Lambda$ satisfying $\|v_1\|=\lambda_1(\Lambda)$, and let $\pi_1$ be the projection of $\mathbb R^d$ onto $v_1^{\perp}$, the hyperplane in $\mathbb R^d$ orthogonal to $v_1$, as in the previous lemma. Applying the induction hypothesis to the $(d-1)$-dimensional hyperplane $v_1^{\perp}$ and the rank $d-1$ lattice $\pi_1(\Lambda)$ contained in $v_1^{\perp}$ yields a basis $w_2,\dots,w_d$ of $\pi_1(\Lambda)$ with $$\|w_2\|= \lambda_1(\pi_1( \Lambda)),\ \|w_3\|\asymp_d \lambda_2(\pi_1( \Lambda)),\ \dots,\ \|w_d\| \asymp_d \lambda_{d-1}(\pi_1 (\Lambda)).$$ By the monotonicity of the $\lambda_j$'s, we know $$\|w_2\|\lesssim_d \cdots \lesssim_d \|w_d\|.$$ Our next step is to choose $v_2,\dots,v_d$ in $\Lambda$ as preimages of $w_2,\dots,w_d$ under $\pi_1$ such that $v_1,v_2,\dots, v_d$ form a basis of $\Lambda$. We start by choosing $v_2,\cdots,v_d$ to be any $d-1$ vectors in $\Lambda$ with $$\pi_1(v_j)=w_j, \quad 2\le j \le d.$$ It follows that $v_1,\cdots,v_d$ are $\mathbb{R}$-linearly independent and thus form an $\mathbb{R}$-linear basis of $\mathbb R^d$. For any $v \in \Lambda$,
\begin{align*} \pi_1(v) &=n_2w_2+\cdots+n_dw_d\\ &=n_2\pi_1(v_2)+\cdots+n_d\pi_1(v_d)\\ &=\pi_1(n_2v_2+\cdots+n_dv_d). \end{align*}
So $\pi_1[v-(n_2v_2+\dots+n_dv_d)]=0$ and $$v=n_2v_2+\dots+n_dv_d+tv_1,$$ for some $t\in \mathbb{R}$. But since $v \in \Lambda$, we have $tv_1 \in \Lambda$, and thus $t\in\mathbb{Z}$: otherwise $(t-\lfloor t \rfloor)v_1$ would be a nonzero vector of $\Lambda$ strictly shorter than $v_1$. Therefore, $\Lambda=\text{Span}_{\mathbb{Z}}\{v_1,\dots, v_d\}$; namely, $v_1,\dots,v_d$ indeed form a basis of $\Lambda$.

Observe that replacing each $v_j$ with $v_j+n_j v_1$, $n_j\in \mathbb{Z}$, does not change the fact that $v_1,\cdots,v_d$ form a basis of $\Lambda$. Since $v_j=w_j+t_jv_1$ for some $t_j\in \mathbb R$, by choosing the $n_j$ carefully we may assume $t_j\in [-\frac{1}{2},\frac{1}{2})$. It follows that for $2 \le j \le d$,
\begin{align*} \|w_j\|\le \|v_j\| &\le \|w_j\|+|t_j|\|v_1\| \\ &\le \|w_j\| +\frac{1}{2}\cdot \frac{2}{\sqrt 3} \|w_2\| \\ &\lesssim_d \Big(1+\frac{1}{\sqrt{3}}\Big)\|w_j\|, \end{align*}
where the second inequality follows from the previous lemma applied to $\pi_1(v_2)=w_2$ (so that $\|v_1\|\le \frac{2}{\sqrt 3}\|w_2\|$), and the last step uses $\|w_2\|\lesssim_d \|w_j\|$. So $\|v_j\|\asymp_d \|w_j\|$ for $j=2,\dots, d$.

Next, we observe that $\lambda_{j-1}(\pi_1(\Lambda)) \le \lambda_{j}(\Lambda)$. This is because if $v_1,v_2',\dots, v_{j}'$ represent the first $j$ successive minima vectors of $\Lambda$, then their projection images (discarding $\pi_1(v_1)=0$), $\pi_1(v_2'),\dots, \pi_1(v_{j}')$, are still linearly independent in $v_1^{\perp}$ and
\begin{equation*} \begin{cases} \|\pi_1(v_2')\|\le \lambda_{j}(\Lambda)\\ ~~~~~\vdots \\ \|\pi_1(v_{j}')\|\le \lambda_{j}(\Lambda)\\ \end{cases}\,, \end{equation*}
which implies $\lambda_{j-1}(\pi_1(\Lambda))\le \lambda_{j}(\Lambda)$. Therefore $$\|v_j\|\asymp_d \|w_j\| \asymp_d \lambda_{j-1}(\pi_1(\Lambda)) \le \lambda_{j}(\Lambda)$$ for $j=2,\dots,d$. On the other hand,
\begin{align*} \lambda_j(\Lambda) &\le \max\{\|v_1\|,\dots, \|v_j\| \}\\ &\lesssim_d \max\{\|w_2\|,\dots, \|w_j\| \} \\ &\lesssim_d \|w_j\|\\ &\asymp_d\lambda_{j-1}(\pi_1(\Lambda)). \end{align*}
Therefore $$\|v_j\|\asymp_d \lambda_j(\Lambda), \quad j=1,2,\dots, d.$$ This completes the induction. \end{proof}

\begin{remark}\label{Minkowski reduced basis} In practice, the basis in the theorem can be achieved via a Minkowski reduced basis.
A basis $\{b_1,\dots, b_d\}$ of a lattice $\Lambda \subset \mathbb{R}^d$ is called \textit{Minkowski reduced} if for each $1\le i \le d$, $b_i$ is the shortest nonzero vector in the lattice such that the $i$ linearly independent vectors $\{b_1,\dots,b_i\}$ can be extended to a basis of the lattice. See \cite{HELFRICH1985125} for an algorithm that produces a Minkowski reduced basis. Interestingly, it is still not known whether the construction of shortest vectors in a lattice with respect to the $l^2$ norm is NP-hard or not (the answer is affirmative for the $l^{\infty}$-norm \cite{1981Another}; moreover, the $l^2$ case is proved to be NP-hard for randomized algorithms in \cite{Aj98}). \end{remark}

\vspace{5mm}

As another corollary to our Lemma \ref{lma4}, we can prove the classical Minkowski Second Convex Body Theorem:

\begin{theorem}[Minkowski's Second Convex Body Theorem, 1896 \cite{MI1896}]\label{Min2} Let $\Lambda\subset \mathbb{R}^d$ be a lattice and let $\lambda_k(\Lambda)$ denote the $k$-th successive minimum of $\Lambda$. Then $$\lambda_1(\Lambda)\cdots \lambda_d(\Lambda)\asymp_d \text{covol}(\Lambda).$$ \end{theorem}

\begin{proof} As in the previous proof, we proceed by induction. The case $d=1$ is obvious. Assume the statement holds for lattices of rank at most $d-1$. Let $v_1$ be any nonzero vector in $\Lambda$ satisfying $\|v_1\|=\lambda_1(\Lambda)$ and let $\pi_1$ be the projection of $\mathbb R^d$ onto $v_1^{\perp}$, the hyperplane in $\mathbb R^d$ orthogonal to $v_1$, as in the previous lemma. Applying Theorem \ref{thmA5} to the $(d-1)$-dimensional hyperplane $v_1^{\perp}$ and the rank $d-1$ lattice $\pi_1(\Lambda)$ contained in $v_1^{\perp}$ yields a basis $w_2,\dots,w_d$ of $\pi_1(\Lambda)$ with $$\|w_2\|= \lambda_1(\pi_1( \Lambda)),\ \|w_3\|\asymp_d \lambda_2(\pi_1( \Lambda)),\ \dots,\ \|w_d\| \asymp_d \lambda_{d-1}(\pi_1 (\Lambda)).$$ By the induction hypothesis, $$\|w_2\|\cdots \|w_d\| \asymp_d \lambda_1(\pi_1(\Lambda))\cdots \lambda_{d-1}(\pi_1(\Lambda))\asymp_d \text{covol}(\pi_1(\Lambda)).$$ On the other hand, from the proof of Theorem \ref{thmA5}, we know $$\|w_j\| \asymp_d \|v_j\|\asymp_d \lambda_j(\Lambda), \quad j=2,\dots, d.$$ Since $\|v_1\|=\lambda_1(\Lambda)$ by construction, Lemma \ref{lma4} (2) gives $$\text{covol}(\Lambda)=\text{covol}(\pi_1(\Lambda))\cdot \|v_1\|\asymp_d \lambda_1(\Lambda)\cdots \lambda_d(\Lambda).$$ \end{proof}
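The theorem is also easy to test on examples. The sketch below (Python with NumPy; brute-force enumeration over a small coefficient box, adequate only for this toy planar example) compares $\lambda_1\lambda_2$ with $\text{covol}(\Lambda)$:
\begin{verbatim}
import numpy as np
from itertools import product

# Minkowski's second theorem, numerically: for a sample 2D lattice the
# ratio (lambda_1 * lambda_2) / covol(Lambda) is a constant of moderate
# size, bounded above and below by dimension-dependent constants.
B = np.array([[2.0, 0.9], [0.0, 0.5]])           # basis matrix, columns
vecs = [B @ np.array(c)
        for c in product(range(-6, 7), repeat=2) if any(c)]
vecs.sort(key=np.linalg.norm)
lam1 = np.linalg.norm(vecs[0])
lam2 = next(np.linalg.norm(v) for v in vecs
            if abs(np.linalg.det(np.column_stack([vecs[0], v]))) > 1e-12)
covol = abs(np.linalg.det(B))
print(lam1 * lam2 / covol)
\end{verbatim}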
\subsection{Continuity of successive minima} Next, we study the continuity of the successive minima on the space of lattices.

\begin{lemma}\label{bound} Let $b \in \text{SL} (d,\mathbb{R})$ and let $\|b\|_{op}$ denote the operator norm of $b$. Then for every $i=1,2,\dots, d$ and every unimodular lattice $\Lambda$ we have the inequality
\begin{equation} \lambda_i(b\Lambda) \le \|b\|_{op} \lambda_i(\Lambda). \end{equation}
\end{lemma}

\begin{proof} For $i=1,2,\dots, d$, let $v_1,\dots, v_i$ denote $i$ linearly independent vectors in $\Lambda$ such that $\|v_j\|=\lambda_j(\Lambda)$ for $j=1,\dots,i$. Consider the vectors $bv_1, \dots, bv_i$. Since $b\in \text{SL}(d,\mathbb{R})$, the vectors $bv_1, \dots, bv_i$ are again linearly independent. From $\|bv_j\|\le \|b\|_{op} \|v_j\|$ it follows that $bv_1, \dots, bv_i$ are contained in a ball of radius $\|b\|_{op} \lambda_i(\Lambda)$, whence $\lambda_i(b\Lambda) \le \|b\|_{op} \lambda_i(\Lambda)$. \end{proof}

\begin{theorem}\label{thm:cts} The functions $\lambda_i(\cdot)$, $i=1,2,\cdots, d$, are continuous on the space of unimodular lattices $\mathcal{L}$. \end{theorem}

\begin{proof} We may identify $\mathcal{L}$ with the homogeneous space $G/\Gamma := \text{SL}(d,\mathbb{R})/ \text{SL}(d,\mathbb{Z})$. By Lemma \ref{bound}, we have for any $b,c\in \text{SL}(d,\mathbb{R})$,
\begin{equation}\label{eq:ineq} \frac{1}{\|b\|_{op}}\lambda_i(b\Lambda) \le \lambda_i(\Lambda) \le \|c\|_{op} \lambda_i(c^{-1}\Lambda). \end{equation}
For any $\Lambda \in \mathcal{L}$, we may write $\Lambda=g\mathbb{Z}^d$ for some $g\in \text{SL}(d,\mathbb{R})$, identified with $g\Gamma$. Consider any convergent sequence of lattices
\begin{equation}\label{eq:conv} g_j\Gamma \to g\Gamma, \quad j\to \infty, \end{equation}
which is equivalent to the convergence $g^{-1}g_j\Gamma \to \Gamma$. Let $d(\cdot,\cdot)$ denote any right-invariant metric on $G$ and define a metric $d'$ on $G/\Gamma$ by $$d'(g\Gamma,h\Gamma):=\inf_{\gamma_1,\gamma_2 \in \Gamma} d(g\gamma_1, h \gamma_2).$$ For each $j$, we may choose $\gamma_j \in \Gamma$ as the element closest to $g^{-1}g_j$, namely $$d(g^{-1}g_j,\gamma_j)=\min_{\gamma\in \Gamma} d(g^{-1}g_j,\gamma)=d'(g^{-1}g_j\Gamma,\Gamma).$$ It follows that the condition $d'(g_j\Gamma, g\Gamma) \to 0$ is equivalent to $d(g^{-1}g_j, \gamma_j)\to 0$. Therefore, replacing the representative $g_j$ of $g_j\Gamma$ with $g_j\gamma_j^{-1}$, we may assume $g_j\to g$ in \eqref{eq:conv}. Now taking $b=g_jg^{-1}$ and $c=b^{-1}$ in the inequality \eqref{eq:ineq}, we obtain $$\lambda_i(g_j\mathbb{Z}^d)=\lambda_i(g_jg^{-1}\Lambda) \to \lambda_i(\Lambda),$$ and therefore $\lambda_i$ is continuous on $G/\Gamma$. \end{proof}

\subsection{Successive minima and the dual lattice} Recall that for a lattice $\Lambda\subset \mathbb{R}^d$ with basis $\{\normalfont \textbf{b}_1,\cdots,\normalfont \textbf{b}_d\}$, since $\normalfont \textbf{b}_1,\cdots,\normalfont \textbf{b}_d$ are linearly independent, we know from linear algebra that there exist vectors $\normalfont \textbf{b}_1^*, \cdots, \normalfont \textbf{b}_d^*$, called the \textit{dual vectors} of $\normalfont \textbf{b}_1,\cdots,\normalfont \textbf{b}_d$, such that $$\langle \normalfont \textbf{b}_i,\normalfont \textbf{b}_j^*\rangle= \begin{cases} 0 & i\ne j\\ 1 & i=j \end{cases}. $$

\vspace{5mm}

The $\mathbb{Z}$-span of the dual basis vectors, namely $\Lambda^*:=\text{Span}_{\mathbb{Z}}\{\normalfont \textbf{b}_1^*, \cdots, \normalfont \textbf{b}_d^*\}$, is called the \textit{dual (or polar, or reciprocal) lattice} of the lattice $\Lambda$. \label{dual lattice} Although defined through a basis, it turns out that the dual lattice is independent of the choice of basis of the original lattice.
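Computationally, if the basis vectors are the columns of a matrix $B$, then the dual basis vectors are the columns of ${}^tB^{-1}$. A short sketch (Python with NumPy; the matrices are chosen arbitrarily) checks the defining pairings and the identity $(T\Lambda)^*=T^*\Lambda^*$ established below:
\begin{verbatim}
import numpy as np

# Dual basis via B* = (B^{-1})^T: the columns of Bd pair integrally with
# the columns of B, illustrating <b_i, b_j*> = delta_ij.
B = np.array([[2.0, 1.0], [0.0, 3.0]])   # columns b_1, b_2
Bd = np.linalg.inv(B).T                  # columns b_1*, b_2*
print(np.round(B.T @ Bd, 12))            # identity matrix

# Functorial property (T Lambda)* = T* Lambda* with T* = (T^{-1})^T:
T = np.array([[1.0, 2.0], [0.0, 1.0]])
lhs = np.linalg.inv(T @ B).T             # dual basis of T.Lambda
rhs = np.linalg.inv(T).T @ Bd            # T* applied to the dual basis
print(np.allclose(lhs, rhs))             # True
\end{verbatim}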
\begin{proposition} The dual lattice $\Lambda^*$ consists of all vectors $\normalfont \textbf{b}^* \in \mathbb{R}^d$ such that $\langle \normalfont \textbf{b}^*,\normalfont \textbf{b} \rangle$ is an integer for all $\normalfont \textbf{b}$ in $\Lambda$. As a consequence, $\Lambda^*$ does not depend on the choice of basis of $\Lambda$. \end{proposition}

\begin{proof} Let $\normalfont \textbf{b}_1,\cdots,\normalfont \textbf{b}_d$ be a basis of the lattice $\Lambda$ with dual vectors $\normalfont \textbf{b}_1^*,\cdots,\normalfont \textbf{b}_d^*$. For any $\normalfont \textbf{b} \in \Lambda$ and any $\normalfont \textbf{c} \in \Lambda^*$, suppose $$\normalfont \textbf{b}=s_1\normalfont \textbf{b}_1+\cdots+s_d\normalfont \textbf{b}_d, \text{ and } \normalfont \textbf{c}=t_1\normalfont \textbf{b}_1^*+\cdots+t_d\normalfont \textbf{b}_d^*$$ with integer coefficients $s_i,t_i\in \mathbb{Z}$ for $i=1,2,\cdots, d$. We have immediately that $$\langle \normalfont \textbf{b},\normalfont \textbf{c} \rangle = s_1t_1+\cdots+s_d t_d\in \mathbb{Z}.$$ On the other hand, if $\normalfont \textbf{b}^*= u_1\normalfont \textbf{b}_1^*+\cdots+u_d\normalfont \textbf{b}_d^* \in \mathbb{R}^d$, where $u_i \in \mathbb{R}$, satisfies $\langle \normalfont \textbf{b}^*,\normalfont \textbf{b} \rangle \in \mathbb{Z}$ for all $\normalfont \textbf{b} \in \Lambda$, then in particular this holds for $\normalfont \textbf{b}=\normalfont \textbf{b}_i$ for each $i=1,2,\cdots,d$, and thus $$u_i=\langle \normalfont \textbf{b}^*,\normalfont \textbf{b}_i \rangle \in \mathbb{Z}. $$ Therefore $\normalfont \textbf{b}^* \in \Lambda^*$. \end{proof}

The dual lattice operation commutes nicely with invertible linear transformations of $\mathbb{R}^d$:

\begin{proposition}\label{dual is same as transpose inverse} Let $\Lambda$ be a lattice in $\mathbb{R}^d$ and let $T:\mathbb{R}^d \to \mathbb{R}^d$ be an invertible linear transformation. Then we have $$(T\Lambda)^*=T^*\Lambda^*,$$ where $T^*={}^tT^{-1}$ is the inverse of the transpose of $T$ and $\Lambda^*$ is the dual lattice of $\Lambda$. \end{proposition}

\begin{proof} If $\normalfont \textbf{b}_1,\cdots, \normalfont \textbf{b}_d$ is a basis of $\Lambda$, then $T\normalfont \textbf{b}_1,\cdots,T\normalfont \textbf{b}_d$ is a basis of $T\Lambda$. The corresponding dual basis $$(T\normalfont \textbf{b}_1)^*,\cdots,(T\normalfont \textbf{b}_d)^*$$ satisfies
\begin{equation*} \begin{bmatrix} {}^t(T\normalfont \textbf{b}_1)^* \\ \vdots \\ {}^t(T\normalfont \textbf{b}_d)^* \end{bmatrix} \begin{bmatrix} T\normalfont \textbf{b}_1 & \cdots & T\normalfont \textbf{b}_d \end{bmatrix} =I_d. \end{equation*}
But on the other hand,
\begin{equation*} \begin{bmatrix} {}^t({}^tT^{-1}\normalfont \textbf{b}_1^*) \\ \vdots \\ {}^t({}^t T^{-1}\normalfont \textbf{b}_d^*) \end{bmatrix} \begin{bmatrix} T\normalfont \textbf{b}_1 & \cdots & T\normalfont \textbf{b}_d \end{bmatrix} =I_d. \end{equation*}
So, by the uniqueness of the inverse matrix, $(T\normalfont \textbf{b}_i)^*={}^t T^{-1}\normalfont \textbf{b}_i^*=T^*\normalfont \textbf{b}_i^*$ for every $i=1,2,\cdots,d$. Therefore $(T\Lambda)^*=T^*\Lambda^*$. \end{proof}

The following theorem relates the successive minima of a lattice to those of its dual:

\begin{theorem}[\cite{CA97}, Chapter VIII, Theorem VI]\label{sucessive minima of dual lattice} Let $\Lambda$ be a lattice in $\mathbb{R}^d$ with successive minima $\lambda_1,\cdots,\lambda_d$, and let $\Lambda^*$ be its dual. Then $$1\le \lambda_r(\Lambda) \lambda_{d+1-r} (\Lambda^*) \le d!$$ for every $r=1,2,\cdots, d$. \end{theorem}
Now let us return to our proof. For the flow of lattices $g_tu_A\mathbb{Z}^d$, the proposition above (together with $(\mathbb{Z}^d)^*=\mathbb{Z}^d$) gives its dual as
\begin{align*} (g_tu_A\mathbb{Z}^d)^*= & g_t^* u_A^*(\mathbb{Z}^d)^* \\ = & {}^t g_t^{-1} \cdot {}^t u_A^{-1} \mathbb{Z}^d\\ = & {}^t\begin{bmatrix}e^{t/m}I_m & 0 \\0 & e^{-t/n}I_n \end{bmatrix}^{-1}\cdot {}^t\begin{bmatrix}I_m & A \\0 & I_n \end{bmatrix}^{-1}\mathbb{Z}^d\\ = & \begin{bmatrix}e^{-t/m}I_m & 0 \\0 & e^{t/n}I_n \end{bmatrix}\cdot \begin{bmatrix}I_m & 0 \\-{}^tA & I_n \end{bmatrix}\mathbb{Z}^d. \end{align*}

\section{The measure-theoretical distribution of $\lambda_i(\Lambda)$ in the space of unimodular lattices}

\subsection{Haar measure on the space of unimodular lattices} In this subsection we recall a few definitions and results on Siegel sets and the probability Haar measure on the space of unimodular lattices, identified with $G/\Gamma:=\text{SL}(d,\mathbb{R})/ \text{SL}(d,\mathbb{Z})$.

\vspace{3mm}

The main references for the following are \cite{BM00}, Chapter V, and \cite{Fo15}, Section 2.6. Let $K:=\text{SO}(d,\mathbb{R})$, let $$A:=\{\text{diag}(a_1,\dots,a_d):a_1\cdots a_d=1,\ a_i>0,\ \forall i=1,2,\dots, d\}$$ be the subgroup of diagonal matrices of $\text{SL}(d,\mathbb{R})$ with positive entries, and let $$N:=\{(n_{ij})\in \text{SL}(d,\mathbb{R}): n_{ii}=1,\ n_{ij}=0\ \forall i>j\}$$ be the subgroup of upper triangular unipotent matrices in $G$. We have:

\begin{theorem}[Iwasawa Decomposition]\label{iwasawa decomposition} The product map $$K\times A \times N \to G,\ (k,a,n) \mapsto kan$$ is a homeomorphism. \end{theorem}

\begin{definition}[Siegel Sets in $\text{SL}(d,\mathbb{R})$]~\\ A Siegel set in $\text{SL}(d,\mathbb{R})$ is a set $\Sigma_{t,u}$ of the form $$\Sigma_{t,u}:=K A_t N_u,$$ where $t,u>0$ and $A_t$ and $N_u$ are given by $$A_t:=\left \{\text{diag}(a_1,\dots,a_d) \in A: \frac{a_i}{a_{i+1}}\le t,\ i=1,2,\dots,d-1 \right\}$$ and $$N_u:=\left \{(n_{ij}) \in N: |n_{ij}|\le u,\ \forall i<j \right\}.$$ \end{definition}

It turns out that a suitable $\Sigma_{t,u}$ covers a fundamental domain of the action of $\Gamma:=\text{SL}(d,\mathbb{Z})$ on $G:=\text{SL}(d,\mathbb{R})$:

\begin{theorem}\label{siegel sets cover the fundamental domain} For $t\ge \frac{2}{\sqrt 3}$ and $u\ge \frac{1}{2}$, we have $G=\Sigma_{t,u} \Gamma$. As a result, $\Sigma_{t,u}$ contains a fundamental domain of $G/\Gamma$. \end{theorem}

Another important fact about Siegel sets is that a Siegel set intersects only finitely many of its $\Gamma$-translates:

\begin{theorem}\label{finiteness of nonempty intersections} Fix $t$ and $u$. Then for all but finitely many $\gamma \in \Gamma$, we have $$\Sigma_{t,u} \gamma \cap \Sigma_{t,u} = \varnothing.$$ In particular, for any fundamental domain $F\subset\Sigma_{t,u}$ of $G/\Gamma$, all but finitely many $\gamma \in \Gamma$ satisfy $$\Sigma_{t,u} \cap F\gamma = \varnothing.$$ \end{theorem}
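In practice, the Iwasawa coordinates of a matrix, and hence membership in a Siegel set, can be computed with a QR decomposition. A short sketch (Python with NumPy; the sign normalization is needed because \texttt{numpy.linalg.qr} does not fix the signs of the diagonal of $R$):
\begin{verbatim}
import numpy as np

# Iwasawa coordinates via QR: any g in SL(d,R) factors as g = k a n with
# k in SO(d), a positive diagonal, and n unipotent upper triangular.
def iwasawa(g):
    q, r = np.linalg.qr(g)
    s = np.sign(np.diag(r))       # make the diagonal of r positive
    k, r = q * s, (s[:, None]) * r
    a = np.diag(np.diag(r))
    n = np.linalg.inv(a) @ r      # unit upper triangular
    return k, a, n

g = np.array([[1.0, 2.0], [0.5, 2.0]])  # det = 1, so g is in SL(2,R)
k, a, n = iwasawa(g)
print(np.allclose(k @ a @ n, g))        # True: g = k a n
print(np.diag(a))                       # check a_1/a_2 <= t for A_t
\end{verbatim}
Checking $|n_{ij}|\le u$ on the off-diagonal entries of \texttt{n} then decides membership in $\Sigma_{t,u}$.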
Hence, $$d(a'a)=\frac{d(a_1'a_1)}{a_1'a_1}\cdots \frac{d(a_{d-1}'a_{d-1})}{a_{d-1}'a_{d-1}}=da.$$ \end{proof} \begin{proposition} $dn:=\prod_{i<j}dn_{ij}$, with the right hand side identified with the standard Lebesgue measure on $\mathbb{R}^{d(d-1)/2}$, is a bi-invariant Haar measure on $N$. \end{proposition} \begin{proof} For $n'=(n'_{ij}) \in N$, the $(i,j)$-th entry of $(n_{ij}')(n_{ij})$ is $$n_{ij}+(n_{i,i+1}'n_{i+1,j}+\cdots+n_{i,j-1}'n_{j-1,j})+n_{ij}',$$ whose partial derivative w.r.t. $n_{ij}$ is $1$. So by the $\frac{d(d-1)}{2}$-dimensional change of variable formula with Jacobian the identity matrix, we obtain the left invariance $d(n'n)=dn$. The right invariance is similar. \end{proof} \vspace{5mm} \begin{proposition} $\rho(a)dadn$ is a right invariant Haar measure on $B$, where the coefficient $\rho(a):=\prod_{i<j}\frac{a_i}{a_j}$. \end{proposition} \begin{proof} For $a'n',an\in N\rtimes_c A=:B$, and for any continuous function $f$ with compact support on $AN$, identified with $\mathbb{R}^{d-1}\times \mathbb{R}^{d(d-1)/2}$ via the previous propositions, \begin{align*} \int_A \int_N f(ana'n') \rho(a)dadn =& \int_A \int_N f(aa'a'^{-1}na'n')\rho(a)dadn \\ =& \int_A \int_N f(aa'(a'^{-1}na')n')\rho(a)dadn. \\ \end{align*} Making a change of variable $n\mapsto a'na'^{-1}$, whose Jacobian can be easily computed as $\rho(a')=\prod_{i<j}\frac{a_i'}{a_j'}$, this is equal to $$\int_N \int_A f(aa'nn') \rho(a)d(a'na'^{-1}) da=\int_N \int_A f(aa'nn') \rho(a)\rho(a')dnda.$$ Making change of variables $a\mapsto aa'^{-1}$ and then $n\to nn'^{-1}$ and noticing that $da, dn$ are bi-invariant and that $\rho$ is a group character, the above is equal to \begin{align*} \int_N \int_A f(ann') \rho(aa'^{-1})\rho(a')dnda =& \int_N \int_A f(an) \rho(a)dnda\\ =& \int_A \int_N f(an) \rho(a)dadn. \end{align*} This proves the right invariance of the measure $\rho(a)dadn$ on $B$. \end{proof} \begin{theorem}\label{decomposition of haar measure on G} Let $dk$ denote a (finite) Haar measure on $K$. If we identify $G=\text{SL}(d,\mathbb{R})$ with $KB=KAN$ via the Iwasawa decomposition (Theorem \ref{iwasawa decomposition}), then $\rho(a)dkdadn$ gives a bi-invariant Haar measure on $G$. \end{theorem} Now we define the Haar measure on $G/\Gamma$: \begin{tad}[Haar meaure on $G/\Gamma$] Let $F$ be any compactly supported continuous function on $G/\Gamma$, then there exists a compacted supported continuous function $f$ on $G$ such that $$F(g\Gamma):=\sum_{\gamma \in \Gamma}f(g\gamma).$$ Define \begin{equation}\label{definition of haar measure on Ga} \int_X F(g\Gamma)d(g\Gamma):=\int_G f(g)dg. \end{equation} The right hand side $\int_G f(g)dg$ is independent of the choice of $f$ by unfolding the integral using the quotient integral formula (Theorem 2.51 in \cite{Fo15}). Therefore by the theory of Radon measures on locally compact Hausdorff spaces (\cite{Fo07} Chapter 7), the equation \ref{definition of haar measure on Ga} defines a left $G$-invariant (and thus bi-invariant by the unimodularity) Haar measure on $G/\Gamma$. \end{tad} For the Haar measure on $K=\text{SO}(d,\mathbb{R})$ and the scaling, since the map $$\text{SO}(d,\mathbb{R}) \to S^{d-1}, g\mapsto ge_1$$ has $$\text{Stab}_{e_1}(G)= \begin{bmatrix} 1 & \mathbb{R}^{1\times d-1} \\ 0 & \text{SO}(d-1,\mathbb{R}) \end{bmatrix},$$ we have the identification $\text{SO}(d,\mathbb{R})/\text{SO}(d-1,\mathbb{R})\cong S^{d-1}$. 
We use this identification and induction to stipulate: \begin{equation}\label{volume of special linear group} \text{Vol}(K)=\mu_K(\text{SO}(d,\mathbb{R})):=\prod_{i=1}^{d-1} \text{Vol}(S^i) = \prod_{i=1}^{d-1} \frac{\pi^{\frac{i}{2}}}{\Gamma(\frac{i}{2}+1)}. \end{equation} \vspace{5mm} \begin{theorem} Every Siegel set $\Sigma_{t,u} \in \text{SL}(d.\mathbb{R})$ has finite Haar measure in $G$ and it follows from Theorem \ref{siegel sets cover the fundamental domain} that the Haar measure defined above is finite. Therefore $SL(d,\mathbb{Z})$ is a lattice in $\text{SL}(d,\mathbb{R})$. \end{theorem} \subsection{Distribution function associated to sucesssive minima and estimates} \begin{proposition} For a rank $d$ unimodular lattice $\Lambda \in \mathcal{L}$, let $\lambda_i(\Lambda)$ denote its $i$-th successive minima ($1 \le i \le d$). For any $\delta>0$, we have $$\mu(\{\Lambda \in \mathcal{L}:\lambda_i(\Lambda)=\delta \})=0,$$ where $\mu$ is the Haar measure defined on the space of unimodular lattices. \end{proposition} \begin{proof} Indeed, the set $\{\Lambda \in \mathcal{L}:\lambda_i(\Lambda)=\delta \}$ is contained in $$S_{\delta}:=\{\Lambda \in \mathcal{L}:\text{there exists a vector } v\in \Lambda \text{ with } \|v\|=\delta \}.$$ Noticing that any unimodular lattice can be written as $g\mathbb{Z}^d$ for some $g\in \text{SL}(d,\mathbb{R})$ and the local identification between Haar measure on $G=\text{SL}(d,\mathbb{R})$ and $G/\Gamma =\text{SL}(d,\mathbb{R})/\text{SL}(d,\mathbb{Z})$, we shall look at the set \begin{align*} T_{\delta}:= &\{g \in G:\text{there exists a vector } v\in \mathbb{Z}^d \text{ with } \|gv\|=\delta \}\\ = & \cup_{v\in \mathbb{Z}^d} \{g \in G: \|gv\|=\delta \}. \end{align*} This is a countable union and each member in the union is a submanifold of $G$ with lower dimension and hence of zero Haar measure. \end{proof} For $x\ge 0$, it would be interesting to give an estimate for the distribution function \begin{align*} \Phi_i(\delta):= \mu(\{\Lambda \in \mathcal{L}: \lambda_i(\Lambda) < \delta \})= \mu(\{\Lambda \in \mathcal{L}: \lambda_i(\Lambda) \le \delta \}) \end{align*} For $i=1$, Kleinbock and Margulis gave both lower and upper bounds for $\Phi_1(x)$ \cite{Kleinbock1999LogarithmLF} using a generalized Siegel's formula: \begin{theorem}[\cite{Kleinbock1999LogarithmLF}, Proposition 7.1] There exists $C_d, C_d'$ such that \begin{equation*} C_d \delta^d - C_d' \delta^{2d} \le \Phi_1(\delta) \le C_d \delta^d, \end{equation*} for $\delta << 1$. \end{theorem} The main result in this second is a generalization of the above result to $\lambda_i$: \begin{theorem}\label{di distance linear} For $1\le i < d$, there exists $C_d$ and $C_d'$ such that \begin{equation*} C_d \delta^{di} - o(\delta^{di}) \le \Phi_i(\delta):=\mu(\{\Lambda \in \mathcal{L}: \lambda_i(\Lambda) \le \delta \}) \le C_d' \delta^{di}, \end{equation*} for all $\delta << 1$. \end{theorem} \begin{corollary}\label{non-dynamical logarithm law} For $1 \le i\le d-1$, we have \begin{equation*} \lim_{t\to \infty}\frac{-\log\mu \left(\left\{\Lambda \in \mathcal{L}: \lambda_i(\Lambda) \le e^{-t} \right\}\right)}{t} = di \end{equation*} \end{corollary} \begin{proof} Take $\delta=e^{-t}$. \end{proof} \vspace{5mm} For the proof we will use a generalized version of Siegel's mean value formula in geometry of numbers: For a lattice $\Lambda$ in $\mathbb{R}^d$, let $P(\Lambda)$ denote the set of primitive vectors in $\Lambda$, i.e. those vectors that are not a proper integer multiple of any other element in $\Lambda$. 
Given a real-valued function $f$ on $\mathbb{R}^d$, we define a function $\hat{f}$ on the homogeneous space $X=G/\Gamma:=\text{SL}(d,\mathbb{R})/\text{SL}(d,\mathbb{Z})$ by $$\hat{f}:=\sum_{v\in P(\Lambda)} f(v)$$ \begin{theorem}[Classical Siegel's Formula \cite{SI45}] For any $f \in L^1(\mathbb R^d)$, one has $$\int_{X} \hat{f} d\mu = c_d \int_{\mathbb{R}^d} f dv,$$ where $c_d=\frac{1}{\zeta(d)}:=\frac{1}{\sum_{n=1}^{\infty} \frac{1}{n^d}}$. \end{theorem} Below is a generalization of classical Siegel's formula. First let us recall the notion of primitive tuple from geometry of numbers: \begin{definition} For $1\le k \le d$, we say that an ordered $k$-tuple of vectors $(v_1,\dots,v_k) \in \underbrace{\Lambda \times \cdots \times \Lambda}_{d\text{-times}}$ for a lattice $\Lambda \subset \mathbb{R}^d$ is primitive if it is extendable to a basis of $\Lambda$, and denote by $P^k(\Lambda)$ the set of all such $k$-tuples. Note that $P^1(\Lambda)=P(\Lambda)$ above \footnote{Any primitive vector in a lattice can be extended to a basis of the lattice. This follows from the general fact that any element of a free abelian group which is not divisible by any integer bigger than $1$ can be extended to a basis of the abelian group}. \end{definition} Now for a function $f\in \mathbb{R}^{dk}=(\mathbb{R}^d)^k$, we define correspondingly \begin{equation*} \hat{f}^k(\Lambda):=\sum_{(v_1,\dots,v_k)\in P^k(\Lambda)} f(v_1,\dots,v_k). \end{equation*} Here the superscript on $\hat{f}^k$ should not be confused with the composition (power) of a function. Then we have a generalized Siegel's Formula for primitive tuples which will be helpful for us to estimate the distribution for $\lambda_i(\Lambda)$. \begin{theorem}[Generalized Siegel's Formula for primitive tuples]\label{generalizedsiegel} For $1\le k < d$ and $\phi \in L^1(\mathbb{R}^{dk})$, we have \begin{equation} \int_{X} \hat{f}^k d\mu = c_{k,d} \int_{\mathbb{R}^{dk}} f dv_1 \cdots dv_k, \end{equation} where $c_{d,k}=\frac{1}{\zeta(d)\cdots\zeta(d-k+1)}$. \end{theorem} \begin{proof} Let $\{e_1,\dots,e_d\}$ be the canonimcal basis of $\mathbb{R}^d$. For $G=\text{SL}(d,\mathbb{R})$ and $\Gamma=\text{SL}(d,\mathbb{Z})$ and the $k$-tuple $(e_1,\dots,e_k)$, be , let \begin{align*} G_k:&=\{g\in G: g.e_i=e_i,\forall 1 \le i \le k\}, \\ \Gamma_k:&=\{g\in \Gamma: g.e_i=e_i,\forall 1 \le i \le k\}. \end{align*} be the stabilizer subgroup of $(e_1,\dots,e_k)$ in $G$ and $\Gamma$, respectively. Now consider the subset $$L:=\{(v_1,\dots,v_k) \in \mathbb{R}^{dk}: v_1,\dots,v_k \text{ are linearly independent vectors in }\mathbb{R}^d \}$$ \begin{claim} L is open dense in $\mathbb{R}^{dk}$ and in particular $\mathbb{R}^{dk}-L$ is of Lebesgue measure zero. \end{claim} \begin{proof}[Proof of claim ]\renewcommand{\qedsymbol}{\ensuremath{\#}} That $L$ is open follows from the condition that linear independence implies that $[v_1,\dots,v_k]$ is a full-rank matrix (there exists at least one $k\times k$ submatrix with determinant zero). To see it is dense, we observe that this is equivalent to proving that the set of full-rank $d\times k$ matrices, denoted $F$, is dense in $M(d\times k,\mathbb{R})$. Noticing that its complement $F^c$ is contained in some subvariety (of stricly lower dimension) $$\{A\in M(d\times k,\mathbb{R}): \det(A_{k\times k})=0\},$$ for some $k\times k$ submatrix of $A$. Therefore $F$ must be dense in $\mathbb{R}^{dk}$. \end{proof} \begin{claim} $L$ is equal to the $G$-orbit of the $k$-tuple $(e_1,\dots,e_k)$ in $\mathbb{R}^{dk}$. 
\end{claim} \begin{proof}[Proof of claim ]\renewcommand{\qedsymbol}{\ensuremath{\#}} Indeed, for any $g\in G=\text{SL}(d,\mathbb{R})$, the tuple $(g.e_1,\dots,g.e_k)$ corresponds to the first $k$ columns of the matrix $g$. However, any $k$ linearly independent vectors $v_1,\cdots, v_k$ in $\mathbb{R}^d$ ($k<d$) can be completed to a $d\times d$ matrix of determinant $1$ (by adding diagonal entries to the last $d-k$ columns). \end{proof} Now consider the map \begin{align*} \phi_G: G \to L\subset \mathbb{R}^{dk}, g\mapsto (g.e_1,\dots,g.e_k) \end{align*} By the Orbit-Stabilizer theorem, we have the identification of homogeneous spaces $\phi_G': G/G_k~\tilde{\longrightarrow}~L$. Since $L$ is open dense in $\mathbb{R}^{dk}$ and the Lebesgue measure on $\mathbb{R}^{dk}$ (viewed as a product (Lebesgue) measure on $\underbrace{\mathbb{R}^{d} \times \cdots \times \mathbb{R}^{d}}_{k\text{-times}}$) is invariant under $G=\text{SL}(d,\mathbb{R})$. The pull-back of the Lebesgue measure on $\mathbb{R}^{dk}$ gives a (unique up to scalar multiple) $G$-invariant Haar measure $\mu_{G/G_k}$ on $G/G_k$ (uniqueness of Haar measure on $G/G_k$ follows from the unimodularity of $G$). \begin{claim} $P^k(\mathbb{Z}^d)$ is equal to the $\Gamma$-orbit of the $k$-tuple $(e_1,\dots,e_k)$ in $\mathbb{R}^{dk}$. \end{claim} \begin{proof}[Proof of claim ]\renewcommand{\qedsymbol}{\ensuremath{\#}} Let $(q_1,\dots,q_k)$ be any $k$-tuple of integer vectors in $\mathbb{Z}^d$ that are extendable to a basis $\{q_1,\dots,q_d \}$ of $\mathbb{Z}^d$, and as a basis we have $\det[q_1 \dots q_d ]=1$ (up to adjusting the sign of the last column $q_d$) and hence $(q_1,\dots,q_k)$ lies in the $\Gamma$-orbit of the $k$-tuple $(e_1,\dots,e_k)$. On the other hand, for any $(g.e_1,\dots,g.e_k)$, where $g\in \Gamma=\text{SL}(d,\mathbb{Z})$, clearly $\{g.e_1,\dots,g.e_k\}$ can be completed to a basis $\{g.e_1 \dots ,g.e_d\}$ of $\mathbb{Z}^d$. \end{proof} It follows that the map \begin{equation*} \phi_{\Gamma}: \Gamma \to P^k(\mathbb{Z}^d) \subset \mathbb{R}^{dk}, \gamma \mapsto (\gamma.e_1,\dots,\gamma.e_k) \end{equation*} gives the identification of $\Gamma$-homogeneous spaces $\phi_{\Gamma}: \Gamma/\Gamma_k ~\tilde{\longrightarrow}~P^k(\mathbb{Z}^d)$, under which the counting measure on $P^k(\mathbb{Z}^d)$ (clearly $\Gamma$-invariant) can be pulled back to a (unique up to scalar multiples) $\Gamma$-invariant Haar measure on $\Gamma/\Gamma_k$, denoted $\mu_{\Gamma/\Gamma_k}$. Note that the summation over $P^k(\mathbb{Z}^d)$ is equal to the integration with respect to the counting measure $\mu_{\Gamma/\Gamma_k}$ over $\Gamma/\Gamma_k$. By the Lemma 1.6, Chapter I in \cite{RA72}, the invariant measures on $G/\Gamma$ and $\Gamma/\Gamma_k$ give an invariant measure on $\Gamma/\Gamma_k$ and we have the quotient integral formula \begin{align*} &\int_{G/\Gamma} \int_{\Gamma/\Gamma_k} \varphi(gl\Gamma_k)~ d\mu_{\Gamma/\Gamma_k}(l\Gamma) d\mu(g)\\ =&\int_{G/\Gamma_k}\varphi(g\Gamma_k)~ d\mu_{G/\Gamma_k}(g\Gamma)\\ =&\int_{G/G_k} \int_{G_k/\Gamma_k} \varphi(gg_k\Gamma_k)~ d\mu_{G_k/\Gamma_k}(g_k\Gamma_k) d\mu_{G/G_k}(gG_k) \end{align*} for any $\varphi \in L^1(G/\Gamma_k)$. Now for any $f\in L^1(\mathbb{R}^{dk})$, we first identify $f$ with a function in $L^1(G/G_k)$ via $\phi_G$. For this new $f$, we define $\varphi_f$ on $G/\Gamma_k$ by setting it as of constant value on each $G_k$-coset: $$\varphi_f(g\Gamma_k):=f(g G_k),\forall g\in G.$$ In other words, $\varphi_f(g\Gamma_k)= \varphi_f(h\Gamma_k)$, whenever $h^{-1}g\in G_k$. 
It follows that $$\int_{G/\Gamma_k} \varphi_f(g\Gamma_k) d\mu_{G/\Gamma_k}=\mu_{G_k/\Gamma_k}(G_k/\Gamma_k) \cdot \int_{G/G_k} f(g G_k)d\mu_{G/G_k}(g G_k)< \infty.$$ So $\varphi_f \in L^1(G/\Gamma_k)$. \begin{claim} The inner integral on the left hand side is (recall that the integration with respect to counting measure on $\Gamma/\Gamma_d$ is the same as the sum over primitive tuples) $$\int_{\Gamma/\Gamma_k} \varphi_f(gl\Gamma_k) d\mu_{\Gamma/\Gamma_k}(l) =\sum_{(v_1,\dots,v_k)\in P^k(g\mathbb{Z}^d)} f(v_1,\dots,v_k)=:\hat{f}^k(g\mathbb{Z}^d).$$ \end{claim} \begin{proof}[Proof of claim ]\renewcommand{\qedsymbol}{\ensuremath{\#}} First recall by our identification \begin{align*} &\sum_{(v_1,\dots,v_k)\in P^k(g\mathbb{Z}^d)} f(v_1,\dots,v_k)\\ =&\sum_{(he_1,\dots,he_k)\in P^k(g\mathbb{Z}^d), h\in G} f(he_1,\dots,he_k)\\ =&\sum_{(he_1,\dots,he_k)\in P^k(g\mathbb{Z}^d), h\in G} f(hG_k) \tag{viewing $f$ as a function on $G/G_k$}\\ =&\sum_{(g^{-1}he_1,\dots,g^{-1}he_k)\in P^k(\mathbb{Z}^d), h\in G} f(hG_k)\\ =&\sum_{(le_1,\dots,le_k)\in P^k(\mathbb{Z}^d), l\in G} f(glG_k) \tag{change of variable $l:=g^{-1}h$}\\ =&\int_{\Gamma/\Gamma_k} f(glG_k)d\mu_{\Gamma/\Gamma_k}(l\Gamma_k) \tag{summation to integration w.r.t. counting measure}\\ =&\int_{\Gamma/\Gamma_k} \varphi_f(gl\Gamma_k)d\mu_{\Gamma/\Gamma_k}(l\Gamma_k) \tag{definition of $\varphi_f$}\\ \end{align*} \end{proof} Hence this proves \begin{equation*} \int_{X} \hat{f}^k d\mu = c_{k,d} \int_{\mathbb{R}^{dk}} f dv_1 \cdots dv_k, \end{equation*} with $c_{k,d}=\mu_{G_k/\Gamma_k}(G_k/\Gamma_k)$ to be computed in the Appendix A. \end{proof} Now we can give the distribution for $\Phi_i(\delta)$: \begin{proof}[Proof of Theorem \ref{di distance linear}] Let $B$ be the ball centered at $0$ with radius $\delta$. Note that the condition $\lambda_i(\Lambda)<\delta$ implies that there are at least $i$ linearly independent vectors in $\Lambda$ lying in the open ball $B$. However, this does not necessarily mean they can be extended to a basis. But thanks to the Theorem \ref{thmA5}, there exists a basis $v_1,\dots,v_d$ of $\Lambda$ such that $$\|v_1\|=\lambda_1(\Lambda),\|v_2\|_d \asymp_d \lambda_2(\Lambda),\dots,\|v_d\| \asymp_d \lambda_d(\Lambda).$$ It follows that there exists a constant factor $\eta_d>1$ such that if we dilate the ball $B$ by $\eta_d$ to a new ball $B'$ (centered at the origin with radius $\eta_d \delta$), we have $\Lambda \cap B'$ contains $i$ linearly independent vectors that can be extended to a basis of $\Lambda$. It follows that (since by symmetry $v$ and $-v$ must be contained in $\Lambda \cap B'$ simultaneously and any permutation of this $i$-tuple also gives a new primitive $i$-tuple): $$|P^i (\Lambda) \cap B'^i|\ge 2^i i!,$$ where $B'^i:=\underbrace{B' \times \cdots \times B'}_{i\text{-times}}\subset \mathbb{R}^{di}$. \vspace{5mm} Now take $f:=\mathbf{1}_{B'^{i}}$ and $\hat{f}= \hat{f}^i(\Lambda):=\sum_{(v_1,\dots,v_i)\in P^i(\Lambda)} f(v_1,\dots,v_i)$ counts the number of points falling into $B'^i$. The left hand side of the generalized Siegel's formula (Theorem \ref{generalizedsiegel}) yields $$\int_X \hat{f}^i d\mu \ge \int_{\{\Lambda:\lambda_i(\Lambda) \le \delta \}}\hat{f}^i d\mu \ge 2^i i!\mu(\{\lambda:\lambda_i(\Lambda)\le \delta \}).$$ \vspace{5mm} On the other hand $$\int_{\mathbb{R}^{di}}f dv_1 \cdots dv_i=\text{Vol}(B')^i=\eta_d^i \delta^{di} c_d^i,$$ where $c_d$ is the volume of unit ball in $\mathbb{R}^d$. 
Hence we have the upper bound $$\mu(\{\Lambda:\lambda_i(\Lambda)\le \delta \}) \le \frac{1}{i!}\left(\frac{\eta_d c_d}{2}\right )^i\delta^{di}$$ For the lower bound, for $1\le i <d-1$ and $x>0$, let $N(i,x)$ denote the quantity $$\min \{ |P^i(\Lambda)\cap B(0,x)^i|:\Lambda \in \mathcal{L}, \Lambda \cap B(0,x) \text{~contains at least~} i+1 \text{~linearly independent vectors}\},$$ namely the miminum of the number of all primitive $i$-tuples $(v_1,\cdots, v_i)$ with each component taken from the lattice $\Lambda \cap B(0,x)$ for all unimodular lattice $\Lambda$, given $\Lambda \cap B(0,x)$ contains at least $i+1$ linearly independent vectors. Note that by our assumption and the discussion above, $N(i,\eta_d \delta) \ge 2^i i!$. Since one can always choose a unimodular lattice with the first $i+1$ sucessive minima small enough to be contained in $B(0,x)$ for any $x>0$ whenever $i<d-1$, we have \begin{claim} $N(i,x) \le 2^i(i+1)!$ (independent of $x\in (0,1)$) whenever $i<d-1$ \end{claim} \begin{proof}[Proof of claim ]\renewcommand{\qedsymbol}{\ensuremath{\#}} Consider the unimodular lattice $$\Lambda_{x}:=\left \{\frac{3}{4}x e_1,...\frac{3}{4}x e_{i+1},\frac{1}{(\frac{3}{4}x)^{\frac{i+1}{d-i-1}}}e_{i+2},...\frac{1}{(\frac{3}{4}x)^{\frac{i+1}{d-i-1}}} e_{d} \right\}.$$ Note that when $x<1$, we have $$\Lambda_x \cap B(0,x)=\left\{\pm \frac{3}{4}x e_1,...,\pm \frac{3}{4}x e_{i+1} \right\}$$ since any integer linear combination $$n_1 \frac{3}{4}x e_1+...+n_{i+1} \frac{3}{4}x e_{i+1}$$ with some $|n_j|\ge 2$ or at least two of $n_j\ne 0$ must be outside of $B(0,x)$. So $$|P^i(\Lambda_x)\cap B(0,x)^i|\le 2^i i! {i+1 \choose i}=2^i (i+1)!.$$ Therefore $N(i,x)\le 2^i (i+1)!$. \vspace{5mm} \end{proof} Now set $x=\delta$. The idea is to separated the integration domain into two parts: $\{\Lambda:\hat{f}^i(\Lambda)< N(i,\delta)\}$ and $\{\Lambda:\hat{f}^i(\Lambda)\ge N(i,\delta)\}$. We will see that the integration over the second domain contribute insignificantly as $\delta \to 0$. Hence, \begin{align*} \int_X \hat{f}^id\mu = & \int_{\{\Lambda:\hat{f}^i(\Lambda) < N(i,\delta)\}} \hat{f}^id\mu + \int_{\{\Lambda:\hat{f}^i(\Lambda)\ge N(i,\delta)\}} \hat{f}^id\mu \\ \end{align*} Notice that by our choice of $B'$, $f$ and the definition of $\hat{f}^i$, $f^i(\Lambda)=|P^i(\Lambda)\cap B^i|$, and the first term \footnote{note that if $i=d-1$ this argument won't make sense since $N(d,\delta)$ will become zero if $\delta \to 0$ by the Minkowski's second convex body theorem \ref{Min2}.} \begin{align*} \int_{\{\Lambda:\hat{f}^i(\Lambda) < N(i,\delta)\}} \hat{f}^id\mu =& \int_{\{\Lambda:\hat{f}^i(\Lambda)< N(i,\delta)\}} \hat{f}^id\mu \\ =& \int_{\{\Lambda:\Lambda \cap B \text{~contains no~} i+1 \text{~linearly independent vectors}, ~\hat{f}^i(\Lambda)< N(i,\delta)\}}\hat{f}^i d\mu\\ \le & \int_{L_i}2^i(i+1)! d\mu\\ =& 2^i (i+1)!\mu(S_i) \tag{by the estimate from the claim above} \end{align*} \iffalse Now let us find an upper bound for $\hat{f}^i(\Lambda)$ under this constraint. We have \begin{claim} $\hat{f}^i(\Lambda)\le 2^i i!$ over the set $$\{\Lambda:\Lambda \cap B \text{~contains no~} i+1 \text{~linearly independent vectors}, ~\hat{f}^i(\Lambda)< N(i,\delta)\}.$$ \end{claim} \begin{proof}[Proof of Claim]\renewcommand{\qedsymbol}{\ensuremath{\#}} Indeed, if $\hat{f}^i(\Lambda)> 2^i i!$, then even modulo $\pm$ for each vector, there must be at least two family of primitive $i$-set of vectors in $\Lambda \cap B$. 
But this in particular implies that there must be at least $i+1$ vectors $v_1,...v_{i+1}$ living in $\Lambda \cap B$, with any $i$ of them \end{proof} \fi where $L_i$ denote the set of unimodular lattices that contain no $i+1$ linearly independent vectors but contain at least one family of primitive $i$-set of vectors (so that the integral will not vanish). But clearly $S_i\subset \{\Lambda:\lambda_i(\Lambda)\le \delta\}$. \vspace{5mm} Now we look at the second term $ \int_{\{\Lambda:\hat{f}^i(\Lambda)\ge N(i,\delta)\}} \hat{f}^id\mu$. If $\hat{f}^i(\Lambda)\ge N(i,\delta)$, by definition it means the ball $B=B(0,\delta)$ contains at least $i+1$ linearly independent vectors in $\Lambda$. But again by symmetry that extra vector has to come in pairs namely $B\cap \Lambda$ has to contain both $v_{i+1}$ and $-v_{i+1}$. Therefore for such $\Lambda$, $$|P^i(\Lambda) \cap B^i)| \le \frac{1}{2}|P^{i+1}(\Lambda) \cap B^{i+1})|.$$ Notice that the left hand side is precisely $\hat{f}^i(\Lambda)$. Let $f_1=\mathbf{1}_{B^{i+1}}$, a function in $\mathbb{R}^{d(i+1)}$, we have \begin{align*} \int_{\{\Lambda:\hat{f}^i(\Lambda) \ge N(i,\delta)\}} \hat{f}^id\mu = & \frac{1}{2}\int_{\{\Lambda:\hat{f}^i(\Lambda)> N(i,\delta)\}} \hat{f_1}^{i+1}d\mu \\ \le & \frac{1}{2}\int_X \hat{f_1}^{i+1} d\mu \\ = & \frac{1}{2}\int_{\mathbb{R}^{d(i+1)}} f_1 dv_1\cdots dv_{i+1} \tag{by the generalized Siegel's formula}\\ = & \frac{1}{2}(\int_{\mathbb{R}^{d}} \mathbf{1}_{B} dv)^{i+1} \\ = & \frac{1}{2} (c_d \delta^d)^{i+1} \end{align*} Therefore we obtain the lower bound \begin{align*} \mu(\{\Lambda: \lambda_i(\Lambda) \le \delta \})\ge & \frac{1}{N(i,\delta)} \left(\int_X \hat{f}^id\mu - \int_{\{\Lambda:\hat{f}^i(\Lambda)\ge N(i,\delta)\}} \hat{f}^id\mu \right) \\ = & \frac{1}{2^i (i+1)!} \left(\int_{\mathbb{R}^{di}} f dv_1\cdots dv_i - \int_{\{\Lambda:\hat{f}^i(\Lambda)\ge N(i,\delta)\}} \hat{f}^id\mu \right)\\ = & \frac{1}{2^i(i+1)!}(c_d^i \delta^{di}- \frac{1}{2} c_d^{i+1} \delta^{d(i+1)}). \end{align*} This finishes the case when $i<d-1$. \vspace{5mm} To cover the remaining case when $i=d-1$, we will study the following example: \begin{example}[Measure of a subset of $\{\Lambda:\lambda_i(\Lambda)\le \delta\}$]~\\ In view of Iwasawa decomposition of $G=\text{SL}(d,\mathbb{R})$, we shall first construct a subset of $G$ that will shrink the first $i$ canonical basis vectors in $\mathbb{Z}^d$: Let $S_i$ denote the collection of all elemnents of the form $kan$, where $ k\in SO(d,\mathbb{R})$, \begin{equation} a=\text{diag}(a_1,...a_i,a_{i+1},...,a_d) \in \text{SL}(d,\mathbb{R}), \end{equation} with $\frac{\sqrt 3}{2}a_{j-1}\le a_j<\frac{\delta}{\sqrt i}, \forall 1\le j \le i$ (assuming $a_0=0$ as convention) and $\frac{\sqrt 3}{2} a_{j-1}\le a_j \le 1$ whenever $j>i$, and \begin{equation} n= \begin{bmatrix} 1 & n_{12} & n_{13} & ... & n_{1d} \\ 0 & 1 & n_{23} & ... & n_{2d} \\ 0 & 0 & 1 & ... & n_{3d} \\ \vdots & \vdots & \vdots &\ddots & \vdots\\ 0 & 0 & 0 & ... & 1 \\ \end{bmatrix} \end{equation} with $\frac{1}{2} \le n_{ij}\le 1$ for all $i<j$. Clearly, with restrictions $\frac{\sqrt 3}{2}a_{j-1}\le a_j$ and $\frac{1}{2}\le n_{ij}$, $S_i$ is contained in the Siegel domain $\Sigma:=\Sigma_{\frac{2}{\sqrt{3}},\frac{1}{2}}$. The reason that we choose to restrict our set to Siegel domain is that over Siegel domain, we have very good control on the overlaps modulo the $\Gamma=\text{SL}(n,\mathbb{Z})$ action thanks to Theorem \ref{finiteness of nonempty intersections}. 
\begin{claim} $\lambda_j(S_i\mathbb{Z}^d)\le \delta$ for all $j\le i$. \end{claim} \begin{proof}[Proof of Claim ]\renewcommand{\qedsymbol}{\ensuremath{\#}} Indeed, for $kan \in S_i$ and $j\le i$, \begin{align*} \|kane_j \|_2 =&\|[a_1n_{1j},...,a_{i-1}n_{i-1,j},a_i]^T\|_2 \tag{$k$ preserves the distance}\\ =& \sqrt{a_1^2n_{1j}^2+\cdots+a_{i-1}^2n_{i-1,j}^2+a_i^i}\\ \le & \sqrt{i\cdot \frac{\delta^2}{i}}=\delta \end{align*} \end{proof} Therefore $\pi(S_i)\subset \{\Lambda:\lambda_i(\Lambda)\le \delta\}$. Now we shall give a lower bound estimate for the measure of $\pi(S_i)$ in $G/\Gamma$. Let $f=\mathbf{1}_{S_i}$ denote the indicator function of $\pi(S_i)$ on $G/\Gamma$ and let $N_d$ denote the (finite) number of $\gamma$ for which $\Sigma \cap F\gamma$ is nonempty, then since $S_i$ is a subset of the Siegel set $\Sigma$, $S_i= \cup_{\gamma \in \Gamma} (S_i \gamma^{-1} \cap F)$ is a finite union of no more than $N_d$ nonempty sets. Let $m_d$ denotes the largest measure of these $N_d$ sets, it follows that \begin{align*} \int_G f(g) dg =& \int_{G/\Gamma}\sum_{\gamma \in \Gamma}f(g\gamma)d(g\Gamma)\\ \le & N_d m_d \\ \le & N_d \mu_{G/\Gamma}(S_i). \end{align*} Now we compute $\int_G f(g) dg$ via Iwasawa decomposition in view of Theorem \ref{decomposition of haar measure on G}: \begin{align*} &\int_G f(g) dg \\ =& \int_{K}\int_{A}\int_{N} f(kan)\rho(a)dk da dn\\ =& \int_{K}dk \int_0^{\delta/ \sqrt{i}} \int_{\frac{\sqrt 3}{2}a_1}^{\delta/ \sqrt{i}} \cdots \int_{\frac{\sqrt 3}{2}a_{i-1}}^{\delta/ \sqrt{i}} \int_{\frac{\sqrt{3}}{2}a_i}^{1} \cdots \int_{\frac{\sqrt{3}}{2}a_{d-1}}^{1} \frac{\rho(a)}{a_1\dots a_{d-1}} da_{d-1} \dots da_1 \int_{[\frac{1}{2},1]^{d(d-1)}}\prod_{i<j}dn_{ij} \\ \ge & \text{Vol}(K) \frac{1}{2^{d(d-1)}} \int_0^{\delta/ \sqrt{i}} \int_{\frac{\sqrt 3}{2}a_1}^{\delta/ \sqrt{i}} \cdots \int_{\frac{\sqrt 3}{2}a_{i-1}}^{\delta/ \sqrt{i}} \int_{\frac{[\sqrt 3}{2},1]^{d-i-1}} \left( a_{d-1}^1 a_{d-2}^3 \cdots a_1^{2d-3} \right) da_{d-1} \dots da_1 \\ =& \text{Vol}(K) \frac{c_{d,i}}{2^{d(d-1)}} \int_0^{\delta/ \sqrt{i}} \int_{\frac{\sqrt 3}{2}a_1}^{\delta/ \sqrt{i}} \cdots \int_{\frac{\sqrt 3}{2}a_{i-1}}^{\delta/ \sqrt{i}} \left( a_{i}^{2d-1-2i} \cdots a_1^{2d-3} \right) da_{i} \dots da_1 \\ \ge & D_d \delta^{(2d-i-1)i}+o(\delta^{(2d-i-1)i}) \end{align*} where the constant $c_d,i$ comes from the integration w.r.t. the variables $a_{i+1},...a_d$ and the exponential $\delta^{(2d-i-1)i}$ comes from $(2d-i-1)i=2d-1-2i+\cdots+2d-3+i$ (the last $i$ is from the accumulation of the total order of anti-derivatives of polynomials) and when $i=d-1$, $\delta^{(2d-i-1)i}=d(d-1)=di$. Therefore, for $i=d-1$, we have proved $$\mu\{\Lambda:\lambda_i(\Lambda)\le \delta\}\ge C_d' \delta^{d(d-1)}, \text{~as~} \delta<<1.$$ \end{example} This completes the proof for all $1\le u \le d-1$. \end{proof} By looking at the dual lattice, we can also obtain the tail bound for this distribution. \begin{corollary}\label{tail bound} For $1<i\le d$ there exist $C_d$ and $C_d'$ such that \begin{equation*} C_d \delta^{di} \le \mu \left(\left\{\Lambda \in \mathcal{L}: \lambda_i(\Lambda) \ge \frac{1}{\delta} \right\}\right) \le C_d' \delta^{di}, \end{equation*} for all $\delta << 1$. \end{corollary} \begin{proof} Recall the notion of dual lattice (cf. \ref{dual lattice}) and notice that the dual map $$*:X\to X, \Lambda \mapsto \Lambda^*$$ is measure-preserving. By the Theorem \ref{sucessive minima of dual lattice}. We have $$1\le \lambda_r(\Lambda) \lambda_{d+1-r} (\Lambda^*) \le d!$$ for any $r=1,2,\cdots d$. 
Hence the corollary follows. \end{proof} \section{Application to the logarithm laws for the flows on the homogeneous space $G/\Gamma$.} Now we define $\Delta_i(\Lambda):=-\log (\lambda_i(\Lambda))$. It follows from taking the negative logarithm of all sides of the equation \ref{eq:ineq} that $\Delta_i(\Lambda)$ is uniformly continuous. Recall from \cite{Kleinbock1999LogarithmLF}: \begin{definition} For a function $\Delta$ on a $G$-homogeneous space $X$, define the tail distribution $\Phi_{\Delta}(z):=\mu\{x\in X:\delta(x)\ge z\}$. For $k>0$, we will also say that $\delta$ is $k$ distance-like if it is uniformly continuous and in addition there exist constants $C_d$ and $C_d'$ such that \begin{equation*} C_de^{-kz} \le \Phi_{\Delta}(z) \le C_d'e^{-kz}, \forall z\in \mathbb{R}. \end{equation*} \end{definition} It follows from our Theorem \ref{di distance linear} that $\Delta_i$ is $di$ distance-like in the space of unimodular lattices. And as an immediate consequence of Theorem 1.7 in \cite{Kleinbock1999LogarithmLF}, we have the non-unipotent version of logarithm law: \begin{theorem} For any nonzero $(z_1,\dots z_d)$ with $z_1+\dots+z_d=0$, and for almost all unimodular lattice $\Lambda$ in $X=\text{SL}(d,\mathbb{R})/\text{SL}(d,\mathbb{Z})$ we have \begin{equation} \limsup_{t\to \infty}\frac{\Delta_i(\exp(tz)\Lambda)}{\log t}=\frac{1}{di}. \end{equation} \end{theorem} For the unipotent flow and first successive minimum, Athreya and Margulis proved the following logarithm law: \begin{theorem}[\cite{AM09}, Theorem 2.1] Let $(u_t)_{t\in \mathbb{R}}$ be a unipotent one-parameter subgroup of $\text{SL}(d,\mathbb{R})$ and $X:=\text{SL}(d,\mathbb{R})/\text{SL}(d,\mathbb{Z})$. For $\mu$-a.e. $\Lambda$ in $X$, we have \begin{equation*} \limsup_{t\to \infty}\frac{-\log \lambda_1(h_t\Lambda)}{\log t}=\frac{1}{d}. \end{equation*} \end{theorem} We shall generalize this theorem to higher $\lambda_i$'s: \begin{theorem}\label{generalize logarithm law for unbounded flow} Let $(g_t)_{t\in \mathbb{R}}$ be an unbounded one-parameter subgroup of $\text{SL}(d,\mathbb{R})$ and $X:=\text{SL}(d,\mathbb{R})/\text{SL}(d,\mathbb{Z})$. For $\mu$-a.e. $\Lambda$ in $X$, we have \begin{equation*} \limsup_{t\to \infty}\frac{-\log \lambda_i(h_t\Lambda)}{\log t}=\frac{1}{di}. \end{equation*} \end{theorem} To prove this theorem, we first observe that by Borel-Cantelli Lemma, the upper bound holds for all flows: \begin{lemma}[The upper bound] For $\mu$-almost every $\Lambda \in X$, $1\le i \le d-1$, and any one parameter subgroup $(h_t)_{t\in \mathbb{R}}$ of $G=\text{SL}(d,\mathbb{R})$, \begin{equation*} \limsup_{t\to \infty}\frac{-\log \lambda_i(h_t\Lambda)}{\log t}\le \frac{1}{di}. \end{equation*} \end{lemma} \begin{proof} For any $\epsilon>0$, and for $k\ge 1$, let $r_k=(\frac{1}{di}+\epsilon)\log k$ and let $t_k$ be any sequence going to $\infty$ as $k\to \infty$, we have by Theorem \ref{di distance linear} and the fact that $u_{t_k}$ is measure-preserving that \begin{equation*} \mu(\{\Lambda \in X: \lambda_i(u_{t_k}\Lambda) \le e^{-r_k} \}) \le C_d' (e^{r_k})^{di}, \end{equation*} which is equivalent to \begin{equation*} \mu(\{\Lambda \in X: -\log \lambda_i(u_{t_k}\Lambda) \ge r_k \}) \le C_d' \frac{1}{k^{1+di\epsilon}}. \end{equation*} Since the summatin on the right hand side over $k$ is finite, by Borel-Cantelli Lemma, we have \begin{equation*} \mu(\limsup_{k\to \infty}\{\Lambda \in X: -\log \lambda_i(u_{t_k}\Lambda) \ge r_k \})=0. 
\end{equation*} Taking the complement, this means \begin{equation*} \mu(\cup_{N} \cap_{k\ge N}\{\Lambda \in X: -\log \lambda_i(u_{t_k}\Lambda) < r_k \})=\mu(\liminf_{k\to \infty}\{\Lambda \in X: -\log \lambda_i(u_{t_k}\Lambda) < r_k \})=1. \end{equation*} In other words, for $\mu$-almost every $\Lambda \in X$, there exists $N$ such that $k\ge N$ implies \begin{equation*} -\log \lambda_i(u_{t_k}\Lambda) < r_k:=(\frac{1}{di}+\epsilon)\log(k) \end{equation*} Since $t_k\to \infty$ is arbitrary, we have \begin{equation*} \limsup_{t\to \infty}\frac{-\log \lambda_i(h_t\Lambda)}{\log t}\le \frac{1}{di}. \end{equation*} for $\mu$-almost all $\Lambda$. \end{proof} \vspace{5mm} To show the lower bound, we shall use a logarithm law for hitting time of unbounded flow against the spherical shrinking target due to Kelmer and Yu \cite{Kelmer2017ShrinkingTP}. \begin{theorem}[Special Case of Theorem 1.1, \cite{Kelmer2017ShrinkingTP}]\label{spherical log law} Let $\{B_t\}_{t>0}$ denote a monotone family of spherical (meaning each set $B_t$ is invariant under the left action of $K=\text{SO}(d,\mathbb{R})$) shrinking (meaning $B_t\supset B_s$ for $t\ge s$ and $\lim_{t\to \infty}\mu(B_t))$ targets in $X := G/\Gamma:=\text{SL}(d,\mathbb{R})/\text{SL}(d,\mathbb{Z})$. Let $\{g_m\}_{m\in \mathbb{Z}}$ denote an unbounded discrete time flow on $X$ . Then for a.e. $\Lambda \in X$ \begin{equation} \lim_{t\to \infty}\frac{\log(\min\{m\in \mathbb{N}:g_m\Lambda\in B_t\})}{-\log(\mu(B_t))}=1 \end{equation} \end{theorem} The quantity $\min\{m\in \mathbb{N}:g_m.x\in B_t\}$ is often called the first \textit{hitting time} with respect to the flow $\{g_m\}$ and the target set $B_t$. \begin{proof}[Proof of \ref{generalize logarithm law for unbounded flow} (the lower bound)] We will take the shrinking targets as \begin{equation*} B_t:=\{\Lambda: \lambda_i(g_m \Lambda)\le e^{-t}\}, t\ge 0. \end{equation*} These sets are clearly spherical, namely $SO(d,\mathbb{R})$-invariant since $\lambda_i$. So by Theorem \ref{spherical log law}, \begin{equation} \lim_{t\to \infty}\frac{\log \min\{m\in \mathbb{N}:\lambda_i(g_m \Lambda)\le e^{-t}\}}{-\log \mu(\{\Lambda: \lambda_i(g_m \Lambda)\le e^{-t}\})}=1.\footnote{The set $\{m\in \mathbb{N}:\lambda_i(g_m \Lambda)\le e^{-t}\}$ is non-empty because $(g_m)$-action is ergodic by Howe-Moore theorem, and thus almost every $(g_m)$-orbit is dense. } \end{equation} By Corollary \ref{non-dynamical logarithm law}, \begin{equation} \lim_{t\to \infty}\frac{-\log\mu \left(\left\{\Lambda \in \mathcal{L}: \lambda_i(\Lambda) \le e^{-t} \right\}\right)}{t} = di \end{equation} Therefore by taking the product and reciprocal, \begin{equation}\label{product of two limits} \lim_{t\to \infty}\frac{t}{\log \min\{m\in \mathbb{N}:\lambda_i(g_m \Lambda)\le e^{-t}\}}=\frac{1}{di}. \end{equation} But observe that \begin{equation} \lambda_i\left( g_{\min\{m\in \mathbb{N}:\lambda_i(g_m \Lambda)\le e^{-t}\}} \Lambda \right) \le e^{-t}, \end{equation} and therefore \begin{equation}\label{hitting time ineq} -\log \lambda_i\left( g_{\min\{m\in \mathbb{N}:\lambda_i(g_m \Lambda)\le e^{-t}\}} \Lambda \right) \ge t, \end{equation} It follows that \begin{align*} & \limsup_{t\to \infty}\frac{-\log \lambda_i(h_t\Lambda)}{\log t}\\ \ge & \limsup_{t\to \infty}\frac{-\log \lambda_i\left( g_{\min\{m\in \mathbb{N}:\lambda_i(g_m \Lambda)\le e^{-t}\}} \Lambda \right)}{\log \min\{m\in \mathbb{N}:\lambda_i(g_m \Lambda)\le e^{-t}\}} \tag{By the of limsup. 
The subscript can be considered as a subsequence.}\\ \ge & \limsup_{t\to \infty} \frac{t}{\log \min\{m\in \mathbb{N}:\lambda_i(g_m \Lambda)\le e^{-t}\}}\tag{By the hitting time inequality \ref{hitting time ineq}}\\ = &\frac{1}{di}. \tag{By the limit \ref{product of two limits}} \end{align*} This finishes the proof of lower bound and thus the whole logarithm law theorem. \end{proof} \begin{appendices} \section{Computation of the coefficient $c_{d,k}$ for the generalized Siegel's formula.} \end{appendices} The computation of the coefficient in the generalized Siegel's formula requires the Poisson summation formula. Let us first recall the notion of admissible functions and Poisson summation formula from Fourier analysis: \begin{definition} A function $f:\mathbb{R}^d \to \mathbb{R}$ is called \textit{admissible} if there exist constants $c_1,c_2>0$ such that both $|f(x)|$ and $|\hat{f}(x)|$ are bounded by $\frac{c_1}{(1+\|x\|)^{d+c_2}}$, where $f\hat{f}(t):=\int_{\mathbb{R}^d}f(x)e^{2\pi i\langle x,t \rangle} dx$ is the Fourier transform of $f$. \end{definition} \begin{theorem}[Poisson Summation Formula] Given any unimodular lattice $\Lambda \in \mathbb{R}^d$, a vector $v$ and an admissible function $f:\mathbb{R}^d\to \mathbb{R}$, we have $$\sum_{x\in \Lambda}f(x+v)=\sum_{w \in \Lambda^*}e^{-2\pi i \langle v,w \rangle}\hat{f}(t),$$ where $\Lambda^*$ is the dual lattice of $\Lambda$, cf. \ref{dual lattice}. \end{theorem} \begin{proposition} As in the proof of Theorem \ref{generalizedsiegel}, let $\{e_1,\dots,e_d\}$ be the canonimcal basis of $\mathbb{R}^d$. For $G=\text{SL}(d,\mathbb{R})$ and $\Gamma=\text{SL}(d,\mathbb{Z})$ and the $k$-tuple $(e_1,\dots,e_k)$, be , let \begin{align*} G_k:&=\{g\in G: g.e_i=e_i,\forall 1 \le i \le k\}, \\ \Gamma_k:&=\{g\in \Gamma: g.e_i=e_i,\forall 1 \le i \le k\}. \end{align*} be the stabilizer subgroup of $(e_1,\dots,e_k)$ in $G$ and $\Gamma$, respectively. Let $dg$ denote the Haar measure on $G$ (scaled as above) and $dg_k:=d\mu_{G_k}(g_k)$,$d(g\Gamma):=d\mu_{G/\Gamma}(g\Gamma)$, $d\mu_{G/\Gamma_k}(g\Gamma_k)$ denoted the induced Haar measures on $G_k, G/\Gamma$ and $G/\Gamma_k$ respectively. Then, $$\mu_{G_k/\Gamma_k}(G_k/\Gamma_k)=\frac{1}{\zeta(d-k+1)\cdots \zeta(d)}$$ \end{proposition} \begin{proof}~\\ We start from the case when $k=1$. In this case, \begin{align*} G_1:=&\text{Stab}_G\{e_1\} =\{g\in \text{SL}(d,\mathbb{R}):ge_1=e_1\} =\begin{bmatrix} 1 & \mathbb{R}^{1 \times (d-1)} \\ 0 & \text{SL}(d-1,\mathbb{R}) \end{bmatrix}\\ \Gamma_1:=& \text{Stab}_\Gamma\{e_1\} =\{g\in \text{SL}(d,\mathbb{R}):ge_1=e_1\} =\begin{bmatrix} 1 & \mathbb{Z}^{1\times (d-1)} \\ 0 & \text{SL}(d-1,\mathbb{Z}) \end{bmatrix} \end{align*} For the computation, we first need to recall the Poiss For $f\in C_c(\mathbb{R}^d)$, namely a countinuous function with compact support function. Futher more, assume that $f$ is $K$-invariant and $f(0)\ne \hat{f}(0):=\int_{\mathbb{R}^d}f(x)dx$. Such function exists. For example, there exists $\eta\in (0,1)$ such that \begin{equation*} f(x)= \begin{cases} \frac{1-\eta \|x\|}{(1+\|x\|)^{d+1}} & \text{if } x \in B[0,1]\\ 0 & \text{if } x \notin B[0,1] \end{cases} \end{equation*} satisfy $f(0) \ne \hat{f}(0)$. Other properties are immediate. 
\vspace{5mm} Let $\tilde{F}(g):G\to \mathbb{R}$ be defined as $$\tilde{F}(g):=\sum_{v\in \mathbb{Z}^d}f(gv).$$ It follows that $\tilde{F}$ is bounded and for any $\gamma\in \Gamma=\text{SL}(d,\mathbb{Z})$, $$F(g\gamma)=\sum_{\mathbb{Z}^d}f(gv\gamma)=\sum_{\mathbb{Z}^d}f(gv)=F(g).$$ The $\Gamma$-invariance of $\tilde{F}$ induces a function $F:G/\Gamma \to \mathbb{R}$ by $$F(g\Gamma):=\sum_{v\in \mathbb{Z}^d}f(gv).$$ \vspace{0.5cm} Consider the following decomposition of $\mathbb{Z}^d$: \begin{equation} \mathbb{Z}^d=\{0\}\bigsqcup \bigsqcup_{\gamma \Gamma_1 \in \Gamma/\Gamma_1} \bigsqcup_{j=1}^\infty j e_1. \end{equation} It follows that $F\in C_c(G/\Gamma)$ and that \begin{align} \int_{G/\Gamma}F(g\Gamma)d(g\Gamma) =&\int_{G/\Gamma}\sum_{v\in \mathbb{Z}^d}f(gv) d(g\Gamma) \nonumber\\ =&\int_{G/\Gamma}f(0)d(g\Gamma)+\int_{G/\Gamma}\sum_{\gamma\Gamma_1 \in \Gamma/\Gamma_1}\sum_{j=1}^{\infty}f(jg\gamma e_1) d(g\Gamma_1) \nonumber\\ =&\int_{G/\Gamma}f(0)d(g\Gamma)+\int_{G/\Gamma_1}\sum_{j=1}^{\infty}f(jg\gamma e_1) d(g\Gamma_1) \nonumber\\ =& f(0)\mu(G/\Gamma)+\sum_{j=1}^{\infty} \int_{G/\Gamma_1}f(jg e_1) d(g\Gamma_1) \label{eq:decomposition of Z^d} \end{align} To treat the section part of the sum above, we introduce the following subgroups : \begin{align*} W_1:=& \begin{bmatrix} 1 & 0 \\ 0 & SL(d-1,\mathbb{R}) \end{bmatrix},\\ U_1:=& \begin{bmatrix} 1 & \mathbb{R}^{1\times (d-1)} \\ 0 & I_{d-1} \end{bmatrix},\\ A_t:=&\begin{bmatrix} t & 0 \\ 0 & t^{-\frac{1}{d-1}} \end{bmatrix}. \end{align*} The measures on them are canonical ones: $\mu_{W_1}$ is identified with the Haar measure on $\text{SL}(d-1,\mathbb{R})$, again defined through Iwasawa decomposition above; $\mu_{U_1}$ is identified with the Lebesgue measure on $\mathbb{R}^{d-1}$; and the measure $A_t$ is identified with $\frac{dt}{t}$ on $\mathbb{R}_{>0}$. Clearly $G_1=W_1U_1$. \begin{claim} $t^d \frac{dt}{t} dw du$ defines a right invariant measure on $A_tW_1U_1$. \end{claim} \begin{proof}[Proof of Claim]\renewcommand{\qedsymbol}{\ensuremath{\#}} Indeed, for any continuous function $f$ with compact support defined on $A_tW_1U_1$, identified with $\mathbb{R}_{t>0}\times \text{SL}(d-1,\mathbb{R}) \times \mathbb{R}^{d-1}$ and $a=\text{diag}(t,t^{-\frac{1}{d-1}}I_{d-1})$, $w'\in W_1$ and $u'\in U_1$. Notice that $awa'^{-1}=w$ and the Jacobian of the map $u'\mapsto aua'^{-1}$ is $(t^{1+\frac{1}{d-1}})^{d-1}=t^{d-1}$, the same change of variable argument for the integral $$\int_A \int_{W_1} \int_{U_1} f(awua'w'u')dadwdu$$ gives the right invariance of $t^d \frac{dt}{t} dw du$. 
\end{proof} Observe that $$K\cap AW_1U_1=\begin{bmatrix} 1 & 0\\ 0 & \text{SO}(d-1,\mathbb{R}) \end{bmatrix}\cong \text{SO}(d-1,\mathbb{R})$$ and that the map $$K\times G_1 \to KG_1, (k,g) \mapsto k^{-1}g$$ has its fiber at the identity equal to $K\cap AW_1U_1\cong \text{SO}(d-1,\mathbb{R})$, we have by the quotient integral formula and the proof of Theorem 8.32 in \cite{KN02}, for any compactly supported continuous function $\phi$ on $G$, $$\int_{G}\phi(jg e_1) d(g\Gamma)= \frac{1}{\text{Vol}(\text{SO}(d-1,\mathbb{R}))}\int_{KA_tW_1U_1}\phi(jkawu e_1) dk t^{d}da dw du.$$ Since any compactly supported function $\Phi$ on $G/\Gamma_1$ can be expressed as $$\Phi(g\Gamma_1)=\sum_{\gamma \in \Gamma_1}\phi(g\gamma),$$ for some compactly supported continuous function $\phi$ on $G$, we have by quotient integral formula and the uniqueness of Haar measure on homogeneous space $G/\Gamma_1$ $$\int_{G/\Gamma_1}f(jg e_1) d(g\Gamma)= \int_{KA_tW_1U_1/\Gamma_1}f(jkawu e_1) dk t^{d}da d(wu\Gamma_1)$$ where $f\in C_c(\mathbb{R}^d)$ as above and $j\ge 1$. Now it follows from \ref{eq:decomposition of Z^d} that \begin{align*} &\int_{G/\Gamma}F(g\Gamma)d(g\Gamma)\\ =& f(0)\mu(G/\Gamma)+\sum_{j=1}^{\infty} \int_{G/\Gamma_1}f(jg e_1) d(g\Gamma_1)\\ =& f(0)\mu(G/\Gamma)+ \frac{1}{\text{Vol}(\text{SO}(d-1,\mathbb{R}))}\sum_{j=1}^{\infty}\int_{K}\int_{A_tW_1U_1/\Gamma_1}f(jkawue_1) dk t^{d}da d(wu\Gamma_1) \\ =& f(0)\mu(G/\Gamma)+ \frac{\text{Vol}(\text{SO}(d,\mathbb{R})}{\text{Vol}(\text{SO}(d-1,\mathbb{R}))}\sum_{j=1}^{\infty}\int_{A_tW_1U_1/\Gamma_1}f(jawue_1) t^{d}da d(wu\Gamma_1) \tag{$f$ is $K$-invariant by assumption}\\ =& f(0)\mu(G/\Gamma)+ \text{Vol}(S^{d-1})\sum_{j=1}^{\infty}\int_{A_tW_1U_1/\Gamma_1}f(jawue_1) t^{d}da d(wu\Gamma_1)\\ =& f(0)\mu(G/\Gamma)+ \text{Vol}(S^{d-1})\sum_{j=1}^{\infty}\int_{A_t}\int_{W_1U_1/\Gamma_1}f(jawue_1) t^{d}da\\ =& f(0)\mu(G/\Gamma)+ \text{Vol}(S^{d-1})\sum_{j=1}^{\infty}\int_{A_t}\int_{W_1U_1/\Gamma_1}f(jtwue_1) t^{d-1}dt\\ =& f(0)\mu(G/\Gamma)+ \text{Vol}(S^{d-1})\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z}))\sum_{j=1}^{\infty}\int_{0}^{\infty}f(jte_1) t^{d-1}dt \tag{$\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z}))=\text{Vol}(W_1U_1/\Gamma_1)$ since $\text{Vol}(\mathbb{R}^d/\mathbb{Z}^d)$=1.}\\ =& f(0)\mu(G/\Gamma)+ \text{Vol}(S^{d-1})\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z})) \sum_{j=1}^{\infty}\frac{1}{j^d}\int_{0}^{\infty}f(te_1) t^{d-1}dt \\ \tag{change of variable $t\mapsto \frac{t}{j}$} \end{align*} Note that the above only works $d>2$. For the case when $d=2$, this $\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z}))$ has to be replaced by $1$. \begin{claim} $$\text{Vol}(S^{d-1})\int_{0}^{\infty}f(te_1) t^{d-1}dt=\hat{f}(0),$$ \end{claim} where $\hat{f}(0)=\int_{\mathbb{R}^d}f(x)dx$ is the Fourier transform of $f$ at $0$. \begin{proof}[Proof of Claim]\renewcommand{\qedsymbol}{\ensuremath{\#}} Indeed, notice that $f$ is $K$-invariant (rotation invariant), so $$f(te_1)=f(x), \forall x\in \mathbb{R}^d \text{ with }\|x\|=t.$$ By the spherical coordinate in $\mathbb{R}^d$, we have \begin{align*} x_1 &= r \cos(\varphi_1) \\ x_2 &= r \sin(\varphi_1) \cos(\varphi_2) \\ x_3 &= r \sin(\varphi_1) \sin(\varphi_2) \cos(\varphi_3) \\ &\,\,\,\vdots\\ x_{d-1} &= r \sin(\varphi_1) \cdots \sin(\varphi_{d-2}) \cos(\varphi_{d-1}) \\ x_d &= r \sin(\varphi_1) \cdots \sin(\varphi_{d-2}) \sin(\varphi_{d-1}) . \end{align*} where $0 \le \phi_{d-1}\le 2\pi$ and $0 \le \phi_{i}\le \pi$ for all $i\le d-1$. 
and \begin{align*} \int_{\mathbb{R}^d}f(x)dx =& \int_{0}^{\infty}\cdots\int_{0}^{\infty}f(x_1,\dots,x_d)dx_1\cdots dx_d\\ =& \int_{[0,\pi]^{d-1}\times [0,2\pi]}\int_{0}^{\infty} f(x_1,\dots,x_d)\left|\det\frac{\partial (x_i)}{\partial\left(r,\varphi_j\right)}\right| dr\,d\varphi_1 \, d\varphi_2\cdots d\varphi_{d-1} \\ =& \int_{[0,\pi]^{d-1}\times [0,2\pi]}\int_{0}^{\infty} f(re_1) r^{d-1}\sin^{d-2}(\varphi_1)\sin^{d-3}(\varphi_2)\cdots \sin(\varphi_{d-2})\, dr\,d\varphi_1 \, d\varphi_2\cdots d\varphi_{n-1}\\ =& \text{Vol}(S^{d-1})\int_{0}^{\infty} f(re_1) r^{d-1}dr. \end{align*} \end{proof} Therefore, we have \begin{align} \int_{G/\Gamma}F(g\Gamma)d(g\Gamma) =& f(0)\mu(G/\Gamma)+ \text{Vol}(S^{d-1})\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z})) \sum_{j=1}^{\infty}\frac{1}{j^d}\int_{0}^{\infty}f(te_1) t^{d-1}dt \nonumber \\ =& f(0)\mu(G/\Gamma)+ \hat{f}(0)\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z})) \sum_{j=1}^{\infty}\frac{1}{j^d} \nonumber \\ =& f(0)\mu(G/\Gamma)+ \hat{f}(0)\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z})) \zeta(d). \label{eq:recursive relation} \end{align} In other to find $\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z}))$, we shall look at the dual version of the above equation. For any $g\in G$, $g\mathbb{Z}^d$ defines a lattice and its dual lattice is $g^*\mathbb{Z}^d={}^tg^{-1}\mathbb{Z}^d$ (Proposition \ref{dual is same as transpose inverse}). But the automorphism $$*:G\to G, g\mapsto g^*$$ clearly preserves the Haar measure on $G$ and $\gamma \mathbb{Z}^d=\gamma^* \mathbb{Z}^d$ for all $\gamma \in \Gamma$. So it also preserves the Haar measure on $G/\Gamma$. On the other hand, by the Poisson summation formula: $$F(g\Gamma)=\sum_{v\in \mathbb{Z}^d}f(gv)=\sum_{v\in \mathbb{Z}^d}\hat{f}(g^*v)=:\hat{F}(g^*).$$ Since $\hat{\hat{f}}(0)=f(0)$, by replacing $f$ in the recursion equation \ref{eq:recursive relation} by $\hat{f}$, we have \begin{align*} & f(0)\mu(G/\Gamma)+ \hat{f}(0)\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z})) \zeta(d)\\ =& \int_{G/\Gamma}F(g\Gamma)d(g\Gamma) =\int_{G/\Gamma}\hat{F}(g\Gamma)d(g\Gamma) \\ =& \hat{f}(0)\mu(G/\Gamma)+ \hat{\hat{f}}(0)\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z})) \zeta(d)\\ =& \hat{f}(0)\mu(G/\Gamma)+ f(0). \end{align*} Since we have chosen $f(0)=\hat{f}(0)$ at the beginning, this yields: $$\text{Vol}(G/\Gamma)=\text{Vol}(\text{SL}(d,\mathbb{R})/\text{SL}(d,\mathbb{Z}))=\zeta(d)\text{Vol}(\text{SL}(d-1,\mathbb{R})/\text{SL}(d-1,\mathbb{Z}))=\zeta(d)\text{Vol}(G_1/\Gamma_1),$$ for $d>2$ and with our discussion above (before the claim), $$\text{Vol}(\text{SL}(2,\mathbb{R})/\text{SL}(2,\mathbb{Z}))=\zeta(2).$$ By induction, we have $$\text{Vol}(G/\Gamma)=\zeta(d)\cdots \zeta(d-k+1)\text{Vol}(G_k/\Gamma_k)$$ This gives the constant we need for the generalized Siegel formula as well as $$\text{Vol}(G/\Gamma)=\zeta(d)\cdots \zeta(2).$$ \end{proof} \section*{Acknowledgement} The author would like to thank Professors Michael Bersudsky and Nimish Shah for their encouragement and helpful discussions on this project. \printbibliography[ heading=bibintoc, title={Bibliography} ] \end{document}
2024-02-18T23:39:47.053Z
2023-01-02T02:10:22.000Z
algebraic_stack_train_0000
380
15,985
proofpile-arXiv_065-1951
\section{Introduction} The Jacobi inversion problem, that is finding a preimage of Abel's map, plays a central role in the theory of Riemann surfaces, and finds important applications in the solution of integrable systems, see for example \cite{bbeim1994}. Solving the Jacobi inversion problem on elliptic curves led to integration of the Euler top and the Lagrange top. A solution to the Euler top was accomplished in terms of ratios of theta functions, called later the Jacobi elliptic functions. A solution to the Lagrange top was given in terms of the Weierstrass elliptic functions $\wp$ and $\wp'$. In the Weierstrass approach elliptic functions are derived from an entire function called the sigma function $\sigma$ and defined by a series which satisfies a heat equation, see \cite[Eqs (7.)--(9.) Art.\;5]{weier}. The concept of the sigma function was extended to hyperelliptic curves by Klein in \cite{klein}. However, no progress was made in constructing sigma functions in higher genera until the end of the 20-th century. An explicit solution of the Jacobi inversion problem on the hyperelliptic curve of genus $2$ was proposed in \cite[Art.\;11]{bakerMPF}, provided that the curve has a canonical form: $y^2 = \lambda_0 + \lambda_1 x + \lambda_2 x^2 + \lambda_3 x^3 + \lambda_4 x^4 + 4 x^5$. This solution is given in terms of multiply periodic $\wp$ functions obtained from the entire function $\vartheta(u)=e^{au^2} \theta(v)$, where $v=(v_1,v_2)$ and $u=(u_1,u_2)$ are integrals of the first kind normalized and not normalized, respectively, $au^2$ denotes a quadratic function of $u$ with a $2\times 2$ symmetric matrix $a$, and $\theta$ denotes the Riemann theta function. In fact, $\vartheta$ served as the sigma function related to the canonical form of the class of genus two hyperelliptic curves, and renamed $\sigma$ in \cite[Art.\;12]{bakerMPF}. A series representation of this genus $2$ sigma function was not known at that time. In 1990's, the ideas of \cite{bakerMPF} were successfully extended to hyperelliptic curves of arbitrary genera in \cite{belHKF}. A multivariable sigma function was defined through the Riemann theta function, as above; and a solution of the Jacobi inversion problem on hyperelliptic curves was found. At the same time, a great progress was made in constructing multivariable sigma functions. The theory of multivariable sigma functions developed in \cite{bel1999, bl2002, bl2004, bl2008} (\cite{belMDSF} in whole) gives a method of constructing series representations of multivariable sigma functions related to canonical plane algebraic curves. Such a sigma function is defined by a system of heat equations. The system has a unique solution if a curve has a canonical form called an $(n,s)$-curve with co-prime $n$ and $s$. Introduced in \cite{bel1999}, $(n,s)$-curves serve as a generalization of the Weierstrass canonical form of elliptic curves. Every class of bi-rationally equivalent plane algebraic curves has a representative $(n,s)$-curve. The method of constructing series of multivariable sigma functions is illustrated in \cite{egoy}, where the systems of heat equations and the series for the $(2,5)$, $(2,7)$ and $(3,4)$ curves are given. Series expansions of the sigma functions for the $(3,4)$ and $(3,5)$ curves can be found in \cite{bl2018}. Sigma functions for space curves are constructed in \cite{mats}. 
Another approach to solving the Jacobi inversion problem, which uses ratios of theta functions, received less development, see \cite[Art.\;212]{bakerAF}, and \cite[sect.\;III.6.6, sect.\;VI.3]{FarKra}. An idea how to modify a theta function in the case of the extended Jacobi inversion problem\footnote{A solution of the Jacobi inversion problem is supposed to be a divisor of degree equal to the genus of a curve. If a divisor of degree greater than the genus of a curve is required, then the problem of inverting Abel's map is called the extended Jacobi inversion problem.} is proposed in \cite{bf2008} as a generalization of \cite{fed}. Below we develop the ideas of \cite{bakerMPF} and \cite{belHKF}, and suggest a solution of the Jacobi inversion problem on non-hyperelliptic curves. This solution provides a non-special divisor of the degree equal to the genus of a curve. The case of special divisors is a subject of another investigation. The extended Jacobi inversion problem can be solved by degenerating the sigma function of a curve of a higher genus, see \cite{bl2019} for the case of finding a degree $2$ divisor on an elliptic curve. The paper is organized as follows. In Preliminaries we briefly recall the notions of an $(n,s)$-curve and the Sat\={o} weight, how to construct entire rational functions as well as differentials of the first and the second kinds on such a curve, and the known solutions of the Jacobi inversion problem. In section 3 we give a solution of the Jacobi inversion problem on a non-hyperelliptic curve in general, and display how to adapt the proposed method to hyperelliptic curves. In section 4 an explicit solution of the Jacobi inversion problem on trigonal curves is given. Similar solution on tetragonal and pentagonal curves are presented in sections 5 and 6, respectively. \section{Preliminaries} \subsection{$(n,s)$-Curves} In \cite{bel1999}, with a pair of fixed co-prime integers $n$ and $s$ a family of curves is defined: \begin{subequations}\label{nsCurve} \begin{equation} \mathcal{V}_{(n,s)}=\{(x,y)\in \Complex^2 \mid f(x,y) =0\}, \end{equation} where \begin{gather} f(x,y) = -y^n + x^s + \sum_{j=0}^{n-2} \sum_{i=0}^{s-2} \lambda_{ns-in- js} y^j x^i, \label{fEq}\\ \lambda_{k\leqslant 0}=0, \quad \lambda_{k}\in \Complex. \label{ModCond} \end{gather} \end{subequations} The parameters $\lambda\equiv (\lambda_k)$ of such a family are varying. The condition \eqref{ModCond} guarantees that the genus of a curve from $\mathcal{V}_{(n,s)}$ does not exceed \begin{equation}\label{Vgenus} g = \tfrac{1}{2} (n-1)(s-1). \end{equation} Every family $\mathcal{V}_{(n,s)}$ with fixed $n$ and $s$ is considered as a fibre bundle over the space of parameters $\Lambda = \{\lambda \in \Complex^{2g-M}\} \simeq \Complex^{2g-M}$. Here $M$ is called the modality, and denotes the number of parameters $\lambda_{k\leqslant 0}$ assigned to zero. Let $ \mathcal{V}_{(n,s)}^0 \,{\subset}\, \mathcal{V}_{(n,s)}$ be the submanifold of degenerate curves with genera less than $g$. In what follows, we always consider curves from $\mathcal{V}_{(n,s)}^c = \mathcal{V}_{(n,s)} \backslash \mathcal{V}_{(n,s)}^0$. That is all branch points of a curve are distinct, and the genus of a curve in defined by \eqref{Vgenus}. Any degenerate curve can be replaced with a non-degenerate curve from another family with smaller values of $n$, $s$. All $(n,s)$-curves have the property that infinity is a single point which is a branch point where all $n$ sheets join together. This point serves as the base point. 
In the vicinity of infinity on the curve \eqref{nsCurve} in a local parameter $\xi$ the following expansions hold \begin{gather}\label{nsParam} x = \xi^{-n},\qquad y = \xi^{-s}(1+O(\lambda)). \end{gather} The negative exponent of the leading term in the expansion about infinity serves as the Sat\={o} weight. Thus, the Sat\={o} weights of $x$ and $y$ are $\wgt x = n$, $\wgt y=s$. The weights are assigned to parameters of the curve: $\wgt \lambda_k = k$. Note, that $f$ is homogeneous with respect to the Sat\={o} weight, and $\wgt f= ns$. The Sat\={o} weight introduces an order in the set of monomials $y^jx^i$, that is $\wgt y^jx^i = js+in$. We use the ordered list of monomials $\mathfrak{M}$ as a characteristic of an $(n,s)$-curve. Actually, $\mathfrak{M} = \{\mathcal{M}_{ js+in - 2g+1} = y^jx^i \mid i, j\geqslant 0\}$. \subsection{Abel's map} Let $\mathfrak{W}_{(n,s)} = \{\mathfrak{w}_1,\, \mathfrak{w}_2,\,\dots,\, \mathfrak{w}_g\}$ be the Weierstrass sequence of $\mathcal{V}^c_{(n,s)}$ of genus $g$, namely $$\mathfrak{W}_{(n,s)} = (\{0\} \cup \Natural) \backslash \{js+in \mid i,\, j \geqslant 0\}.$$ Let $\mathrm{d} u \equiv (\mathrm{d} u_{\mathfrak{w}_1},\,\mathrm{d} u_{\mathfrak{w}_2},\,\dots,\,\mathrm{d} u_{\mathfrak{w}_g})^t$ denote differentials of the first kind on the curve. Actually, \begin{gather}\label{uDif} \mathrm{d} u_{\mathfrak{w}} = \frac{\mathcal{M}_{-\mathfrak{w}} \,\mathrm{d} x}{\partial_y f(x,y)},\quad 1 \leqslant \mathfrak{w} \leqslant 2g-1, \end{gather} where $\mathfrak{w}$ runs the Weierstrass sequence, and $\wgt u_{\mathfrak{w}} = - \mathfrak{w}$. Note that the first $g$ monomials $\{\mathcal{M}_{-\mathfrak{w}_g}$, \ldots, $\mathcal{M}_{-\mathfrak{w}_2}$, $\mathcal{M}_{-\mathfrak{w}_1}\}$ from the list $\mathfrak{M}$ are employed. Let $\mathrm{d} r \equiv (\mathrm{d} r_{\mathfrak{w}_1}$, $\mathrm{d} r_{\mathfrak{w}_2}$, \ldots, $\mathrm{d} r_{\mathfrak{w}_g})^t$ denote differentials of the the second kind, $\wgt \mathrm{d} r_{\mathfrak{w}} = \mathfrak{w}$. Let \begin{gather}\label{rDif} \mathrm{d} r_{\mathfrak{w}_i} = \bigg(\mathfrak{w}_i \mathcal{M}_{\mathfrak{w}_i} + \sum_{-2g < \kappa < \mathfrak{w}_i} d_\kappa (\lambda) \mathcal{M}_\kappa \bigg) \frac{\mathrm{d} x}{\partial_y f(x,y)},\quad i =1,\, \dots,\,g, \end{gather} and coefficients $d_\kappa$ are polynomials in $\lambda$. The following condition completely determines the principle parts of $\mathrm{d} r$: \begin{gather}\label{rCond} \res_{\xi = 0} \bigg( \int_0^\xi \mathrm{d} u (\xi) \bigg) \, \mathrm{d} r(\xi)^t = 1_g, \end{gather} where $\mathrm{d} u (\xi)$ and $\mathrm{d} r (\xi)$ denote expansions near infinity of differentials of the first and second kind, and $1_g$ denotes the identity matrix of size $g$. The holomorphic parts of $\mathrm{d} r$ are not essential in the further computations. In what follows, we deal with $\mathrm{d} r_\ell$ of weights $1\leqslant \ell \leqslant n-1$, and of the form \eqref{rDif} with $0 < \kappa < \mathfrak{w}_i$. Let $P$ be a point of a fixed curve from $\mathcal{V}^c_{(n,s)}$, then \begin{gather*} u(P) = \int_{\infty}^P \mathrm{d} u, \qquad\qquad r(P) = \int_{\infty}^P \mathrm{d} r \end{gather*} are integrals of the first and second kinds on the curve, respectively. In fact, $u$ serves as Abel's map on the curve, that is $u(P) \equiv \mathcal{A}(P)$. Abel's map of a divisor $D= \sum_{i=1}^n P_i$ is defined by $\mathcal{A}(D) = \sum_{i=1}^n \mathcal{A}(P_i)$. 
The definition of $r$ requires a regularization since the differentials $\mathrm{d} r$ have a singularity at infinity; for more details see \cite{bl2018}. The integrals $r$ of the second kind define zeta functions $\zeta$ on the curve, up to adding abelian functions. Let $\{\mathfrak{a}_k, \mathfrak{b}_k \mid k =1,\,\dots,\, g\}$ be a canonical basis of cycles on the curve, and let $\omega = (\omega_{ik})$, $\omega' = (\omega'_{ik})$ denote the period matrices of the differentials of the first kind along the $\mathfrak{a}$- and $\mathfrak{b}$-cycles, respectively: \begin{gather*} \omega_{ik} = \int_{\mathfrak{a}_k} \mathrm{d} u_{\mathfrak{w}_i}, \qquad\qquad \omega'_{ik} = \int_{\mathfrak{b}_k} \mathrm{d} u_{\mathfrak{w}_i}. \end{gather*} If $\mathfrak{P}$ denotes the lattice of periods generated by the columns of $(\omega,\omega')$, then $\mathfrak{J} = \Complex^g / \mathfrak{P}$ is a Jacobian variety (Jacobian) of the curve under consideration. Note that $\mathfrak{J}$ serves as the fundamental domain. We denote coordinates of $\mathfrak{J}$, and of the space $\Complex^g$ where $\mathfrak{J}$ is embedded, by $u=(u_{\mathfrak{w}_1},\,u_{\mathfrak{w}_2},\, \dots,\, u_{\mathfrak{w}_g})^t$, $\wgt u_{\mathfrak{w}}=-\mathfrak{w}$. Let $\sigma_{(n,s)}(u;\lambda)$ be the entire function called the sigma function of the family $\mathcal{V}_{(n,s)}$ of curves, $\sigma_{(n,s)}: \Complex^g\times \Lambda \to \Complex$. The sigma function is defined as an analytic series in $u$ and $\lambda$. It is known from \cite{bel1999}, that $$\wgt \sigma_{(n,s)} = - \tfrac{1}{24} (n^2-1)(s^2-1).$$ With the help of the sigma function $\sigma$ of a curve, zeta functions and abelian functions on the Jacobian of this curve are defined: \begin{subequations}\label{wpDefs} \begin{align} &\zeta_{i}(u) = \frac{\partial}{\partial u_i} \log \sigma(u), \label{zetaDef} \\ &\wp_{i,j}(u) = - \frac{\partial^2}{\partial u_i \partial u_j} \log \sigma(u), \label{wp2Def}\\ &\wp_{i,j,k}(u) = - \frac{\partial^3}{\partial u_i \partial u_j \partial u_k} \log \sigma(u), \label{wp3Def} \quad \text{etc.} \end{align} \end{subequations} For brevity, we write $\sigma(u)$, $\zeta_i(u)$, $\wp_{i,j}(u)$, etc. instead of $\sigma(u;\lambda)$, $\zeta_i(u;\lambda)$, $\wp_{i,j}(u;\lambda)$, etc. The abelian functions are periodic with respect to the lattice $\mathfrak{P}$ defined above, and uniformization of the curve is expressed in terms of them. \subsection{The Jacobi inversion problem}\label{ss:JIP} Let $\mathcal{S}^g$ be the $g$-th symmetric product of a curve $V$ of genus $g$, and let $\mathcal{D}_g \subset \mathcal{S}^g$ consist of all degree $g$ non-special divisors. Abel's map establishes a one-to-one correspondence between $\mathcal{D}_g$ and $\mathfrak{J}_g \equiv \mathcal{A}(\mathcal{D}_g) \subset \mathfrak{J}$. Note that $\sigma(u)\neq 0$ if $u \in \mathfrak{J}_g$, and $\sigma(u)=0$ if $u \in \mathfrak{J}_0 \equiv \mathfrak{J} \backslash \mathfrak{J}_g$ by the Riemann vanishing theorem. \newtheorem*{JIProblem}{The Jacobi inversion problem} \begin{JIProblem} Given $u \in \mathfrak{J}_g$ such that $\sigma(u)$ does not vanish, find a non-special degree $g$ divisor $D_g$ such that $\mathcal{A}(D_g) = u$. \end{JIProblem} It is known that the Jacobi inversion problem in this formulation has a unique solution. Every divisor $D_g \in \mathcal{D}_g$ serves as a representative of a class of equivalent divisors $\{D \mid \deg D \geqslant g,\, \mathcal{A}(D) = \mathcal{A}(D_g)\}$. There is only one divisor $D_g$ of degree $g$ in each class, and it is called a reduced divisor.
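For instance, on an elliptic curve ($g=1$) the class of a degree $2$ divisor $P_1 + P_2$ with $u = \mathcal{A}(P_1) + \mathcal{A}(P_2) \in \mathfrak{J}_1$ contains a unique degree $1$ representative $P_3$, determined by $\mathcal{A}(P_3) = u$; this is nothing but the classical addition law on the curve, and $P_3$ is the reduced divisor of the class.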
We leave the Jacobi inversion problem for points of $\mathfrak{J}_0 $ beyond our consideration in this paper. Now we recall the known solutions of the declared Jacobi inversion problem. \begin{exam} A uniformization of the Weierstrass canonical curve ${-}y^2 + 4x^3 - g_2 x - g_3 =0$ is given by $(x,y)= \big(\wp(u;g_2,g_3), - \wp' (u;g_2,g_3)\big)$ with the standard Weierstrass $\wp$-function and $\sigma$-function. The Weierstrass canonical curve is equivalent to $\mathcal{V}_{(2,3)}$ of the form $-y^2 + x^3 + \lambda_4 x + \lambda_6 = 0$, and the corresponding sigma function $\sigma_{(2,3)}$ relates to the Weierstrass $\sigma$-function as $\sigma_{(2,3)}(u; \lambda_4,\lambda_6) = \sigma (u; -4 \lambda_4, -4 \lambda_6)$. Then $\wp_{(2,3)}(u; \lambda_4,\lambda_6) = \wp(u; -4 \lambda_4, -4 \lambda_6)$, and a uniformization of $\mathcal{V}_{(2,3)}$ is given by $(x,y)= \big(\wp_{(2,3)}(u; \lambda)$, $- \tfrac{1}{2} \wp'_{(2,3)} (u; \lambda)\big)$. \end{exam} \begin{exam} In \cite[Art.\;11]{bakerMPF} the reader can find a solution of the Jacobi inversion problem on a $(2,5)$-curve. If $D=(x_1,\,y_1)+(x_2,\,y_2)$, and $u=\mathcal{A}(D)$, then\footnote{The given formulas are obtained with the following differentials of the first and second kinds \begin{align*} & \mathrm{d} u_1 = \frac{x\,\mathrm{d} x}{-2y},& &\mathrm{d} r_1 = \frac{x^2 \mathrm{d} x}{-2y},&\\ & \mathrm{d} u_3 = \frac{\mathrm{d} x}{-2y},& &\mathrm{d} r_3 = (3x^3 +\lambda_4 x) \frac{\mathrm{d} x}{-2y}.& \end{align*}} \begin{subequations}\label{JIP25} \begin{gather} x_1+x_2 = \wp_{1,1}(u), \qquad x_1 x_2 = - \wp_{1,3}(u),\\ y_i = -\tfrac{1}{2} \big( x_i \wp_{1,1,1}(u) + \wp_{1,1,3}(u) \big),\qquad i=1,\, 2. \end{gather} \end{subequations} \end{exam} \begin{exam}[\textbf{Hyperelliptic curves}] A solution of the Jacobi inversion problem on a hyperelliptic curve is proposed in \cite[Theorem 2.2]{belHKF}. Let a non-degenerate hyperelliptic curve of genus $g$ be defined\footnote{A $(2,2g+1)$-curve serves as a canonical form of hyperelliptic curves of genus $g$.} by \begin{equation}\label{V22g1Eq} -y^2 + x^{2 g+1} + \sum_{i=1}^{2g} \lambda_{2i+2} x^{2g-i} = 0. \end{equation} Let $u = \mathcal{A}(D)$ be the Abel image of a degree $g$ non-special divisor $D$ on the curve. Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{EnC22g1} \begin{align} &x^{g} - \sum_{i=1}^{g} x^{g-i} \wp_{1,2i-1}(u) = 0,\\ &2 y + \sum_{i=1}^{g} x^{g-i} \wp_{1,1,2i-1}(u) = 0. \end{align} \end{subequations} \end{exam} \subsection{Entire rational functions on a curve}\label{ss:EnRatF} Let $\mathcal{R}$ be an entire rational function on a curve $V$ of genus $g$ from $\mathcal{V}^c_{(n,s)}$. Let $\mathcal{R}$ be of weight $N$, that is, its divisor of zeros consists of $N$ points, and a pole of order $N$ is located at infinity. The function $\mathcal{R}$ has the form of a linear combination of the first $N-g+1$ monomials from the ordered list $\mathfrak{M}$. Indeed, according to the Riemann--Roch theorem, only $N-g$ zeros can be chosen arbitrarily; the remaining $g$ zeros are determined by the curve equation. In general, an entire rational function of weight $N\geqslant 2g $ has the form \begin{equation}\label{RN} \mathcal{R}(x,y) = \sum_{j=0}^{n-1} y^j \rho_{j}(x), \end{equation} where $\rho_{j}$, $0\leqslant j \leqslant n-1$, are polynomials in $x$. \section{Solving the Jacobi inversion problem} In what follows, curves are supposed to be non-hyperelliptic. The hyperelliptic case is considered separately in Example~\ref{E:HypC}.
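To fix ideas before stating the theorems, consider the $(3,4)$-curve of genus $3$: by subsection~\ref{ss:EnRatF}, an entire rational function of the minimal weight $N = 2g = 6$ is a linear combination of the first $N-g+1 = 4$ monomials of $\mathfrak{M}$, namely $1$, $x$, $y$, $x^2$, say \begin{equation*} \mathcal{R}_6(x,y) = x^2 + \alpha_1 y + \alpha_2 x + \alpha_3 \end{equation*} (we normalize the coefficient of the top-weight monomial to $1$ purely for illustration). Theorems~\ref{T1} and \ref{T2} below identify such coefficients with abelian functions of $u$; cf.\ the explicit system for the $(3,4)$-curve in section 4.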
\begin{theo}\label{T1} Let $u = \mathcal{A}(D)$ be the Abel image of a degree $g$ non-special divisor $D$ on a non-degenerate $(n,s)$-curve of genus $g$. Then $D$ is uniquely defined by a system of $n-1$ equations with entire rational functions of the weights $2g$, $2g+1$, \ldots, $2g+n-2$, namely \begin{gather}\label{REqs} \begin{split} &\mathcal{R}_{2g}(x,y;u) = 0,\\ &\mathcal{R}_{2g+1}(x,y;u) = 0,\\ &\vdots\\ &\mathcal{R}_{2g+n-2}(x,y;u) = 0. \end{split} \end{gather} The entire rational functions have the form \begin{equation}\label{R2gi} \mathcal{R}_{2g+l}(x,y;u) = \sum_{j=0}^{n-2} y^j \rho_{j}^{[2g+l]}(x;u), \qquad 0 \leqslant l \leqslant n-2, \end{equation} where $\rho_{j}^{[2g+l]}$ denotes a polynomial in $x$ with abelian functions of $u$ as coefficients. \end{theo} \begin{theo}\label{T2} Let $V$ be a non-degenerate $(n,s)$-curve of genus $g$ with a list of monomials $\mathfrak{M}$ sorted in the ascending order of the Sat\={o} weight, and let $\mathcal{M}_{\mathfrak{w}}$ denote the monomial of weight $\mathfrak{w}+2g-1$. Let $\widetilde{\mathcal{M}}_{\ell}$ be the entire rational function equal to $(\partial_y f)\mathrm{d} r_\ell/ \mathrm{d} x$, and let the differentials of the second kind $\mathrm{d} r_\ell$, $\ell=1$, \ldots, $n-1$, be defined by \eqref{rCond}. Then the entire rational functions in \eqref{REqs} have the form \begin{equation}\label{R2giAb} \mathcal{R}_{2g+\ell-1}(x,y;u) = \widetilde{\mathcal{M}}_{\ell} - \sum_{i=1}^{g} A_{\ell,\mathfrak{w}_i}(u) \mathcal{M}_{-\mathfrak{w}_i}, \qquad 1 \leqslant \ell \leqslant n-1, \end{equation} where $A_{\ell,\mathfrak{w}_i}$ denote abelian functions on the Jacobian variety of $V$, and $\mathfrak{w}_i$ runs the Weierstrass gap sequence. \end{theo} \begin{proof}[Proof of Theorem~\ref{T1}] As distinct from \eqref{RN}, entire rational functions of the weights not greater than $2g+n-2$ do not contain the term $y^{n-1} \rho_{n-1}(x)$. That is, the highest degree of $y$ in \eqref{R2gi} is $n-2$. Indeed, since $\wgt y^{n-1} = s(n-1)$ is greater than $2g+n-2=s (n-1)-1$, the minimal weight of a function which contains the term $y^{n-1}$ is $2g+n-1$. The system \eqref{REqs} admits a matrix form \begin{gather*} \tens{R}(x;u) \tens{Y} = 0, \intertext{where} \tens{R}(x;u) = \begin{pmatrix} \rho_{0}^{[2g]} & \rho_{1}^{[2g]} & \dots & \rho_{n-2}^{[2g]} \\ \rho_{0}^{[2g+1]} & \rho_{1}^{[2g+1]} & \dots & \rho_{n-2}^{[2g+1]} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{0}^{[2g+n-2]} & \rho_{1}^{[2g+n-2]} & \dots & \rho_{n-2}^{[2g+n-2]} \end{pmatrix},\quad \tens{Y} = \begin{pmatrix} 1 \\ y \\ \vdots \\ y^{n-2} \end{pmatrix}. \end{gather*} For brevity, we omit the arguments of polynomials and write $\rho_{j}^{[2g+l]}$ instead of $\rho_{j}^{[2g+l]}(x;u)$. Let $\mathfrak{m}$ and $\mathfrak{r}$ be the natural numbers such that $s= n \mathfrak{m} + \mathfrak{r}$. That is, $\mathfrak{m}$ is the integer part of $s/n$, and $\mathfrak{r}$ is the remainder. Then \begin{gather*} \deg \rho_{j}^{[2g+l]} = \Big[ \frac{2g + l - j s }{n} \Big] = \Big[\mathfrak{m} (n-j-1) + \frac{\mathfrak{r}}{n} (n-j-1) - \frac{ n - l - 1}{n} \Big]. \end{gather*} Now we find the degree of $\det \tens{R}(x;u)$ as a polynomial in $x$. The latter is a sum of terms of the form $\prod_{j=0}^{n-2} \rho_{j}^{[2g+l(j)]}$, where $l(j)$ is a permutation of the values $0$, \ldots, $n-2$.
Then \begin{multline*} \deg \prod_{j=0}^{n-2} \rho_{j}^{[2g+l(j)]} \leqslant \Big[ \sum_{j=0}^{n-2} \Big( \mathfrak{m} (n-j-1) + \frac{\mathfrak{r}}{n} (n-j-1) - \frac{1}{n}(n - l(j) - 1)\Big) \Big] \\ = \Big[ \frac{1}{2} (n-1) n \mathfrak{m} + \frac{\mathfrak{r}}{2}(n-1) - \frac{1}{2} (n-1) \Big] = \Big[ \frac{1}{2} (n-1) (s-1)\Big] = g. \end{multline*} Therefore, the divisor of zeros of $\mathcal{X}(x;u)=\det \tens{R}(x;u)$ has degree $g$, and these values give $x$-coordinates of the support of the required $D$. The corresponding $y$-coordinates are determined from the system \eqref{REqs}. \end{proof} \begin{proof}[Proof of Theorem~\ref{T2}] We suppose that $u=\mathcal{A}(D)$, where $D$ is a degree $g$ non-special divisor on $V$. From the vanishing properties of the sigma function, we know that $\sigma\big(u - \mathcal{A}(x,y) \big)$ has zeros exactly at $D=\sum_{k=1}^g (x_k,\,y_k)$. By the residue theorem, we have \begin{equation*} \sum_{k=1}^g r_{\ell} (x_k,\,y_k) = \frac{1}{2\pi \mathrm{i}}\oint_{C_{\infty}} \bigg(\int_{\infty}^{(x,y)} \mathrm{d} r_{\ell} \bigg) \, \mathrm{d} \log \sigma\big(u - \mathcal{A}(x,y) \big), \end{equation*} where $\mathrm{d} r_{\ell}$ and $r_{\ell}$ are a differential and the corresponding integral of the second kind of weight $\ell$, and $C_{\infty}$ denotes a contour encircling infinity which corresponds to the boundary of the fundamental domain in the Jacobian $\mathfrak{J}$ of the curve. Then we compute the residue at infinity ($\xi=0$) on the right hand side using the parameterization \eqref{nsParam} of $V$. Thus, \begin{equation}\label{rExpr} \sum_{k=1}^g r_{\ell} (x_k,\,y_k) = - \res\limits_{\xi=0} \bigg(r_{\ell}(\xi) \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u- \mathcal{A}(\xi)\big) \bigg) \equiv R_{\ell} (u). \end{equation} In fact, \eqref{rExpr} gives a definition of $\zeta_{\ell}$. Differentiating \eqref{rExpr} with respect to $x_1$, we find \begin{equation}\label{RatFExpr} \frac{\mathrm{d} r_{\ell}(x_1,\,y_1)}{\mathrm{d} x_1} = \big(\partial_u R_{\ell} (u) \big)^t\frac{\mathrm{d} u(x_1,\,y_1)}{\mathrm{d} x_1}. \end{equation} Multiplying \eqref{RatFExpr} by $\partial_{y_1} f(x_1,y_1)$, we find the relations \eqref{REqs} with the entire rational functions of the form \eqref{R2giAb}, where $A_{\ell,\mathfrak{w}_i}(u) = \partial_{u_{\mathfrak{w}_i}} R_{\ell} (u)$. Note that $\mathrm{d} u_{\mathfrak{w}}(x,y) / \mathrm{d} x= \mathcal{M}_{-\mathfrak{w}} /(\partial_y f)$, where $\mathfrak{w}$ runs the Weierstrass gap sequence $\mathfrak{W}=\{\mathfrak{w}_i\} $. Instead of the second kind differentials defined by \eqref{rCond}, one can use a simpler form: $\mathrm{d} r_{\ell} = \ell \mathcal{M}_{\ell} \, \mathrm{d} x/(\partial_y f)$, $\ell=1$, \ldots, $n-1$. Then by simplifying \eqref{rExpr} to the form with no $\zeta_k$, $k<\ell$, one finds the relations \eqref{REqs} with the entire rational functions of the form \eqref{R2giAb}. \end{proof} The proposed method is applicable to hyperelliptic curves. Though $n-1=1$ in this case, two entire rational functions are needed to define $D$ such that $u=\mathcal{A}(D)$. \begin{exam}[\textbf{Hyperelliptic curves}]\label{E:HypC} Recall that the curve \eqref{V22g1Eq} has the Weierstrass sequence $\mathfrak{W}=\{2i-1 \mid i=1,\, \dots,\, g\}$. The list of monomials is $\mathfrak{M} = \{x^k \mid k =0,\,1,\, \dots \}$, and $\mathcal{M}_{-(2i-1)} = x^{g-i}$, $i=1,\, \dots,\, g$, produce differentials of the first kind. Then we need differentials of the second kind of the weights $1$ and $2$. 
The latter are generated by the monomials $x^g$ and $y$, namely \begin{align*} &(\mathrm{d} r_1(x,y),\, \mathrm{d} r_2(x,y)) = \frac{\mathrm{d} x}{\partial_y f} \big( x^g,\, 2 y \big). \end{align*} Using the method described in the proof of Theorem~\ref{T2}, we find that the preimage $D$ of $u=\mathcal{A}(D)$ is uniquely defined by the system \begin{equation*} \mathcal{R}_{2g}(x,y;u)=0,\qquad \mathcal{R}_{2g+1}(x,y;u)=0 \end{equation*} with two entire rational functions of the weights $2g$ and $2g+1$: \begin{align*} \mathcal{R}_{2g}(x,y;u) &= x^{g} - \sum_{i=1}^{g} \wp_{1,2i-1}(u) x^{g-i},\\ \mathcal{R}_{2g+1}(x,y;u) &= 2 y + \sum_{i=1}^{g} \wp_{1,1,2i-1}(u) x^{g-i}, \end{align*} which coincides with \eqref{EnC22g1}. \end{exam} Below we illustrate the proposed method with multiple examples. We consider trigonal, tetragonal and pentagonal curves. \section{Jacobi inversion problem on trigonal curves} A uniformization of Jacobian varieties of trigonal $(n,s)$-curves, which is, in fact, a solution of the Jacobi inversion problem, is proposed in \cite{bel00}. The latter solution is obtained from the Klein formula \cite[Eq.\;(4.1)]{bel00}, which connects the fundamental bi-differential to a quadratic function in differentials of the first kind with $\wp$ functions as coefficients. Below we suggest an alternative solution of the Jacobi inversion problem on trigonal curves based on the results of the previous section. Trigonal $(n,s)$-curves split into two types: $(3,3\mathfrak{m}+1)$ and $(3,3\mathfrak{m}+2)$-curves, where $\mathfrak{m}$ is a natural number. \begin{theo}[$(3,3\mathfrak{m}+1)$-Curves]\label{T:C33m1} Let a non-degenerate $(3,3\mathfrak{m}+1)$-curve of genus $g=3\mathfrak{m}$, $\mathfrak{m} \in \Natural$, be defined by \begin{equation}\label{V33m1Eq} -y^3 + x^{3\mathfrak{m}+1} + y \sum_{i=0}^{2\mathfrak{m}} \lambda_{3i+2} x^{2\mathfrak{m}-i} + \sum_{i=1}^{3\mathfrak{m}} \lambda_{3i+3} x^{3\mathfrak{m}-i} = 0. \end{equation} Let $u = \mathcal{A}(D)$ be the Abel image of a degree $g$ non-special divisor $D$ on the curve. Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{REqsC33m1} \begin{equation} \mathcal{R}_{6\mathfrak{m}}(x,y;u)=0,\qquad \mathcal{R}_{6\mathfrak{m}+1}(x,y;u)=0 \end{equation} with two entire rational functions of the weights $2g=6\mathfrak{m}$, $2g+1=6\mathfrak{m}+1$: \begin{align} \mathcal{R}_{6\mathfrak{m}}(x,y;u) &= x^{2\mathfrak{m}} - \sum_{i=1}^{3\mathfrak{m}} \wp_{1,\mathfrak{w}_i}(u) \mathcal{M}_{-\mathfrak{w}_i},\\ \mathcal{R}_{6\mathfrak{m}+1}(x,y;u) &= 2 y x^{\mathfrak{m}} - \sum_{i=1}^{3\mathfrak{m}} \big(\wp_{2,\mathfrak{w}_i}(u) - \wp_{1,1,\mathfrak{w}_i}(u) \big) \mathcal{M}_{-\mathfrak{w}_i}, \end{align} \end{subequations} where \begin{align*} \mathcal{M}_{-(3i-2)} & = y x^{\mathfrak{m}-i},\quad i=1,\, \dots,\, \mathfrak{m},\\ \mathcal{M}_{-(3i-1)} & = x^{2\mathfrak{m}-i}, \quad i=1,\, \dots,\, 2\mathfrak{m}. \end{align*} \end{theo} \begin{theo}[$(3,3\mathfrak{m}+2)$-Curves]\label{T:C33m2} Let a non-degenerate $(3,3\mathfrak{m}+2)$-curve of genus $g=3\mathfrak{m}+1$, $\mathfrak{m} \in \Natural$, be defined by \begin{equation}\label{V33m2Eq} -y^3 + x^{3\mathfrak{m}+2} + y \sum_{i=0}^{2\mathfrak{m} + 1} \lambda_{3i+1} x^{2\mathfrak{m}+1-i} + \sum_{i=1}^{3\mathfrak{m} +1} \lambda_{3i+3} x^{3\mathfrak{m}+1-i} = 0. \end{equation} Let $u = \mathcal{A}(D)$ be the Abel image of a degree $g$ non-special divisor $D$ on the curve. 
Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{REqsC33m2} \begin{equation} \mathcal{R}_{6\mathfrak{m}+2}(x,y;u)=0,\qquad \mathcal{R}_{6\mathfrak{m}+3}(x,y;u)=0 \end{equation} with two entire rational functions of the weights $2g=6\mathfrak{m}+2$, $2g+1=6\mathfrak{m}+3$: \begin{align} \mathcal{R}_{6\mathfrak{m}+2}(x,y;u) &= y x^{\mathfrak{m}} - \sum_{i=1}^{3\mathfrak{m}+1} \wp_{1,\mathfrak{w}_i}(u) \mathcal{M}_{-\mathfrak{w}_i},\\ \mathcal{R}_{6\mathfrak{m}+3}(x,y;u) &= 2 x^{2\mathfrak{m}+1} + \lambda_1 y x^{\mathfrak{m}} - \sum_{i=1}^{3\mathfrak{m}+1} \big(\wp_{2,\mathfrak{w}_i}(u) - \wp_{1,1,\mathfrak{w}_i}(u) \big) \mathcal{M}_{-\mathfrak{w}_i} , \end{align} \end{subequations} where \begin{align*} \mathcal{M}_{-(3i-1)} & = y x^{\mathfrak{m}-i},\quad i=1,\, \dots,\, \mathfrak{m},\\ \mathcal{M}_{-(3i-2)} & = x^{2\mathfrak{m}+1-i}, \quad i=1,\, \dots,\, 2\mathfrak{m} +1. \end{align*} \end{theo} \subsection{Proof of Theorem~\ref{T:C33m1} ($(3,3\mathfrak{m}+1)$-Curves)} Let $\xi$ be a local parameter in the vicinity of infinity on the curve \eqref{V33m1Eq}, then \begin{equation}\label{xyParamC33m1} x(\xi) = \xi^{-3},\qquad y(\xi) = \xi^{-3\mathfrak{m}-1}\Big(1 + \frac{\lambda_2}{3} \xi^2 + \frac{\lambda_5}{3} \xi^5 + O(\xi^6)\Big). \end{equation} The basis of differentials of the first kind has the form \begin{align*} \mathrm{d} u(x,y) &= \frac{\mathrm{d} x}{\partial_y f} \big(y x^{\mathfrak{m}-1},\, x^{2\mathfrak{m}-1},\, \dots,\, y,\, x^{\mathfrak{m}},\, \dots,\,x,\,1 \big)^t\\ &= \frac{\mathrm{d} x}{\partial_y f} \big(\mathcal{M}_{-1},\, \mathcal{M}_{-2},\, \dots,\, \mathcal{M}_{-(3\mathfrak{m}-2)},\, \mathcal{M}_{-(3\mathfrak{m}-1)}, \notag\\ &\phantom{mmmmmmmmmmn} \dots,\, \mathcal{M}_{-(6\mathfrak{m}-4)},\, \mathcal{M}_{-(6\mathfrak{m}-1)} \big)^t, \notag \end{align*} and the expansions near infinity are \begin{align*} &\mathrm{d} u_{3i-2} = y x^{\mathfrak{m}-i}\frac{ \mathrm{d} x}{\partial_y f} =\xi^{3i-3} \Big(1 + O(\xi^4)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, \mathfrak{m},\\ &\mathrm{d} u_{3i-1} = x^{2\mathfrak{m}-i}\frac{ \mathrm{d} x}{\partial_y f} = \xi^{3i-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 2\mathfrak{m}. \end{align*} By integration with respect to $\xi$ we find expansions for integrals of the first kind: \begin{subequations}\label{Int1s33m1} \begin{align} & u_{3i-2}(\xi) = \frac{\xi^{3i-2}}{3i-2} + O(\xi^{3i+2}),\ i = 1,\, \dots,\, \mathfrak{m},\\ & u_{3i-1}(\xi) = \frac{\xi^{3i-1}}{3i-1} + O(\xi^{3i+1}),\ i = 1,\, \dots,\, 2\mathfrak{m}. \end{align} \end{subequations} The two differentials of the second kind of the smallest Sat\={o} weights are \begin{align}\label{Dif2s33m1} &(\mathrm{d} r_1(x,y),\, \mathrm{d} r_2(x,y)) = \frac{\mathrm{d} x}{\partial_y f} \big( x^{2\mathfrak{m}},\, 2yx^{\mathfrak{m}} \big). \end{align} With the help of \eqref{xyParamC33m1} we find the corresponding expansions near infinity: \begin{align*} &\mathrm{d} r_1(\xi) = \xi^{-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\\ &\mathrm{d} r_2(\xi) = 2 \xi^{-3} \Big(1 + O(\xi^4)\Big) \mathrm{d} \xi. \end{align*} As seen from a direct computation, \eqref{Dif2s33m1} satisfy the condition \eqref{rCond}. 
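For the reader's convenience, here is that computation at the lowest weights (only the expansions displayed above are used): \begin{align*} \res_{\xi = 0}\, u_1(\xi) \frac{\mathrm{d} r_1(\xi)}{\mathrm{d} \xi} &= \res_{\xi = 0} \big(\xi + O(\xi^5)\big)\, \xi^{-2}\big(1 + O(\xi^2)\big) = 1,\\ \res_{\xi = 0}\, u_2(\xi) \frac{\mathrm{d} r_2(\xi)}{\mathrm{d} \xi} &= \res_{\xi = 0} \tfrac{1}{2}\big(\xi^{2} + O(\xi^{4})\big)\, 2\xi^{-3}\big(1 + O(\xi^{4})\big) = 1, \end{align*} while the mixed products $u_1 \mathrm{d} r_2$, $u_2 \mathrm{d} r_1$, as well as all products with $u_{\mathfrak{w}}$, $\mathfrak{w} \geqslant 4$, contain no $\xi^{-1}$ term, in agreement with \eqref{rCond}.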
Next, we find the integrals of the second kind \begin{align*} & r_1(\xi) = -\xi^{-1} + O(\xi) + c_1,\\ & r_2(\xi) = - \xi^{-2} + O(\xi^2) + c_2, \end{align*} where $c_1$ and $c_2$ are regularization constants\footnote{The reader can find a solution of the regularization problem for integrals of the second kind on non-hyperelliptic curves in \cite{bl2018}, in particular, $c_1 = 0$, $c_2 = -\lambda_2/3$ on $\mathcal{V}_{(3,4)}$, and $c_1 = 0$, $c_2 = -2\lambda_2/3$ on $\mathcal{V}_{(3,7)}$.}, which are not essential in the further computations. The integrals of the first kind \eqref{Int1s33m1} define $\mathcal{A}(\xi)$ from \eqref{rExpr}, namely $$\mathcal{A}(\xi) = \big(u_1(\xi),\, u_2(\xi),\, \dots,\, u_{3\mathfrak{m}-2}(\xi),\, u_{3\mathfrak{m}-1}(\xi),\, \dots,\, u_{6\mathfrak{m}-4}(\xi),\, u_{6\mathfrak{m}-1}(\xi)\big)^t.$$ Then we compute residues on the right hand side of \eqref{rExpr} from the series expansions in~$\xi$. Taking into account that $\mathcal{A}(0)=0$, $\mathcal{A}'(0)=(\delta_{i,1})$, $\mathcal{A}''(0)=(\delta_{i,2})$, where $\delta_{i,k}$ denotes the Kronecker delta and $i$ runs from $1$ to $g=3\mathfrak{m}$, we find \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u - \mathcal{A}(\xi)\big) = - \zeta_1(u) - \big(\zeta_2(u) + \wp_{1,1}(u)\big) \xi + O(\xi^2). \end{equation*} Here $u = \mathcal{A}(D)$ is the Abel image of a non-special divisor $D = \sum_{k=1}^{3\mathfrak{m}} (x_k,y_k)$. Thus, \eqref{rExpr} acquires the form \begin{gather}\label{ZetaC33m1} \sum_{k=1}^{3\mathfrak{m}} \begin{pmatrix} r_1(x_k,\,y_k) \\ r_2(x_k,\,y_k) \end{pmatrix} = - \begin{pmatrix} \zeta_1(u) \\ \zeta_2(u) + \wp_{1,1}(u) \end{pmatrix} \equiv \begin{pmatrix} R_1 (u) \\ R_2 (u)\end{pmatrix}. \end{gather} Differentiating the above relations with respect to $x_1$, we obtain the relations \begin{align*} x_1^{2\mathfrak{m}} &= \bigg( y_1 \sum_{i=1}^{\mathfrak{m}} \wp_{1,3i-2}(u) x_1^{\mathfrak{m}-i} + \sum_{i=1}^{2\mathfrak{m}} \wp_{1,3i-1}(u) x_1^{2\mathfrak{m}-i} \bigg),\\ 2y_1 x_1^{\mathfrak{m}} &= \bigg( y_1 \sum_{i=1}^{\mathfrak{m}} \big(\wp_{2,3i-2}(u) - \wp_{1,1,3i-2}(u) \big) x_1^{\mathfrak{m}-i} \\ &\qquad + \sum_{i=1}^{2\mathfrak{m}} \big( \wp_{2,3i-1}(u) - \wp_{1,1,3i-1}(u) \big) x_1^{2\mathfrak{m}-i} \bigg), \end{align*} which coincide with \eqref{REqsC33m1}. $\qede$ \subsection{Example: $(3,4)$-Curve} The family $\mathcal{V}_{(3,4)}$ of genus $3$ is defined by the equation \begin{equation*} -y^3 + x^4 + y (\lambda_2 x^2 + \lambda_5 x + \lambda_8) + \lambda_6 x^2 + \lambda_9 x + \lambda_{12} = 0, \end{equation*} with the Weierstrass gap sequence $\{1,\, 2, \,5\}$. The first three monomials in the list $\mathfrak{M}$, sorted in the ascending order of the Sat\={o} weight, are $\mathcal{M}_{-1} = y$, $\mathcal{M}_{-2} = x$, $\mathcal{M}_{-5} = 1$. They serve as numerators of differentials of the first kind. The next two monomials are $\mathcal{M}_{1}= x^2$, $\mathcal{M}_{2}=y x$; they produce differentials of the second kind $\mathrm{d} r_1$ and $\mathrm{d} r_2$ of the form \eqref{Dif2s33m1}. A solution of the Jacobi inversion problem for $D=\sum_{k=1}^3 (x_k,y_k)$ on $\mathcal{V}_{(3,4)}$ such that $u = \mathcal{A}(D)$ is given by the system \begin{align*} 0=\mathcal{R}_6 (x,y;u) &\equiv x^2 - y \wp_{1,1}(u) - x \wp_{1,2}(u) - \wp_{1,5}(u), \\ 0=\mathcal{R}_7 (x,y;u) &\equiv 2 y x - y \big(\wp_{1,2}(u) - \wp_{1,1,1}(u) \big) - x \big(\wp_{2,2}(u) - \wp_{1,1,2}(u) \big) \\ &\qquad \quad - \big(\wp_{2,5}(u) - \wp_{1,1,5}(u) \big). 
\end{align*} \begin{rem} Let a trigonal curve of genus $3$ be defined by the equation \begin{equation*} -y^3 + x^4 + y^2 (\lambda_1 x + \lambda_4) + y (\lambda_2 x^2 + \lambda_5 x + \lambda_8) + \lambda_3 x^3 + \lambda_6 x^2 + \lambda_9 x + \lambda_{12} = 0, \end{equation*} with the extra terms $y^2$, $x^3$, $y^2 x$. Then the differentials of the second kind satisfying \eqref{rCond} are \begin{align*} &\begin{pmatrix} \mathrm{d} r_1(x,y) \\ \mathrm{d} r_2(x,y) \end{pmatrix} = \begin{pmatrix} x^2 \\ 2 y x - \lambda_1 x^2 \end{pmatrix} \frac{\mathrm{d} x}{\partial_y f}. \end{align*} Then the expression for $\mathcal{R}_7$ acquires the form \begin{align*} \mathcal{R}_7 (x,y;u) &\equiv 2 y x - \lambda_1 x^2 - y \big(\wp_{1,2}(u) - \wp_{1,1,1}(u) \big) \\ &\qquad \quad - x \big(\wp_{2,2}(u) - \wp_{1,1,2}(u) \big) - \big(\wp_{2,5}(u) - \wp_{1,1,5}(u) \big). \end{align*} \end{rem} \subsection{Example: $(3,7)$-Curve} The family $\mathcal{V}_{(3,7)}$ of genus $6$ is defined by the equation \begin{gather*} \begin{split} -y^3 + x^7 &+ y \big(\lambda_2 x^4 + \lambda_5 x^3 + \lambda_8 x^2 + \lambda_{11} x + \lambda_{14} \big) \\ &+ \lambda_6 x^5 + \lambda_9 x^4 + \lambda_{12} x^3 + \lambda_{15} x^2 + \lambda_{18} x + \lambda_{21} = 0 \end{split} \end{gather*} with the Weierstrass gap sequence $\{1,\, 2, \,4,\, 5,\, 8,\, 11\}$. The first six monomials $\mathcal{M}_{-1} = y x$, $\mathcal{M}_{-2} = x^3$, $\mathcal{M}_{-4} = y$, $\mathcal{M}_{-5} = x^2$, $\mathcal{M}_{-8} = x$, $\mathcal{M}_{-11} = 1$ from $\mathfrak{M}$ serve as numerators of differentials of the first kind. The next two monomials $\mathcal{M}_{1}= x^4$, $\mathcal{M}_{2}=y x^2$ produce differentials of the second kind $\mathrm{d} r_1$ and $\mathrm{d} r_2$ of the form \eqref{Dif2s33m1}. A solution of the Jacobi inversion problem for $D=\sum_{k=1}^6 (x_k,y_k)$ on $\mathcal{V}_{(3,7)}$ such that $u = \mathcal{A}(D)$ is given by the system \begin{gather*} \mathcal{R}_{12} (x,y;u) = 0,\qquad\qquad \mathcal{R}_{13} (x,y;u) = 0, \end{gather*} where \begin{align*} \mathcal{R}_{12} (x,y;u) &= x^4 - y x \wp_{1,1}(u) - x^3 \wp_{1,2}(u) - y \wp_{1,4}(u) \\ &\qquad \ - x^2 \wp_{1,5}(u) - x \wp_{1,8}(u) - \wp_{1,11}(u), \\ \mathcal{R}_{13} (x,y;u) &= 2 y x^2 - y x \big(\wp_{1,2}(u) - \wp_{1,1,1}(u) \big) - x^3 \big(\wp_{2,2}(u) - \wp_{1,1,2}(u) \big) \\ &\qquad \quad \ - y \big(\wp_{2,4}(u) - \wp_{1,1,4}(u) \big) - x^2 \big(\wp_{2,5}(u) - \wp_{1,1,5}(u) \big) \\ &\qquad \quad \ - x \big(\wp_{2,8}(u) - \wp_{1,1,8}(u) \big) - \big(\wp_{2,11}(u) - \wp_{1,1,11}(u) \big). \end{align*} \subsection{Proof of Theorem~\ref{T:C33m2} ($(3,3\mathfrak{m}+2)$-Curves)} Let $\xi$ be a local parameter in the vicinity of infinity on the curve \eqref{V33m2Eq}, then \begin{equation}\label{xyParamC33m2} x(\xi) = \xi^{-3},\qquad y(\xi) = \xi^{-3\mathfrak{m}-2}\Big(1 + \frac{\lambda_1}{3} \xi - \frac{\lambda_1^3 }{81} \xi^3 + O(\xi^4)\Big).
\end{equation} The basis of differentials of the first kind has the form \begin{align}\label{Dif1s33m2} \mathrm{d} u(x,y) &= \frac{\mathrm{d} x}{\partial_y f} \big(x^{2\mathfrak{m}},\, y x^{\mathfrak{m}-1},\, \dots,\, x^{\mathfrak{m}+1},\, y,\, x^{\mathfrak{m}},\, \dots,\,x,\,1 \big)^t \\ &= \frac{\mathrm{d} x}{\partial_y f} \big(\mathcal{M}_{-1},\, \mathcal{M}_{-2},\, \dots,\, \mathcal{M}_{-(3\mathfrak{m}-2)},\, \mathcal{M}_{-(3\mathfrak{m}-1)}, \notag\\ &\phantom{mmmmmmm} \mathcal{M}_{-(3\mathfrak{m}+1)},\, \dots,\, \mathcal{M}_{-(6\mathfrak{m}-2)},\, \mathcal{M}_{-(6\mathfrak{m}+1)} \big)^t, \notag \end{align} and the expansions near infinity are \begin{align*} &\mathrm{d} u_{3i-1} = y x^{\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} = \xi^{3i-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, \mathfrak{m},\\ &\mathrm{d} u_{3i-2} = x^{2\mathfrak{m}+1-i} \frac{\mathrm{d} x}{\partial_y f} = \xi^{3i-3} \Big(1 - \frac{\lambda_1}{3} \xi + O(\xi^3)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 2\mathfrak{m}+1. \end{align*} By integration with respect to $\xi$ we find expansions for integrals of the first kind: \begin{subequations}\label{Int1s33m2} \begin{align} & u_{3i-1}(\xi) = \frac{\xi^{3i-1}}{3i-1} + O(\xi^{3i+1}),\ i = 1,\, \dots,\, \mathfrak{m},\\ & u_{3i-2}(\xi) = \frac{\xi^{3i-2}}{3i-2} - \frac{\lambda_1}{3} \frac{\xi^{3i-1}}{3i-1} + O(\xi^{3i+1}),\ i = 1,\, \dots,\, 2\mathfrak{m}+1. \end{align} \end{subequations} Let the two differentials of the second kind of the smallest Sat\={o} weights be \begin{align}\label{Dif2s33m2} &(\mathrm{d} r_1(x,y),\, \mathrm{d} r_2(x,y)) = \frac{\mathrm{d} x}{\partial_y f} \big(yx^{\mathfrak{m}},\, 2 x^{2\mathfrak{m}+1}\big). \end{align} With the help of \eqref{xyParamC33m2} we find the corresponding expansions near infinity: \begin{align*} &\mathrm{d} r_1(\xi) = \xi^{-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\\ &\mathrm{d} r_2(\xi) = 2 \xi^{-3} \Big(1 - \frac{\lambda_1}{3} \xi + O(\xi^3)\Big) \mathrm{d} \xi. \end{align*} By a direct computation we find that $\mathrm{d} r_1$ satisfies the condition \eqref{rCond}, and $\mathrm{d} r_2$ does not. Thus, we replace the latter with \begin{equation}\label{Dif2s33m2N} \mathrm{d} \widetilde{r}_2(x,y) = (2 x^{2\mathfrak{m}+1} + \lambda_1 y x^\mathfrak{m}) \frac{\mathrm{d} x}{\partial_y f}, \end{equation} which satisfies \eqref{rCond}. The corresponding integrals of the second kind have the form \begin{align*} & r_1(\xi) = -\xi^{-1} + O(\xi) + c_1,\\ & \widetilde{r}_2(\xi) = - \xi^{-2} - \frac{\lambda_1}{3} \xi^{-1} + O(\xi) + c_2. \end{align*} The integrals of the first kind \eqref{Int1s33m2} define $\mathcal{A}(\xi)$ from \eqref{rExpr}, namely \begin{align*} \mathcal{A}(\xi) &= \big(u_1(\xi),\, u_2(\xi),\, \dots,\, u_{3\mathfrak{m}-2}(\xi),\, u_{3\mathfrak{m}-1}(\xi),\\ &\qquad\qquad u_{3\mathfrak{m}+1}(\xi),\, \dots,\, u_{6\mathfrak{m}-2}(\xi),\, u_{6\mathfrak{m}+1}(\xi)\big)^t. \end{align*} Next, we compute residues on the right hand side of \eqref{rExpr}. Taking into account that $\mathcal{A}(0)=0$, $\mathcal{A}'(0)=(\delta_{i,1})$, $\mathcal{A}''(0)=(\delta_{i,2}-\frac{1}{3} \lambda_1 \delta_{i,1})$, where $\delta_{i,k}$ denotes the Kronecker delta and $i$ runs from $1$ to $g=3\mathfrak{m}+1$, we find \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u - \mathcal{A}(\xi)\big) = - \zeta_1(u) - \big(\zeta_2(u) - \tfrac{1}{3} \lambda_1 \zeta_1(u) + \wp_{1,1}(u)\big) \xi + O(\xi^2). 
\end{equation*} Here $u = \mathcal{A}(D)$ is the Abel image of a non-special divisor $D = \sum_{k=1}^{3\mathfrak{m}+1} (x_k,y_k)$. Thus, \eqref{rExpr} acquires the form \begin{gather}\label{ZetaC33m2} \sum_{k=1}^{3\mathfrak{m}+1} \begin{pmatrix} r_1(x_k,\,y_k) \\ \widetilde{r}_2(x_k,\,y_k) \end{pmatrix} = - \begin{pmatrix} \zeta_1(u) \\ \zeta_2(u) + \wp_{1,1}(u) \end{pmatrix} \equiv \begin{pmatrix} R_1 (u) \\ R_2 (u)\end{pmatrix}. \end{gather} Differentiating the above relations with respect to $x_1$, we find the relations \begin{subequations}\label{RrelsC33m2} \begin{align} &y_1 x_1^{\mathfrak{m}} = \bigg( y_1 \sum_{i=1}^{\mathfrak{m}} \wp_{1,3i-1}(u) x_1^{\mathfrak{m}-i} + \sum_{i=1}^{2\mathfrak{m}+1} \wp_{1,3i-2}(u) x_1^{2\mathfrak{m}+1-i} \bigg),\\ &2 x_1^{2\mathfrak{m}+1} + \lambda_1 y_1 x_1^{\mathfrak{m}} = \bigg( y_1 \sum_{i=1}^{\mathfrak{m}} \big(\wp_{2,3i-1}(u) - \wp_{1,1,3i-1}(u) \big) x_1^{\mathfrak{m}-i} \\ &\phantom{mmmmmmmmmm} + \sum_{i=1}^{2\mathfrak{m}+1} \big( \wp_{2,3i-2}(u) - \wp_{1,1,3i-2}(u) \big) x_1^{2\mathfrak{m}+1-i} \bigg), \notag \end{align} \end{subequations} which coincide with \eqref{REqsC33m2}. $\qede$ \begin{rem}\label{R:Dif2} The differentials of the second kind $\mathrm{d} r_1$ defined by \eqref{Dif2s33m2} and $\mathrm{d} \widetilde{r}_2$ defined by \eqref{Dif2s33m2N} on $\mathcal{V}_{(3,3\mathfrak{m}+2)}$ are associated with the differentials of the first kind \eqref{Dif1s33m2}, according to \cite[Art.\;138]{bakerAF}. Recall that differentials of the second kind $\mathrm{d} r$ and of the first kind $\mathrm{d} u$ form an associated system if the algebraic part of the fundamental bi-differential, see for example \cite[sect.\;10.1--10.2]{belMDSF}, in the form \cite[Eq.\;(1.9)]{belHKF} is symmetric. The integrals of the second kind $r_{\mathfrak{w}_i}$ obtained from the differentials associated with differentials of the first kind are expressed through the corresponding zeta functions $\zeta_{\mathfrak{w}_i}$ defined by \eqref{zetaDef} with Abelian functions added, see \eqref{ZetaC33m2} and \eqref{ZetaC33m1}. As a result, relations of the type \eqref{RrelsC33m2} have the simplest form. \end{rem} \subsection{Example: $(3,5)$-Curve} The family $\mathcal{V}_{(3,5)}$ of genus $4$ is defined by the equation \begin{equation*} -y^3 + x^5 + y (\lambda_1 x^3 + \lambda_4 x^2 + \lambda_7 x + \lambda_{10}) + \lambda_6 x^3 + \lambda_9 x^2 + \lambda_{12} x + \lambda_{15} = 0, \end{equation*} and the Weierstrass sequence is $\{1,\, 2, \,4,\, 7\}$. The first four monomials in the list $\mathfrak{M}$ sorted in the ascending order of the Sat\={o} weight are $\mathcal{M}_{-1} = x^2$, $\mathcal{M}_{-2} = y$, $\mathcal{M}_{-4} = x$, $\mathcal{M}_{-7} = 1$; they serve as numerators of differentials of the first kind. The next two monomials are $\mathcal{M}_{1}= y x$, $\mathcal{M}_{2} = x^3$; they produce differentials of the second kind: $\mathrm{d} r_1 = y x \mathrm{d} x/(\partial_y f)$ and $\mathrm{d} r_2 = (2 x^3 + \lambda_1 y x) \mathrm{d} x/(\partial_y f)$. 
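As a consistency check (a specialization of the computation from the proof of Theorem~\ref{T:C33m2} to $\mathfrak{m}=1$; only expansions already displayed there are used), the monomial $x^3$ alone does not produce an admissible $\mathrm{d} r_2$: with $u_1(\xi) = \xi - \tfrac{\lambda_1}{6} \xi^2 + O(\xi^4)$ and $2x^3 \mathrm{d} x /(\partial_y f) = 2\xi^{-3}\big(1 - \tfrac{\lambda_1}{3} \xi + O(\xi^3)\big) \mathrm{d} \xi$ one finds \begin{equation*} \res_{\xi = 0}\, \Big(\xi - \frac{\lambda_1}{6}\, \xi^2\Big)\, 2\xi^{-3} \Big(1 - \frac{\lambda_1}{3}\, \xi\Big) = -\lambda_1, \end{equation*} while the correction term $\lambda_1 y x\, \mathrm{d} x/(\partial_y f) = \lambda_1 \xi^{-2} \big(1 + O(\xi^2)\big) \mathrm{d} \xi$ contributes the residue $\lambda_1$, so that $\mathrm{d} r_2$ as given satisfies \eqref{rCond}.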
A solution of the Jacobi inversion problem for $D=\sum_{k=1}^4 (x_k,y_k)$ on $\mathcal{V}_{(3,5)}$ such that $u = \mathcal{A} (D)$ is given by the system \begin{align*} 0=\mathcal{R}_8 (x,y;u) &\equiv y x - \wp_{1,1}(u) x^2 - \wp_{1,2}(u) y - \wp_{1,4}(u) x - \wp_{1,7}(u), \\ 0=\mathcal{R}_9 (x,y;u) &\equiv 2 x^3 + \lambda_1 y x - \big(\wp_{1,2}(u) - \wp_{1,1,1}(u) \big) x^2 - \big(\wp_{2,2}(u) - \wp_{1,1,2}(u) \big) y \\ &\qquad \qquad \qquad\ - \big(\wp_{2,4}(u) - \wp_{1,1,4}(u) \big) x - \big(\wp_{2,7}(u) - \wp_{1,1,7}(u) \big). \end{align*} \section{Jacobi inversion problem on tetragonal curves} There exist two types of tetragonal $(n,s)$-curves: $(4,4\mathfrak{m}+1)$, $(4,4\mathfrak{m}+3)$, where $\mathfrak{m}$ is a natural number. \begin{theo}[$(4,4\mathfrak{m}+1)$-Curves]\label{T:C44m1} Let a non-degenerate $(4,4\mathfrak{m}+1)$-curve of genus $g=6\mathfrak{m}$ be defined by \begin{equation}\label{V44m1Eq} -y^4 + x^{4\mathfrak{m}+1} + y^2 \sum_{i=0}^{2\mathfrak{m}} \lambda_{4i+2} x^{2\mathfrak{m}-i} + y \sum_{i=0}^{3\mathfrak{m}} \lambda_{4i+3} x^{3\mathfrak{m}-i} + \sum_{i=1}^{4\mathfrak{m} } \lambda_{4i+4} x^{4\mathfrak{m}-i} = 0. \end{equation} Let $u = \mathcal{A}(D)$ be the Abel image of a non-special degree $g$ divisor $D$ on the curve. Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{REqsC44m1} \begin{gather} \mathcal{R}_{12\mathfrak{m}}(x,y;u)=0,\qquad \mathcal{R}_{12\mathfrak{m}+1}(x,y;u)=0,\qquad \mathcal{R}_{12\mathfrak{m}+2}(x,y;u)=0 \end{gather} with three entire rational functions of the weights $2g=12\mathfrak{m}$, $2g+1=12\mathfrak{m}+1$, $2g+2=12\mathfrak{m}+2$: \begin{align} \mathcal{R}_{12\mathfrak{m}}(x,y;u) &= x^{3\mathfrak{m}} - \sum_{i=1}^{6\mathfrak{m}} \wp_{1,\mathfrak{w}_{i}}(u) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ \mathcal{R}_{12\mathfrak{m}+1}(x,y;u) &= 2 y x^{2\mathfrak{m}} - \sum_{i=1}^{6\mathfrak{m}} \big( \wp_{2,\mathfrak{w}_{i}}(u) - \wp_{1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ \mathcal{R}_{12\mathfrak{m}+2}(x,y;u) &= 3 y^2 x^{\mathfrak{m}} - \lambda_2 x^{3\mathfrak{m}} \\ &\hspace{-10mm} - \sum_{i=1}^{6\mathfrak{m}} \big(\wp_{3,\mathfrak{w}_{i}}(u) - \tfrac{3}{2} \wp_{1,2,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \wp_{1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \end{align} \end{subequations} where \begin{align*} &\mathcal{M}_{-(4i-3)} = y^2 x^{\mathfrak{m}-i},\quad i=1,\, \dots,\, \mathfrak{m}, \\ &\mathcal{M}_{-(4i-2)} = y x^{2\mathfrak{m}-i},\quad i=1,\, \dots,\, 2\mathfrak{m}, \\ &\mathcal{M}_{-(4i-1)} = x^{3\mathfrak{m}-i},\quad i=1,\, \dots,\, 3\mathfrak{m}. \end{align*} \end{theo} \begin{theo}[$(4,4\mathfrak{m}+3)$-Curves]\label{T:C44m3} Let a non-degenerate $(4,4\mathfrak{m}+3)$-curve of genus $g=6\mathfrak{m}+3$ be defined by \begin{align}\label{V44m3Eq} -y^4 + x^{4\mathfrak{m}+3} &+ y^2 \sum_{i=0}^{2\mathfrak{m}+1} \lambda_{4i+2} x^{2\mathfrak{m}+1-i} \\ &+ y \sum_{i=0}^{3\mathfrak{m}+2} \lambda_{4i+1} x^{3\mathfrak{m}+2-i} + \sum_{i=1}^{4\mathfrak{m} +2} \lambda_{4i+4} x^{4\mathfrak{m}+2-i} = 0. \notag \end{align} Let $u = \mathcal{A}(D)$ be the Abel image of a non-special degree $g$ divisor $D$ on the curve. 
Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{REqsC44m3} \begin{gather} \mathcal{R}_{12\mathfrak{m}+6}(x,y;u)=0,\qquad \mathcal{R}_{12\mathfrak{m}+7}(x,y;u)=0,\qquad \mathcal{R}_{12\mathfrak{m}+8}(x,y;u)=0 \end{gather} with three entire rational functions of the weights $2g=12\mathfrak{m}+6$, $2g+1=12\mathfrak{m}+7$, $2g+2=12\mathfrak{m}+8$: \begin{align} &\mathcal{R}_{12\mathfrak{m}+6}(x,y;u) = y^2 x^{\mathfrak{m}} - \sum_{i=1}^{6\mathfrak{m}+3} \wp_{1,\mathfrak{w}_{i}}(u) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ &\mathcal{R}_{12\mathfrak{m}+7}(x,y;u) = 2 y x^{2\mathfrak{m}+1} \,{+}\, \lambda_1 y^2 x^{\mathfrak{m}} \,{-}\, \sum_{i=1}^{6\mathfrak{m}+3} \big( \wp_{2,\mathfrak{w}_{i}}(u) - \wp_{1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ &\mathcal{R}_{12\mathfrak{m}+8}(x,y;u) = 3x^{3\mathfrak{m}+2} + 2\lambda_1 y x^{2\mathfrak{m}+1} + \lambda_2 y^2 x^{\mathfrak{m}} \\ &\hspace{3mm} - \sum_{i=1}^{6\mathfrak{m}+3} \big(\wp_{3,\mathfrak{w}_{i}}(u) - \tfrac{3}{2} \wp_{1,2,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \lambda_1 \wp_{1,1,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \wp_{1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \end{align} \end{subequations} where \begin{align*} &\mathcal{M}_{-(4i-1)} = y^2 x^{\mathfrak{m}-i},\quad i=1,\, \dots,\, \mathfrak{m}, \\ &\mathcal{M}_{-(4i-2)} = y x^{2\mathfrak{m}+1-i},\quad i=1,\, \dots,\, 2\mathfrak{m}+1, \\ &\mathcal{M}_{-(4i-3)} = x^{3\mathfrak{m}+2-i},\quad i=1,\, \dots,\, 3\mathfrak{m}+2. \end{align*} \end{theo} \subsection{Proof of Theorem~\ref{T:C44m1} ($(4,4\mathfrak{m}+1)$-Curves)} Let $\xi$ be a local parameter in the vicinity of infinity on the curve \eqref{V44m1Eq}, then \begin{equation}\label{xyParamC44m1} x(\xi) = \xi^{-4},\qquad y(\xi) = \xi^{-4\mathfrak{m}-1}\bigg(1 + \frac{\lambda_2}{4} \xi^2 + \frac{\lambda_3}{4} \xi^3 + \frac{\lambda_2^2}{32} \xi^4 + O(\xi^6)\bigg). \end{equation} The basis of differentials of the first kind has the form \begin{align*} \mathrm{d} u(x,y) = \frac{\mathrm{d} x}{\partial_y f} \big(y^2 x^{\mathfrak{m}-1},\, y x^{2\mathfrak{m}-1},\, x^{3\mathfrak{m}-1},\, &\dots,\, y^2,\, y x^{\mathfrak{m}},\, x^{2\mathfrak{m}},\, \\ &\dots,\, y,\, x^{\mathfrak{m}},\, \dots,\,x,\,1 \big)^t, \notag \end{align*} and the expansions near infinity are \begin{align*} &\mathrm{d} u_{4i-3} = y^2 x^{\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{4i-4} \Big(1 + \frac{\lambda_2}{4} \xi^2 + O(\xi^4)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, \mathfrak{m},\\ &\mathrm{d} u_{4i-2} = y x^{2\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{4i-3} \Big(1 + O(\xi^3)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 2\mathfrak{m},\\ &\mathrm{d} u_{4i-1} = x^{3\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{4i-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 3\mathfrak{m}. \end{align*} By integration with respect to $\xi$ we find expansions for integrals of the first kind: \begin{subequations}\label{Int1s44m1} \begin{align} & u_{4i-3}(\xi) = \frac{\xi^{4i-3}}{4i-3} + \frac{\lambda_2}{4} \frac{\xi^{4i-1}}{4i-1} + O(\xi^{4i+1}),\ i = 1,\, \dots,\, \mathfrak{m}, \\ & u_{4i-2}(\xi) = \frac{\xi^{4i-2}}{4i-2} + O(\xi^{4i+1}),\ i = 1,\, \dots,\, 2\mathfrak{m}, \\ & u_{4i-1}(\xi) = \frac{\xi^{4i-1}}{4i-1} + O(\xi^{4i+1}),\ i = 1,\, \dots,\, 3\mathfrak{m}.
\end{align} \end{subequations} Let the three differentials of the second kind of weights $1$, $2$, $3$ be \begin{align}\label{Dif2C44m1} &\big(\mathrm{d} r_1(x,y),\, \mathrm{d} r_2(x,y),\, \mathrm{d} r_3(x,y) \big) = \frac{\mathrm{d} x}{\partial_y f} \big(x^{3\mathfrak{m}},\, 2 yx^{2\mathfrak{m}},\, 3 y^2 x^{\mathfrak{m}}\big), \end{align} with the corresponding expansions near infinity \begin{align*} &\mathrm{d} r_1(\xi) = \xi^{-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\\ &\mathrm{d} r_2(\xi) = 2 \xi^{-3} \Big(1 + O(\xi^3)\Big) \mathrm{d} \xi,\\ &\mathrm{d} r_3(\xi) = 3 \xi^{-4}\Big(1 + \frac{\lambda_2}{4} \xi^2 + O(\xi^4)\Big) \mathrm{d} \xi. \end{align*} In order to satisfy \eqref{rCond}, we replace $\mathrm{d} r_3$ with \begin{equation}\label{Dif3s44m1N} \mathrm{d} \widetilde{r}_3(x,y) = (3 y^2 x^{\mathfrak{m}} - \lambda_2 x^{3\mathfrak{m}}) \frac{\mathrm{d} x}{\partial_y f}. \end{equation} The differentials of the second kind $\mathrm{d} r_1$, $\mathrm{d} r_2$, $\mathrm{d} \widetilde{r}_3$ form an associated system with differentials of the first kind. The corresponding integrals of the second kind have the form \begin{align*} & r_1(\xi) = -\xi^{-1} + O(\xi) + c_1,\\ & r_2(\xi) = - \xi^{-2} + O(\xi) + c_2,\\ & \widetilde{r}_3(\xi) = - \xi^{-3} + \frac{\lambda_2}{4} \xi^{-1} + O(\xi) + c_3. \end{align*} The integrals of the first kind \eqref{Int1s44m1} define $\mathcal{A}(\xi)$ from \eqref{rExpr}, namely \begin{align*} \mathcal{A}(\xi) = \big(u_1(\xi),\, u_2(\xi),\, &u_3(\xi),\, \dots,\, u_{4\mathfrak{m}-3}(\xi),\, u_{4\mathfrak{m}-2}(\xi),\, u_{4\mathfrak{m}-1}(\xi),\, \dots,\, \\ & u_{8\mathfrak{m}-2}(\xi),\, u_{8\mathfrak{m}-1}(\xi),\, \dots,\, u_{12\mathfrak{m}-5}(\xi),\, u_{12\mathfrak{m}-1}(\xi)\big)^t. \end{align*} Next, we compute residues on the right hand side of \eqref{rExpr}. Taking into account that $\mathcal{A}(0)=0$, $\mathcal{A}'(0)=(\delta_{i,1})$, $\mathcal{A}''(0)=(\delta_{i,2})$, $\mathcal{A}^{(3)}(0)=(2\delta_{i,3}+\frac{1}{2} \lambda_2 \delta_{i,1})$, where $\delta_{i,k}$ denotes the Kronecker delta and $i$ runs from $1$ to $g = 6\mathfrak{m}$, we find \begin{multline*} \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u - \mathcal{A}(\xi)\big) = - \zeta_1(u) - \big(\zeta_2(u) + \wp_{1,1}(u)\big) \xi \\ - \big( \zeta_3(u) + \tfrac{1}{4}\lambda_2 \zeta_1(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \big) \xi^2 + O(\xi^3). \end{multline*} Here $u = \mathcal{A}(D)$ is the Abel image of a non-special divisor $D = \sum_{k=1}^{6\mathfrak{m}} (x_k,y_k)$. Thus, \eqref{rExpr} acquires the form \begin{align*} & \sum_{k=1}^{6\mathfrak{m}} \begin{pmatrix} r_1(x_k,\,y_k) \\ r_2(x_k,\,y_k) \\ \widetilde{r}_3(x_k,\,y_k) \end{pmatrix} = - \begin{pmatrix} \zeta_1(u) \\ \zeta_2(u) + \wp_{1,1}(u) \\ \zeta_3(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \end{pmatrix} \equiv \begin{pmatrix} R_1 (u) \\ R_2 (u) \\ R_3 (u) \end{pmatrix}.
\end{align*} Differentiating the above relations with respect to $x_1$, we find the relations \begin{align*} &x_1^{3\mathfrak{m}} = y_1^2 \sum_{i=1}^{\mathfrak{m}} \wp_{1,4i-3} x_1^{\mathfrak{m}-i}+ y_1 \sum_{i=1}^{2\mathfrak{m}} \wp_{1,4i-2}x_1^{2\mathfrak{m}-i} + \sum_{i=1}^{3\mathfrak{m}} \wp_{1,4i-1} x_1^{3\mathfrak{m}-i}, \\ &2y_1 x_1^{2\mathfrak{m}} = y_1^2 \sum_{i=1}^{\mathfrak{m}} \big(\wp_{2,4i-3} - \wp_{1,1,4i-3} \big) x_1^{\mathfrak{m}-i} + y_1 \sum_{i=1}^{2\mathfrak{m}} \big(\wp_{2,4i-2} - \wp_{1,1,4i-2} \big) x_1^{2\mathfrak{m}-i} \\ &\phantom{mmmmm} + \sum_{i=1}^{3\mathfrak{m}} \big( \wp_{2,4i-1} - \wp_{1,1,4i-1} \big) x_1^{3\mathfrak{m}-i},\\ &3 y_1^2 x_1^{\mathfrak{m}} - \lambda_2 x_1^{3\mathfrak{m}} = y_1^2 \sum_{i=1}^{\mathfrak{m}} \big(\wp_{3,4i-3}- \tfrac{3}{2} \wp_{1,2,4i-3} + \tfrac{1}{2} \wp_{1,1,1,4i-3}\big) x_1^{\mathfrak{m}-i} \\ &\phantom{mmmmmmmmm} + y_1 \sum_{i=1}^{2\mathfrak{m}} \big(\wp_{3,4i-2}- \tfrac{3}{2} \wp_{1,2,4i-2} + \tfrac{1}{2} \wp_{1,1,1,4i-2}\big) x_1^{2\mathfrak{m}-i} \\ &\phantom{mmmmmmmmm} + \sum_{i=1}^{3\mathfrak{m}} \big(\wp_{3,4i-1}- \tfrac{3}{2} \wp_{1,2,4i-1} + \tfrac{1}{2} \wp_{1,1,1,4i-1}\big) x_1^{3\mathfrak{m}-i}, \end{align*} which coincide with \eqref{REqsC44m1}. The argument of abelian functions is omitted, that is, $\wp_{i,j}$, $\wp_{i,j,k}$, $\wp_{i,j,k,l}$ stand for $\wp_{i,j}(u)$, $\wp_{i,j,k}(u)$, $\wp_{i,j,k,l}(u)$. $\qede$ \subsection{Proof of Theorem~\ref{T:C44m3} ($(4,4\mathfrak{m}+3)$-Curves)} Let $\xi$ be a local parameter in the vicinity of infinity on the curve \eqref{V44m3Eq}, then \begin{equation}\label{xyParamC44m3} x(\xi) = \xi^{-4},\qquad y(\xi) = \xi^{-4\mathfrak{m}-3}\bigg(1 + \frac{\lambda_1}{4} \xi + \Big( \frac{\lambda_2 }{4} - \frac{\lambda_1^2}{32} \Big) \xi^2 + O(\xi^4)\bigg). \end{equation} The basis of differentials of the first kind has the form \begin{align*} \mathrm{d} u(x,y) = \frac{\mathrm{d} x}{\partial_y f} \big(x^{3\mathfrak{m}+1},\, y x^{2\mathfrak{m}},\, y^2 x^{\mathfrak{m}-1},\, &\dots,\, y^2,\, x^{2\mathfrak{m}+1},\, y x^{\mathfrak{m}},\, \\ &\dots,\, y,\, x^{\mathfrak{m}},\, \dots,\,x,\,1 \big)^t, \notag \end{align*} and the expansions near infinity are \begin{align*} &\mathrm{d} u_{4i-1} = y^2 x^{\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{4i-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi, \quad i = 1,\, \dots,\, \mathfrak{m},\\ &\mathrm{d} u_{4i-2} = y x^{2\mathfrak{m}+1-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{4i-3} \Big(1 - \frac{\lambda_1}{4} \xi + O(\xi^3)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 2\mathfrak{m}+1, \\ &\mathrm{d} u_{4i-3} = x^{3\mathfrak{m}+2-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{4i-4} \Big(1 - \frac{\lambda_1}{2} \xi - \Big(\frac{\lambda_2}{4} - \frac{5\lambda_1^2}{32} \Big) \xi^2 + O(\xi^4)\Big) \mathrm{d} \xi,\\ &\qquad i = 1,\, \dots,\, 3\mathfrak{m}+2. \end{align*} By integration with respect to $\xi$ we find expansions for integrals of the first kind: \begin{subequations}\label{Int1s44m3} \begin{align} & u_{4i-1}(\xi) = \frac{\xi^{4i-1}}{4i-1} + O(\xi^{4i+1}),\ \ i = 1,\, \dots,\, \mathfrak{m},\\ & u_{4i-2}(\xi) = \frac{\xi^{4i-2}}{4i-2} - \frac{\lambda_1}{4} \frac{\xi^{4i-1}}{4i-1} + O(\xi^{4i+1}),\ i = 1,\, \dots,\, 2\mathfrak{m}+1, \\ & u_{4i-3}(\xi) = \frac{\xi^{4i-3}}{4i-3} - \frac{\lambda_1}{2} \frac{\xi^{4i-2}}{4i-2} - \frac{8 \lambda_2 - 5 \lambda_1^2}{32} \frac{\xi^{4i-1}}{4i-1} + O(\xi^{4i+1}), \\ &\quad i = 1,\, \dots,\, 3\mathfrak{m}+2.
\notag \end{align} \end{subequations} Let the three differentials of the second kind of weights $1$, $2$, $3$ be \begin{align} &\big(\mathrm{d} r_1(x,y),\, \mathrm{d} r_2(x,y),\, \mathrm{d} r_3(x,y) \big) = \frac{\mathrm{d} x}{\partial_y f} \big(y^2 x^{\mathfrak{m}},\, 2 yx^{2\mathfrak{m}+1},\, 3x^{3\mathfrak{m}+2}\big), \end{align} with the corresponding expansions near infinity \begin{align*} &\mathrm{d} r_1(\xi) = \xi^{-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\\ &\mathrm{d} r_2(\xi) = 2 \xi^{-3} \Big(1 - \frac{\lambda_1}{4} \xi + O(\xi^3) \Big) \mathrm{d} \xi,\\ &\mathrm{d} r_3(\xi) = 3 \xi^{-4}\Big(1 - \frac{\lambda_1}{2} \xi - \Big(\frac{\lambda_2}{4} - \frac{5\lambda_1^2}{32} \Big) \xi^2 + O(\xi^4)\Big) \mathrm{d} \xi. \end{align*} With the help of \eqref{rCond}, we find the differentials of the second kind which form an associated system with differentials of the first kind: \begin{align*} \begin{pmatrix} \mathrm{d} r_1(x,y) \\ \mathrm{d} \widetilde{r}_2(x,y) \\ \mathrm{d} \widetilde{r}_3(x,y) \end{pmatrix} = \frac{\mathrm{d} x}{\partial_y f} \begin{pmatrix} y^2 x^{\mathfrak{m}} \\ 2 yx^{2\mathfrak{m}+1} + \lambda_1 y^2 x^{\mathfrak{m}} \\ 3x^{3\mathfrak{m}+2} + 2 \lambda_1 yx^{2\mathfrak{m}+1} + \lambda_2 y^2 x^{\mathfrak{m}} \end{pmatrix}. \end{align*} By integration with respect to $\xi$ we obtain \begin{align*} & r_1(\xi) = -\xi^{-1} + O(\xi) + c_1,\\ & \widetilde{r}_2(\xi) = - \xi^{-2} - \frac{\lambda_1}{2} \xi^{-1} + O(\xi) + c_2,\\ & \widetilde{r}_3(\xi) = - \xi^{-3} - \frac{\lambda_1}{4} \xi^{-2} - \frac{8 \lambda_2 - \lambda_1^2}{32} \xi^{-1} + O(\xi) + c_3. \end{align*} The integrals of the first kind \eqref{Int1s44m3} define $\mathcal{A}(\xi)$ from \eqref{rExpr}, namely \begin{align*} \mathcal{A}(\xi) = \big(u_1(\xi),\, u_2(\xi),\, &u_3(\xi),\, \dots,\, u_{4\mathfrak{m}-1}(\xi),\, u_{4\mathfrak{m}+1}(\xi),\, u_{4\mathfrak{m}+2}(\xi),\, \dots,\, \\ & u_{8\mathfrak{m}+2}(\xi),\, u_{8\mathfrak{m}+5}(\xi),\, \dots,\, u_{12\mathfrak{m}+1}(\xi),\, u_{12\mathfrak{m}+5}(\xi)\big)^t. \end{align*} Next, we compute residues on the right hand side of \eqref{rExpr}. Taking into account that $\mathcal{A}(0)=0$, $\mathcal{A}'(0)=(\delta_{i,1})$, $\mathcal{A}''(0)=(\delta_{i,2} - \tfrac{1}{2} \lambda_1 \delta_{i,1})$, $\mathcal{A}^{(3)}(0)=(2\delta_{i,3}- \tfrac{1}{2} \lambda_1 \delta_{i,2} - (\frac{1}{2} \lambda_2 - \tfrac{5}{16} \lambda_1^2) \delta_{i,1})$, where $\delta_{i,k}$ denotes the Kronecker delta and $i$ runs from $1$ to $g = 6\mathfrak{m}+3$, we find \begin{align*} \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u - \mathcal{A}(\xi)\big) &= - \zeta_1(u) - \big(\zeta_2(u) - \tfrac{1}{2} \lambda_1\zeta_1(u) + \wp_{1,1}(u)\big) \xi \\ &- \big( \zeta_3(u) - \tfrac{1}{4}\lambda_1 \zeta_2(u) - \big(\tfrac{1}{4}\lambda_2 - \tfrac{5}{32} \lambda_1^2 \big) \zeta_1(u) \\ &\qquad + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{3}{4} \lambda_1 \wp_{1,1}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \big) \xi^2 + O(\xi^3). \end{align*} Here $u = \mathcal{A}(D)$ is the Abel image of a non-special divisor $D = \sum_{k=1}^{6\mathfrak{m}+3} (x_k,y_k)$. Thus, \eqref{rExpr} acquires the form \begin{align*} & \sum_{k=1}^{6\mathfrak{m}+3} \begin{pmatrix} r_1(x_k,\,y_k) \\ \widetilde{r}_2(x_k,\,y_k) \\ \widetilde{r}_3(x_k,\,y_k) \end{pmatrix} = - \begin{pmatrix} \zeta_1(u) \\ \zeta_2(u) + \wp_{1,1}(u) \\ \zeta_3(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{1}{2} \lambda_1 \wp_{1,1}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \end{pmatrix} \equiv \begin{pmatrix} R_1 (u) \\ R_2 (u) \\ R_3 (u) \end{pmatrix}.
\end{align*} Differentiating the above relations with respect to $x_1$, we find the relations \eqref{REqsC44m3}. $\qede$ \section{Jacobi inversion problem on pentagonal curves} There exist four types of pentagonal $(n,s)$-curves: $(5,5\mathfrak{m}+1)$, $(5,5\mathfrak{m}+2)$, $(5,5\mathfrak{m}+3)$, and $(5,5\mathfrak{m}+4)$, where $\mathfrak{m}$ is a natural number. \begin{theo}[$(5,5\mathfrak{m}+1)$-Curves]\label{T:C55m1} Let a non-degenerate $(5,5\mathfrak{m}+1)$-curve of genus $g=10\mathfrak{m}$ be defined by \begin{align}\label{V55m1Eq} -y^5 + x^{5\mathfrak{m}+1} + & y^3 \sum_{i=0}^{2\mathfrak{m}} \lambda_{5i+2} x^{2\mathfrak{m}-i} + y^2 \sum_{i=0}^{3\mathfrak{m}} \lambda_{5i+3} x^{3\mathfrak{m}-i} \\ &+ y \sum_{i=0}^{4\mathfrak{m}} \lambda_{5i+4} x^{4\mathfrak{m}-i} + \sum_{i=1}^{5\mathfrak{m} } \lambda_{5i+5} x^{5\mathfrak{m}-i} = 0. \notag \end{align} Let $u = \mathcal{A}(D)$ be the Abel image of a non-special degree $g$ divisor $D$ on the curve. Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{REqsC55m1} \begin{gather} \mathcal{R}_{20\mathfrak{m} + l}(x,y;u)=0,\qquad l=0,1,2,3 \end{gather} with four entire rational functions of the weights $2g + l =20\mathfrak{m} + l$, $l=0,1,2,3$: \begin{align} \mathcal{R}_{20\mathfrak{m}}(x,y;u) &= x^{4\mathfrak{m}} - \sum_{i=1}^{10\mathfrak{m}} \wp_{1,\mathfrak{w}_{i}}(u) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ \mathcal{R}_{20\mathfrak{m}+1}(x,y;u) &= 2 y x^{3\mathfrak{m}} - \sum_{i=1}^{10\mathfrak{m}} \big( \wp_{2,\mathfrak{w}_{i}}(u) - \wp_{1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ \mathcal{R}_{20\mathfrak{m}+2}(x,y;u) &= 3 y^2 x^{2\mathfrak{m}} - \lambda_2 x^{4\mathfrak{m}} \\ &\hspace{-10mm} - \sum_{i=1}^{10\mathfrak{m}} \big(\wp_{3,\mathfrak{w}_{i}}(u) - \tfrac{3}{2} \wp_{1,2,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \wp_{1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \\ \mathcal{R}_{20\mathfrak{m}+3}(x,y;u) &= 4 y^3 x^{\mathfrak{m}} - 2 \lambda_2 y x^{3\mathfrak{m}} - \lambda_3 x^{4\mathfrak{m}} \\ &\hspace{-15mm} - \sum_{i=1}^{10\mathfrak{m}} \big(\wp_{4,\mathfrak{w}_{i}}(u) - \tfrac{1}{2} \wp_{2,2,\mathfrak{w}_{i}}(u) - \tfrac{4}{3} \wp_{1,3,\mathfrak{w}_{i}}(u) - \tfrac{1}{3} \lambda_2 \wp_{1,1,\mathfrak{w}_{i}}(u) \notag \\ & + \wp_{1,1,2,\mathfrak{w}_{i}}(u) - \tfrac{1}{6} \wp_{1,1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \end{align} \end{subequations} where \begin{align*} &\mathcal{M}_{-(5i-4)} = y^3 x^{\mathfrak{m}-i},\quad i=1,\, \dots,\, \mathfrak{m}, \\ &\mathcal{M}_{-(5i-3)} = y^2 x^{2\mathfrak{m}-i},\quad i=1,\, \dots,\, 2\mathfrak{m}, \\ &\mathcal{M}_{-(5i-2)} = y x^{3\mathfrak{m}-i},\quad i=1,\, \dots,\, 3\mathfrak{m}, \\ &\mathcal{M}_{-(5i-1)} = x^{4\mathfrak{m}-i},\quad i=1,\, \dots,\, 4\mathfrak{m}. \end{align*} \end{theo} \begin{theo}[$(5,5\mathfrak{m}+2)$-Curves]\label{T:C55m2} Let a non-degenerate $(5,5\mathfrak{m}+2)$-curve of genus $g=10\mathfrak{m}+2$ be defined by \begin{align}\label{V55m2Eq} -y^5 + x^{5\mathfrak{m}+2} + & y^3 \sum_{i=0}^{2\mathfrak{m}} \lambda_{5i+4} x^{2\mathfrak{m}-i} + y^2 \sum_{i=0}^{3\mathfrak{m}+1} \lambda_{5i+1} x^{3\mathfrak{m}+1-i} \\ &+ y \sum_{i=0}^{4\mathfrak{m}+1} \lambda_{5i+3} x^{4\mathfrak{m}+1-i} + \sum_{i=1}^{5\mathfrak{m} +1} \lambda_{5i+5} x^{5\mathfrak{m}+1-i} = 0. \notag \end{align} Let $u = \mathcal{A}(D)$ be the Abel image of a non-special degree $g$ divisor $D$ on the curve. 
Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{REqsC55m2} \begin{gather} \mathcal{R}_{20\mathfrak{m}+4+ l}(x,y;u)=0,\qquad l=0,1,2,3 \end{gather} with four entire rational functions of the weights $2g + l =20\mathfrak{m} +4 + l$, $l=0,1,2,3$: \begin{align} \mathcal{R}_{20\mathfrak{m}+4}(x,y;u) &= y^2 x^{2\mathfrak{m}} - \sum_{i=1}^{10\mathfrak{m}+2} \wp_{1,\mathfrak{w}_{i}}(u) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ \mathcal{R}_{20\mathfrak{m}+5}(x,y;u) &= 2 x^{4\mathfrak{m}+1} + \lambda_1 y^2 x^{2\mathfrak{m}} \\ &\quad - \sum_{i=1}^{10\mathfrak{m}+2} \big( \wp_{2,\mathfrak{w}_{i}}(u) - \wp_{1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \\ \mathcal{R}_{20\mathfrak{m}+6}(x,y;u) &= 3 y^3 x^{\mathfrak{m}} - \lambda_1 x^{4\mathfrak{m}+1}\\ &\hspace{-20mm} - \sum_{i=1}^{10\mathfrak{m}+2} \big(\wp_{3,\mathfrak{w}_{i}}(u) - \tfrac{3}{2} \wp_{1,2,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \lambda_1 \wp_{1,1,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \wp_{1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \\ \mathcal{R}_{20\mathfrak{m}+7}(x,y;u) &= 4 y x^{3\mathfrak{m}+1} + 2 \lambda_1 y^3 x^{\mathfrak{m}} + 2 \lambda_3 y^2 x^{2\mathfrak{m}} \\ &\hspace{-20mm} - \sum_{i=1}^{10\mathfrak{m}+2} \big(\wp_{4,\mathfrak{w}_{i}}(u) - \tfrac{1}{2} \wp_{2,2,\mathfrak{w}_{i}}(u) - \tfrac{4}{3} \wp_{1,3,\mathfrak{w}_{i}}(u) - \tfrac{2}{3} \lambda_1 \wp_{1,2,\mathfrak{w}_{i}}(u) \notag \\ & + \tfrac{1}{6} \lambda_1^2 \wp_{1,1,\mathfrak{w}_{i}}(u) + \wp_{1,1,2,\mathfrak{w}_{i}}(u) - \tfrac{1}{6} \wp_{1,1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \end{align} \end{subequations} where \begin{align*} &\mathcal{M}_{-(5i-3)} = y^3 x^{\mathfrak{m}-i},\quad i=1,\, \dots,\, \mathfrak{m}, \\ &\mathcal{M}_{-(5i-1)} = y^2 x^{2\mathfrak{m}-i},\quad i=1,\, \dots,\, 2\mathfrak{m}, \\ &\mathcal{M}_{-(5i-4)} = y x^{3\mathfrak{m}+1-i},\quad i=1,\, \dots,\, 3\mathfrak{m}+1, \\ &\mathcal{M}_{-(5i-2)} = x^{4\mathfrak{m}+1-i},\quad i=1,\, \dots,\, 4\mathfrak{m}+1. \end{align*} \end{theo} \begin{theo}[$(5,5\mathfrak{m}+3)$-Curves]\label{T:C55m3} Let a non-degenerate $(5,5\mathfrak{m}+3)$-curve of genus $g=10\mathfrak{m}+4$ be defined by \begin{align}\label{V55m3Eq} -y^5 + x^{5\mathfrak{m}+3} + & y^3 \sum_{i=0}^{2\mathfrak{m}+1} \lambda_{5i+1} x^{2\mathfrak{m}+1-i} + y^2 \sum_{i=0}^{3\mathfrak{m}+1} \lambda_{5i+4} x^{3\mathfrak{m}+1-i} \\ &+ y \sum_{i=0}^{4\mathfrak{m}+2} \lambda_{5i+2} x^{4\mathfrak{m}+2-i} + \sum_{i=1}^{5\mathfrak{m} +2} \lambda_{5i+5} x^{5\mathfrak{m}+2-i} = 0. \notag \end{align} Let $u = \mathcal{A}(D)$ be the Abel image of a non-special degree $g$ divisor $D$ on the curve. 
Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{REqsC55m3} \begin{gather} \mathcal{R}_{20\mathfrak{m}+8 + l}(x,y;u)=0,\qquad l=0,1,2,3 \end{gather} with four entire rational functions of the weights $2g + l =20\mathfrak{m} + 8 + l$, $l=0,1,2,3$: \begin{align} &\mathcal{R}_{20\mathfrak{m}+8}(x,y;u) = y x^{3\mathfrak{m}+1} - \sum_{i=1}^{10\mathfrak{m}+4} \wp_{1,\mathfrak{w}_{i}}(u) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ &\mathcal{R}_{20\mathfrak{m}+9}(x,y;u) = 2 y^3 x^{\mathfrak{m}} - \lambda_1 y x^{3\mathfrak{m}+1} \\ &\phantom{\mathcal{R}_{20\mathfrak{m}+9}(x,y;u)} \quad - \sum_{i=1}^{10\mathfrak{m}+4} \big( \wp_{2,\mathfrak{w}_{i}}(u) - \wp_{1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \\ &\mathcal{R}_{20\mathfrak{m}+10}(x,y;u) = 3 x^{4\mathfrak{m}+2} + \lambda_1 y^3 x^{\mathfrak{m}} + 2 \lambda_2 y x^{3\mathfrak{m}+1} \\ &\quad - \sum_{i=1}^{10\mathfrak{m}+4} \big(\wp_{3,\mathfrak{w}_{i}}(u) - \tfrac{3}{2} \wp_{1,2,\mathfrak{w}_{i}}(u) - \tfrac{1}{2} \lambda_1 \wp_{1,1,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \wp_{1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \\ &\mathcal{R}_{20\mathfrak{m}+11}(x,y;u) = 4 y^2 x^{2\mathfrak{m}+1} - 2 \lambda_1 x^{4\mathfrak{m}+2} + 2 \lambda_2 y^3 x^{\mathfrak{m}} - \lambda_1 \lambda_2 y x^{3\mathfrak{m}+1} \\ &\quad - \sum_{i=1}^{10\mathfrak{m}+4} \big(\wp_{4,\mathfrak{w}_{i}}(u) - \tfrac{1}{2} \wp_{2,2,\mathfrak{w}_{i}}(u) - \tfrac{4}{3} \wp_{1,3,\mathfrak{w}_{i}}(u) + \tfrac{2}{3} \lambda_1 \wp_{1,2,\mathfrak{w}_{i}}(u) \notag \\ &\quad - \tfrac{1}{3} (\lambda_2 - \tfrac{1}{2} \lambda_1^2 ) \wp_{1,1,\mathfrak{w}_{i}}(u) + \wp_{1,1,2,\mathfrak{w}_{i}}(u) - \tfrac{1}{6} \wp_{1,1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \end{align} \end{subequations} where \begin{align*} &\mathcal{M}_{-(5i-2)} = y^3 x^{\mathfrak{m}-i},\quad i=1,\, \dots,\, \mathfrak{m}, \\ &\mathcal{M}_{-(5i-4)} = y^2 x^{2\mathfrak{m}+1-i},\quad i=1,\, \dots,\, 2\mathfrak{m}+1, \\ &\mathcal{M}_{-(5i-1)} = y x^{3\mathfrak{m}+1-i},\quad i=1,\, \dots,\, 3\mathfrak{m}+1, \\ &\mathcal{M}_{-(5i-3)} = x^{4\mathfrak{m}+2-i},\quad i=1,\, \dots,\, 4\mathfrak{m}+2. \end{align*} \end{theo} \begin{theo}[$(5,5\mathfrak{m}+4)$-Curves]\label{T:C55m4} Let a non-degenerate $(5,5\mathfrak{m}+4)$-curve of genus $g=10\mathfrak{m}+6$ be defined by \begin{align}\label{V55m4Eq} -y^5 + x^{5\mathfrak{m}+4} + & y^3 \sum_{i=0}^{2\mathfrak{m}+1} \lambda_{5i+3} x^{2\mathfrak{m}+1-i} + y^2 \sum_{i=0}^{3\mathfrak{m}+2} \lambda_{5i+2} x^{3\mathfrak{m}+2-i} \\ &+ y \sum_{i=0}^{4\mathfrak{m}+3} \lambda_{5i+1} x^{4\mathfrak{m}+3-i} + \sum_{i=1}^{5\mathfrak{m} +3} \lambda_{5i+5} x^{5\mathfrak{m}+3-i} = 0. \notag \end{align} Let $u = \mathcal{A}(D)$ be the Abel image of a non-special degree $g$ divisor $D$ on the curve. 
Then $D$ is uniquely defined by the system of equations \begin{subequations}\label{REqsC55m4} \begin{gather} \mathcal{R}_{20\mathfrak{m}+12 + l}(x,y;u)=0,\qquad l=0,1,2,3 \end{gather} with four entire rational functions of the weights $2g + l =20\mathfrak{m} +12 + l$, $l=0,1,2,3$: \begin{align} \mathcal{R}_{20\mathfrak{m}+12}(x,y;u) &= y^3 x^{\mathfrak{m}} - \sum_{i=1}^{10\mathfrak{m}+6} \wp_{1,\mathfrak{w}_{i}}(u) \mathcal{M}_{-\mathfrak{w}_{i}}, \\ \mathcal{R}_{20\mathfrak{m}+13}(x,y;u) &= 2 y^2 x^{2\mathfrak{m}+1} + \lambda_1 y^3 x^{\mathfrak{m}} \\ &\quad - \sum_{i=1}^{10\mathfrak{m}+6} \big( \wp_{2,\mathfrak{w}_{i}}(u) - \wp_{1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \\ \mathcal{R}_{20\mathfrak{m}+14}(x,y;u) &= 3 y x^{3\mathfrak{m}+2} + 2 \lambda_1 y^2 x^{2\mathfrak{m}+1} + \lambda_2 y^3 x^{\mathfrak{m}} \\ &\hspace{-25mm} - \sum_{i=1}^{10\mathfrak{m}+6} \big(\wp_{3,\mathfrak{w}_{i}}(u) - \tfrac{3}{2} \wp_{1,2,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \lambda_1 \wp_{1,1,\mathfrak{w}_{i}}(u) + \tfrac{1}{2} \wp_{1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \\ \mathcal{R}_{20\mathfrak{m}+15}(x,y;u) &= 4 x^{4\mathfrak{m}+3} + 3 \lambda_1 y x^{3\mathfrak{m}+2} + 2 \lambda_2 y^2 x^{2\mathfrak{m}+1} + \lambda_3 y^3 x^{\mathfrak{m}} \\ &\hspace{-25mm} - \sum_{i=1}^{10\mathfrak{m}+6} \big(\wp_{4,\mathfrak{w}_{i}}(u) - \tfrac{1}{2} \wp_{2,2,\mathfrak{w}_{i}}(u) - \tfrac{4}{3} \wp_{1,3,\mathfrak{w}_{i}}(u) + \tfrac{5}{6} \lambda_1 \wp_{1,2,\mathfrak{w}_{i}}(u) \notag \\ & \hspace{-10mm} + \tfrac{1}{3} (\lambda_2 - \lambda_1^2 ) \wp_{1,1,\mathfrak{w}_{i}}(u) + \wp_{1,1,2,\mathfrak{w}_{i}}(u) - \tfrac{1}{2} \lambda_1 \wp_{1,1,1,\mathfrak{w}_{i}}(u) \notag \\ & \hspace{35mm} - \tfrac{1}{6} \wp_{1,1,1,1,\mathfrak{w}_{i}}(u) \big) \mathcal{M}_{-\mathfrak{w}_{i}}, \notag \end{align} \end{subequations} where \begin{align*} &\mathcal{M}_{-(5i-1)} = y^3 x^{\mathfrak{m}-i},\quad i=1,\, \dots,\, \mathfrak{m}, \\ &\mathcal{M}_{-(5i-2)} = y^2 x^{2\mathfrak{m}+1-i},\quad i=1,\, \dots,\, 2\mathfrak{m}+1, \\ &\mathcal{M}_{-(5i-3)} = y x^{3\mathfrak{m}+2-i},\quad i=1,\, \dots,\, 3\mathfrak{m}+2, \\ &\mathcal{M}_{-(5i-4)} = x^{4\mathfrak{m}+3-i},\quad i=1,\, \dots,\, 4\mathfrak{m}+3. \end{align*} \end{theo} \subsection{Proof of Theorem~\ref{T:C55m1} ($(5,5\mathfrak{m}+1)$-Curves)} We use the following parameterization of \eqref{V55m1Eq} \begin{equation*} x(\xi) = \xi^{-5},\quad y(\xi) = \xi^{-5\mathfrak{m}-1}\bigg(1 + \frac{\lambda_2}{5} \xi^2 + \frac{\lambda_3}{5} \xi^3 + \Big(\frac{\lambda_4}{5} + \frac{\lambda_2^2}{5^2} \Big)\xi^4 + O(\xi^5)\bigg). \end{equation*} The basis of differentials of the first kind has the form \begin{align*} &\mathrm{d} u_{5i-4} = y^3 x^{\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} = \xi^{5i-5} \Big(1 + \frac{2\lambda_2}{5} \xi^2 + \frac{\lambda_3}{5} \xi^3 + O(\xi^5)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, \mathfrak{m},\\ &\mathrm{d} u_{5i-3}= y^2 x^{2\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-4} \Big(1 + \frac{\lambda_2}{5} \xi^2 + O(\xi^4)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 2\mathfrak{m},\\ &\mathrm{d} u_{5i-2}= y x^{3\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-3} \Big(1 + O(\xi^3)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 3\mathfrak{m},\\ &\mathrm{d} u_{5i-1}= x^{4\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi, \quad i = 1,\, \dots,\, 4\mathfrak{m}. 
\end{align*} By integration with respect to $\xi$ we find the integrals of the first kind \begin{align*} \mathcal{A}(\xi) = \big(& u_1(\xi),\, u_2(\xi),\, u_3(\xi),\, u_4(\xi),\, \dots,\, u_{5\mathfrak{m}-4}(\xi),\, u_{5\mathfrak{m}-3}(\xi),\, \\ &u_{5\mathfrak{m}-2}(\xi),\, u_{5\mathfrak{m}-1}(\xi),\, \dots,\, u_{10\mathfrak{m}-3}(\xi),\, u_{10\mathfrak{m}-2}(\xi),\, u_{10\mathfrak{m}-1}(\xi),\, \dots,\, \\ & u_{15\mathfrak{m}-2}(\xi),\, u_{15\mathfrak{m}-1}(\xi),\, \dots,\, u_{20\mathfrak{m}-6}(\xi),\, u_{20\mathfrak{m}-1}(\xi) \big)^t, \end{align*} and then \begin{align}\label{DA0C55m1} \mathcal{A}(0) = 0,\quad \mathcal{A}'(0) = (\delta_{i,1}),\quad \mathcal{A}''(0) = (\delta_{i,2}),\quad \mathcal{A}^{(3)}(0) = \big(2\delta_{i,3} + \tfrac{4}{5}\lambda_2 \delta_{i,1} \big). \end{align} Let the four differentials of the second kind of the smallest Sat\={o} weights be \begin{align}\label{Diff2DefsS} &\big(\mathrm{d} r_1,\, \mathrm{d} r_2,\, \mathrm{d} r_3,\, \mathrm{d} r_4 \big) = \frac{\mathrm{d} x}{\partial_y f} \big(x^{4\mathfrak{m}},\, 2 yx^{3\mathfrak{m}},\, 3 y^2 x^{2\mathfrak{m}},\, 4 y^3 x^{\mathfrak{m}}\big). \end{align} We start from the simplest form of these differentials. Then we apply the condition \eqref{rCond} to expansions of \eqref{Diff2DefsS} near infinity, and find the differentials of the second kind associated with differentials of the first kind: \begin{align*} &\begin{pmatrix} \mathrm{d} r_1 \\ \mathrm{d} r_2 \\ \mathrm{d} \widetilde{r}_3 \\ \mathrm{d} \widetilde{r}_4 \end{pmatrix} = \frac{\mathrm{d} x}{\partial_y f} \begin{pmatrix} x^{4\mathfrak{m}} \\ 2 yx^{3\mathfrak{m}} \\ 3 y^2 x^{2\mathfrak{m}} - \lambda_2 x^{4\mathfrak{m}} \\ 4 y^3 x^{\mathfrak{m}} - 2\lambda_2 yx^{3\mathfrak{m}} - \lambda_3 x^{4\mathfrak{m}} \end{pmatrix}. \end{align*} By integration with respect to $\xi$ we obtain \begin{align*} & r_1(\xi) = -\xi^{-1} + O(\xi) + c_1,\\ & r_2(\xi) = - \xi^{-2} + O(\xi) + c_2,\\ & \widetilde{r}_3(\xi) = - \xi^{-3} + \frac{2\lambda_2}{5} \xi^{-1} + O(\xi) + c_3,\\ & \widetilde{r}_4(\xi) = - \xi^{-4} + \frac{\lambda_2}{5} \xi^{-2} + \frac{\lambda_3}{5} \xi^{-1} + O(\xi) + c_4. \end{align*} Using \eqref{DA0C55m1}, we find \begin{multline*} \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u - \mathcal{A}(\xi)\big) = - \zeta_1(u) - \big(\zeta_2(u) + \wp_{1,1}(u)\big) \xi \\ - \big(\zeta_3(u) + \tfrac{2}{5}\lambda_2 \zeta_1(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \big) \xi^2 - \big(\zeta_4(u) + \tfrac{1}{5}\lambda_2 \zeta_2(u) + \tfrac{1}{5}\lambda_3 \zeta_1(u) \\ + \tfrac{1}{2} \wp_{2,2}(u) + \tfrac{4}{3} \wp_{1,3}(u) + \tfrac{8}{15} \lambda_2 \wp_{1,1}(u) - \wp_{1,1,2}(u) + \tfrac{1}{6} \wp_{1,1,1,1}(u) \big)\xi^3+ O(\xi^4), \end{multline*} where $u = \mathcal{A}(D)$ is the Abel image of a non-special divisor $D = \sum_{k=1}^{10\mathfrak{m}} (x_k,y_k)$. Then we construct four equations of the form \eqref{rExpr}: \begin{align}\label{ZetaC55m1} & \sum_{k=1}^{10\mathfrak{m}} \begin{pmatrix} r_1(x_k,\,y_k) \\ r_2(x_k,\,y_k) \\ \widetilde{r}_3(x_k,\,y_k) \\ \widetilde{r}_4(x_k,\,y_k) \\ \phantom{\tfrac{4}{3} \big(\big)} \end{pmatrix} = - \begin{pmatrix} \zeta_1(u) \\ \zeta_2(u) + \wp_{1,1}(u) \\ \zeta_3(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \\ \zeta_4(u) + \tfrac{1}{2} \wp_{2,2}(u) + \tfrac{4}{3} \wp_{1,3}(u) + \tfrac{1}{3} \lambda_2 \wp_{1,1}(u) \\ - \wp_{1,1,2}(u) + \tfrac{1}{6} \wp_{1,1,1,1}(u) \end{pmatrix}. \end{align} Differentiating all equations with respect to $x_1$, we obtain \eqref{REqsC55m1}.
$\qede$ \subsection{Proof of Theorem~\ref{T:C55m2} ($(5,5\mathfrak{m}+2)$-Curves)} We use the following parameterization of \eqref{V55m2Eq} \begin{equation*} \begin{split} x(\xi) &= \xi^{-5},\\ y(\xi) &= \xi^{-5\mathfrak{m}-2} \bigg(1 + \frac{\lambda_1}{5} \xi + \Big(\frac{\lambda_3}{5} - \frac{\lambda_1^3}{5^3} \Big) \xi^3 + \Big(\frac{\lambda_4}{5} - \frac{\lambda_1 \lambda_3}{5^2} + \frac{\lambda_1^4}{5^4} \Big)\xi^4 + O(\xi^5)\bigg). \end{split} \end{equation*} The basis of differentials of the first kind has the form \begin{align*} &\mathrm{d} u_{5i-3} = y^3 x^{\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} = \xi^{5i-4} \Big(1 + \frac{\lambda_1}{5} \xi - \frac{3 \lambda_1^2}{5^2} \xi^2 + O(\xi^4)\Big) \mathrm{d} \xi, \quad i = 1,\, \dots,\, \mathfrak{m},\\ &\mathrm{d} u_{5i-1}= y^2 x^{2\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 2\mathfrak{m},\\ &\mathrm{d} u_{5i-4}= y x^{3\mathfrak{m}+1-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-5} \Big(1 - \frac{\lambda_1}{5} \xi - \frac{2 \lambda_1^2}{5^2} \xi^2 - \Big(\frac{2\lambda_3}{5} - \frac{7\lambda_1^3}{5^3}\Big) \xi^3+ O(\xi^5)\Big) \mathrm{d} \xi, \\ &\qquad i = 1,\, \dots,\, 3\mathfrak{m}+1,\\ &\mathrm{d} u_{5i-2}= x^{4\mathfrak{m}+1-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-3} \Big(1 - \frac{2\lambda_1}{5} \xi + O(\xi^3)\Big) \mathrm{d} \xi, \quad i = 1,\, \dots,\, 4\mathfrak{m}+1. \end{align*} By integration with respect to $\xi$ we find the integrals of the first kind \begin{align*} \mathcal{A}(\xi) = \big(& u_1(\xi),\, u_2(\xi),\, u_3(\xi),\, u_4(\xi),\, \dots,\, u_{5\mathfrak{m}-3}(\xi),\, u_{5\mathfrak{m}-2}(\xi),\, \\ &u_{5\mathfrak{m}-1}(\xi),\, u_{5\mathfrak{m}+1}(\xi),\, \dots,\, u_{10\mathfrak{m}-1}(\xi),\, u_{10\mathfrak{m}+1}(\xi),\, u_{10\mathfrak{m}+3}(\xi),\, \dots,\, \\ & u_{15\mathfrak{m}+1}(\xi),\, u_{15\mathfrak{m}+3}(\xi),\, \dots,\, u_{20\mathfrak{m}-2}(\xi),\, u_{20\mathfrak{m}+3}(\xi) \big)^t, \end{align*} and then \begin{gather}\label{DA0C55m2} \begin{split} &\mathcal{A}(0) = 0,\quad \mathcal{A}'(0) = (\delta_{i,1}),\quad \mathcal{A}''(0) = (\delta_{i,2} - \tfrac{1}{5} \lambda_1 \delta_{i,1}),\\ &\mathcal{A}^{(3)}(0) = \big( 2\delta_{i,3} + \tfrac{2}{5}\lambda_1 \delta_{i,2} - (\tfrac{2}{5})^2 \lambda_1^2 \delta_{i,1} \big). \end{split} \end{gather} Let the four differentials of the second kind of weights $1$, $2$, $3$, $4$ be: \begin{align*} &\big(\mathrm{d} r_1,\, \mathrm{d} r_2,\, \mathrm{d} r_3,\, \mathrm{d} r_4 \big) = \frac{\mathrm{d} x}{\partial_y f} \big(y^2 x^{2\mathfrak{m}},\, 2 x^{4\mathfrak{m}+1},\, 3 y^3 x^{\mathfrak{m}},\, 4 y x^{3\mathfrak{m}+1}\big). \end{align*} With the help of \eqref{rCond} we find the differentials of the second kind associated with differentials of the first kind: \begin{align*} &\begin{pmatrix} \mathrm{d} r_1 \\ \mathrm{d} \widetilde{r}_2 \\ \mathrm{d} \widetilde{r}_3 \\ \mathrm{d} \widetilde{r}_4 \end{pmatrix} = \frac{\mathrm{d} x}{\partial_y f} \begin{pmatrix} y^2 x^{2\mathfrak{m}} \\ 2 x^{4\mathfrak{m}+1} + \lambda_1 y^2 x^{2\mathfrak{m}} \\ 3 y^3 x^{\mathfrak{m}} - \lambda_1 x^{4\mathfrak{m}+1} \\ 4 y x^{3\mathfrak{m}+1} + 2 \lambda_1 y^3 x^{\mathfrak{m}} + 2 \lambda_3 y^2 x^{2\mathfrak{m}} \end{pmatrix}. 
\end{align*} By integration with respect to $\xi$ we obtain \begin{align*} & r_1(\xi) = -\xi^{-1} + O(\xi) + c_1,\\ & \widetilde{r}_2(\xi) = - \xi^{-2} - \frac{\lambda_1}{5} \xi^{-1} + O(\xi) + c_2,\\ & \widetilde{r}_3(\xi) = - \xi^{-3} + \frac{\lambda_1}{5} \xi^{-2} - \frac{\lambda_1^2}{5^2} \xi^{-1} + O(\xi) + c_3,\\ & \widetilde{r}_4(\xi) = - \xi^{-4} - \frac{2\lambda_1}{15} \xi^{-3} - \frac{\lambda_1^2}{5^2} \xi^{-2} - 2\Big(\frac{\lambda_3}{5} - \frac{\lambda_1^3}{5^3} \Big) \xi^{-1} + O(\xi) + c_4. \end{align*} Using \eqref{DA0C55m2}, we find \begin{multline*} \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u - \mathcal{A}(\xi)\big) = - \zeta_1(u) - \big(\zeta_2(u) - \tfrac{1}{5}\lambda_1 \zeta_1(u) + \wp_{1,1}(u)\big) \xi \\ - \big(\zeta_3(u) + \tfrac{1}{5}\lambda_1 \zeta_2(u) - \tfrac{2}{5^2}\lambda_1^2 \zeta_1(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{3}{10} \lambda_1 \wp_{1,1}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \big) \xi^2 \\ - \big(\zeta_4(u) - \tfrac{2}{5}\lambda_1 \zeta_3(u) - \tfrac{3}{5^2}\lambda_1^2 \zeta_2(u) - (\tfrac{2}{5}\lambda_3 - \tfrac{7}{5^3} \lambda_1^3) \zeta_1(u) + \tfrac{1}{2} \wp_{2,2}(u) + \tfrac{4}{3} \wp_{1,3}(u) \\ + \tfrac{1}{15} \lambda_1 \wp_{1,2}(u) - \tfrac{13}{6\cdot 5^2} \lambda_1^2 \wp_{1,1}(u) - \wp_{1,1,2}(u) + \tfrac{1}{5} \lambda_1 \wp_{1,1,1}(u) + \tfrac{1}{6} \wp_{1,1,1,1}(u) \big)\xi^3+ O(\xi^4), \end{multline*} where $u = \mathcal{A}(D)$ is the Abel image of a non-special divisor $D = \sum_{k=1}^{10\mathfrak{m}+2} (x_k,y_k)$. Then we construct four equations of the form \eqref{rExpr}: \begin{align}\label{ZetaC55m2} & \sum_{k=1}^{10\mathfrak{m}+2} \begin{pmatrix} r_1(x_k,\,y_k) \\ \widetilde{r}_2(x_k,\,y_k) \\ \widetilde{r}_3(x_k,\,y_k) \\ \widetilde{r}_4(x_k,\,y_k) \\ \phantom{\tfrac{4}{3} \big(\big)} \end{pmatrix} = - \begin{pmatrix} \zeta_1(u) \\ \zeta_2(u) + \wp_{1,1}(u) \\ \zeta_3(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{1}{2}\lambda_1 \wp_{1,1}(u) - \tfrac{1}{2} \wp_{1,1,1}(u)\\ \zeta_4(u) + \tfrac{1}{2} \wp_{2,2}(u) + \tfrac{4}{3} \wp_{1,3}(u) + \tfrac{2}{3} \lambda_1 \wp_{1,2}(u) \\ - \tfrac{1}{6} \lambda_1^2 \wp_{1,1}(u) - \wp_{1,1,2}(u) + \tfrac{1}{6} \wp_{1,1,1,1}(u) \end{pmatrix}. \end{align} Differentiating all equations with respect to $x_1$, we obtain \eqref{REqsC55m2}. $\qede$ \subsection{Proof of Theorem~\ref{T:C55m3} ($(5,5\mathfrak{m}+3)$-Curves)} We use the following parameterization of \eqref{V55m3Eq} \begin{equation*} \begin{split} x(\xi) &= \xi^{-5},\\ y(\xi) &= \xi^{-5\mathfrak{m}-3} \bigg(1 + \frac{\lambda_1}{5} \xi + \Big(\frac{\lambda_2}{5} + \frac{\lambda_1^2}{5^2} \Big) \xi^2 \\ &\qquad\qquad\quad + \Big(\frac{\lambda_4}{5} - \frac{\lambda_2^2}{5^2} - \frac{3}{5^3} \lambda_1^2 \lambda_2 - \frac{2}{5^4} \lambda_1^4 \Big)\xi^4 + O(\xi^5)\bigg).
\end{split} \end{equation*} The basis of differentials of the first kind has the form \begin{align*} &\mathrm{d} u_{5i-2} = y^3 x^{\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} = \xi^{5i-3} \Big(1 + \frac{2\lambda_1}{5} \xi + O(\xi^3)\Big) \mathrm{d} \xi, \quad i = 1,\, \dots,\, \mathfrak{m},\\ &\mathrm{d} u_{5i-4}= y^2 x^{2\mathfrak{m}+1-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-5} \Big(1 + \frac{\lambda_1}{5} \xi - \Big(\frac{\lambda_2}{5} + \frac{2\lambda_1^2}{5^2} \Big) \xi^2 \\ &\qquad\qquad - \Big(\frac{6\lambda_1 \lambda_2}{5^2} + \frac{7\lambda_1^3}{5^3}\Big) \xi^3 + O(\xi^5)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 2\mathfrak{m}+1,\\ &\mathrm{d} u_{5i-1} = y x^{3\mathfrak{m}+1-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 3\mathfrak{m}+1,\\ &\mathrm{d} u_{5i-3}= x^{4\mathfrak{m}+2-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-4} \Big(1 - \frac{\lambda_1}{5} \xi - 3\Big(\frac{\lambda_2}{5} + \frac{\lambda_1^2}{5^2}\Big) \xi^2 + O(\xi^4)\Big) \mathrm{d} \xi, \\ &\qquad i = 1,\, \dots,\, 4\mathfrak{m}+2. \end{align*} By integration with respect to $\xi$ we find the integrals of the first kind \begin{align*} \mathcal{A}(\xi) = \big(& u_1(\xi),\, u_2(\xi),\, u_3(\xi),\, u_4(\xi),\, \dots,\, u_{5\mathfrak{m}-2}(\xi),\, u_{5\mathfrak{m}-1}(\xi),\, \\ &u_{5\mathfrak{m}+1}(\xi),\, u_{5\mathfrak{m}+2}(\xi),\, \dots,\, u_{10\mathfrak{m}+1}(\xi),\, u_{10\mathfrak{m}+2}(\xi),\, u_{10\mathfrak{m}+4}(\xi),\, \dots,\, \\ & u_{15\mathfrak{m}+4}(\xi),\, u_{15\mathfrak{m}+7}(\xi),\, \dots,\, u_{20\mathfrak{m}+2}(\xi),\, u_{20\mathfrak{m}+7}(\xi) \big)^t, \end{align*} and then \begin{gather}\label{DA0C55m3} \begin{split} &\mathcal{A}(0) = 0,\quad \mathcal{A}'(0) = (\delta_{i,1}),\quad \mathcal{A}''(0) = ( \delta_{i,2} + \tfrac{1}{5} \lambda_1 \delta_{i,1}),\\ &\mathcal{A}^{(3)}(0) = \big( 2\delta_{i,3} - \tfrac{2}{5}\lambda_1 \delta_{i,2} - \big( \tfrac{2}{5}\lambda_2 + (\tfrac{2}{5})^2 \lambda_1^2 \big) \delta_{i,1} \big). \end{split} \end{gather} Let the four differentials of the second kind of weights $1$, $2$, $3$, $4$ be: \begin{align*} &\big(\mathrm{d} r_1,\, \mathrm{d} r_2,\, \mathrm{d} r_3,\, \mathrm{d} r_4 \big) = \frac{\mathrm{d} x}{\partial_y f} \big(y x^{3\mathfrak{m}+1},\, 2 y^3 x^{\mathfrak{m}},\, 3 x^{4\mathfrak{m}+2},\, 4 y^2 x^{2\mathfrak{m}+1}\big). \end{align*} With the help of \eqref{rCond} we find the differentials of the second kind associated with differentials of the first kind: \begin{align*} &\begin{pmatrix} \mathrm{d} r_1 \\ \mathrm{d} \widetilde{r}_2 \\ \mathrm{d} \widetilde{r}_3 \\ \mathrm{d} \widetilde{r}_4 \end{pmatrix} = \frac{\mathrm{d} x}{\partial_y f} \begin{pmatrix} y x^{3\mathfrak{m}+1}\\ 2 y^3 x^{\mathfrak{m}} - \lambda_1 y x^{3\mathfrak{m}+1} \\ 3 x^{4\mathfrak{m}+2} + \lambda_1 y^3 x^{\mathfrak{m}} + 2\lambda_2 y x^{3\mathfrak{m}+1} \\ 4 y^2 x^{2\mathfrak{m}+1} - 2 \lambda_1 x^{4\mathfrak{m}+2} + 2 \lambda_2 y^3 x^{\mathfrak{m}} - \lambda_1 \lambda_2 y x^{3\mathfrak{m}+1} \end{pmatrix}. 
\end{align*} By integration with respect to $\xi$ we obtain \begin{align*} & r_1(\xi) = -\xi^{-1} + O(\xi) + c_1,\\ & \widetilde{r}_2(\xi) = - \xi^{-2} + \frac{\lambda_1}{5} \xi^{-1} + O(\xi) + c_2,\\ & \widetilde{r}_3(\xi) = - \xi^{-3} - \frac{\lambda_1}{5} \xi^{-2} - \Big( \frac{\lambda_2}{5} + \frac{\lambda_1^2}{5^2} \Big) \xi^{-1} + O(\xi) + c_3,\\ & \widetilde{r}_4(\xi) = - \xi^{-4} + \frac{2 \lambda_1}{5} \xi^{-3} - \Big( \frac{3\lambda_2}{5} + \frac{\lambda_1^2 }{5^2} \Big) \xi^{-2} - \Big(\frac{\lambda_2 \lambda_1}{5^2} + \frac{2\lambda_1^3}{5^3} \Big) \xi^{-1} + O(\xi) + c_4. \end{align*} Using \eqref{DA0C55m3}, we find \begin{multline*} \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u - \mathcal{A}(\xi)\big) = - \zeta_1(u) - \big(\zeta_2(u) + \tfrac{1}{5}\lambda_1 \zeta_1(u) + \wp_{1,1}(u)\big) \xi \\ - \big(\zeta_3(u) - \tfrac{1}{5}\lambda_1 \zeta_2(u) - (\tfrac{1}{5}\lambda_2 + \tfrac{2}{5^2}\lambda_1^2) \zeta_1(u) + \tfrac{3}{2} \wp_{1,2}(u) + \tfrac{3}{10} \lambda_1 \wp_{1,1}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \big) \xi^2 \\ - \big(\zeta_4(u) + \tfrac{2}{5}\lambda_1 \zeta_3(u) - 3(\tfrac{1}{5} \lambda_2 + \tfrac{1}{5^2}\lambda_1^2) \zeta_2(u) - (\tfrac{6}{5^2}\lambda_2 \lambda_1 + \tfrac{7}{5^3} \lambda_1^3) \zeta_1(u) \\ + \tfrac{1}{2} \wp_{2,2}(u) + \tfrac{4}{3} \wp_{1,3}(u) - \tfrac{1}{15} \lambda_1 \wp_{1,2}(u) - \tfrac{1}{3}(\tfrac{4}{5}\lambda_2 + \tfrac{13}{2\cdot 5^2} \lambda_1^2) \wp_{1,1}(u) \\ - \wp_{1,1,2}(u) - \tfrac{1}{5} \lambda_1 \wp_{1,1,1}(u) + \tfrac{1}{6} \wp_{1,1,1,1}(u) \big)\xi^3+ O(\xi^4), \end{multline*} where $u = \mathcal{A}(D)$ is the Abel image of a non-special divisor $D = \sum_{k=1}^{10\mathfrak{m}+4} (x_k,y_k)$. Then we construct four equations of the form \eqref{rExpr}: \begin{align}\label{ZetaC55m3} & \sum_{k=1}^{10\mathfrak{m}+4} \begin{pmatrix} r_1(x_k,\,y_k) \\ \widetilde{r}_2(x_k,\,y_k) \\ \widetilde{r}_3(x_k,\,y_k) \\ \widetilde{r}_4(x_k,\,y_k) \\ \phantom{\tfrac{4}{3} \big(\big)} \end{pmatrix} = - \begin{pmatrix} \zeta_1(u) \\ \zeta_2(u) + \wp_{1,1}(u) \\ \zeta_3(u) + \tfrac{3}{2} \wp_{1,2}(u) + \tfrac{1}{2} \lambda_1 \wp_{1,1}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \\ \zeta_4(u) + \tfrac{1}{2} \wp_{2,2}(u) + \tfrac{4}{3} \wp_{1,3}(u) - \tfrac{2}{3} \lambda_1 \wp_{1,2}(u) \\ + \tfrac{1}{3} (\lambda_2 - \tfrac{1}{2} \lambda_1^2 ) \wp_{1,1}(u) - \wp_{1,1,2}(u) + \tfrac{1}{6} \wp_{1,1,1,1}(u) \end{pmatrix}. \end{align} Differentiating all equations with respect to $x_1$, we obtain \eqref{REqsC55m3}. $\qede$ \subsection{Proof of Theorem~\ref{T:C55m4} ($(5,5\mathfrak{m}+4)$-Curves)} We use the following parameterization of \eqref{V55m4Eq} \begin{equation*} \begin{split} x(\xi) &= \xi^{-5},\\ y(\xi) &= \xi^{-5\mathfrak{m}-4} \bigg(1 + \frac{\lambda_1}{5} \xi + \Big(\frac{\lambda_2}{5} - \frac{\lambda_1^2}{5^2} \Big) \xi^2 + \Big(\frac{\lambda_3}{5} - \frac{\lambda_2\lambda_1}{5^2} + \frac{\lambda_1^3}{5^3} \Big)\xi^3 + O(\xi^5)\bigg).
\end{split} \end{equation*} The basis of differentials of the first kind has the form \begin{align*} &\mathrm{d} u_{5i-1} = y^3 x^{\mathfrak{m}-i} \frac{\mathrm{d} x}{\partial_y f} = \xi^{5i-2} \Big(1 + O(\xi^2)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, \mathfrak{m},\\ &\mathrm{d} u_{5i-2}= y^2 x^{2\mathfrak{m}+1-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-3} \Big(1 - \frac{\lambda_1}{5} \xi + O(\xi^3)\Big) \mathrm{d} \xi,\quad i = 1,\, \dots,\, 2\mathfrak{m}+1,\\ &\mathrm{d} u_{5i-3} = y x^{3\mathfrak{m}+2-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-4} \Big(1 - \frac{2\lambda_1}{5} \xi - \Big(\frac{\lambda_2}{5} - \frac{3 \lambda_1^2}{5^2} \Big)\xi^2 + O(\xi^4)\Big) \mathrm{d} \xi, \\ &\qquad i = 1,\, \dots,\, 3\mathfrak{m}+2,\\ &\mathrm{d} u_{5i-4}= x^{4\mathfrak{m}+3-i} \frac{\mathrm{d} x}{\partial_y f} =\xi^{5i-5} \Big(1 - \frac{3\lambda_1}{5} \xi - \Big(\frac{2\lambda_2}{5} - \frac{7\lambda_1^2}{5^2} \Big) \xi^2 \\ &\qquad\qquad - \Big(\frac{\lambda_3}{5} - \frac{6\lambda_1\lambda_2 }{5^2}+ \frac{11 \lambda_1^3}{5^3} \Big) \xi^3+ O(\xi^5)\Big) \mathrm{d} \xi, \quad i = 1,\, \dots,\, 4\mathfrak{m}+3. \end{align*} By integration with respect to $\xi$ we find the integrals of the first kind \begin{align*} \mathcal{A}(\xi) = \big(& u_1(\xi),\, u_2(\xi),\, u_3(\xi),\, u_4(\xi),\, \dots,\, u_{5\mathfrak{m}-1}(\xi),\, u_{5\mathfrak{m}+1}(\xi),\, \\ &u_{5\mathfrak{m}+2}(\xi),\, u_{5\mathfrak{m}+3}(\xi),\, \dots,\, u_{10\mathfrak{m}+3}(\xi),\, u_{10\mathfrak{m}+6}(\xi),\, u_{10\mathfrak{m}+7}(\xi),\, \dots,\, \\ & u_{15\mathfrak{m}+7}(\xi),\, u_{15\mathfrak{m}+11}(\xi),\, \dots,\, u_{20\mathfrak{m}+6}(\xi),\, u_{20\mathfrak{m}+11}(\xi) \big)^t, \end{align*} and then \begin{gather}\label{DA0C55m4} \begin{split} &\mathcal{A}(0) = 0,\quad \mathcal{A}'(0) = (\delta_{i,1}),\quad \mathcal{A}''(0) = (\delta_{i,2} - \tfrac{3}{5} \lambda_1 \delta_{i,1}),\\ &\mathcal{A}^{(3)}(0) = \big(2\delta_{i,3} - \tfrac{4}{5}\lambda_1 \delta_{i,2} - \big( \tfrac{4}{5}\lambda_2 - \tfrac{14}{5^2} \lambda_1^2 \big) \delta_{i,1} \big). \end{split} \end{gather} Let the four differentials of the second kind of weights $1$, $2$, $3$, $4$ be: \begin{align*} &\big(\mathrm{d} r_1,\, \mathrm{d} r_2,\, \mathrm{d} r_3,\, \mathrm{d} r_4 \big) = \frac{\mathrm{d} x}{\partial_y f} \big(y^3 x^{\mathfrak{m}},\, 2 y^2 x^{2\mathfrak{m}+1},\, 3 y x^{3\mathfrak{m}+2},\, 4 x^{4\mathfrak{m}+3}\big). \end{align*} With the help of \eqref{rCond} we find the differentials of the second kind associated with differentials of the first kind: \begin{align*} &\begin{pmatrix} \mathrm{d} r_1 \\ \mathrm{d} \widetilde{r}_2 \\ \mathrm{d} \widetilde{r}_3 \\ \mathrm{d} \widetilde{r}_4 \end{pmatrix} = \frac{\mathrm{d} x}{\partial_y f} \begin{pmatrix} y^3 x^{\mathfrak{m}} \\ 2 y^2 x^{2\mathfrak{m}+1} + \lambda_1 y^3 x^{\mathfrak{m}} \\ 3 y x^{3\mathfrak{m}+2}+ 2 \lambda_1 y^2 x^{2\mathfrak{m}+1} + \lambda_2 y^3 x^{\mathfrak{m}} \\ 4 x^{4\mathfrak{m}+3} + 3 \lambda_1 y x^{3\mathfrak{m}+2} + 2 \lambda_2 y^2 x^{2\mathfrak{m}+1} + \lambda_3 y^3 x^{\mathfrak{m}} \end{pmatrix}.
\end{align*} By integration with respect to $\xi$ we obtain \begin{align*} & r_1(\xi) = -\xi^{-1} + O(\xi) + c_1,\\ & \widetilde{r}_2(\xi) = - \xi^{-2} - \frac{3\lambda_1}{5} \xi^{-1} + O(\xi) + c_2,\\ & \widetilde{r}_3(\xi) = - \xi^{-3} - \frac{2\lambda_1}{5} \xi^{-2} - \Big( \frac{2\lambda_2}{5} - \frac{\lambda_1^2}{5^2} \Big) \xi^{-1} + O(\xi) + c_3,\\ & \widetilde{r}_4(\xi) = - \xi^{-4} - \frac{\lambda_1}{5} \xi^{-3} - \Big( \frac{\lambda_2}{5} - \frac{\lambda_1^2}{5^2} \Big) \xi^{-2} - \Big(\frac{\lambda_3}{5} - \frac{\lambda_2 \lambda_1}{5^2} + \frac{\lambda_1^3}{5^3} \Big) \xi^{-1} + O(\xi) + c_4. \end{align*} Using \eqref{DA0C55m4}, we find \begin{multline*} \frac{\mathrm{d}}{\mathrm{d} \xi} \log \sigma\big(u - \mathcal{A}(\xi)\big) = - \zeta_1(u) - \big(\zeta_2(u) - \tfrac{3}{5}\lambda_1 \zeta_1(u) + \wp_{1,1}(u)\big) \xi \\ - \big(\zeta_3(u) - \tfrac{2}{5}\lambda_1 \zeta_2(u) - (\tfrac{2}{5}\lambda_2 - \tfrac{7}{5^2}\lambda_1^2) \zeta_1(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{9}{10} \lambda_1 \wp_{1,1}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \big) \xi^2 \\ - \big(\zeta_4(u) - \tfrac{1}{5}\lambda_1 \zeta_3(u) - (\tfrac{1}{5} \lambda_2 - \tfrac{3}{5^2}\lambda_1^2) \zeta_2(u) - (\tfrac{1}{5} \lambda_3 - \tfrac{6}{5^2}\lambda_2 \lambda_1 + \tfrac{11}{5^3} \lambda_1^3) \zeta_1(u) \\ + \tfrac{1}{2} \wp_{2,2}(u) + \tfrac{4}{3} \wp_{1,3}(u) - \tfrac{17}{15} \lambda_1 \wp_{1,2}(u) - \tfrac{1}{3} (\tfrac{8}{5}\lambda_2 - \tfrac{83}{2\cdot 5^2} \lambda_1^2) \wp_{1,1}(u) \\ - \wp_{1,1,2}(u) + \tfrac{3}{5} \lambda_1 \wp_{1,1,1}(u) + \tfrac{1}{6} \wp_{1,1,1,1}(u) \big)\xi^3+ O(\xi^4), \end{multline*} where $u = \mathcal{A}(D)$ is the Abel image of a non-special divisor $D = \sum_{k=1}^{10\mathfrak{m}+6} (x_k,y_k)$. Then we construct four equations of the form \eqref{rExpr}: \begin{align}\label{ZetaC55m4} & \sum_{k=1}^{10\mathfrak{m}+6} \begin{pmatrix} r_1(x_k,\,y_k) \\ \widetilde{r}_2(x_k,\,y_k) \\ \widetilde{r}_3(x_k,\,y_k) \\ \widetilde{r}_4(x_k,\,y_k) \\ \phantom{\tfrac{4}{3} \big(\big)} \\ \phantom{\tfrac{4}{3} \big(\big)} \end{pmatrix} = - \begin{pmatrix} \zeta_1(u) \\ \zeta_2(u) + \wp_{1,1}(u) \\ \zeta_3(u) + \tfrac{3}{2} \wp_{1,2}(u) - \tfrac{1}{2} \lambda_1 \wp_{1,1}(u) - \tfrac{1}{2} \wp_{1,1,1}(u) \\ \zeta_4(u) + \tfrac{1}{2} \wp_{2,2}(u) + \tfrac{4}{3} \wp_{1,3}(u) - \tfrac{5}{6} \lambda_1 \wp_{1,2}(u) \\ - \tfrac{1}{3}(\lambda_2 - \lambda_1^2) \wp_{1,1}(u) \\ - \wp_{1,1,2}(u) + \tfrac{1}{2} \lambda_1 \wp_{1,1,1}(u) + \tfrac{1}{6} \wp_{1,1,1,1}(u) \end{pmatrix}. \end{align} Differentiating all equations with respect to $x_1$, we obtain \eqref{REqsC55m4}. $\qede$ \section{Conclusions} The Jacobi inversion problem is solved in general by Theorems~\ref{T1} and \ref{T2}, where an $(n,s)$-curve is supposed to be non-hyperelliptic. However, the same approach works for hyperelliptic curves, as explained in Example~\ref{E:HypC}. Theorems~\ref{T:C33m1} and \ref{T:C33m2} give solutions on trigonal curves of the types $(3,3\mathfrak{m}+1)$ and $(3,3\mathfrak{m}+2)$, where $\mathfrak{m}$ is a natural number. Theorems~\ref{T:C44m1} and \ref{T:C44m3} give solutions on tetragonal curves of the types $(4,4\mathfrak{m}+1)$ and $(4,4\mathfrak{m}+3)$. Theorems~\ref{T:C55m1}, \ref{T:C55m2}, \ref{T:C55m3}, and \ref{T:C55m4} give solutions on pentagonal curves of the types $(5,5\mathfrak{m}+1)$, $(5,5\mathfrak{m}+2)$, $(5,5\mathfrak{m}+3)$, and $(5,5\mathfrak{m}+4)$.
The Jacobi inversion problem is solved in terms of multiply periodic $\wp$ functions defined by \eqref{wp2Def}, \eqref{wp3Def} from the multivariable sigma function $\sigma$ of a curve under consideration.
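For the reader's orientation, we recall what such a solution looks like in the simplest hyperelliptic case (cf. Example~\ref{E:HypC}); the formulas below are quoted in the classical Kleinian normalization, so the indices refer to the coordinates $u=(u_1,u_2)$ of that normalization rather than to the Sat\={o} weights used above, and the signs may differ from our conventions. For a genus-two $(2,5)$-curve $y^2 = 4x^5 + \lambda_4 x^4 + \lambda_3 x^3 + \lambda_2 x^2 + \lambda_1 x + \lambda_0$ and a non-special divisor $D=(x_1,y_1)+(x_2,y_2)$ with $u = \mathcal{A}(D)$, the classical solution of the Jacobi inversion problem reads \begin{equation*} x^2 - \wp_{22}(u)\, x - \wp_{12}(u) = 0, \qquad y_k = \wp_{222}(u)\, x_k + \wp_{122}(u), \quad k = 1,2, \end{equation*} that is, $x_1 + x_2 = \wp_{22}(u)$ and $x_1 x_2 = -\wp_{12}(u)$.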
\section{Introduction} Advances in experimental methods for the shock loading of materials allow researchers to obtain information on their structure and dynamic properties under pressures up to 1 TPa and above in laboratory conditions \cite{A01,A02,A03,A04}. Of special interest here are quasi-isentropic (ramp) compression experiments, which allow determination of the crystal structure of the compressed material \cite{A01,A02,A03}. With this information it becomes possible, for example, to judge the state of matter inside massive planets within our Solar system and beyond. Understanding what happens to materials under such extreme conditions allows scientists to adequately simulate and predict the interior state of planets. In this paper we consider structural changes in the light alkaline-earth metals beryllium and magnesium under high and ultrahigh pressures. Beryllium, the second lightest metal in the periodic table, exhibits a number of unique properties. Its electronic density of states (DOS) differs greatly from the nearly-free-electron DOS and has a pseudogap near the Fermi level \cite{A05}. Under ambient conditions Be has the hcp structure, a very high Debye temperature (higher than that of any other elemental metal), a small Poisson's ratio, and a lattice ratio, $c/a$, much smaller than its ideal value. Beryllium is particularly intriguing due to its polymorphism under high pressures and temperatures, which is hard to observe. So, at atmospheric pressure and temperatures above 1.5 kK, its structural transition to the bcc phase before melting was observed in early X-ray experiments \cite{A06,A07}. It was later shown \cite{A08} that with increasing pressure the hcp-bcc boundary on the $PT$-diagram has a negative slope. New theoretical studies \cite{A09,A10}, based on calculations from first principles, suggest that at high temperatures ($T$$>$1 kK) and moderate pressures ($\lesssim$10 GPa), the phase diagram of beryllium exhibits the so-called bcc pocket that originates at $P$$=$0 and lies between the hcp stability region and the melting curve. But recent experiments on Be compression in the diamond anvil cell \cite{A11} did not find any signs of this cubic structure on the phase diagram. Moreover, results of \emph{ab initio} calculations \cite{A09,A12,A13,A14,A15} clearly show that the bcc structure should also be most energetically preferable at high pressures, above 100 GPa, where another hcp-bcc boundary exists. This boundary has a negative slope, giving at $P$$>$100 GPa and $T$$>$3.5 kK the triple hcp-bcc-liquid point. However, the efforts taken to prove the presence of the hcp-bcc boundary at high pressures and temperatures have not succeeded in either static \cite{A11} or dynamic \cite{A16} experiments. Calculations from first principles \cite{A14,A17} show that no other structural transformation occurs in beryllium up to 1 TPa. Unlike beryllium, magnesium does not exhibit such peculiar physical properties at low pressures. Under ambient conditions it also has the hcp structure. Its electronic density of states is similar to the nearly-free-electron DOS of simple metals \cite{A05,A18}. Its compression results in an hcp-bcc structural transformation at $P$$\approx$50 GPa and $T$$=$300 K, which is well detectable in experiment \cite{A19,A20} and convincingly reproducible in \emph{ab initio} calculations \cite{A21,A22,A23}. Its experimentally detected transition to the dhcp structure at moderate pressures of about 10 GPa and temperatures above 1 kK came as a surprise \cite{A24}.
But the authors of later experiments \cite{A20} could not reliably identify the observed structure. The region of this phase on the $PT$-diagram seems to be limited to a narrow area (5-10 GPa) along the melting curve; above 12 GPa it is no longer detectable. Mg compression to 211 GPa at room temperature did not reveal any other structural transformations in static experiments \cite{A20}. \emph{Ab initio} calculations predict a structural behavior of magnesium at pressures above 200 GPa that is more interesting than that of beryllium. As suggested in early papers \cite{A25,A26}, at least one more structural transformation is expected to occur at a pressure of several hundred GPa. More precise evidence of structural transformations in Mg was reported in recent works \cite{A22,A27}, where transformations at pressures up to 1 TPa and slightly above were predicted with random structure search algorithms. So, a transition to the fcc structure is expected to occur in Mg at a pressure of about 0.46 TPa. Upon further increase of pressure, the simple hexagonal (sh) packing of atoms becomes energetically most favorable (at $P$$\approx$0.75 TPa), and the simple cubic (sc) one does so at pressures of about 1 TPa \cite{A27}. The authors of paper \cite{A22} calculated hcp, bcc, fcc and sh phase boundaries at elevated temperatures, but they limited themselves to a maximum of 2 kK and did not try to calculate the melting curve. It is also said in Refs. \cite{A22,A27} that the high-pressure fcc, sh, and sc phases are the so-called electride structures with electron 'blobs' in the interstitial region, i.e., they have non-nuclear maxima in the electronic density. All of this makes magnesium resemble another alkaline-earth metal, calcium, which also has a high-pressure electride simple cubic phase \cite{A28,A29,A30}. From all the above we can see that under high pressures magnesium becomes anything but a trivial metal and demonstrates some exotic features. Note that \emph{ab initio} calculations \cite{A31} show that non-nuclear maxima are also present in the electronic density of beryllium at the normal specific volume. In the present work we study the structural behavior of beryllium and magnesium crystals under pressure. Unlike previous works \cite{A22,A23,A27} where the pseudopotential method was employed, here the full-potential method (FP-LMTO) is used for structural stability analysis. Changes in the band structure of beryllium and magnesium under compression are compared and their effects are discussed. Some parallels are drawn with other alkaline-earth metals, and not only with them. The calculated phase boundaries of beryllium and magnesium are presented in $PT$-coordinates. The positions of the melting curves are determined, and the calculated results are compared with other theoretical works and experiments. \section{DETAILS OF CALCULATIONS} In this work, calculations were done with the full-potential all-electron linear muffin-tin orbital method FP-LMTO implemented in the LmtART code \cite{A32}. Phonon spectra calculations for the metals of interest are based on linear response theory. All four electrons of beryllium are treated as valence electrons; for magnesium, these are the 2$s$, 2$p$, and 3$s$ electrons. The crystal structures under study include not only the experimentally observed hcp and bcc, but also face-centered cubic (fcc), double hexagonal close-packed (dhcp), simple hexagonal, simple cubic, and tetragonal $\beta$-tin ($\beta$-Sn) ones.
The last phase was included because it is experimentally observed in compressed calcium at low temperatures \cite{A33} and can potentially compete with the other phases of Be and Mg. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig1}} \caption{\label{fig1} Phonon spectra of hcp Be (upper panel) and hcp Mg (lower panel) from calculation and experiment \cite{A40}. Phonon frequencies in different directions of the Brillouin zone are shown for Be, and the phonon density of states for magnesium. The red line shows the calculation at $T$$=$0 K, and the line with black dots shows experimental results at room temperature.} \end{figure} The inner parameters of the method were thoroughly adjusted in order to make our \emph{ab initio} calculations as accurate as possible. An improved tetrahedron method \cite{A34} was used for integration over the Brillouin zone. The $\vec{k}$-meshes were as follows: 30$\times$30$\times$30 for all cubic structures, 30$\times$30$\times$24 for hcp, 30$\times$30$\times$20 for dhcp, and 24$\times$24$\times$30 for sh and $\beta$-Sn. The cutoff energy for representing the basis functions as a set of plane waves in the interstitial region depended on the magnitude of compression. At the equilibrium specific volume $V$$=$$V_0$ it was 850 eV. The basis set included MT-orbitals with moments to $l^b_{max}$$=$2. Potential and charge density expansions in terms of spherical harmonics were done to $l^w_{max}$$=$6. The $c/a$ ratios of tetragonal and hexagonal structures were always optimized. The internal FP-LMTO parameters such as the linearization energy, tail energies, etc. were chosen using an approach similar to that used in Ref. \cite{A35}. Calculation parameters, including the exchange-correlation functional, were selected so as to best reproduce the ground-state properties and phonon spectra of the two metals under study. The PBE functional \cite{A36} was taken for both metals. The equilibrium specific volume $V_0$ of Be and Mg was reproduced to within 1\% of experiment. Pressure versus volume was determined by differentiating an analytical expression that approximates the calculated dependence of $E$ on $V$. The dependence $E(V)$ was approximated with a formula by Parsafar and Mason \cite{A37}. Crystal internal energy versus relative specific volume $V/V_0$ was calculated over the interval from 1.05 down to 0.04 for Be and down to 0.07 for Mg. Phonon spectra were calculated in the same intervals. Phonon frequencies were determined using meshes of $\vec{q}$-points which measured 10$\times$10$\times$10 for cubic structures, 10$\times$10$\times$6 for close-packed hexagonal phases, and 8$\times$8$\times$10 for sh. The contribution of lattice vibrations to free energy was determined in the quasiharmonic approximation (QHA) \cite{A38} using the calculated phonon spectra. The well-known Lindemann criterion was used to evaluate the melting curve. The procedure of its calculation is described in Ref. \cite{A39}. The accuracy of the calculated phonon frequencies is demonstrated in Figure~\ref{fig1}, which shows the corresponding spectra of beryllium and magnesium calculated in this work in comparison with experimental data. They are seen to agree well. \section{Results and discussion} Consider first the relative stability of the beryllium and magnesium structures of interest at $T$$=$0 K. Figure~\ref{fig2} presents results obtained in this work for beryllium. Hereafter Gibbs thermodynamic potentials versus pressure are presented relative to the bcc potential.
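To make the construction behind figure~\ref{fig2} (and figure~\ref{fig4} below) explicit, a minimal sketch of the $T$$=$0 K procedure is given here: the calculated $E(V)$ points of each phase are smoothed, the pressure follows from $P=-\mathrm{d}E/\mathrm{d}V$, and the Gibbs potential from $G=E+PV$. This is only a schematic illustration under our own assumptions (a cubic spline stands in for the Parsafar-Mason approximation \cite{A37} actually used; all names and data arrays are hypothetical), not the production code of this work.
\begin{verbatim}
# Sketch: Delta G(P) at T = 0 from ab initio E(V) tables of two phases.
import numpy as np
from scipy.interpolate import CubicSpline

def gibbs_vs_pressure(volumes, energies):
    # Fit E(V); a spline is used here instead of the Parsafar-Mason form.
    idx = np.argsort(volumes)
    e_of_v = CubicSpline(volumes[idx], energies[idx])
    v = np.linspace(volumes.min(), volumes.max(), 400)
    p = -e_of_v(v, 1)        # P = -dE/dV
    g = e_of_v(v) + p * v    # G = E + P V at T = 0
    k = np.argsort(p)
    return p[k], g[k]        # Gibbs potential as a function of pressure

# Hypothetical usage with calculated (V, E) tables of two phases:
# p1, g1 = gibbs_vs_pressure(v_hcp, e_hcp)
# p2, g2 = gibbs_vs_pressure(v_bcc, e_bcc)
# dG = g1 - np.interp(p1, p2, g2)  # changes sign at the transition
\end{verbatim}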
As seen from figure~\ref{fig2}, hcp beryllium is thermodynamically most favorable at $P$ below 0.4 TPa, which agrees well with other calculations \cite{A09,A14} and does not contradict experimental data \cite{A11,A41}. At higher pressures the bcc structure becomes energetically more favorable. It can be seen that further compression does not lead to any other structural changes. Our studies show the situation to remain unchanged at least up to $P$$\sim$250 TPa. Calculations also show the bcc phase to remain dynamically stable. It is well seen from figure~\ref{fig2} that above about 20 TPa the slopes of the $\Delta G$ curves change only weakly with increasing pressure (all curves are almost parallel); the electronic spectrum of Be stops changing significantly and demonstrates no fundamental changes under further compression. This can be seen in figure~\ref{fig3}, which shows the evolution of the electronic density of states of beryllium at high pressures. At $V/V_0$$=$0.35, a pseudogap is seen to be still present in the electronic spectrum of Be, but it disappears under further compression and the DOS becomes more and more similar to the nearly-free-electron density of states. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig2}} \caption{\label{fig2} Gibbs energy difference for different beryllium phases at high pressures and zero temperature. The inset shows the pressure region below 1 TPa.} \end{figure} \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig3}} \caption{\label{fig3} Electronic density of states of bcc beryllium for several compressions ($T$$=$0 K) corresponding to pressures $P$$\approx$0.66 TPa ($V/V_0$$=$0.35); $P$$\approx$12 TPa ($V/V_0$$=$0.1); and $P$$\approx$32 TPa ($V/V_0$$=$0.06).} \end{figure} As mentioned earlier, \emph{ab initio} results show that magnesium is more diverse in structural changes under pressure than beryllium. Figure~\ref{fig4} shows the relative difference of thermodynamic potentials calculated for the magnesium structures of interest at $T$$=$0 K. The hcp phase is thermodynamically most favorable up to pressures of about 0.05 TPa, which agrees with available experimental data \cite{A20,A42}. Then the structural transformations bcc$\rightarrow$fcc$\rightarrow$sh$\rightarrow$sc are seen to occur at pressures of about 0.48, 0.76, and 1.1 TPa, respectively. These values agree quite well with other \emph{ab initio} calculations \cite{A22,A27}. Our calculations suggest that the simple cubic structure remains thermodynamically most favorable and dynamically stable at least to a pressure of 12 TPa. It is seen from figure~\ref{fig4} that in energy the $\beta$-Sn structure is quite close to the simple hexagonal phase, but it does not become energetically more favorable anywhere in the pressure range under study. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig4}} \caption{\label{fig4} Gibbs energy difference for different magnesium phases under high pressures and zero temperature. The inset shows the region of low pressures below 0.1 TPa.} \end{figure} \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig5}} \caption{\label{fig5} Electronic densities of states of bcc and sc magnesium at several compressions ($T$$=$0 K) corresponding to the following pressures: $P$$\approx$0.43 TPa ($V/V_0$$=$0.3); $P$$\approx$1.72 TPa ($V/V_0$$=$0.16); and $P$$\approx$3.36 TPa ($V/V_0$$=$0.12).} \end{figure} Structural transformations in magnesium are accompanied by significant changes in the electronic spectrum.
Figure~\ref{fig5} presents the evolution of the electronic DOS at zero temperature and at different compressions. It is seen from the figure that at $V/V_0$$=$0.3 the electron density of states looks very much like the DOS of nearly-free electrons, but further compression results in the appearance of a pseudogap that increases as $P$ grows. Finally, at a pressure of about 2.7 TPa, a narrow band gap ($\lesssim$0.1 eV) appears in the spectrum of valence electrons of the most stable cubic structure and magnesium becomes a semiconductor (see fig.~\ref{fig5}, $V/V_0$$=$0.12). The presence of a pseudogap in strongly compressed magnesium was earlier reported in theoretical work \cite{A22}, but the authors did not consider pressures above 1 TPa and did not observe the transition to a semiconducting state. Note that the conventional density functional methods are known to markedly underestimate the width of the band gap; it can be determined more accurately with the Green's-function theory (GW calculations) \cite{A43}. \emph{Ab initio} calculations \cite{A44,A45} show a narrow band gap to also form in the electronic spectra of calcium and strontium under compression at zero temperature, but at much lower pressures. In fcc and sc Ca, it appears at $P$$\sim$30 GPa. However, this state exists only in a narrow pressure range of $\sim$30-40 GPa. In the electronic spectrum of fcc strontium, the gap forms at $P$ below 3.5 GPa and disappears as the pressure slightly increases. Experiments show that the electrical resistivity of calcium and strontium significantly grows in the corresponding pressure ranges \cite{A44}. At these pressures both metals are semimetals with a very low concentration of charge carriers. Our calculations show that for magnesium, the pressure of transition into a semiconducting state is much higher and the interval of its existence is significantly larger, from $\sim$2.7 to 8.6 TPa, than in other alkaline-earth metals. One more similarity to calcium is worth noting. A band gap also appears in fcc Mg, but this structure is not thermodynamically most favorable in the above pressure range. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig6}} \caption{\label{fig6} Internal energy of dhcp Mg versus $c/a$ at $T$$=$0 K for relative specific volumes $V/V_0$$=$0.2 ($P$$\approx$1 TPa), $V/V_0$$=$0.15 ($P$$\approx$2 TPa), $V/V_0$$=$0.1 ($P$$\approx$5.1 TPa). The vertical dotted line marks the ideal $c/a$$=$3.266.} \end{figure} Experimental data \cite{A43} show that magnesium's neighbor in the periodic table, sodium, becomes an optically transparent insulator at $P$$\sim$0.2 TPa. For a band gap to form in sodium, 5-fold compression is required \cite{A43}, while the semiconducting state of magnesium is reached at about 7.5-fold compression. Sodium has a dhcp structure at $P$$\gtrsim$0.2 TPa with an unexpectedly low $c/a$ ratio of $\approx$1.4, which is much smaller than the ideal value, 3.266. The dhcp structure of magnesium is not thermodynamically most favorable, but if we note how its energy depends on $c/a$ at different compressions, we can easily see analogies with sodium. Figure~\ref{fig6} presents internal energy versus $c/a$ for several compressions of dhcp magnesium. It is seen that at $V/V_0$$=$0.2 the curve $E(c/a)$ has only one energy minimum, rather close to the ideal value of $c/a$ (the vertical dotted line). However, another minimum corresponding to low $c/a$$<$3 appears on the curve as compression increases. At $V/V_0$$=$0.1 the minima correspond to $c/a$$\approx$2 and $\approx$4.4.
On the whole, the behavior of $E(c/a)$ under growing compression is very much similar to what is observed for sodium (see Supplemental Material of Ref. \cite{A43}). But for magnesium, the appearance of a band gap is not observed for either of the two minima of the dhcp structure. It can thus be stated that only beryllium under high and ultrahigh pressures behaves quite differently from the other alkaline-earth metals. Exhibiting some unique features under ambient conditions, it becomes more and more similar to an ordinary simple metal as compression increases, whereas the behavior of magnesium under pressure resembles that of the alkaline-earth metals heavier than beryllium. At pressures above 0.7 TPa, transformations into open structures occur in Mg, the crystal packing factor markedly decreases ($\sim$50\% for the sh phase), the electride structure appears \cite{A22,A27}, and the spectrum of valence electrons eventually changes so that a narrow band gap forms in it at $P$$>$2.5 TPa. Consider the compression isotherms of beryllium and magnesium. Figure~\ref{fig7} shows the calculated dependencies of pressure versus specific volume at room temperature in comparison with available experimental data. The curves calculated for the metals are seen to agree well with experiment. The inset shows the cold pressure of some magnesium structures in the higher compression region. It is seen that the curves almost coincide at $V/V_0$$>$0.3, but with growing compression the difference in $P$ between the fcc, bcc structures and the open phases sh, sc markedly increases. This behavior is caused by electronic band rearrangement, which makes the energy and pressure of the sh and sc structures lower compared to the other phases. The compressibility of the simple phases noticeably increases. A similar situation is observed in light alkali metals \cite{A46}, whose open structures also become energetically most favorable as compression increases. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig7}} \caption{\label{fig7} Pressure versus relative specific volume at room temperature for Be and Mg in comparison with experiment. Calculated results are shown by lines and experimental data are shown by symbols: stars \cite{A41}, diamonds \cite{A11}, triangles \cite{A42}, and circles \cite{A20}. The isotherm calculated for Mg takes the hcp-bcc transition into account. The inset shows the ultrahigh pressure region with calculations for several Mg phases.} \end{figure} Now consider shock wave compression of beryllium and magnesium. Additional experimental data \cite{A16,A49,A50,A51}, which help to better understand their behavior under these conditions, have recently appeared. Figure~\ref{fig8} compares the shock Hugoniots of Be and Mg from our calculations with data from different experiments. The horizontal lines mark the approximate pressures of the hcp$\rightarrow$bcc transition and melting. It should be noted that the presence of this transition in shock compressed beryllium remains debatable. It is seen that the calculations reproduce the shock compression of the two metals very well. The change of volume due to the transition is small and is not seen in the figures. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig8}} \caption{\label{fig8} Shock Hugoniots of beryllium and magnesium in $P$-$\rho$ coordinates from our calculations (red lines) and experiments: triangles \cite{A47}, diamonds \cite{A48}, inverse triangles \cite{A49}, and circles \cite{A16}.
The dashed lines show approximate boundaries of the hcp$\rightarrow$bcc transition and melting under shock compression.} \end{figure} Earlier, in paper \cite{A21}, we calculated the elastic constants of several magnesium structures for different compressions at $T$$=$0 K. With these single-crystal constants we calculated longitudinal, shear and bulk sound velocities as functions of applied pressure for magnesium polycrystals using Voigt-Reuss-Hill averaging \cite{A52}. Figure~\ref{fig9} compares our calculations with recent measurements of sound velocities on the Hugoniot \cite{A50,A51}. The calculated and experimental data are seen to agree well. The hcp$\rightarrow$bcc transition results in a small jump of sound velocities which can hardly be detected in experiment with the available measurement accuracy. Since our calculations were done for zero temperature, they do not reproduce the softening of longitudinal and shear sound velocities on approaching melting. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig9}} \caption{\label{fig9} Magnesium sound velocity versus pressure under shock loading from calculation of elastic constants \cite{A21}, and from experiments: green squares \cite{A51}, red symbols \cite{A50}. The vertical lines mark the approximate pressures of the hcp-bcc transition and melting on the Hugoniot \cite{A49}.} \end{figure} Figure~\ref{fig10} presents a $PT$-diagram of beryllium calculated in this work in comparison with other \emph{ab initio} calculations and experiment. The issue of whether a pocket of the bcc phase exists at low pressures and high temperatures \cite{A09,A10} remained beyond the scope of this work. Circles in Fig.~\ref{fig10} mark the possible boundary of the region where the hcp phase of beryllium exists according to static experiment data \cite{A11}. Consider first the melting curve. For a more correct comparison, the melting curve obtained for beryllium from the Lindemann criterion was calculated with the Debye temperature determined by the logarithmic phonon moment, as is done in paper \cite{A54}. As seen from Fig.~\ref{fig10}, with increasing pressure our curve underestimates the melting point compared to QMD calculations. Also, the hcp existence region from experiment \cite{A11} extends in temperature a bit higher than our curve runs. However, it agrees unexpectedly well with the estimated melting point of Be on the Hugoniot from experiment \cite{A16}. Although the Lindemann criterion combined with zero-temperature phonon spectra often gives quite good melting curves \cite{A35,A39} that agree with experiment and MD calculations, here we have noticeable discrepancies. As shown in paper \cite{A54}, the melting temperature of beryllium from the Lindemann criterion at $P$$=$300 GPa is underestimated by 24\% compared to QMD calculations. Our calculations for this pressure give a smaller difference of only 12\%. Multiphase EOS calculations \cite{A55}, where the melting curve is determined in the same way as in our work, give a line close to ours (the gray line in Fig.~\ref{fig10}). In further discussion of the $PT$-diagram of beryllium, we will rely on the melting curves of works \cite{A12,A54}, which are obtained more correctly from the physical point of view and agree very well with each other. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig10}} \caption{\label{fig10} $PT$-diagram of beryllium.
Red lines show calculations of this work (QHA): the dashed one is the hcp-bcc boundary, the dash-dotted one is the Hugoniot, and the dotted one is the melting curve. The hcp-bcc boundaries from other calculations are shown by the dashed blue line for calculated data from \cite{A09} (QHA); the dashed green line for data from \cite{A12} (QHA); the dashed and dash-dot-dotted magenta lines for calculations from \cite{A14} in the quasiharmonic approximation and with full anharmonicity, respectively; open triangles for calculations with full anharmonicity from \cite{A15}. Melting curves from QMD calculations are shown by the blue line \cite{A09}, stars \cite{A12}, closed inverse triangles \cite{A53}, and the black line \cite{A54}. The gray line shows the melting curve obtained with a multiphase EOS \cite{A55}. The circles show data from static experiment \cite{A11}, marking the boundary of the region where the hcp phase exists. The square shows the estimate of the experimental melting point on the Hugoniot \cite{A16}. The principal isentrope is from the SESAME 2010 EOS \cite{A56}.} \end{figure} Consider the hcp-bcc boundary. Fig.~\ref{fig10} presents these boundaries from calculations of two types. Results of the first type are obtained with phonon spectra calculated at $T$$=$0 K and the quasiharmonic approximation. The second type is QMD calculations, which include all anharmonic effects. So, in paper \cite{A15}, the thermodynamic integration (TDI) method was used, while in Ref. \cite{A14}, phonon spectra were determined with temperature taken into account, using velocity autocorrelation functions. Most interesting here are results from \cite{A14}, where the hcp-bcc phase boundary was obtained in a unified manner for both types of calculations, QHA and QMD (magenta lines in Fig.~\ref{fig10}). They show that the contribution of anharmonic effects grows quite fast with increasing temperature and at $T$$>$2 kK becomes essential for the determination of the hcp-bcc boundary. An 'anharmonic curve' calculated in paper \cite{A14} agrees quite well with the result of TDI calculations \cite{A15} (open triangles in Fig.~\ref{fig10}). The situation with quasiharmonic calculations looks more intricate. Their results fall into two groups: those with a smaller \cite{A09,A14} and those with a larger \cite{A12} (including our work) stability region of the hcp structure at high temperatures (magenta and blue dashes versus green and red ones). Our hcp-bcc boundary (red dashed line) is steeper at high temperatures and at $T$$>$3 kK agrees well with results by Benedict et al. \cite{A12}. Note that an analysis performed in Ref. \cite{A12} of the influence of anharmonic effects on hcp and bcc phase stability did not find them to contribute significantly, which obviously disagrees with results from paper \cite{A14}. Data from our work and from \cite{A12} also do not contradict static experiment \cite{A11}, despite the use of the quasiharmonic approximation, in contrast to similar calculations presented in Refs. \cite{A09,A14}. It is quite challenging to understand what caused these discrepancies. The calculations were done with different codes but with one and the same exchange-correlation functional, PBE. Therefore the choice of functional is not the cause. There is a difference in approaches to the quasiharmonic phonon spectra calculation: supercells were used in Refs. \cite{A09,A14}, while the linear response method was used in our work and in paper \cite{A12}.
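To make the Lindemann-type estimates discussed above easier to follow, the sketch below spells out the two ingredients in schematic form: the Debye temperature obtained from the logarithmic moment of a calculated phonon spectrum, and the Lindemann scaling of $T_m$ along compression anchored at one reference point. This is our own illustration, not the code of Refs. \cite{A39,A54} or of this work; the $\mathrm{e}^{1/3}$ prefactor convention and all function names are assumptions.
\begin{verbatim}
# Sketch: Lindemann melting estimate from a phonon density of states.
import numpy as np

HBAR = 1.054571817e-34  # J s
KB   = 1.380649e-23     # J / K

def theta_log(omega, g):
    # Debye temperature from the logarithmic phonon moment:
    # omega -- angular frequencies (rad/s, omega > 0), g -- phonon DOS.
    n = np.trapz(g, omega)                         # number of modes
    ln_w = np.trapz(g * np.log(omega), omega) / n  # <ln omega>
    return (HBAR / KB) * np.exp(1.0 / 3.0 + ln_w)

def lindemann_tm(v, theta, v0, theta0, tm0):
    # Lindemann scaling T_m ~ Theta_D^2 V^(2/3), anchored at (v0, tm0).
    return tm0 * (theta / theta0) ** 2 * (v / v0) ** (2.0 / 3.0)
\end{verbatim}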
Let us now turn to the possibility of observing the hcp$\rightarrow$bcc transition under shock compression. It is seen from fig.~\ref{fig10} that the shock Hugoniot curves from our calculations and from \cite{A12} agree excellently. These calculations predict that the Hugoniot crosses the hcp-bcc boundary at $P$$\approx$191 GPa, $T$$\approx$4250 K. It is, however, quite probable that anharmonic effects will change the transition pressure at high $T$ and shift the boundary to higher $P$, as demonstrated in Ref. \cite{A14}. In this case the Hugoniot may not cross the hcp-bcc line. Then it becomes clear why no signs of a structural transformation are observed in dynamic experiment \cite{A16}. This is also in accordance with data from static experiment \cite{A11}, where the bcc structure was not detected. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig11}} \caption{\label{fig11} $PT$-diagram of magnesium. Red lines show results of this work (QHA): the dashed line shows the hcp-bcc boundary, the dash-dotted one is the Hugoniot, the solid line is the melting curve, and the dotted line is the melting curve for the hcp-bcc-liquid triple point at 20 GPa (the vertical gray dashed line, see the text). The orange dashed curve shows the principal isentrope from our calculations. Magenta and blue dashed lines show hcp-bcc boundaries calculated in Refs. \cite{A57} and \cite{A58} (GGA calculation), respectively. The black lines show data from static experiment \cite{A20}: the dashed line is the hcp-bcc boundary and the solid one is the melting curve. Triangles show points along the Hugoniot from experiment \cite{A49}, obtained with the Mie-Gruneisen EOS. The arrow points to the pressure corresponding to the onset of melting according to data from \cite{A49}. The circles are points along the melting curve from dynamic experiments \cite{A59}. The gray dash-dot-dotted line is the melting curve from QMD calculation \cite{A60}.} \end{figure} Consider now the $PT$-diagram of magnesium. Figure~\ref{fig11} shows its phase diagram at relatively low pressures along with data from other calculations and experiments. Here all hcp$\rightarrow$bcc boundaries are from calculations in the quasiharmonic approximation. As in beryllium, the boundary has a negative slope. Our calculations agree well with calculations \cite{A58} at high temperatures, about 1 kK and above, and a bit worse at lower $T$. But they all lie within the experimental error of the transition pressure determined in experiment \cite{A19}. The curve calculated in work \cite{A57} markedly underestimates the hcp phase stability region at high temperatures. At the same time, static experiment \cite{A20} gives the steepest hcp-bcc boundary (black dashed line in fig.~\ref{fig11}) among all results. This may be indicative of the necessity to consider full anharmonicity in order to reproduce the slope of the phase boundary correctly. But shock-wave experiments \cite{A49,A50}, including those where the X-ray diffraction method is used for crystal structure analysis, give somewhat different results. So, the calculated Hugoniot shown in fig.~\ref{fig11} is seen to cross the hcp-bcc boundary determined in static experiments \cite{A20} at $P$$\approx$37 GPa, while estimates from Ref. \cite{A50} give 28.4 GPa. Note here that our Hugoniot agrees very well with experimental points \cite{A49} determined with the Mie-Gruneisen EOS (green triangles in fig.~\ref{fig11}).
The value of 28.4 GPa estimated in Ref. \cite{A50} is close to the transition pressure from our calculations, 29.8 GPa. That is, theoretical studies here agree quite well with the dynamic experiments, although a certain disagreement with the static experiments remains. The solid red line in Fig.~\ref{fig11} shows the melting curve $T_m(P)$ calculated in this work. It agrees quite well with the data from the static experiment \cite{A20} and the shock experiment \cite{A59}. Note, however, that the samples used in Ref. \cite{A59} were not pure magnesium but an alloy with a magnesium content of 96\%. In more recent dynamic experiments \cite{A49,A50}, where pure magnesium was studied, the onset of melting under shock conditions was detected at about 55.5 GPa. In Fig.~\ref{fig11} this pressure is indicated by an arrow. The authors of Ref. \cite{A50} estimate the temperature at which shocked magnesium starts to melt to be about 3 kK, which is somewhat lower than our estimate of 3.4 kK. Since we use the Lindemann criterion, the trend of our melting curve is strongly dependent on the position of the hcp-bcc-liquid triple point from which it runs after the hcp-bcc transition. Our calculations place this point at about 15 GPa, while the experimental estimate of Ref. \cite{A50} gives $\sim$20 GPa (the vertical dashed line in Fig.~\ref{fig11}). By analogy with beryllium, it seems reasonable to suppose that the anharmonic effects may slightly shift the triple point toward higher pressures. In Fig.~\ref{fig11} we added a melting curve corresponding to a triple point at $P=20$ GPa (the red dotted line). On the one hand, the new line does not contradict the static experiment \cite{A20} within its accuracy; on the other hand, it moves us closer to the result obtained in Ref. \cite{A50}, taking into account the finite error of those measurements. Our Hugoniot crosses this line at $P=56$ GPa and $T=3.2$ kK. \begin{figure} \centering{ \includegraphics[width=8.0cm]{fig12}} \caption{\label{fig12} $PT$-diagram of magnesium calculated at ultrahigh pressures. The red solid line is the melting curve, the dashed ones are crystal phase boundaries, and the orange dashed line is the principal isentrope from our calculations.} \end{figure} Figure~\ref{fig12} presents the phase diagram of magnesium obtained in this work for higher pressures, up to 1.6 TPa. What draws attention here is the shape of the melting curve. It exhibits small maxima where melting occurs from the bcc and fcc phases. The presence of a maximum on the bcc-liquid equilibrium curve for Mg is also confirmed by first-principles molecular dynamics calculations \cite{A60}. At pressures from $\sim$0.2 to $\sim$0.8 TPa, the melting curve changes weakly, but for the open structures sh and sc, $T_m$ is seen to steadily increase as pressure grows. This shape is similar to the $T_m(P)$ dependencies determined for calcium and strontium in experiment \cite{A61}, where the transition to open structures was accompanied by a noticeable rise of the melting curve after a flatter preceding segment. Characteristic maxima are known to be present also on the $T_m(P)$ curves of alkali metals (for example, see \cite{A62,A63}). As our calculations show, the transition to a negative slope of the melting curve is caused by the softening of certain phonon modes in bcc and fcc magnesium, as happens, for example, in sodium and potassium \cite{A64}; a toy illustration of this mechanism is given below. In the bcc phase, the transverse modes $T_2$ and $T_1$ in the $\Gamma$N direction of the Brillouin zone soften sequentially with increasing compression.
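The link between this mode softening and the slope of the melting curve can be made explicit within the Lindemann criterion used in this work. Differentiating the Lindemann relation $T_m \propto M\,\Theta_D^2\,V^{2/3}$ gives $d\ln T_m/d\ln V = -2(\gamma - 1/3)$, so $T_m$ decreases under compression wherever the effective Gruneisen parameter $\gamma$ falls below $1/3$, as it does when phonon modes soften. The toy model below, with an assumed $\gamma(V)$ profile and a simple Murnaghan $P(V)$ (all numbers are illustrative assumptions, not our fitted data), shows how softening produces a melting maximum of the kind seen in Fig.~\ref{fig12}.
\begin{verbatim}
# Toy illustration: Lindemann melting law with a softening-induced maximum.
# dlnT_m/dlnV = -2*(gamma(V) - 1/3); a gamma(V) that falls below 1/3 under
# compression makes T_m(P) pass through a maximum. Illustrative numbers only.
import numpy as np

V = np.linspace(1.0, 0.4, 601)        # volume in units of V0
gamma = 1.2 - 1.6 * (1.0 - V)         # assumed gamma(V); crosses 1/3 near V ~ 0.46

# integrate dlnT_m = -2*(gamma - 1/3) dlnV downward from V0 (trapezoid rule)
lnV = np.log(V)
g = -2.0 * (gamma - 1.0 / 3.0)
lnTm = np.concatenate(([0.0],
                       np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(lnV))))
Tm = 900.0 * np.exp(lnTm)             # T_m at V0 set to 900 K, illustrative

# map V -> P with a Murnaghan EOS (illustrative K0 and K0')
K0, K0p = 35.0, 4.0                   # GPa
P = (K0 / K0p) * (V ** (-K0p) - 1.0)

i = np.argmax(Tm)
print(f"melting maximum near P = {P[i]:.0f} GPa, T_m = {Tm[i]:.0f} K")
\end{verbatim}
In our actual calculations the softening enters through the computed phonon spectra rather than through an assumed $\gamma(V)$, but the mechanism behind the flat segment and the maxima of the melting curve is the same.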
This mode softening eventually leads to a dynamic instability of the lattice and to negative elastic constants $C'=(C_{11}-C_{12})/2$ and $C_{44}$. For the fcc phase, in contrast, only the transverse phonon mode in the $\Gamma$X direction softens, and only the constant $C_{44}$ takes negative values. Figure~\ref{fig12} also shows the principal isentrope calculated in this work. It is seen to cross all the crystalline structures of magnesium. State-of-the-art experimental ramp compression techniques \cite{A01,A02,A03} make it quite feasible to detect the structural transitions of interest even at very high pressures. Our calculations predict the isentrope to cross the bcc-fcc, fcc-sh, and sh-sc boundaries at pressures of $\sim$0.5, 0.76, and 1.08 TPa, respectively. It is worth noting that magnesium under pressure demonstrates quite a wide diversity of structural transformations and exhibits physical properties that are unusual for a metal. More intensive experimental studies of magnesium under high compression may therefore bring interesting discoveries. \section{Conclusion} In this work we have studied the structural properties of beryllium and magnesium under high and ultrahigh pressures using the FP-LMTO method. It is shown that magnesium under pressure demonstrates a wider diversity of structural transformations than beryllium. Besides the hcp$\rightarrow$bcc transition, three more transitions occur in Mg in the interval of pressures from 0.45 to 1.1 TPa. As a result of these structural transformations, open phases, simple hexagonal and simple cubic, appear. The electronic spectrum of Mg changes strongly under compression: first a pseudogap forms in it, and then a narrow band gap appears at $P>2.5$ TPa, which indicates the onset of a semiconducting state. Magnesium under pressure thus behaves more like the heavier alkaline-earth metals calcium and strontium. Beryllium under compression, on the contrary, gradually ceases to demonstrate unique properties. Its electronic spectrum becomes more and more similar to the spectrum of nearly-free electrons. After the hcp$\rightarrow$bcc transition at $P\sim 0.4$ TPa and $T=0$ K, its structure remains unchanged up to very high pressures, at least to $\sim$250 TPa. The neighbors of beryllium and magnesium in the periodic table, lithium, sodium, and aluminum, also demonstrate a large structural diversity under compression \cite{A27,A62,A63}. Thus, beryllium as a crystal has one more peculiarity: it is remarkably resistant to structural changes under compression in comparison with the other metals surrounding it in the periodic table.
\section{Introduction} Systematic understanding of strongly correlated gapless states has been a long standing challenge in theoretical physics \cite{MS8977,MS8916}. An example of a strongly correlated gapless state is the $n+1$D critical point of a spontaneous symmetry breaking transition that completely breaks a symmetry described by a finite group $G$. It is well known that the critical state has an unbroken symmetry $G$. It was pointed out that the critical state also has an unbroken dual algebraic $(n-1)$-symmetry $\t G^{(n-1)}$.\cite{JW191213492,KZ200308898} The symmetry and the dual algebraic higher symmetry together form a more complete characterization of the critical point. To stress their importance, we put the two symmetries together, and refer to the combined symmetry as \textsf{categorical symmetry}.\cite{JW191213492,KZ200308898,KZ200514178} This \textsf{categorical symmetry}\ point of view corresponds to viewing the critical point in terms of both the order parameter and the disorder parameter on an equal footing. In addition to the symmetry $G$ and the algebraic $(n-1)$-symmetry $\t G^{(n-1)}$, the critical point may have additional emergent symmetries. Putting all the emergent symmetries together, we obtain a \emph{maximal \textsf{categorical symmetry}}. The emergent \textsf{categorical symmetry}\ was proposed to be a general and essential feature of a critical point. In particular, it was proposed in \Rf{JW191213492,KZ200514178} that the emergent maximal \textsf{categorical symmetry}\ may largely determine the local low energy properties of strongly correlated critical points. A general classifying understanding of gapless states and critical points is a long-standing challenge in theoretical physics. It is well known that a gapless state can have an emergent symmetry. Now we realize that such an emergent symmetry can be a combination of 0-symmetry, higher symmetry \cite{NOc0605316,NOc0702377,KT13094721,GW14125148}, anomalous symmetry \cite{H8035,CGL1314,W1313,KT14030617}, anomalous higher symmetry \cite{KT13094721,GW14125148,TK151102929,T171209542,P180201139,DT180210104,BH180309336,ZW180809394,WW181211968,WW181211967,GW181211959,WW181211955,W181202517}, beyond-anomalous symmetry \cite{CW220303596}, non-invertible symmetry \cite{PZh0011021,CSh0107001,FSh0204148,CY180204445,TW191202817,I210315588,Q200509072}, algebraic higher symmetry \cite{KZ200308898,KZ200514178}, and/or non-invertible gravitational anomaly \cite{KW1458,FV14095723,M14107442,KZ150201690,KZ170200673,JW190513279}. Since emergent symmetries are so rich, it may be reasonable to conjecture that a gapless state is largely characterized by its maximal emergent symmetry. Thus we may develop a general classifying theory of gapless states via their maximal emergent symmetries. It was proposed in \Rf{KZ201102859,KZ210703858,KZ220105726,KZ200514178,CW220303596,FT220907471} that all those seemingly very different symmetries have a unified holographic description in terms of topological orders $\eM$ in one higher dimension, provided the symmetries are finite. Similar ideas were discussed for some special cases in \Rf{FT180600008,CZ190312334,JW190513279,JW191209391,JW191213492,LB200304328}. Such a topological order in one higher dimension happens to be the \textsf{categorical symmetry}\ mentioned above.
The statement that ``\textsf{categorical symmetry}\ = topological order in one higher dimension'' has the following physical meaning:\cite{KZ200514178,CW220303596} \frmbox{\texttt{A system} (described by a Hamiltonian) with a generalized finite symmetry is \emph{exactly locally reproduced} by \texttt{a boundary} (described by a boundary Hamiltonian) of the corresponding topological order in one higher dimension.} Here, \emph{exactly locally reproduced} means that the local symmetric operators in the system have a one-to-one correspondence with the local operators on the boundary of the topological order. The corresponding local operators have identical correlations on the respective ground states, assuming the bulk topological order has an infinite gap. This way, using the maximal emergent symmetry to characterize a gapless state becomes equivalent to using the maximal topological order in one higher dimension for that purpose. For example, a state with a non-trivial unbroken \textsf{categorical symmetry}\ must be gapless.\cite{L190309028,JW191213492,KZ200514178,CW220506244} Such a state corresponds to a $\one$-condensed boundary of the corresponding topological order in one higher dimension, which must also be gapless.\cite{CW220506244} Thus, we can systematically study gapless systems by viewing them as boundaries of topological orders in one higher dimension. Let us use a simple example to illustrate the above abstract statements. The 1+1D Ising CFT at the $\Z_2$ symmetry breaking critical point has the usual $\Z_2$ symmetry of the Ising model. The CFT also has a dual $\t \Z_2$ symmetry.\cite{BT170402330,FT180600008,JW191213492} The combined transformation of $\Z_2$ and $\t \Z_2$ gives us another $\Z_2$ symmetry, denoted as $\Z_2^f$. Putting all the $\Z_2$, $\t\Z_2$, and $\Z_2^f$ symmetries together, we obtain the \textsf{categorical symmetry}. The presence of these symmetries at the Ising critical point can be understood more easily in terms of the holographic picture.\cite{JW191213492} The \textsf{categorical symmetry}\ in the 1+1D $\Z_2$-symmetric Ising model is the 2+1D $\Z_2$ topological order $\eG\mathrm{au}_{\Z_2}$ described by the $\Z_2$ gauge theory, which has three types of topological excitations: $e$ -- the $\Z_2$ charge, $m$ -- the $\Z_2$ vortex, and $f$ -- the bound state of the $\Z_2$ charge and the $\Z_2$ vortex, which is a fermion. The three $\Z_2$ symmetries are associated with the conservation (mod 2) of the $e$, $m$, and $f$ particles. We will denote the $\Z_2$, $\t\Z_2$, and $\Z_2^f$ symmetries as the $\Z_2^m$, $\Z_2^e$, and $\Z_2^f$ symmetries, respectively. In fact, the $\Z_2^m$, $\Z_2^e$, and $\Z_2^f$ symmetries are generated by string operators, which create $m$, $e$, and $f$ particles at the string ends, respectively. The symmetries in the 1+1D Ising model and the 2+1D $\Z_2$ topological order have the following relation: the $e$ particle in the 2+1D topological order corresponds to the $\Z_2$-symmetry charge, the $m$ particle corresponds to the $\Z_2$-symmetry domain wall, while the $f$ particle corresponds to the bound state of the $\Z_2$-symmetry charge and domain wall. The Ising critical point at the $\Z_2$ symmetry breaking transition is a state with the full \textsf{categorical symmetry}\ $\eG\mathrm{au}_{\Z_2}$. The 1+1D Ising critical point can also be viewed as an $\one$-condensed boundary of the 2+1D $\eG\mathrm{au}_{\Z_2}$ topological order. However, the Ising critical point also has an emergent $\Z_2^{em}$ symmetry, which exchanges the $\Z_2^m$ and $\Z_2^e$ symmetries.
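To make the $\Z_2^{em}$ exchange symmetry concrete, one can check it directly on the modular data of $\eG\mathrm{au}_{\Z_2}$ (the $S,T$ matrices written out in \eqn{Z2STmat} below). The short numerical sketch below, in the anyon basis $(\one,e,m,f)$, verifies that swapping $e\leftrightarrow m$ leaves $S$ and $T$ invariant, while an exchange involving the fermion $f$ does not, since it would have to change a topological spin.
\begin{verbatim}
# Check the e<->m exchange symmetry of the Z2 topological order Gau_{Z2}
# on its modular data, in the anyon basis (1, e, m, f).
import numpy as np

S = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])
T = np.diag([1.0, 1.0, 1.0, -1.0])   # topological spins 0, 0, 0, 1/2

def swap(i, j):
    # permutation matrix exchanging anyons i and j
    P = np.eye(4)
    P[[i, j]] = P[[j, i]]
    return P

P_em, P_ef = swap(1, 2), swap(1, 3)

# e<->m preserves the modular data ...
assert np.allclose(P_em @ S @ P_em, S) and np.allclose(P_em @ T @ P_em, T)
# ... but e<->f does not: it would give e the spin 1/2 of the fermion
assert not np.allclose(P_ef @ T @ P_ef, T)
print("e<->m is a symmetry of (S, T); exchanges involving f are not")
\end{verbatim}
This bulk $e\leftrightarrow m$ exchange is the holographic counterpart of the Kramers-Wannier self-duality of the Ising model at its critical point.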
Thus the \textsf{categorical symmetry}\ $\eG\mathrm{au}_{\Z_2}$ mentioned above is not a maximal \textsf{categorical symmetry}. Indeed, if we put the $\Z_2^m$, $\Z_2^e$, $\Z_2^f$, and $\Z_2^{em}$ symmetries together, we obtain a larger \textsf{categorical symmetry}\ -- the 2+1D double-Ising topological order $\eM_\text{dIs}$. We believe the 2+1D double-Ising topological order $\eM_\text{dIs}$ to be the maximal \textsf{categorical symmetry}\ for the 1+1D Ising critical point. In this case, the 1+1D Ising critical point can also be viewed as an $\one$-condensed boundary of the 2+1D $\eM_\text{dIs}$ topological order.\cite{CZ190312334} We would like to remark that both the $\Z_2^m$ and the $\Z_2^e$ symmetries are anomaly-free, while the $\Z_2^f$ symmetry is neither anomalous nor anomaly-free. The latter is not anomaly-free because of the non-trivial self-statistics of the corresponding $f$ particles in the bulk topological order.\footnote{See \Rf{CW220303596} for more details.} Similarly, if we combine $\Z_2^m$ and $\Z_2^e$ into a single symmetry (denoted as the $\Z_2^e \vee \Z_2^m$ symmetry), then again the $\Z_2^e \vee \Z_2^m$ symmetry is neither anomalous nor anomaly-free. Also, the $\Z_2^{em}$ symmetry is neither anomalous nor anomaly-free.\footnote{Here we have used a strict definition of symmetry anomaly: an anomalous symmetry is one realized on the boundary of a corresponding SPT state in one higher dimension.} In the next section, we will propose a definition of maximal \textsf{categorical symmetry}, using the holographic picture of symmetry. In section III, we will discuss some simple 1+1D strongly correlated gapless liquids and their emergent maximal \textsf{categorical symmetries}. In particular, we compute the modular invariant partition functions of strongly correlated gapless liquids for systems with $S_3$ or anomalous $S_3$ symmetries. In section IV, we present a way to compute the \textsf{categorical symmetry}\ using symmetry twists. \section{Definition of maximal \textsf{categorical symmetry}} \subsection{\textsf{categorical symmetry}\ and holographic decomposition} We have mentioned that a state with a non-trivial unbroken \textsf{categorical symmetry}\ $\eM$ must be gapless.\cite{L190309028,JW191213492,KZ200514178,CW220506244} Such a state corresponds to a $\one$-condensed boundary of the corresponding topological order $\eM$ in one higher dimension.\cite{CW220506244} Thus the gaplessness of the state is directly associated with the non-trivialness of the unbroken \textsf{categorical symmetry}\ $\eM$. This supports the idea that a gapless state is characterized by its \textsf{categorical symmetry}\ $\eM$. However, the correspondence between the gapless state and its \textsf{categorical symmetry}\ $\eM$ is not one-to-one. This is because the same topological order $\eM$ can have many different $\one$-condensed boundaries, which leads to many different gapless states with the same \textsf{categorical symmetry}. These gapless states have varying numbers of gapless excitations. A gapless state with more gapless excitations may have a larger emergent symmetry, \ie may be a $\one$-condensed boundary of a larger topological order. This leads to a notion of maximal \textsf{categorical symmetry}\ $\eM_\text{max}$ for the gapless state: it is the largest topological order in one higher dimension which has a $\one$-condensed boundary with the same number of gapless excitations as that of the gapless state.
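As a quick illustration of what `largest' means here, one can compare total quantum dimensions $D^2=\sum_a d_a^2$, the measure of the size of a topological order that we adopt later in this section. A minimal sketch:
\begin{verbatim}
# Compare total quantum dimensions D^2 = sum_a d_a^2 of the two
# categorical symmetries of the 1+1D Ising critical point.
import math

# Z2 topological order Gau_{Z2}: anyons 1, e, m, f, all with d_a = 1
D2_GauZ2 = sum(d ** 2 for d in [1, 1, 1, 1])

# Ising topological order: anyons 1, psi, sigma with d = 1, 1, sqrt(2);
# double Ising = Ising stacked with its conjugate, d_{(a,b)} = d_a * d_b
d_Is = [1, 1, math.sqrt(2)]
D2_dIs = sum((da * db) ** 2 for da in d_Is for db in d_Is)

print(D2_GauZ2, round(D2_dIs))   # prints 4 and 16
\end{verbatim}
The double-Ising order is strictly larger ($D^2=16$ versus $D^2=4$), consistent with $\eM_\text{dIs}$, rather than $\eG\mathrm{au}_{\Z_2}$, being the maximal \textsf{categorical symmetry}\ of the Ising critical point.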
\begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{Rsymm} \end{center} \caption{ An anomaly-free algebraic higher symmetry in $n$-dimensional space can be described by the gapped excitations above a ground state that is a symmetric product state. Those excitations, plus the reference ground state, form a fusion $n$-category $\cR$. The \textsf{categorical symmetry}\ $\eM$ of $\cR$ is the center of $\cR$: $\eM=\eZ(\cR)$. $\eM$ describes the bulk topological order in $(n+1)$-dimensional space, which has a boundary state corresponding to the reference ground state. The associated boundary excitations correspond to the excitations above the reference ground state in the symmetric sub-Hilbert space. The fusion $n$-category $\cR$ that describes the anomaly-free algebraic higher symmetry has a defining property: there exists a fusion $n$-category $\t\cR$ such that $\eZ(\t\cR)=\eM$ (\ie $\t\cR$ is also a boundary of $\eM$), and the stacking of $\cR$ and $\t\cR$ through $\eM$ gives rise to a trivial topological order in $n$-dimensional space: $\cR \boxtimes_{\eM}\t\cR =n\mathcal{V}\mathrm{ec} = $ trivial. $\bt$ is the domain wall between the two boundaries $\cR$ and $\t\cR$. } \label{Rsymm} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{RsymmC} \end{center} \caption{ The bulk $\eM$ and the boundary $\t\cR$ have a large energy gap $\Del$. The low energy excitations of energy scale $E$ are on the boundary $\cC$ in the middle part, and on $\underline{\cC}$ in the left part. The right part represents a vacuum. The middle part describes a system with an emergent algebraic symmetry $\cR$ and an emergent \textsf{categorical symmetry}\ $\eM$, below the energy scale $E\ll \Del$. Above $\Del$, the middle and the left parts both describe the same topological phase $\underline{\cC}$. From middle to left, $\Del$ is reduced to below $E$, and the emergent symmetries disappear. $\bt$ describes such a symmetry breaking process. } \label{RsymmC} \end{figure} To make the above definition more precise, let us first give a more detailed description of \textsf{categorical symmetry}. Consider a generalized anomaly-free symmetry (\ie an algebraic higher symmetry) in $n$-dimensional space. Algebraic higher symmetries include symmetries described by groups and higher groups, as well as non-invertible symmetries beyond groups and higher groups. Here, ``anomaly-free'' means that \emph{the symmetry allows a non-degenerate ground state on closed spaces of any topology.}\cite{KZ200514178} Such a symmetry can be fully described by the excitations above the non-degenerate ground state (called the symmetry-charge-objects). These excitations, with their fusion and braiding properties, are fully described by a \emph{local fusion $n$-category} $\cR$. Thus an anomaly-free algebraic higher symmetry is described by a local fusion $n$-category $\cR$,\cite{KZ200514178} rather than by a group or a higher group. In this case, the \textsf{categorical symmetry}\ $\eM$ (\ie the bulk topological order) for the algebraic higher symmetry $\cR$ is the center of $\cR$: $\eM=\eZ(\cR)$. The anomaly-free algebraic higher symmetry $\cR$ has a dual symmetry, which is also described by a local fusion $n$-category $\t\cR$; $\t\cR$ satisfies the defining properties $\eM=\eZ(\t\cR)$ and $\cR \boxtimes_{\eM} \t\cR = n\mathcal{V}\mathrm{ec}$ (see Fig. \ref{Rsymm}).
In fact, the above relations define the notion of local fusion $n$-category: \emph{a fusion $n$-category $\cR$ is local if there exists a fusion $n$-category $\t\cR$ such that $\eZ(\cR) =\eZ(\t\cR) $ and $\cR \boxtimes_{\eM} \t\cR = n\mathcal{V}\mathrm{ec}$.} We say $\cR$ and $\t\cR$ are dual to each other. In fact, they describe a symmetry-dual-symmetry pair. Now let us consider a system $\cC$ described by a Hamiltonian in $n$-dimensional space with an anomaly-free algebraic higher symmetry $\cR$. Such a system also has a \textsf{categorical symmetry}\ $\eM = \eZ(\cR)$, in the sense that the ground state in the symmetric sub-Hilbert space\footnote{If the system $\cC$ has a spontaneous symmetry breaking, then the ground state in the symmetric sub-Hilbert space is the symmetric linear superposition of the degenerate ground states.} of the system $\cC$ corresponds to a boundary ground state of $\eM$. The boundary excitations above the boundary ground state correspond to the excitations of the system $\cC$ in the symmetric sub-Hilbert space.\cite{JW191213492,KZ200514178} If the system $\cC$ is gapped, then $\cC$ can be viewed as a fusion $n$-category formed by the topological excitations and the symmetry-charge-objects above the ground state. In this case, the \textsf{categorical symmetry}\ $\eM$ is also given by the center of $\cC$, $\eM=\eZ(\cC)$, which is the same as the center of $\cR$: $\eM=\eZ(\cR) = \eZ(\cC)$. If we ignore the symmetry $\cR$, the system may have a topological order described by a fusion $n$-category $\underline{\cC}$, which is formed by the topological excitations in $\cC$ after we drop the symmetry-charge-objects. The center of $\underline{\cC}$ is trivial, $\eZ(\underline{\cC}) =n\eV\mathrm{ec}$, since $\underline{\cC}$ is realizable by a lattice model (\ie the topological order $\underline{\cC}$ is anomaly-free).\cite{KW1458,KZ150201690} Also, the fusion $n$-category $\underline{\cC}$ is given by the stacking of $\cC$ and the dual symmetry $\t\cR$ through $\eM$: $\underline{\cC} = \cC \boxtimes_{\eM} \t\cR$ (see Fig. \ref{RsymmC}).\cite{KZ200514178,FT220907471} \Rf{KZ200514178} discussed extensively the sandwich $\cC \boxtimes_{\eM} \t\cR$, and how the \textsf{categorical symmetry}\ $\eM$ constrains the properties of $\cC$. This leads to classifications of symmetry protected topological (SPT) orders,\cite{GW0931,CLW1141,CGL1314} and symmetry enriched topological (SET) orders,\cite{CGW1038} for anomaly-free algebraic higher symmetries $\cR$ (\ie for non-invertible higher symmetries), using the sandwich structure in Fig. \ref{RsymmC}. \Rf{FT220907471} proposed to regard the pair $(\t\cR, \eM)$ (called a ``quiche'') as a symmetry even when $\t\cR$ is not a local fusion $n$-category, by regarding the topological defects in $(\t\cR, \eM)$ as the topological defects describing the symmetry. We would like to point out that Fig. \ref{RsymmC} represents a picture of the emergence of the algebraic higher symmetry $\cR$, as well as of the \textsf{categorical symmetry}\ $\eM$. Let $\Del$ be the energy gap for the bulk topological order $\eM$ and the gapped boundary $\t\cR$. The boundary $\cC$ contains low energy excitations and may even be gapless. Below the energy gap $\Del$, the system described by the middle part of Fig. \ref{RsymmC} has an emergent algebraic higher symmetry $\cR$ and an emergent \textsf{categorical symmetry}\ $\eM$. So, if we take the $\Del \to \infty$ limit, the middle part of Fig.
\ref{RsymmC} describes a system with an algebraic higher symmetry $\cR$ and a \textsf{categorical symmetry}\ $\eM$. Above the energy gap $\Del$, the system described by the middle part of Fig. \ref{RsymmC} has no emergent symmetry and has a topological order described by $\underline{\cC}$. In this case, the middle and left parts of Fig. \ref{RsymmC} describe the same phase -- a topological order described by $\underline{\cC}$. In fact, the junction $\bt$ between the middle and left parts is a very special junction, which describes the physical process where the energy gap $\Del$ reduces from a large value $\Del \gg E$ in the middle part to a small value $\Del < E$ in the left part, without a phase transition. So, below the energy scale $E$, the middle part has an emergent \textsf{categorical symmetry}\ $\eM$ and an emergent anomaly-free symmetry $\cR$, while the left part has no emergent symmetry. \subsection{Holo-equivalent symmetries} \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{RRsymmC} \end{center} \caption{Consider two systems, $\underline{\cC}$ and $\underline{\cC'}$, with holo-equivalent symmetries, $\cR$ and $\cR'$. By definition, they have the same \textsf{categorical symmetry}\ $\eM$ and are described by boundaries of the same topological order $\eM$. As a result, they have corresponding states with the same local low energy properties, described by the same boundary $\cC$ of $\eM$. But the corresponding states may have different global low energy properties: $\cC\boxtimes_\eM \cR \neq \cC\boxtimes_\eM \cR'$. } \label{RRsymmC} \end{figure} As an application of the above holographic picture, let us define the notion of holo-equivalence: \emph{two anomaly-free algebraic higher symmetries $\cR$ and $\cR'$ are holo-equivalent if they have the same bulk: $\eZ(\cR) = \eZ(\cR')=\eM$.}\cite{KZ200514178} To understand the physical meaning of holo-equivalence of two symmetries, let us consider two systems with holo-equivalent symmetries, $\cR$ and $\cR'$. Both systems can have many different states, which can be viewed as different boundaries of the same bulk $\eZ(\cR)=\eZ(\cR')=\eM$. We see that the states of the two systems have a one-to-one correspondence such that the corresponding states are the same boundary of the same bulk $\eM$ (see Fig. \ref{RRsymmC}). As a result, the corresponding states have the same local low energy properties. Here, ``the same local low energy properties'' means that the local symmetric operators in the system with symmetry $\cR$ have a one-to-one correspondence with the local symmetric operators in the system with symmetry $\cR'$. The corresponding local operators have identical correlations on the respective ground states in the respective symmetric sub-Hilbert spaces. On the other hand, the corresponding states may have different global low energy properties (such as different ground state degeneracies). \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{QFTQFT} \end{center} \caption{ Decompositions of an anomaly-free quantum field theory $QFT_{af}$ expose the possible emergent symmetries in the quantum field theory. The bulk topological orders $\eM,\eM'$ are the revealed emergent \textsf{categorical symmetries}. The gapped boundaries of $\eM,\eM'$, $\t\cR,\t\cR'$, describe the revealed emergent symmetries $\cR,\cR'$. The gapless anomalous field theories, $QFT_{ano}$ and $QFT_{ano}'$, are $\one$-condensed boundaries of $\eM,\eM'$.
} \label{QFTQFT} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{QFTR} \end{center} \caption{The composite system $ QFT_{ano} \boxtimes_{\eM} \t\cR$, where the energy gaps of the bulk $\eM$ and the boundary $\t\cR$ are assumed to be infinite. We also assume that the thickness of the slab is finite and is much larger than the correlation length of the bulk $\eM$. } \label{QFTR} \end{figure} \subsection{Maximal \textsf{categorical symmetry}} As the second application of the holographic picture, let us define the notion of maximal \textsf{categorical symmetry}. Consider an $n+1$D anomaly-free gapless conformal field theory,\footnote{In this paper, ``field theory'' is defined as a ground state plus all its low energy excitations. A field theory is anomaly-free if it can be realized by a lattice model. A ``conformal field theory'' is gapless with a linear dispersion with a single velocity.} $QFT_{af}$, which is described by a partition function $Z^{af}$ that is invariant under mapping class group transformations.\cite{JW190513279} If $QFT_{af}$ has some emergent symmetry, then we can decompose it as a stacking in Fig. \ref{QFTQFT}:\cite{KZ200514178,FT220907471} \begin{align} QFT_{af} = QFT_{ano} \boxtimes_{\eM} \t\cR , \end{align} to expose the emergent symmetry. Here $QFT_{ano}$ is an anomalous field theory described by a multi-component partition function $Z^{ano}_a$,\cite{JW190513279,JW191209391} $\eM$ is the exposed emergent \textsf{categorical symmetry}, and $\t\cR$ is a gapped boundary of $\eM$, which describes the exposed emergent symmetry. The above decomposition has the meaning that the invariant partition function $Z^{af}$ can be constructed from the multi-component partition function $Z^{ano}_a$, and the data describing the gapped bulk $\eM$ and the gapped boundary $\t\cR$. See the next section for a detailed construction for 1+1D field theories. In fact, the above decomposition makes sense even when $QFT_{af} $ and $ QFT_{ano}$ are gapped (in this case, they are not conformal field theories): \frmbox{ The decomposition $QFT_{af} = QFT_{ano} \boxtimes_{\eM} \t\cR$ means that the partition function of the system $QFT_{af} $ (gapless or gapped) is the same as the partition function of the composite system $QFT_{ano} \boxtimes_{\eM} \t\cR $ (see Fig. \ref{QFTR}), assuming the bulk $\eM$ and the boundary $\t\cR$ have an infinite energy gap. } To summarize, \frmbox{the decomposition $ QFT_{af} = QFT_{ano} \boxtimes_{\eM} \t\cR $ reveals an emergent \textsf{categorical symmetry}\ $\eM$ in the state $QFT_{af}$,\cite{KZ200514178} assuming the bulk $\eM$ and the boundary $\t\cR$ have an infinite energy gap.} In fact, the above is valid regardless of whether $\t\cR$ is a local fusion higher category or not. If $\t\cR$ is also a local fusion higher category, then the field theory $QFT_{af}$ (gapless or gapped) also has an emergent anomaly-free algebraic higher symmetry described by $\cR$, where $\cR$ is the dual of $\t\cR$: \frmbox{The decomposition $ QFT_{af} = QFT_{ano} \boxtimes_{\eM} \t\cR $ reveals an emergent anomaly-free symmetry $\cR$ in the state $QFT_{af}$,\cite{KZ200514178} if $\t\cR$ is a local fusion higher category, assuming the bulk $\eM$ and the boundary $\t\cR$ have an infinite energy gap.} In \Rf{FT220907471}, the pair $(\t\cR, \eM)$ is, by definition, regarded as an even more generalized symmetry for any boundary $\t\cR$ of $\eM$, without requiring $\t\cR$ to be a local fusion higher category. The above decomposition is not unique.
For a given $QFT_{af}$, we may have several decompositions (see Fig. \ref{QFTQFT}): \begin{align} QFT_{af} = QFT_{ano} \boxtimes_{\eM} \t\cR =QFT_{ano}' \boxtimes_{\eM'} \t\cR' . \end{align} Those different decompositions reveal different parts of the emergent symmetry of $QFT_{af}$. Here we have assumed that the bulks $\eM$, $\eM'$ and the boundaries $\t\cR$, $\t\cR'$ have an infinite energy gap, and all the excitations on the boundaries $QFT_{ano}$ and $QFT_{ano}'$ have finite energies. Those excitations, plus the possible degenerate ground states (\ie the global excitations) from $\eM$, $\eM'$, $\t\cR$, and $\t\cR'$, give rise to the finite energy excitations of $QFT_{af}$. In other words, we have assumed that $QFT_{ano}$ is a $\one$-condensed boundary of $\eM$ or a nearly $\one$-condensed boundary of $\eM$. By ``nearly $\one$-condensed'', we mean that if $QFT_{ano}$ is induced by some non-trivial condensations, those condensations are assumed to be weak and only lead to small energy gaps. Similarly, we have assumed that $QFT_{ano}'$ is a $\one$-condensed boundary of $\eM'$ or a nearly $\one$-condensed boundary of $\eM'$. The decomposition that gives rise to the \emph{largest} $\eM$ reveals the maximal \textsf{categorical symmetry}\ in the state $QFT_{af}$. But what is the meaning of a larger \textsf{categorical symmetry}\ (\ie a larger topological order in one higher dimension)? Here we define a topological order to be larger if it has a higher total quantum dimension. Let us review the notion of quantum dimension. A topological order $\eM$ can have two kinds of excitations: elementary excitations and descendant excitations.\cite{KW1458} The descendant excitations are formed by the condensation of elementary excitations. Let us label all the elementary excitations by $a$. Each elementary excitation has a quantum dimension $d_a$ describing the number of its internal degrees of freedom. For example, a spin-1/2 particle has a quantum dimension $d=2$. The total quantum dimension of $\eM$ is $D^2 = \sum_{a\in \eM} d_a^2$. So the maximal \textsf{categorical symmetry}\ corresponds to the topological order $\eM$ with the largest total quantum dimension. This leads to a definition of maximal \textsf{categorical symmetry}\ for an anomaly-free gapless state. Note that the three gapless states $QFT_{af}$, $QFT_{ano}$, $QFT_{ano}'$ are \emph{local low energy equivalent}, since they only differ by the stacking of gapped states with large energy gaps. Here, we propose that \frmbox{the local-low-energy-equivalent classes of gapless liquid states are largely characterized by their emergent maximal \textsf{categorical symmetries}. } In \Rf{W0213}, the notion of projective symmetry group (PSG) was introduced to characterize gapless states (as well as gapped states). The maximal \textsf{categorical symmetry}\ is a much improved version of the PSG, which can characterize gapless states much more completely. \section{Examples and constructions of maximal \textsf{categorical symmetries}} In this section, we give some 1+1D examples of the holographic picture described above. In 1+1D, anomaly-free gapless states (\ie CFTs) are described by modular invariant partition functions: \begin{align} Z^{af}(\tau+1)= Z^{af}(-1/\tau)=Z^{af}(\tau).
\end{align} \subsection{1+1D Ising critical point} \subsubsection{Modular invariant and modular covariant partition functions} For example, the 1+1D CFT describing the $\Z_2$ symmetry breaking transition, denoted as $Is_{af}$, has the following modular invariant partition function on a ring-like space \cite{CFT12}: \begin{align} Z^{af}_{Is}(\tau,\bar \tau) = |\chi^\text{Is}_0(\tau)|^2+|\chi^\text{Is}_\frac12(\tau)|^2 + |\chi^\text{Is}_{\frac1{16}}(\tau)|^2 \end{align} where $\chi^\text{Is}_h(\tau)$ are the conformal characters of the Ising CFT (the $(4,3)$ minimal model), and the subscript $h$ is the scaling dimension of the corresponding primary field. \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{IsingZ2} \end{center} \caption{A decomposition of an anomaly-free Ising critical point $Is_{af}$ exposes an emergent symmetry $\Z_2$, as well as the emergent \textsf{categorical symmetry}: $\eM =\eG\mathrm{au}_{\Z_2}$. The symmetry $\Z_2$ is described by $\cR =\mathcal{R}\mathrm{ep}_{\Z_2}$. The dual of $\cR$ is $\t\cR =\mathcal{V}\mathrm{ec}_{\Z_2}$. } \label{IsingZ2} \end{figure} To reveal the emergent \textsf{categorical symmetry}, following \Rf{JW191213492}, we restrict to the sub-Hilbert space of $\Z_2$-invariant states. Restricting to the symmetric Hilbert space converts the symmetry into a non-invertible gravitational anomaly, since the symmetric sub-Hilbert space $\cV_\text{symm}$ does not have a tensor product decomposition, $\cV_\text{symm} \neq \otimes_i \cV_i$, where $\cV_i$ is the local Hilbert space on site $i$. The partition function in the symmetric sub-Hilbert space is given by \begin{align} Z(\tau,\bar \tau) = |\chi^\text{Is}_0(\tau)|^2+|\chi^\text{Is}_\frac12(\tau)|^2 \end{align} which is not modular invariant. But it is part of a 4-component partition function \cite{JW190513279}: \begin{align} \label{Z2cri} \begin{pmatrix} Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};\one}(\tau,\bar \tau)\\ Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};e}(\tau,\bar \tau)\\ Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};m}(\tau,\bar \tau)\\ Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};f}(\tau,\bar \tau)\\ \end{pmatrix} &= \begin{pmatrix} |\chi^\text{Is}_0(\tau)|^2+|\chi^\text{Is}_\frac12(\tau)|^2\\ |\chi^\text{Is}_{\frac1{16}}(\tau)|^2\\ |\chi^\text{Is}_{\frac1{16}}(\tau)|^2\\ \chi^\text{Is}_0(\tau) \bar \chi^\text{Is}_\frac12(\tau) +\chi^\text{Is}_\frac12(\tau) \bar \chi^\text{Is}_0(\tau) \\ \end{pmatrix} , \end{align} which is modular covariant \begin{align} \label{STtrans} Z_a(\tau+1) = T_{ab} Z_b(\tau), \ \ \ \ Z_a(-1/\tau) = S_{ab} Z_b(\tau), \end{align} with the $S,T$ matrices given by \begin{align} \label{Z2STmat} T^{\eG\mathrm{au}_{\Z_2}}&= \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1 \end{pmatrix} , & S^{\eG\mathrm{au}_{\Z_2}}&=\frac12 \begin{pmatrix} 1&1&1&1\\ 1&1&-1&-1\\ 1&-1&1&-1\\ 1&-1&-1&1 \end{pmatrix} \end{align} The above four-component partition function is the partition function of an anomalous CFT, denoted as $Is_{\eG\mathrm{au}_{\Z_2}}$, which can be viewed as a gapless boundary of the 2+1D $\Z_2$ topological order (\ie the 2+1D $\Z_2$ gauge theory) denoted by $\eG\mathrm{au}_{\Z_2}$ (see Fig. \ref{IsingZ2}). The $\Z_2$ bulk topological order $\eG\mathrm{au}_{\Z_2}$ has four types of excitations, $\one,e,m,f$, where $\one,e,m$ are bosons and $f$ is a fermion. $e,m,f$ have mutual $\pi$ statistics between them. They have the following fusion rules \begin{align} e\otimes e = m\otimes m = f\otimes f = \one,\ \ \ e\otimes m = f.
\end{align} The above fusion rules imply the mod-2 conservation of $e$-particles, $m$-particles, and $f$-particles, which leads to the three $\Z_2$ symmetries discussed in the introduction. For the original $\Z_2$ symmetry, the $e$-particles and the $f$-particles carry its charge, while the $m$-particles do not carry the $\Z_2$ charge. For the dual $\t\Z_2$ symmetry, the $m$-particles and the $f$-particles carry its charge, while the $e$-particles do not carry the $\t\Z_2$ charge. For the $\Z_2^f$ symmetry, the $m$-particles and the $e$-particles carry its charge, while the $f$-particles do not carry the $\Z_2^f$ charge. Thus we may also denote the $\Z_2$ symmetry as the $\Z_2^m$ symmetry and the $\t\Z_2$ symmetry as the $\Z_2^e$ symmetry. The $\Z_2$ topological order $\eG\mathrm{au}_{\Z_2}$ is characterized by the $S,T$ matrices in \eqn{Z2STmat}. Such a topological order in one higher dimension is the non-invertible gravitational anomaly converted from the $\Z_2$ symmetry, which is also referred to as a \textsf{categorical symmetry}. We see that the $S,T$ matrices for the topological order in one higher dimension constrain the partition function of the 1+1D CFT via the modular covariance condition \eq{STtrans}. This is how a \textsf{categorical symmetry}\ largely determines a gapless state. The above results can also be obtained within the 1+1D CFT, if we consider the following four partition functions with $\Z_2$-twisted boundary conditions \cite{CY180204445}, \begin{align}\label{Z2bc} &Z_{++}=\zb,\quad Z_{+-}=\zbh,\nonumber \\ &Z_{-+}= \zbv, \quad Z_{--}= \zbx , \end{align} where the vertical direction is the time direction. We find \begin{align} \label{IsingZ1} Z_{++}(\tau) &= |\chi^\text{Is}_0|^2+|\chi^\text{Is}_\frac{1}{2}|^2 +|\chi^\text{Is}_\frac{1}{16}|^2 \\ Z_{+-}(\tau) &= |\chi^\text{Is}_0|^2+|\chi^\text{Is}_\frac{1}{2}|^2 -|\chi^\text{Is}_\frac{1}{16}|^2 \nonumber \\ Z_{-+}(\tau) &= |\chi^\text{Is}_\frac{1}{16}|^2 +\chi^\text{Is}_0 \bar \chi^\text{Is}_\frac{1}{2} + \chi^\text{Is}_\frac12 \bar \chi^\text{Is}_0 \nonumber \\ Z_{--}(\tau) &= |\chi^\text{Is}_\frac{1}{16}|^2 - \chi^\text{Is}_0 \bar \chi^\text{Is}_\frac{1}{2} - \chi^\text{Is}_\frac12 \bar \chi^\text{Is}_0 \nonumber \end{align} In the $G$-symmetry-twist basis of partition functions, the $S$ and $T$ matrices for the modular transformations are \begin{equation}\label{ZpropG} \begin{split} Z_{g',h'}(-1/\tau) &= S_{(g',h'),(g,h)} Z_{g,h}(\tau),\\ Z_{g',h'}(\tau+1) &= T_{(g',h'),(g,h)} Z_{g,h}(\tau),\\ Z_{g',h'}(\tau) &= R_{(g',h'),(g,h)}(u) Z_{g,h}(\tau),\\ S_{(g',h'),(g,h)} &= \del_{(g',h'),(h^{-1},g)},\\ T_{(g',h'),(g,h)} &= \del_{(g',h'),(g,hg)},\\ R_{(g',h'),(g,h)}(u) &= \del_{(g',h'),(ugu^{-1},uhu^{-1})} , \end{split} \end{equation} where \begin{align} g,h,g',h'\in G, \ \ \ gh=hg,\ \ \ g'h'=h'g', \end{align} describe the symmetry twists of the symmetry group $G$. For $G=\Z_2=\{+,-\}$, we find \begin{align} \label{Z2ST1} S=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} ,\ \ T=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} ,\ \ R=1 . \end{align} This way, we can obtain modular covariant multi-component partition functions for 1+1D CFTs with symmetry. Including the symmetry twists and considering modular covariant multi-component partition functions is a way to expose the symmetry in a CFT. The modular invariant single-component partition function corresponds to the point of view that ignores the symmetry.
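The structure of \eqn{ZpropG} can be verified mechanically. The sketch below builds the $S$ and $T$ matrices from the rules $(g,h)\to(h^{-1},g)$ and $(g,h)\to(g,hg)$ for $G=\Z_2$ in the basis $(Z_{++},Z_{+-},Z_{-+},Z_{--})$, reproducing \eqn{Z2ST1} and checking the modular-group relations $S^2=(ST)^3=1$.
\begin{verbatim}
# Build the symmetry-twist S and T matrices of eqn (ZpropG) for G = Z2,
# in the basis (Z_{++}, Z_{+-}, Z_{-+}, Z_{--}).
import numpy as np
from itertools import product

G = [1, -1]                  # Z2 realized as {+1, -1} under multiplication
basis = list(product(G, G))  # pairs (g, h); ordering ++, +-, -+, --

def delta_matrix(rule):
    # matrix M with M[(g',h'), (g,h)] = 1 iff (g', h') = rule(g, h)
    M = np.zeros((4, 4), dtype=int)
    for col, gh in enumerate(basis):
        M[basis.index(rule(*gh)), col] = 1
    return M

S = delta_matrix(lambda g, h: (h, g))      # (g,h) -> (h^{-1}, g); h^{-1} = h
T = delta_matrix(lambda g, h: (g, h * g))  # (g,h) -> (g, hg)

print(S)  # reproduces the S of eqn (Z2ST1)
print(T)  # reproduces the T of eqn (Z2ST1)
I = np.eye(4, dtype=int)
assert (S @ S == I).all()
assert (np.linalg.matrix_power(S @ T, 3) == I).all()
\end{verbatim}
Conjugating these matrices with the basis change to the quasiparticle basis defined in the next paragraph reproduces the anyon-basis $S,T$ of \eqn{Z2STmat}.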
We note that each component of the partition function is a series in $q=\ee^{\ii 2\pi \tau}$ and $\bar q$, times a factor $q^{-\frac c {24} + h} \bar q^{-\frac {\bar c} {24} + \bar h}$. Here $c,\bar c$ are the central charges for the right movers and left movers, and $h,\bar h$ are the right and left scaling dimensions of the primary field of the corresponding sector. We can choose a different basis where the expansion coefficients are all non-negative integers. Such a basis is the so-called quasiparticle basis: \begin{align} Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};\one} =&\frac{Z_{++}+Z_{+-}}{2},\ \ Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};e} =\frac{Z_{++}-Z_{+-}}{2} \\ Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};m} =&\frac{Z_{-+}+Z_{--}}{2},\ \ Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};f} =\frac{Z_{-+}-Z_{--}}{2}. \nonumber \end{align} The partition functions in the quasiparticle basis\cite{JW190513279} are given by \eqn{Z2cri}, and transform as \eqn{STtrans}, with $S,T$ given by \eqn{Z2STmat}. Note that $T$ is always diagonal in the quasiparticle basis. This example demonstrates how to convert a symmetry into a non-invertible gravitational anomaly (characterized by the $S,T$ matrices for the topological order in one higher dimension). We can view a global symmetry as a non-invertible gravitational anomaly, \ie as a topological order in one higher dimension. Viewing the CFT at the $\Z_2$ symmetry breaking transition as the gapless boundary of the 2+1D $\Z_2$ topological order not only allows us to see the $\Z_2$ symmetry, but also allows us to see two additional symmetries, $\t\Z_2$ and $\Z_2^f$. \subsubsection{The decomposition in terms of partition functions} Using this explicit example, we can explain the decomposition (see Fig. \ref{IsingZ2}) \begin{align} Is_{af} = Is_{\eG\mathrm{au}_{\Z_2}} \boxtimes_{\eM} \t\cR = Is_{\eG\mathrm{au}_{\Z_2}} \boxtimes_{\eG\mathrm{au}_{\Z_2}} \mathcal{V}\mathrm{ec}_{\Z_2} \end{align} in more detail. The decomposition reveals an emergent $\Z_2$ symmetry, which is described by the fusion 1-category $\cR=\mathcal{R}\mathrm{ep}_{\Z_2}$ (formed by the $\Z_2$ charges $e$), which is the dual of $\t\cR = \mathcal{V}\mathrm{ec}_{\Z_2}$. In fact, $\t\cR = \mathcal{V}\mathrm{ec}_{\Z_2}$ itself describes a symmetry $\t \Z_2$, which is the dual of the $\Z_2$ symmetry. The fusion 1-category $\t\cR=\mathcal{V}\mathrm{ec}_{\Z_2}$ is formed by the $\Z_2$ symmetry-breaking domain-wall $m$.\cite{FT180600008,JW191213492} A gapped boundary is described by a $\tau$-independent multi-component partition function $Z^\text{gapped}_a$, which is modular covariant: \begin{align} Z^\text{gapped}_a = T_{ab} Z^\text{gapped}_b, \ \ \ \ Z^\text{gapped}_a = S_{ab} Z^\text{gapped}_b. \end{align} The gapped boundary $\t\cR=\mathcal{V}\mathrm{ec}_{\Z_2}$ in Fig. \ref{IsingZ2} is an $e$-condensed boundary (so that the boundary excitations are given by the $m$'s). Such a $\mathcal{V}\mathrm{ec}_{\Z_2}$-boundary is described by the following constant multi-component partition function \begin{align} \label{VecZ2} \begin{pmatrix} Z^{\eG\mathrm{au}_{\Z_2}}_{e\text{-cnd};\one}\\ Z^{\eG\mathrm{au}_{\Z_2}}_{e\text{-cnd};e}\\ Z^{\eG\mathrm{au}_{\Z_2}}_{e\text{-cnd};m}\\ Z^{\eG\mathrm{au}_{\Z_2}}_{e\text{-cnd};f}\\ \end{pmatrix} &= \begin{pmatrix} 1\\ 1\\ 0\\ 0\\ \end{pmatrix} , \end{align} where $Z^{\eG\mathrm{au}_{\Z_2}}_{e\text{-cnd};e}=1$ indicates the $e$-condensation, and $Z^{\eG\mathrm{au}_{\Z_2}}_{e\text{-cnd};\one}=1$ indicates the $\one$-condensation on the boundary.
In fact, the trivial particle $\one$ always condenses on the boundary, and $Z^{\eG\mathrm{au}_{\Z_2}}_{e\text{-cnd};\one}$ is always a positive integer, describing the ground state degeneracy of the boundary. Now, the formal decomposition $ Is_{af} = Is_{\eG\mathrm{au}_{\Z_2}} \boxtimes_{\eG\mathrm{au}_{\Z_2}} \mathcal{V}\mathrm{ec}_{\Z_2} $ has an explicit meaning: \begin{align} Z^{af}_{Is} (\tau,\bar\tau) = \sum_{a=\{\one,e,m,f\}} Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};a} (\tau,\bar\tau) (Z^{\eG\mathrm{au}_{\Z_2}}_{e\text{-cnd};a} )^* . \end{align} \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{IsingtZ2} \end{center} \caption{ The second decomposition of an anomaly-free Ising critical point $Is_{af}$ exposes an emergent dual-symmetry $\t\Z_2$: with $\cR =\mathcal{V}\mathrm{ec}_{\Z_2}$ and its dual $\t\cR =\mathcal{R}\mathrm{ep}_{\Z_2}$, as well as the emergent \textsf{categorical symmetry}: $\eM =\eG\mathrm{au}_{\Z_2}$. } \label{IsingtZ2} \end{figure} The Ising critical point $Is_{af}$ has another decomposition (see Fig. \ref{IsingtZ2}) \begin{align} Is_{af} = Is_{\eG\mathrm{au}_{\Z_2}} \boxtimes_{\eM} \t\cR = Is_{\eG\mathrm{au}_{\Z_2}} \boxtimes_{\eG\mathrm{au}_{\Z_2}} \mathcal{R}\mathrm{ep}_{\Z_2} \end{align} This second decomposition reveals an emergent $\t\Z_2$ symmetry, which is the dual of the $\Z_2$ symmetry and is described by the fusion 1-category $\cR=\mathcal{V}\mathrm{ec}_{\Z_2}$ (formed by the $\Z_2$ symmetry-breaking domain-wall $m$). The dual of the $\t\Z_2$ symmetry is the $\Z_2$ symmetry, which is described by the fusion 1-category $\t\cR=\mathcal{R}\mathrm{ep}_{\Z_2}$ (formed by the $\Z_2$ charges $e$). The gapped boundary $\t\cR=\mathcal{R}\mathrm{ep}_{\Z_2}$ in Fig. \ref{IsingtZ2} is an $m$-condensed boundary (so that the boundary excitations are given by the $e$'s). Such a $\mathcal{R}\mathrm{ep}_{\Z_2}$-boundary is described by the following constant multi-component partition function \begin{align} \label{RepZ2} \begin{pmatrix} Z^{\eG\mathrm{au}_{\Z_2}}_{m\text{-cnd};\one}\\ Z^{\eG\mathrm{au}_{\Z_2}}_{m\text{-cnd};e}\\ Z^{\eG\mathrm{au}_{\Z_2}}_{m\text{-cnd};m}\\ Z^{\eG\mathrm{au}_{\Z_2}}_{m\text{-cnd};f}\\ \end{pmatrix} &= \begin{pmatrix} 1\\ 0\\ 1\\ 0\\ \end{pmatrix} , \end{align} where $Z^{\eG\mathrm{au}_{\Z_2}}_{m\text{-cnd};m}=1$ indicates the $m$-condensation on the boundary. The formal decomposition $ Is_{af} = Is_{\eG\mathrm{au}_{\Z_2}} \boxtimes_{\eG\mathrm{au}_{\Z_2}} \mathcal{R}\mathrm{ep}_{\Z_2} $ implies the following relation between the partition functions: \begin{align} Z^{af}_{Is} (\tau,\bar\tau) = \sum_{a=\{\one,e,m,f\}} Z^{\eG\mathrm{au}_{\Z_2}}_{\one\text{-cnd};a} (\tau,\bar\tau) (Z^{\eG\mathrm{au}_{\Z_2}}_{m\text{-cnd};a})^* . \end{align} \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{CFTdIs} \end{center} \caption{ The third decomposition of an anomaly-free Ising critical point $Is_{af}$ exposes the maximal emergent \textsf{categorical symmetry}: $\eM =\eM_\text{dIs}$. } \label{CFTdIs} \end{figure} \subsubsection{Maximal \textsf{categorical symmetry}} The above two decompositions only reveal the emergent $\Z_2$ symmetry or the dual $\t\Z_2$ symmetry. Their associated \textsf{categorical symmetry}\ is $\eM = \eG\mathrm{au}_{\Z_2}$, which is not the maximal \textsf{categorical symmetry}. To reveal the emergent maximal \textsf{categorical symmetry}, we need to consider another decomposition \begin{align} Is_{af} = Is_\text{dIs} \boxtimes_{\eM_\text{dIs}} \cF_\text{Is} .
\end{align} Here $\eM_\text{dIs} = \eM_\text{Is} \boxtimes \bar \eM_\text{Is} $ is the 2+1D double-Ising topological order, which has 9 anyons labeled by $(h,\bar h)$, $h,\bar h=0,\frac12, \frac1{16}$. $h=0,\frac12, \frac1{16}$ correspond to the three anyons $\one,\psi,\si$ in the Ising topological order $\eM_\text{Is}$: \begin{align} \begin{matrix} \text{anyons}: & \one & \psi & \si \\ d_a: & 1 & 1 & \sqrt2 \\ h_a: & 0 & \frac12 & \frac1{16} \\ \end{matrix} \end{align} where $d_a$ is the quantum dimension and $h_a$ is the topological spin of the corresponding anyon. $Is_\text{dIs}$ is an anomalous CFT (an $\one$-condensed boundary of $\eM_\text{dIs}$), which is described by the following multi-component partition function, one component for each anyon $(h,\bar h)$: \begin{align} Z^{\eM_\text{dIs}}_{\one\text{-cnd};(h,\bar h)}(\tau,\bar \tau) = \chi^\text{Is}_{h}(\tau) \bar \chi^\text{Is}_{\bar h}(\bar \tau),\ \ h,\bar h=0,\frac12, \frac1{16} . \end{align} $\cF_\text{Is}$ is the gapped boundary of $\eM_\text{dIs}$, which is described by the following modular covariant multi-component constant partition function: \begin{align} Z^{\eM_\text{dIs}}_{\cF_\text{Is};(0,0)} &= Z^{\eM_\text{dIs}}_{\cF_\text{Is};(\frac12,\frac12)} = Z^{\eM_\text{dIs}}_{\cF_\text{Is};(\frac1{16},\frac1{16})} =1, \nonumber\\ \text{ others } &= 0. \end{align} In fact, $\cF_\text{Is}$ is the fusion 1-category formed by $\one,\psi,\si$. The relation between the partition functions \begin{align} Z^{af}_{Is} (\tau,\bar\tau) = \sum_{h,\bar h} Z^{\eM_\text{dIs}}_{\one\text{-cnd};(h,\bar h)} (\tau,\bar\tau) (Z^{\eM_\text{dIs}}_{\cF_\text{Is};(h,\bar h)})^* \end{align} confirms the decomposition $ Is_{af} = Is_\text{dIs} \boxtimes_{\eM_\text{dIs}} \cF_\text{Is} $. We would like to point out that $\cF_\text{Is}$ is not a local fusion 1-category, since there is no dual local fusion 1-category $\t\cR$ that satisfies $\cF_\text{Is} \boxtimes_{\eM_\text{dIs}} \t\cR = \mathcal{V}\mathrm{ec}$. The fact that $\si$ in $\cF_\text{Is}$ has a non-integral quantum dimension $\sqrt 2$ also implies that $\cF_\text{Is}$ is not a local fusion 1-category.\cite{TW191202817} Therefore, $\cF_\text{Is}$ does not describe an anomaly-free algebraic symmetry. Thus, \frmbox{the decomposition $ Is_{af} = Is_\text{dIs} \boxtimes_{\eM_\text{dIs}} \cF_\text{Is} $ reveals an emergence of the \textsf{categorical symmetry}\ $ \eM_\text{dIs} $, but there is no emergent anomaly-free algebraic symmetry for this emergent \textsf{categorical symmetry}. } This is an interesting example where emergent anomaly-free symmetries, even including those beyond groups and higher groups, can no longer properly describe the full emergent symmetry. One may try to use an emergent anomalous symmetry to properly describe the emergent symmetry. But anomalies for non-invertible symmetries are not easy to define. This example demonstrates that \textsf{categorical symmetry}\ is a simple, unified, and systematic way to describe the most general emergent symmetry. \subsection{1+1D critical points for models with $G$ symmetry or dual $\t G$ symmetry} \subsubsection{Two 1+1D lattice models with group-like symmetry $G$ and algebraic symmetry $\t G$} We consider two 1+1D lattice models on a ring, where the lattice sites are labeled by $i$ and the links by $ij$. In the first model, the physical degrees of freedom live on the vertices and are labeled by group elements $g$ of a finite group $G$. The many-body Hilbert space is spanned by the following local basis \begin{align} |\{g_i\}\>, \ \ \ g_i \in G.
\end{align} The Hamiltonian is given by \begin{align} \label{HG} H_G = - J \sum_{i} f(g_ig_{i+1}^{-1}) - \sum_i \sum_{h \in G} L_h(i), \end{align} where $ f(g)$ is a positive function that is peaked at $ g = \mathrm{id}$. Also, the operator $L_h(i)$ is given by \begin{align} L_h(i) |g_1,\cdots,g_i,\cdots,g_N\>= |g_1,\cdots,hg_i,\cdots,g_N\>. \end{align} The Hamiltonian $H_G$ has an on-site $G$ symmetry \begin{align} \label{Uh} U_h H_G = H_G U_h, \ \ \ U_h =\prod_i L_h(i). \end{align} We see that when $J \gg 1$, $H_G$ is in the symmetry breaking phase, and when $J \ll 1$, $H_G$ is in the symmetric phase. The second bosonic lattice model has degrees of freedom living on the links. On an oriented link $ij$ pointing from $i$-site to $j$ site, the degrees of freedom are labeled by $g_{ij} \in G$. The many-body Hilbert space has the following local basis \begin{align} |\{g_{ij}\}\>, \ \ \ g_{ij} \in G. \end{align} Here, $g_{ij}$'s on links with opposite orientations satisfy \begin{align} g_{ij}=g_{ji}^{-1}. \end{align} The second model is related to the first model. A state $|g_1,\cdots,g_i,\cdots,g_N\>$ in the first model is mapped to a state $|\cdots,g_{i,i+1},\cdots\>$ in the second model where $g_{i,i+1}=g_ig_{i+1}^{-1}$. This connection allows us to design the Hamiltonian of the second model as \begin{align} \label{HtG} H_{\t G} = &- J \sum_{i} f(g_{i,i+1}) - \sum_i \sum_{h \in G} Q_h(i) , \end{align} where the star term $Q_h(i)$ acts on the two links $(i,i+1)$ and $(i-1,i)$: \begin{align} \label{Qh} &\ \ \ \ Q_h(i) |\cdots,g_{i-1,i} , g_{i,i+1},\cdots\> \nonumber\\ & = |\cdots, g_{i-1,i}h^{-1}, hg_{i,i+1} , \cdots\>. \end{align} The second model has an algebraic symmetry, denoted as $\t G$ \cite{KZ200514178}, \begin{align}\label{Wq} W_q H_{\t G} = H_{\t G} W_q ,~~~ W_q = \Tr \prod_{i} R_q(g_{i,i+1}), \end{align} where $R_q$ is an irreducible representation of $G$. We see that the algebraic symmetry $\t G$ is generated by the Wilson loop operators $W_q$, for all irreducible representations $q$. We note that the algebraic symmetry $\t G$ is different from the usual symmetry characterized by a group $G$, when $G$ is non-Abelian. But when $G$ is Abelian the algebraic symmetry \begin{table*}[t] \caption{The point-like excitations and their fusion rules in 2+1D $\eG\mathrm{au}_{S_3}$ topological order (\ie $S_3$ gauge theory with charge excitations). The $S_3$ group are generated by $(1,2)$ and $(1,2,3)$. Here $\bm 1$ is the trivial excitation. $a_1$ and $a_2$ are pure $S_3$ charge excitations, where $a_1$ corresponds to the 1-dimensional representation, and $a_2$ the 2-dimensional representation of $S_3$. $b$ and $c$ are pure $S_3$ flux excitations, where $b$ corresponds to the conjugacy class $\{(1,2,3),(1,3,2)\}$, and $c$ conjugacy class $\{(1,2),(2,3),(1,3)\}$. $b_1$, $b_2$, and $c_1$ are charge-flux bound states. $d,h$ are the quantum dimension and the topological spin of an excitation. 
} \label{S3FusionRules} \setlength\extrarowheight{-2pt} \setlength{\tabcolsep}{2pt} \centering \begin{tabular}{|c | c|c|c|c|c|c|c|c|} \hline $d,h$ & $1,0$ & $1,0$ & $2,0$ & $2,0$ & $2,\frac13$ & $2,-\frac13$ & $3,0$ & $3,\frac12$\\ \hline $\otimes$ & $\bm 1$ & $a_1$ & $a_2$ & $b$ & $b_1$ & $b_2$ & $c$ & $c_1$ \\ \hline $ \bm 1 $ & $\bm 1$ & $a_1$ & $a_2$ & $b$ & $b_1$ & $b_2$ & $c$ & $c_1$ \\ $a_1$ & $a_1$ & $\bm 1$ & $a_2$ & $b$ &$b_1$ & $b_2$ & $c_1$ & $c$ \\ $a_2$ & $a_2$ & $a_2$ & $\bm 1\oplus a_1\oplus a_2$ & $b_1\oplus b_2$ & $b\oplus b_2$ & $b\oplus b_1$ & $c\oplus c_1$ & $c\oplus c_1$\\ $b$ & $b$ & $b$ & $b_1\oplus b_2$ & $\bm 1\oplus a_1\oplus b$ & $b_2\oplus a_2$ & $b_1\oplus a_2$ & $c\oplus c_1$ & $c\oplus c_1$ \\ $b_1$ & $b_1$ & $b_1$ & $b\oplus b_2$ & $b_2\oplus a_2$ & $\bm 1\oplus a_1\oplus b_1$ & $b\oplus a_2$ & $c\oplus c_1$ & $c\oplus c_1$ \\ $b_2$ & $b_2$ & $b_2$ & $b\oplus b_1$ & $b_1\oplus a_2$ & $b\oplus a_2$ & $\bm 1\oplus a_1\oplus b_2$ & $c\oplus c_1$ & $c\oplus c_1$ \\ $c$ & $c$ & $c_1$ & $c\oplus c_1$ & $c\oplus c_1$ & $c\oplus c_1$ & $c\oplus c_1$ & $\bm 1\oplus a_2\oplus b\oplus b_1\oplus b_2$ & $a_1 \oplus a_2\oplus b\oplus b_1\oplus b_2$ \\ $c_1$ & $c_1$ & $c$ & $c\oplus c_1$ & $c\oplus c_1$ & $c\oplus c_1$ & $c\oplus c_1$ & $a_1 \oplus a_2\oplus b\oplus b_1\oplus b_2$ & $\bm 1\oplus a_2\oplus b\oplus b_1\oplus b_2$ \\ \hline \end{tabular} \end{table*} $\t G$ happens to be the usual symmetry $G$. \subsubsection{Critical points and their holographic picture} Let us assume that for a proper function $f(g)$, the model $H_G$ has a continuous spontaneous symmetry breaking transition at $J=J_c$. Due to the duality, the model $H_{\t G}$ also has a continuous transition at $J=J_c$. What are the partition functions for those two critical points? \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{CFTCFTG} \end{center} \caption{ A decomposition of an anomaly-free critical point $CFT_{af}$ of the model $H_G$ \eq{HG} exposes an emergent symmetry $G$, as well as the emergent \textsf{categorical symmetry}: $\eM =\eG\mathrm{au}_{G}$. The symmetry $G$ is described by $\cR =\mathcal{R}\mathrm{ep}_{G}$. The dual of $\cR$ is $\t\cR =\mathcal{V}\mathrm{ec}_{G}$. } \label{CFTCFTG} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.8]{CFTCFTtG} \end{center} \caption{ A decomposition of an anomaly-free critical point $CFT_{af}'$ of the model $H_{\t G}$ \eq{HtG} exposes an emergent algebraic symmetry $\t G$, as well as the emergent \textsf{categorical symmetry}: $\eM =\eG\mathrm{au}_{G}$. The algebraic symmetry $\t G$ is described by $\cR =\mathcal{V}\mathrm{ec}_{G}$. The dual of $\cR$ is $\t\cR =\mathcal{R}\mathrm{ep}_{G}$. } \label{CFTCFTtG} \end{figure} Both the symmetry $G$ and the dual algebraic symmetry $\t G$ have the same \textsf{categorical symmetry}, given by the 2+1D $G$-gauge theory $\eG\mathrm{au}_G$ (see Table \ref{S3FusionRules} for $G=S_3$).\cite{JW191213492,KZ200514178} Therefore the critical point in the model $H_G$, denoted as $CFT_{af}$, is given by Fig. \ref{CFTCFTG}. This is because the symmetry $G$ is described by the local fusion 1-category $\cR = \mathcal{R}\mathrm{ep}_G$. The dual of $\cR = \mathcal{R}\mathrm{ep}_G$ is the local fusion 1-category $\t\cR = \mathcal{V}\mathrm{ec}_G$, which leads to Fig. \ref{CFTCFTG}. On the other hand, the critical point in the model $H_{\t G}$, denoted as $CFT'_{af}$, is given by Fig. \ref{CFTCFTtG}. This is because the algebraic symmetry $\t G$ is described by the local fusion 1-category $\cR = \mathcal{V}\mathrm{ec}_G$.
The dual of $\cR = \mathcal{V}\mathrm{ec}_G$ is the local fusion 1-category $\t\cR = \mathcal{R}\mathrm{ep}_G$, which leads to Fig. \ref{CFTCFTtG}. The anomalous gapless boundary $CFT_{ano}$ in Figs. \ref{CFTCFTG} and \ref{CFTCFTtG} is an $\one$-condensed boundary of $\eG\mathrm{au}_G$. For $G=S_3$, it is described by the following multi-component partition function, labeled by the anyons in $\eG\mathrm{au}_{S_3}$: \begingroup \allowdisplaybreaks \begin{align} \label{S3bdy} Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};\one} &= |\chi^{m6}_{0}|^2 + |\chi^{m6}_{3}|^2 + |\chi^{m6}_{\frac{2}{5}}|^2 + |\chi^{m6}_{\frac{7}{5}}|^2 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};a_1} &= \chi^{m6}_{0} \bar\chi^{m6}_{3} + \chi^{m6}_{3} \bar\chi^{m6}_{0} + \chi^{m6}_{\frac{2}{5}} \bar\chi^{m6}_{\frac{7}{5}} + \chi^{m6}_{\frac{7}{5}} \bar\chi^{m6}_{\frac{2}{5}} \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};a_2} &= |\chi^{m6}_{\frac{2}{3}}|^2 + |\chi^{m6}_{\frac{1}{15}}|^2 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};b} &= |\chi^{m6}_{\frac{2}{3}}|^2 + |\chi^{m6}_{\frac{1}{15}}|^2 \\ Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};b_1} &= \chi^{m6}_{0} \bar\chi^{m6}_{\frac{2}{3}} + \chi^{m6}_{3} \bar\chi^{m6}_{\frac{2}{3}} + \chi^{m6}_{\frac{2}{5}} \bar\chi^{m6}_{\frac{1}{15}} + \chi^{m6}_{\frac{7}{5}} \bar\chi^{m6}_{\frac{1}{15}} \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};b_2} &= \chi^{m6}_{\frac{2}{3}} \bar\chi^{m6}_{0} + \chi^{m6}_{\frac{2}{3}} \bar\chi^{m6}_{3} + \chi^{m6}_{\frac{1}{15}} \bar\chi^{m6}_{\frac{2}{5}} + \chi^{m6}_{\frac{1}{15}} \bar\chi^{m6}_{\frac{7}{5}} \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};c} &= |\chi^{m6}_{\frac{1}{8}}|^2 + |\chi^{m6}_{\frac{13}{8}}|^2 + |\chi^{m6}_{\frac{1}{40}}|^2 + |\chi^{m6}_{\frac{21}{40}}|^2 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};c_1} &= \chi^{m6}_{\frac{1}{8}} \bar\chi^{m6}_{\frac{13}{8}} + \chi^{m6}_{\frac{13}{8}} \bar\chi^{m6}_{\frac{1}{8}} + \chi^{m6}_{\frac{1}{40}} \bar\chi^{m6}_{\frac{21}{40}} + \chi^{m6}_{\frac{21}{40}} \bar\chi^{m6}_{\frac{1}{40}} . \nonumber \end{align} \endgroup where $ \chi^{m6}_{h}(\tau)$ is the conformal character of the $(6,5)$ minimal model, and $h$ is the scaling dimension of the corresponding primary field. The $(6,5)$ minimal model has a central charge $c=\frac45$. We can see the above boundary to be $\one$-condensed, because only the $\one$-component of the partition function contains the conformal character $|\chi^{m6}_{0}|^2$ for the identity primary field.\cite{CW220506244} The gapped boundary $\mathcal{V}\mathrm{ec}_{S_3}$ in Fig. \ref{CFTCFTG} is induced by condensing the condensable algebra $\cA_c = \one \oplus a_1 \oplus 2 a_2$ formed by $S_3$ charges. It is described by the following multi-component partition function: \begingroup \allowdisplaybreaks \begin{align} \label{VecS3} Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};\one} &= 1 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};a_1} &= 1 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};a_2} &= 2 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};b} &= 0 \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};b_1} &= 0 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};b_2} &= 0 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};c} &= 0 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};c_1} &= 0 \nonumber \end{align} \endgroup The gapped boundary $\mathcal{R}\mathrm{ep}_{S_3}$ in Fig.
\ref{CFTCFTtG} is induced by condensing the condensable algebra $\cA_f = \one \oplus b \oplus c$ formed by $S_3$ fluxes. It is described by the following multi-component partition function: \begingroup \allowdisplaybreaks \begin{align} \label{RepS3} Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};\one} &= 1 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};a_1} &= 0 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};a_2} &= 0 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};b} &= 1 \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};b_1} &= 0 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};b_2} &= 0 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};c} &= 1 \nonumber \\ Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};c_1} &= 0 \nonumber \end{align} \endgroup The critical point in the $G$-symmetric model $H_G$ is given by $CFT_{af}$ in Fig. \ref{CFTCFTG} via the decomposition $CFT_{af} = CFT_{ano} \boxtimes_{\eG\mathrm{au}_{S_3}} \mathcal{V}\mathrm{ec}_{S_3}$. Thus the modular invariant partition function for $CFT_{af}$ is given by \begin{align} & Z_{af} = \sum_a Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};a}(\tau,\bar\tau) (Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{V}\mathrm{ec}_{S_3};a})^* \\ &= |\chi^{m6}_{0} + \chi^{m6}_{3}|^2 + |\chi^{m6}_{\frac{2}{5}} + \chi^{m6}_{\frac{7}{5}}|^2 + 2 |\chi^{m6}_{\frac{2}{3}}|^2 + 2 |\chi^{m6}_{\frac{1}{15}}|^2 . \nonumber \end{align} The critical point in the $\t G$-symmetric model $H_{\t G}$ is given by $CFT'_{af}$ in Fig. \ref{CFTCFTtG} via the decomposition $CFT'_{af} = CFT_{ano} \boxtimes_{\eG\mathrm{au}_{S_3}} \mathcal{R}\mathrm{ep}_{S_3}$. Thus the modular invariant partition function for $CFT'_{af}$ is given by \begin{align} & Z_{af}' = \sum_a Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};a}(\tau,\bar\tau) (Z^{\eG\mathrm{au}_{S_3}}_{\mathcal{R}\mathrm{ep}_{S_3};a} )^* \\ &= |\chi^{m6}_{0}|^2 + |\chi^{m6}_{3}|^2 + |\chi^{m6}_{\frac{2}{5}}|^2 + |\chi^{m6}_{\frac{7}{5}}|^2 + |\chi^{m6}_{\frac{2}{3}}|^2 + |\chi^{m6}_{\frac{1}{15}}|^2 \nonumber\\ &\ \ \ \ + |\chi^{m6}_{\frac{1}{8}}|^2 + |\chi^{m6}_{\frac{13}{8}}|^2 + |\chi^{m6}_{\frac{1}{40}}|^2 + |\chi^{m6}_{\frac{21}{40}}|^2 . \nonumber \end{align} Through the above examples, we see that the holographic picture of emergent symmetry, Fig. \ref{RsymmC}, can give rise to concrete partition functions for the critical points in the model $H_G$ and the model $H_{\t G}$. The different choices of the gapped boundary $\t\cR$ give rise to different dual lattice models. Although the partition functions for the model $H_G$ and the model $H_{\t G}$ are different, the partition function for the $G$-symmetric sub-Hilbert space of the model $H_G$ and the partition function for the $\t G$-symmetric sub-Hilbert space of the model $H_{\t G}$ are the same; both are given by the $\one$-component of the multi-component partition function $ Z^{\eG\mathrm{au}_{S_3}}_{\one\text{-cnd};\one} = |\chi^{m6}_{0}|^2 + |\chi^{m6}_{3}|^2 + |\chi^{m6}_{\frac{2}{5}}|^2 + |\chi^{m6}_{\frac{7}{5}}|^2 $. This implies that the model $H_G$ and the model $H_{\t G}$ are identical within their respective symmetric sub-Hilbert spaces. In other words, the model $H_G$ and the model $H_{\t G}$ are local low-energy equivalent.
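The pairing in the above decompositions is purely mechanical once the multi-component partition functions are known. As an illustration (a minimal sketch of our own, not taken from the cited references; the data structures and the helper name \texttt{sandwich} are ours), the following Python script pairs the $\one$-condensed bulk components \eq{S3bdy} with the gapped-boundary multiplicities \eq{VecS3} and \eq{RepS3}, and reproduces $Z_{af}$ and $Z_{af}'$ term by term:
\begin{verbatim}
from collections import Counter

# (h_left, h_right) pairs of (6,5) minimal model characters, one Counter
# per anyon sector, transcribed from Eq. (S3bdy); 'a' means chi^{m6}_a.
one_cnd = {
  '1':  Counter({('0','0'):1, ('3','3'):1,
                 ('2/5','2/5'):1, ('7/5','7/5'):1}),
  'a1': Counter({('0','3'):1, ('3','0'):1,
                 ('2/5','7/5'):1, ('7/5','2/5'):1}),
  'a2': Counter({('2/3','2/3'):1, ('1/15','1/15'):1}),
  'b':  Counter({('2/3','2/3'):1, ('1/15','1/15'):1}),
  'b1': Counter({('0','2/3'):1, ('3','2/3'):1,
                 ('2/5','1/15'):1, ('7/5','1/15'):1}),
  'b2': Counter({('2/3','0'):1, ('2/3','3'):1,
                 ('1/15','2/5'):1, ('1/15','7/5'):1}),
  'c':  Counter({('1/8','1/8'):1, ('13/8','13/8'):1,
                 ('1/40','1/40'):1, ('21/40','21/40'):1}),
  'c1': Counter({('1/8','13/8'):1, ('13/8','1/8'):1,
                 ('1/40','21/40'):1, ('21/40','1/40'):1}),
}
vec_s3 = {'1': 1, 'a1': 1, 'a2': 2}   # nonzero components of Eq. (VecS3)
rep_s3 = {'1': 1, 'b': 1, 'c': 1}     # nonzero components of Eq. (RepS3)

def sandwich(bulk, gapped):
    # pair bulk components with gapped-boundary multiplicities
    Z = Counter()
    for anyon, mult in gapped.items():
        for term, coeff in bulk[anyon].items():
            Z[term] += mult * coeff
    return Z

print(sandwich(one_cnd, vec_s3))  # reproduces Z_af, term by term
print(sandwich(one_cnd, rep_s3))  # reproduces Z_af'
\end{verbatim}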
In addition to $\cA_c =\one\oplus a_1\oplus 2 a_2$, $\cA_f =\one\oplus b\oplus c$, the $S_3$-gauge theory $\eG\mathrm{au}_{S_3}$ also has two other Lagrangian condensable algebras: $\t\cA_c =\one\oplus a_1\oplus 2 b$, $\t\cA_f =\one\oplus a_2\oplus c$. We note that the $S_3$-gauge theory $\eG\mathrm{au}_{S_3}$ has an automorphism that exchanges $a_2$ and $b$. The condensable algebras $\t\cA_c $, $\t\cA_f $ are generated from $\cA_c $, $\cA_f $ through the automorphism. Thus, we denote the boundary induced by $\t\cA_c $-condensation as $\t\mathcal{V}\mathrm{ec}_G$, and the boundary induced by $\t\cA_f $-condensation as $\t\mathcal{R}\mathrm{ep}_G$. Replacing the gapped boundaries $\mathcal{V}\mathrm{ec}_G$ and $\mathcal{R}\mathrm{ep}_G$ in Figs. \ref{CFTCFTG} and \ref{CFTCFTtG} by $\t\mathcal{V}\mathrm{ec}_G$ and $\t\mathcal{R}\mathrm{ep}_G$ gives us two other lattice models, denoted as $\t H_G$ and $\t H_{\t G}$. All four lattice models $H_G$, $H_{\t G}$, $\t H_G$, and $\t H_{\t G}$ are local low-energy equivalent. Because the two boundaries $\mathcal{V}\mathrm{ec}_G$ and $\t\mathcal{V}\mathrm{ec}_G$ are related by the automorphism, we believe that we can choose a proper lattice regularization such that $H_G$ and $\t H_G$ have the same form, \ie the lattice model is self-dual under the $a_2 \leftrightarrow b$ exchange. Similarly, we believe that we can choose a proper lattice regularization such that $H_{\t G}$ and $\t H_{\t G}$ have the same form. \subsubsection{Maximal \textsf{categorical symmetry}} The emergent \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}$ for the four models, $H_G$, $H_{\t G}$, $\t H_G$ and $\t H_{\t G}$, is not the maximal \textsf{categorical symmetry}. From the expression of the partition function for the critical point, we see that the maximal \textsf{categorical symmetry}\ is given by the double $(6,5)$-minimal model: $\eM_{dm6} =\eM_{m6} \boxtimes \bar \eM_{m6}$, where $\eM_{m6}$ is the topological order of a single $(6,5)$-minimal model with the following set of anyons: \begin{widetext} \begin{align} \begin{matrix} \text{anyons}\ (s,r): & (1,1) & (2,1) & (3,1) & (4,1) & (5,1) & (1,2) & (2,2) & (3,2) & (4,2) & (5,2) & \\ d_{(s,r)}: & 1 & \sqrt{3} & 2 & \sqrt{3} & 1 & \frac{1+\sqrt{5}}{2} & \frac{\sqrt {15}+\sqrt{3}}{2} & 1+\sqrt{5} & \frac{\sqrt{15}+\sqrt{3}}{2} & \frac{1+\sqrt{5}}{2} \\ h_{(s,r)}: & 0 & \frac{ 1}{8 } & \frac{ 2}{3 } & \frac{ 13}{8 } & 3 & \frac{ 2}{5 } & \frac{ 1}{40 } & \frac{ 1}{15 } & \frac{ 21}{40 } & \frac{ 7}{5} \\ \end{matrix} \end{align} \end{widetext} where we label anyons by $(s,r)$, $s=1,2,3,4,5$ and $r=1,2$. The scaling dimensions of the corresponding primary fields are given by \begin{align} h_{s,r} = \frac{(pr-qs)^2-(p-q)^2}{4pq},\ \ p=6,\ q=5.
\end{align} Using the conformal characters $\chi^{m6}_{s,r}(\tau) = \chi^{m6}_{h_{s,r}}(\tau)$, we can construct two modular invariant partition functions \begin{align} Z_{af}' &= \sum_{s,r} |\chi^{m6}_{s,r}(\tau)|^2 \\ &= |\chi^{m6}_{0}|^2 + |\chi^{m6}_{\frac{1}{8}}|^2 + |\chi^{m6}_{\frac{2}{3}}|^2 + |\chi^{m6}_{\frac{13}{8}}|^2 + |\chi^{m6}_{3}|^2 \nonumber\\ &\ \ \ \ + |\chi^{m6}_{\frac{2}{5}}|^2 + |\chi^{m6}_{\frac{1}{40}}|^2 + |\chi^{m6}_{\frac{1}{15}}|^2 + |\chi^{m6}_{\frac{21}{40}}|^2 + |\chi^{m6}_{\frac{7}{5}}|^2 , \nonumber \end{align} and \begin{align} & Z_{af} = \sum_{s=\text{odd},r} |\chi^{m6}_{s,r}(\tau)|^2 +\sum_{s=\text{odd},r} \chi^{m6}_{s,r}(\tau) \bar \chi^{m6}_{6-s,r}(\bar \tau) \nonumber\\ & = |\chi^{m6}_{0}|^2 + |\chi^{m6}_{\frac{2}{3}}|^2 + |\chi^{m6}_{3}|^2 + |\chi^{m6}_{\frac{2}{5}}|^2 + |\chi^{m6}_{\frac{1}{15}}|^2 + |\chi^{m6}_{\frac{7}{5}}|^2 \nonumber\\ &\ \ \ \ + \chi^{m6}_{0} \bar \chi^{m6}_{3} + |\chi^{m6}_{\frac{2}{3}} |^2 + \chi^{m6}_{3} \bar \chi^{m6}_{0} + \chi^{m6}_{\frac{2}{5}} \bar \chi^{m6}_{\frac{7}{5}} \nonumber\\ &\ \ \ \ + |\chi^{m6}_{\frac{1}{15}}|^2 + \chi^{m6}_{\frac{7}{5}} \bar \chi^{m6}_{\frac{2}{5}} \\ &= |\chi^{m6}_{0} + \chi^{m6}_{3}|^2 + |\chi^{m6}_{\frac{2}{5}} + \chi^{m6}_{\frac{7}{5}}|^2 + 2 |\chi^{m6}_{\frac{2}{3}}|^2 + 2 |\chi^{m6}_{\frac{1}{15}}|^2 . \nonumber \end{align} The above modular invariant partition functions happen to describe the critical points of the model $H_{\t G}$ and the model $H_G$, respectively. The partition function $Z_{af}'$ is obtained by choosing the gapped boundary $\t\cR$ in Fig. \ref{RsymmC} to be described by the following constant multi-component partition function \begin{align} Z^{\eM_{dm6}}_{h,h'} &= \del_{h,h'}, \ \ h,h' \in \{ 0 , \frac{ 1}{8 } , \frac{ 2}{3 } , \frac{ 13}{8 } , 3 , \frac{ 2}{5 } , \frac{ 1}{40 } , \frac{ 1}{15 } , \frac{ 21}{40 } , \frac{ 7}{5} \}. \end{align} The partition function $Z_{af}$ is obtained by choosing the gapped boundary $\t\cR$ to be described by \begin{align} &\ \ \ \ Z^{\eM_{dm6}}_{0,0} =Z^{\eM_{dm6}}_{3,3} =Z^{\eM_{dm6}}_{0,3} =Z^{\eM_{dm6}}_{3,0} \nonumber\\ & =Z^{\eM_{dm6}}_{\frac25,\frac25} =Z^{\eM_{dm6}}_{\frac75,\frac75} =Z^{\eM_{dm6}}_{\frac25,\frac75} =Z^{\eM_{dm6}}_{\frac75,\frac25} =1, \nonumber\\ & \ \ \ \ Z^{\eM_{dm6}}_{\frac23,\frac23} = Z^{\eM_{dm6}}_{\frac1{15},\frac1{15}} = 2, \ \ \text{other } Z^{\eM_{dm6}}_{h,h'} = 0. \end{align} The above two gapped boundaries are not described by local fusion 1-categories. Thus, the critical points in the four models, $H_G$, $H_{\t G}$, $\t H_G$ and $\t H_{\t G}$, have the same emergent maximal \textsf{categorical symmetry}\ $\eM_{dm6} =\eM_{m6} \boxtimes \bar \eM_{m6}$, without the associated emergent anomaly-free symmetry.\footnote{The associated emergent anomaly-free symmetry for a \textsf{categorical symmetry}\ $\eM$ is described by a fusion $n$-category $\cR$ that satisfies $\eZ(\cR) = \eM$, and there exists a $\t\cR$ such that $\eZ(\t\cR) = \eM$ and $\cR \boxtimes_{\eM} \t\cR = n\mathcal{V}\mathrm{ec}$. If such a fusion $n$-category $\cR$ does not exist, then the \textsf{categorical symmetry}\ $\eM$ has no associated emergent anomaly-free symmetry. } \subsection{Gapless states with anomalous $S_3$ symmetry} In this subsection, we consider 1+1D gapless states with anomalous $S_3$ symmetry. The 1+1D anomalous $S_3$ symmetries are classified by $H^3(S_3;{\mathbb{R}/\mathbb{Z}})=\Z_3\times\Z_2 \cong \Z_6$.\cite{CGL1314} We label those anomalies by $S_3^{(m)}$, $m\in \{0,1,2,3,4,5\}$.
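As a quick consistency check of the $(6,5)$ minimal model data used above (and used again for the anomalous $S_3^{(3)}$ case below), the Kac formula can be evaluated with exact rational arithmetic; the following short Python sketch (our own naming) reproduces the central charge and the ten scaling dimensions:
\begin{verbatim}
from fractions import Fraction

p, q = 6, 5                            # the (6,5) minimal model
c = 1 - Fraction(6 * (p - q)**2, p * q)
assert c == Fraction(4, 5)             # central charge c = 4/5

def h(s, r):
    # Kac formula, as in the equation above
    return Fraction((p*r - q*s)**2 - (p - q)**2, 4*p*q)

kac = {(s, r): h(s, r) for s in range(1, 6) for r in range(1, 3)}
# reproduces {0, 1/8, 2/3, 13/8, 3, 2/5, 1/40, 1/15, 21/40, 7/5}
print(sorted(set(kac.values())))
\end{verbatim}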
The \textsf{categorical symmetry}\ for an anomalous $S_3^{(m)}$ symmetry is given by a topological order $\eG\mathrm{au}_{S_3}^{(m)}$ that is described in the IR limit by the 2+1D Dijkgraaf-Witten gauge theory\cite{DW9093} with gauge charges. Note that the time reverse conjugation of an anomalous symmetry $S_3^{(m)}$ gives rise to another anomalous symmetry $S_3^{(-m \text{ mod } 6)}$. We will study the modular invariant partition function for those gapless states, and their emergent maximal \textsf{categorical symmetry}. \subsubsection{Anomalous $S_3^{(1)}$ symmetry} A gapless state for a lattice system with anomalous $S_3^{(1)}$ symmetry has the following decomposition \begin{align} CFT_{af} = CFT_{ano}\boxtimes_{\eG\mathrm{au}_{S_3}^{(1)}} \t\cR , \end{align} where the \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}^{(1)}$ (\ie the 2+1D $\eG\mathrm{au}_{S_3}^{(1)}$ topological order) has anyons given by \begin{align} \begin{matrix} \text{anyons}: & \one & a_1 & a_2 & b & b_1 & b_2 & c & c_1 \\ d_a: & 1 & 1 & 2 & 2 & 2 & 2 & 3 & 3 \\ s_a: & 0 & 0 & 0 & \frac19 & \frac49 & \frac79 & \frac14 & \frac34 \\ \end{matrix} \end{align} where anyon $a_1,a_2$ carry the $S_3$-charges. In fact $a_1$ carries the non-trivial 1-dimensional representation of $S_3$, and $a_2$ carries the 2-dimensional irreducible representation of $S_3$. If the gapless state does not break any \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}^{(1)}$, then $CFT_{ano}$ in the decomposition is given by a $\one$-condensed boundary of $\eG\mathrm{au}_{S_3}^{(1)}$, which is described by the following multi-component partition function: \begin{widetext} \begingroup \allowdisplaybreaks \begin{align} \label{S3a1one} Z_{\one\text{-cnd};\one}^{\eG\mathrm{au}_{S_3}^{(1)}} &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{1,0; 1,0; 1,0} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{2,1; 2,\frac{1}{4}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};a_1}^{\eG\mathrm{au}_{S_3}^{(1)}} &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{1,0; 2,\frac{1}{4}; 2,-\frac{1}{4}} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{2,1; 1,0; 1,0} \nonumber \\ Z_{\one\text{-cnd};a_2}^{\eG\mathrm{au}_{S_3}^{(1)}} &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{6,1; 1,0; 1,0} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{6,1; 2,\frac{1}{4}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};b}^{\eG\mathrm{au}_{S_3}^{(1)}} &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{5,\frac{10}{9}; 1,0; 1,0} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{5,\frac{10}{9}; 2,\frac{1}{4}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};b_1}^{\eG\mathrm{au}_{S_3}^{(1)}} &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{8,\frac{4}{9}; 1,0; 1,0} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{8,\frac{4}{9}; 2,\frac{1}{4}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};b_2}^{\eG\mathrm{au}_{S_3}^{(1)}} &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{7,\frac{7}{9}; 1,0; 1,0} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{7,\frac{7}{9}; 2,\frac{1}{4}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};c}^{\eG\mathrm{au}_{S_3}^{(1)}} &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{3,\frac{1}{2}; 1,0; 2,-\frac{1}{4}} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{4,1; 
2,\frac{1}{4}; 1,0} \nonumber \\ Z_{\one\text{-cnd};c_1}^{\eG\mathrm{au}_{S_3}^{(1)}} &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{3,\frac{1}{2}; 2,\frac{1}{4}; 1,0} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{4,1; 1,0; 2,-\frac{1}{4}}. \end{align} \endgroup \end{widetext} In the above, we have used a more abbreviated notation, where $\chi^{CFT_1\times CFT_2 \times \cdots}_{a_1,h_1;a_2,h_2;\cdots}$ is the product of conformal characters of $CFT_i$ for the primary fields labeled by $a_i$ with scaling dimension $h_i$. For example \begin{align} \label{cterm} &\ \ \ \ \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{2,1; 2,\frac{1}{4}; 2,-\frac{1}{4}} \nonumber\\ &= \chi^{so(9)_2 }_{2,1}(\tau) \chi^{u(1)_2 }_{ 2,\frac{1}{4} }(\tau) \chi^{\bar{u}(1)_2}_{ 2,-\frac{1}{4}} (\bar \tau) \chi^{\bar{E}(8)_1}(\bar \tau), \end{align} where $\chi^{so(9)_2 }_{2,1}(\tau)$ is the conformal character of the $so(9)_2$ CFT, for the second primary field with scaling dimension $h=1$; $\chi^{u(1)_2 }_{2,\frac{1}{4}}(\tau)$ is the conformal character of the $u(1)_2$ CFT, for the second primary field with scaling dimension $h=\frac14$; $\chi^{\bar u(1)_2 }_{2,-\frac{1}{4}}(\bar\tau)$ is the conformal character of the $\bar u(1)_2$ CFT, for the second primary field with scaling dimension $\bar h=\frac14$; $\chi^{\bar E(8)_1 }$ is the conformal character of the $\bar E(8)_1$ CFT (the complex conjugate of the $E(8)$ level-1 Kac-Moody algebra). The $\bar E(8)_1$ CFT has only one primary field (the identity), whose index is suppressed. Eq. \eqref{S3a1one} describes a gapless state that does not break the \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}^{(1)}$. The gapless state is described by an $so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1$ chiral CFT with central charge $(c, \bar c)=(9,9)$. To describe the anomalous symmetry $S_3^{(1)}$, we need to choose $\t\cR$ in the decomposition to be the gapped boundary of $\eG\mathrm{au}_{S_3}^{(1)}$ obtained from the condensation of all the $S_3$-charges. In other words, $\t\cR$ is a $\one\oplus a_1\oplus 2a_2$-condensed boundary of $\eG\mathrm{au}_{S_3}^{(1)}$, described by the following multi-component partition function:\cite{CW220506244} \begingroup \allowdisplaybreaks \begin{align} Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};\one}^{\eG\mathrm{au}_{S_3}^{(1)}} &= 1, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};a_1}^{\eG\mathrm{au}_{S_3}^{(1)}} &= 1, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};a_2}^{\eG\mathrm{au}_{S_3}^{(1)}} &= 2, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b}^{\eG\mathrm{au}_{S_3}^{(1)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b_1}^{\eG\mathrm{au}_{S_3}^{(1)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b_2}^{\eG\mathrm{au}_{S_3}^{(1)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};c}^{\eG\mathrm{au}_{S_3}^{(1)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};c_1}^{\eG\mathrm{au}_{S_3}^{(1)}} &= 0.
\end{align} \endgroup From the decomposition $CFT_{af} = CFT_{ano}\boxtimes_{\eG\mathrm{au}_{S_3}^{(1)}} \t\cR$, we find the modular invariant partition function of the gapless state that does not break the \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}^{(1)}$: \begin{align} \label{ZafS31} & Z_{af} = \sum_a Z^{\eG\mathrm{au}_{S_3}^{(1)}}_{\one\text{-cnd};a}(\tau,\bar\tau) (Z^{\eG\mathrm{au}_{S_3}^{(1)}}_{\one\oplus a_1\oplus 2a_2\text{-cnd};a})^* \\ &= \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{1,0; 1,0; 1,0} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{2,1; 2,\frac{1}{4}; 2,-\frac{1}{4}} \nonumber\\ & + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{1,0; 2,\frac{1}{4}; 2,-\frac{1}{4}} + \chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{2,1; 1,0; 1,0} \nonumber\\ & + 2\chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{6,1; 1,0; 1,0} + 2\chi^{so(9)_2 \times u(1)_2 \times \bar{u}(1)_2\times\bar{E}(8)_1}_{6,1; 2,\frac{1}{4}; 2,-\frac{1}{4}} . \nonumber \end{align} From the above result, we see that such a gapless state has a \textsf{categorical symmetry}\ larger than $\eG\mathrm{au}_{S_3}^{(1)}$: \begin{align} \eM_\text{max}=\eM_{so(9)_2} \times \eM_{(2,-2,0)}\times \eM_{\bar E(8)_1}, \end{align} where $ \eM_{so(9)_2}$ is the 2+1D topological order described by the $so(9)$ level-2 Chern-Simons theory, $ \eM_{\bar E(8)_1}$ is the 2+1D topological order described by the time-reversal conjugate of the $E(8)$ level-1 Chern-Simons theory, and $\eM_{(2,-2,0)}$ is the 2+1D Abelian topological order described by the $K$-matrix $K=\begin{pmatrix} 2 &0 \\ 0 &-2\\ \end{pmatrix}$. We notice that the conformal character $\chi^{u(1)_2}_{2,\frac14}$ is also contained in the $u(1)$ level-$2n^2$ CFT $u(1)_{2n^2}$, $n\in \Z$. Therefore, the gapless state \eq{ZafS31} has an even larger \textsf{categorical symmetry}\ \begin{align} \eM=\eM_{so(9)_2}\times\eM_{(2n^2,-2n^2,0)}\times\eM_{\bar E(8)_1}, \end{align} where $\eM_{(2n^2,-2n^2,0)}$, $ n\in \Z$, is the 2+1D Abelian topological order described by the $K$-matrix $K=\begin{pmatrix} 2n^2 &0 \\ 0 &-2n^2\\ \end{pmatrix}$. When $n\to \infty$, the total quantum dimension of the \textsf{categorical symmetry}\ also approaches $\infty$. Thus the maximal \textsf{categorical symmetry}\ for the gapless state \eq{ZafS31} contains, at least, the \textsf{categorical symmetry}\ (denoted as $\eG\mathrm{au}_{U(1)}$) for a continuous $U(1)$ symmetry (which is a braided fusion category with infinitely many objects). In other words, the gapless state \eq{ZafS31} has an emergent $U(1)$ symmetry. \subsubsection{Anomalous $S_3^{(2)}$ symmetry} Similarly, a gapless state for a lattice system with anomalous $S_3^{(2)}$ symmetry has the decomposition \begin{align} CFT_{af} = CFT_{ano}\boxtimes_{\eG\mathrm{au}_{S_3}^{(2)}} \t\cR , \end{align} where the \textsf{categorical symmetry}\ is the 2+1D topological order $\eG\mathrm{au}_{S_3}^{(2)}$, which has anyons given by \begin{align} \begin{matrix} \text{anyons}: & \one & a_1 & a_2 & b & b_1 & b_2 & c & c_1 \\ d_a: & 1 & 1 & 2 & 2 & 2 & 2 & 3 & 3 \\ s_a: & 0 & 0 & 0 & \frac29 & \frac59 & \frac89 & 0 & \frac12 \\ \end{matrix}.
\end{align} If the gapless state does not break any \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}^{(2)}$, then $CFT_{ano}$ in the decomposition is given by a $\one$-condensed boundary of $\eG\mathrm{au}_{S_3}^{(2)}$, which is described by the following multi-component partition function: \begingroup \allowdisplaybreaks \begin{align} \label{S3a2one} Z_{\one\text{-cnd};\one}^{\eG\mathrm{au}_{S_3}^{(2)}} &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{1,0} \nonumber \\ Z_{\one\text{-cnd};a_1}^{\eG\mathrm{au}_{S_3}^{(2)}} &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{2,-1} \nonumber \\ Z_{\one\text{-cnd};a_2}^{\eG\mathrm{au}_{S_3}^{(2)}} &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{6,-1} \nonumber \\ Z_{\one\text{-cnd};b}^{\eG\mathrm{au}_{S_3}^{(2)}} &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{7,-\frac{7}{9}} \nonumber \\ Z_{\one\text{-cnd};b_1}^{\eG\mathrm{au}_{S_3}^{(2)}} &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{8,-\frac{4}{9}} \nonumber \\ Z_{\one\text{-cnd};b_2}^{\eG\mathrm{au}_{S_3}^{(2)}} &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{5,-\frac{10}{9}} \nonumber \\ Z_{\one\text{-cnd};c}^{\eG\mathrm{au}_{S_3}^{(2)}} &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{4,-1} \nonumber \\ Z_{\one\text{-cnd};c_1}^{\eG\mathrm{au}_{S_3}^{(2)}} &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{3,-\frac{1}{2}} . \end{align} \endgroup The above gapless state is described by an $E(8)_1 \times \overline{so}(9)_2 $ chiral CFT with central charge $(c, \bar c)=(8,8)$. To describe the anomalous symmetry $S_3^{(2)}$, we need to choose $\t\cR$ in the decomposition to be the gapped boundary of $\eG\mathrm{au}_{S_3}^{(2)}$ obtained from the condensation of all the $S_3$-charges. In other words, $\t\cR$ is a $\one\oplus a_1\oplus 2a_2$-condensed boundary of $\eG\mathrm{au}_{S_3}^{(2)}$, described by the following multi-component partition function:\cite{CW220506244} \begingroup \allowdisplaybreaks \begin{align} Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};\one}^{\eG\mathrm{au}_{S_3}^{(2)}} &= 1, \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};a_1}^{\eG\mathrm{au}_{S_3}^{(2)}} &= 1, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};a_2}^{\eG\mathrm{au}_{S_3}^{(2)}} &= 2, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b}^{\eG\mathrm{au}_{S_3}^{(2)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b_1}^{\eG\mathrm{au}_{S_3}^{(2)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b_2}^{\eG\mathrm{au}_{S_3}^{(2)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};c}^{\eG\mathrm{au}_{S_3}^{(2)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};c_1}^{\eG\mathrm{au}_{S_3}^{(2)}} &= 0. \nonumber \end{align} \endgroup From the decomposition $CFT_{af} = CFT_{ano}\boxtimes_{\eG\mathrm{au}_{S_3}^{(2)}} \t\cR$, we find the modular invariant partition function of the gapless state that does not break the \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}^{(2)}$: \begin{align} \label{ZafS32} & Z_{af} = \sum_a Z^{\eG\mathrm{au}_{S_3}^{(2)}}_{\one\text{-cnd};a}(\tau,\bar\tau) (Z^{\eG\mathrm{au}_{S_3}^{(2)}}_{\one\oplus a_1\oplus 2a_2\text{-cnd};a})^* \\ &= \chi^{{E(8)_1\times \overline{so}(9)_2}}_{1,0} + \chi^{{E(8)_1\times \overline{so}(9)_2}}_{2,-1} + 2\chi^{{E(8)_1\times \overline{so}(9)_2}}_{6,-1} . \nonumber \end{align} From the above result, we see that such a gapless state has a \textsf{categorical symmetry}\ larger than $\eG\mathrm{au}_{S_3}^{(2)}$: \begin{align} \eM=\eM_{E(8)_1} \times \eM_{\overline{so}(9)_2} , \end{align} which is a maximal \textsf{categorical symmetry}.
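The central charges quoted in this and the previous subsubsection follow from the Sugawara formula $c = k\dim g/(k+h^\vee)$ for a level-$k$ WZW model. A minimal sketch (our own bookkeeping; the Lie algebra dimensions and dual Coxeter numbers are standard) checks them:
\begin{verbatim}
from fractions import Fraction

def sugawara_c(dim_g, h_dual, k):
    # central charge of the level-k WZW model for a simple Lie algebra g
    return Fraction(k * dim_g, k + h_dual)

c_so9_2 = sugawara_c(36, 7, 2)    # so(9): dim = 36, dual Coxeter number 7
c_E8_1 = sugawara_c(248, 30, 1)   # E8:   dim = 248, dual Coxeter number 30
assert c_so9_2 == 8 and c_E8_1 == 8

# S3^(1): (c, cbar) = (8+1, 1+8) = (9, 9) for so(9)_2 x u(1)_2 against
# bar[u(1)_2] x bar[E(8)_1]; S3^(2): (c, cbar) = (8, 8).
print(c_so9_2 + 1, c_E8_1)
\end{verbatim}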
\subsubsection{Anomalous $S_3^{(3)}$ symmetry} Last, we consider a gapless state for a lattice system with anomalous $S_3^{(3)}$ symmetry, which has the decomposition \begin{align} CFT_{af} = CFT_{ano}\boxtimes_{\eG\mathrm{au}_{S_3}^{(3)}} \t\cR , \end{align} where the \textsf{categorical symmetry}\ is the 2+1D topological order $\eG\mathrm{au}_{S_3}^{(3)}$, which has anyons given by \begin{align} \begin{matrix} \text{anyons}: & \one & a_1 & a_2 & b & b_1 & b_2 & c & c_1 \\ d_a: & 1 & 1 & 2 & 2 & 2 & 2 & 3 & 3 \\ s_a: & 0 & 0 & 0 & 0 & \frac13 & \frac23 & \frac14 & \frac34 \\ \end{matrix}. \end{align} If the gapless state does not break any \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}^{(3)}$, then $CFT_{ano}$ in the decomposition is given by a $\one$-condensed boundary of $\eG\mathrm{au}_{S_3}^{(3)}$: \begin{widetext} \begingroup \allowdisplaybreaks \begin{align} \label{S3a3one} Z_{\one\text{-cnd};\one}^{\eG\mathrm{au}_{S_3}^{(3)}} &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 1,0; 1,0; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 2,\frac{1}{4}; 5,-3; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 1,0; 5,-3; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 2,\frac{1}{4}; 1,0; 2,-\frac{1}{4}} \nonumber\\ & \ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 1,0; 6,-\frac{2}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 2,\frac{1}{4}; 10,-\frac{7}{5}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 1,0; 10,-\frac{7}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 2,\frac{1}{4}; 6,-\frac{2}{5}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};a_1}^{\eG\mathrm{au}_{S_3}^{(3)}} &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 1,0; 5,-3; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 2,\frac{1}{4}; 1,0; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 1,0; 1,0; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 2,\frac{1}{4}; 5,-3; 2,-\frac{1}{4}} \nonumber\\ & \ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 1,0; 10,-\frac{7}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 2,\frac{1}{4}; 6,-\frac{2}{5}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 1,0; 6,-\frac{2}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 2,\frac{1}{4}; 10,-\frac{7}{5}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};a_2}^{\eG\mathrm{au}_{S_3}^{(3)}} &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 1,0; 3,-\frac{2}{3}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 2,\frac{1}{4}; 3,-\frac{2}{3}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 1,0; 8,-\frac{1}{15}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 2,\frac{1}{4}; 8,-\frac{1}{15}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};b}^{\eG\mathrm{au}_{S_3}^{(3)}} &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 1,0; 3,-\frac{2}{3}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 2,\frac{1}{4}; 3,-\frac{2}{3}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times 
\bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 1,0; 8,-\frac{1}{15}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 2,\frac{1}{4}; 8,-\frac{1}{15}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};b_1}^{\eG\mathrm{au}_{S_3}^{(3)}} &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 1,0; 3,-\frac{2}{3}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 2,\frac{1}{4}; 3,-\frac{2}{3}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 1,0; 3,-\frac{2}{3}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 2,\frac{1}{4}; 3,-\frac{2}{3}; 2,-\frac{1}{4}} \nonumber\\ & \ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 1,0; 8,-\frac{1}{15}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 2,\frac{1}{4}; 8,-\frac{1}{15}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 1,0; 8,-\frac{1}{15}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 2,\frac{1}{4}; 8,-\frac{1}{15}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};b_2}^{\eG\mathrm{au}_{S_3}^{(3)}} &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 1,0; 1,0; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 1,0; 5,-3; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 2,\frac{1}{4}; 1,0; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 2,\frac{1}{4}; 5,-3; 2,-\frac{1}{4}} \nonumber\\ & \ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 1,0; 6,-\frac{2}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 1,0; 10,-\frac{7}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 2,\frac{1}{4}; 6,-\frac{2}{5}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 2,\frac{1}{4}; 10,-\frac{7}{5}; 2,-\frac{1}{4}} \nonumber \\ Z_{\one\text{-cnd};c}^{\eG\mathrm{au}_{S_3}^{(3)}} &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{2,\frac{1}{8}; 1,0; 4,-\frac{13}{8}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{2,\frac{1}{8}; 2,\frac{1}{4}; 2,-\frac{1}{8}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{4,\frac{13}{8}; 1,0; 2,-\frac{1}{8}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{4,\frac{13}{8}; 2,\frac{1}{4}; 4,-\frac{13}{8}; 1,0} \nonumber\\ & \ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{7,\frac{1}{40}; 1,0; 9,-\frac{21}{40}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{7,\frac{1}{40}; 2,\frac{1}{4}; 7,-\frac{1}{40}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{9,\frac{21}{40}; 1,0; 7,-\frac{1}{40}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{9,\frac{21}{40}; 2,\frac{1}{4}; 9,-\frac{21}{40}; 1,0} \nonumber \\ Z_{\one\text{-cnd};c_1}^{\eG\mathrm{au}_{S_3}^{(3)}} &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{2,\frac{1}{8}; 1,0; 2,-\frac{1}{8}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{2,\frac{1}{8}; 2,\frac{1}{4}; 4,-\frac{13}{8}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{4,\frac{13}{8}; 1,0; 4,-\frac{13}{8}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar 
m6 \times \bar u(1)_2}_{4,\frac{13}{8}; 2,\frac{1}{4}; 2,-\frac{1}{8}; 1,0} \nonumber\\ & \ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{7,\frac{1}{40}; 1,0; 7,-\frac{1}{40}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{7,\frac{1}{40}; 2,\frac{1}{4}; 9,-\frac{21}{40}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{9,\frac{21}{40}; 1,0; 9,-\frac{21}{40}; 2,-\frac{1}{4}} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{9,\frac{21}{40}; 2,\frac{1}{4}; 7,-\frac{1}{40}; 1,0} \end{align} \endgroup \end{widetext} The above gapless state is described by an $m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2 $ CFT with central charge $(c, \bar c)=(\frac95,\frac95)$. To describe the anomalous symmetry $S_3^{(3)}$, we need to choose $\t\cR$ in the decomposition to be the gapped boundary of $\eG\mathrm{au}_{S_3}^{(3)}$ obtained from the condensation of all the $S_3$-charges. In other words, $\t\cR$ is a $\one\oplus a_1\oplus 2a_2$-condensed boundary of $\eG\mathrm{au}_{S_3}^{(3)}$, described by the following multi-component partition function:\cite{CW220506244} \begingroup \allowdisplaybreaks \begin{align} Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};\one}^{\eG\mathrm{au}_{S_3}^{(3)}} &= 1, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};a_1}^{\eG\mathrm{au}_{S_3}^{(3)}} &= 1, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};a_2}^{\eG\mathrm{au}_{S_3}^{(3)}} &= 2, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b}^{\eG\mathrm{au}_{S_3}^{(3)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b_1}^{\eG\mathrm{au}_{S_3}^{(3)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};b_2}^{\eG\mathrm{au}_{S_3}^{(3)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};c}^{\eG\mathrm{au}_{S_3}^{(3)}} &= 0, \nonumber \\ Z_{\one\oplus a_1\oplus 2a_2\text{-cnd};c_1}^{\eG\mathrm{au}_{S_3}^{(3)}} &= 0.
\end{align} \endgroup From the decomposition $CFT_{af} = CFT_{ano}\boxtimes_{\eG\mathrm{au}_{S_3}^{(3)}} \t\cR$, we find the modular invariant partition function of the gapless state that does not break the \textsf{categorical symmetry}\ $\eG\mathrm{au}_{S_3}^{(3)}$: \begingroup \allowdisplaybreaks \begin{align} \label{ZafS33} Z_{af} &= \sum_a Z^{\eG\mathrm{au}_{S_3}^{(3)}}_{\one\text{-cnd};a}(\tau,\bar\tau) (Z^{\eG\mathrm{au}_{S_3}^{(3)}}_{\one\oplus a_1\oplus 2a_2\text{-cnd};a})^* \\ &= \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 1,0; 1,0; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 2,\frac{1}{4}; 5,-3; 2,-\frac{1}{4}} \nonumber\\&\ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 1,0; 5,-3; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 2,\frac{1}{4}; 1,0; 2,-\frac{1}{4}} \nonumber\\ &\ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 1,0; 6,-\frac{2}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 2,\frac{1}{4}; 10,-\frac{7}{5}; 2,-\frac{1}{4}} \nonumber\\ &\ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 1,0; 10,-\frac{7}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 2,\frac{1}{4}; 6,-\frac{2}{5}; 2,-\frac{1}{4}} \nonumber \\ &\ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 1,0; 5,-3; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{1,0; 2,\frac{1}{4}; 1,0; 2,-\frac{1}{4}} \nonumber\\ &\ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 1,0; 1,0; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{5,3; 2,\frac{1}{4}; 5,-3; 2,-\frac{1}{4}} \nonumber\\ & \ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 1,0; 10,-\frac{7}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{6,\frac{2}{5}; 2,\frac{1}{4}; 6,-\frac{2}{5}; 2,-\frac{1}{4}} \nonumber\\ & \ \ \ \ + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 1,0; 6,-\frac{2}{5}; 1,0} + \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{10,\frac{7}{5}; 2,\frac{1}{4}; 10,-\frac{7}{5}; 2,-\frac{1}{4}} \nonumber \\ & \ \ \ \ +2 \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 1,0; 3,-\frac{2}{3}; 1,0} + 2 \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{3,\frac{2}{3}; 2,\frac{1}{4}; 3,-\frac{2}{3}; 2,-\frac{1}{4}} \nonumber\\ &\ \ \ \ + 2 \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 1,0; 8,-\frac{1}{15}; 1,0} + 2 \chi^{m6 \times u(1)_2 \times \bar m6 \times \bar u(1)_2}_{8,\frac{1}{15}; 2,\frac{1}{4}; 8,-\frac{1}{15}; 2,-\frac{1}{4}} . \nonumber \end{align} \endgroup From the above result, we see that such a gapless state has a \textsf{categorical symmetry}\ larger than $\eG\mathrm{au}_{S_3}^{(3)}$: \begin{align} \eM=\eM_{m6} \times \eM_{(2n^2,-2n^2,0)} \times \eM_{\bar m6} , \ \ n\in \Z, \end{align} where $\eM_{m6}$ is the 2+1D topological order whose boundary is given by the $(6,5)$ minimal model. Again we see that the maximal \textsf{categorical symmetry}\ for the gapless state \eq{ZafS33} contains, at least, the \textsf{categorical symmetry}\ $\eG\mathrm{au}_{U(1)}$ for a continuous $U(1)$ symmetry. In other words, the gapless state \eq{ZafS33} has an emergent $U(1)$ symmetry.
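As a consistency check, modular $T$-invariance requires every term in \eq{ZafS33} to carry integer conformal spin $h-\bar h$. The short sketch below (the $(h,\bar h)$ data are transcribed by hand from \eq{ZafS33}, with the anti-holomorphic scaling dimensions entered as positive numbers; the coefficients of 2 in the last four terms do not affect the check) verifies this:
\begin{verbatim}
from fractions import Fraction as F

# ((h_m6, h_u1), (hbar_m6, hbar_u1)) for each term of Eq. (ZafS33)
terms = [
  ((F(0), F(0)), (F(0), F(0))),       ((F(0), F(1,4)), (F(3), F(1,4))),
  ((F(3), F(0)), (F(3), F(0))),       ((F(3), F(1,4)), (F(0), F(1,4))),
  ((F(2,5), F(0)), (F(2,5), F(0))),   ((F(2,5), F(1,4)), (F(7,5), F(1,4))),
  ((F(7,5), F(0)), (F(7,5), F(0))),   ((F(7,5), F(1,4)), (F(2,5), F(1,4))),
  ((F(0), F(0)), (F(3), F(0))),       ((F(0), F(1,4)), (F(0), F(1,4))),
  ((F(3), F(0)), (F(0), F(0))),       ((F(3), F(1,4)), (F(3), F(1,4))),
  ((F(2,5), F(0)), (F(7,5), F(0))),   ((F(2,5), F(1,4)), (F(2,5), F(1,4))),
  ((F(7,5), F(0)), (F(2,5), F(0))),   ((F(7,5), F(1,4)), (F(7,5), F(1,4))),
  ((F(2,3), F(0)), (F(2,3), F(0))),   ((F(2,3), F(1,4)), (F(2,3), F(1,4))),
  ((F(1,15), F(0)), (F(1,15), F(0))), ((F(1,15), F(1,4)), (F(1,15), F(1,4))),
]
for (h1, h2), (hb1, hb2) in terms:
    spin = (h1 + h2) - (hb1 + hb2)
    assert spin.denominator == 1   # integer conformal spin, term by term
print("all", len(terms), "terms of Z_af have integer spin")
\end{verbatim}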
\subsection{Summary} Through the above examples, we would like to conjecture that in 1+1D, a maximal \textsf{categorical symmetry}\ is always described by a topological order of ``doubled form'' $\eM_\text{dbl} =\eM_c \boxtimes \bar \eM_c'$, where $\eM_c$ is the emergent \textsf{categorical symmetry}\ for the right-movers and $\bar \eM_c'$ is the emergent \textsf{categorical symmetry}\ for the left-movers. The corresponding CFTs have the ``naturality'' discussed in \Rf{MS8916}. In general, the low energy part of a gapless state may be formed by several decoupled sectors, where the interactions between different sectors approach zero in the infrared limit under the renormalization group flow. Consequently, in the low energy limit, there are often emergent symmetries. For example, the original UV symmetry $G$ (the lattice symmetry) may be enlarged at low energies, $G\to G\times G\times \cdots$, one copy for each decoupled sector. Since each decoupled low energy sector is not a full system, each sector by itself is often anomalous. Thus the emergent symmetries may be anomalous, and there may be additional emergent anomalies. It was pointed out that, when restricted to the symmetric sub-Hilbert space, a symmetry can be fully described \cite{JW191209391} by a non-invertible gravitational anomaly.\cite{KW1458,FV14095723,M14107442,KZ150201690,JW190513279} So we can treat the emergent symmetries and emergent anomalies in a unified way by restricting to the symmetric sub-Hilbert space. In this case, we only have an emergent non-invertible gravitational anomaly, which is nothing but the \textsf{categorical symmetry}\ discussed in this paper. \begin{figure}[t] \begin{center} \includegraphics[scale=0.6]{asect} \end{center} \caption{A general picture of a gapless quantum state, which is formed by decoupled gapless sectors. The emergent symmetry and emergent anomaly for each sector are described by a non-invertible gravitational anomaly (\ie a topological order in one higher dimension). Thus, the anomalous sectors are boundaries of the corresponding topological orders in one higher dimension. } \label{asect} \end{figure} For a 1+1D gapless state, the decoupled sectors include the right-moving sector and the left-moving sector. But to have more information describing a gapless state, we want to decompose the gapless state into the smallest decoupled sectors. This allows us to see the maximal emergent symmetry and emergent anomalies. For example, we may decompose a 1+1D gapless state into many sectors, each with its own distinct velocity. Since each decoupled sector has its own (non-invertible) gravitational anomaly, it can be viewed as a boundary of a topological order in one higher dimension (see Fig. \ref{asect}), which has the form $\eM = \eM_1\boxtimes \eM_2\boxtimes \cdots$, where $\eM_i$ is the emergent \textsf{categorical symmetry}\ for the $i^\text{th}$ sector. This is why we conjecture that \frmbox{1+1D emergent maximal \textsf{categorical symmetry}\ has a form $\eM_\text{max} = \eM_1\boxtimes \eM_2\boxtimes \cdots$.} So far in this paper, we have only considered emergent finite \textsf{categorical symmetries}. For a gapless state with an emergent continuous symmetry $QFT_{af}$, we will have an increasing sequence of decompositions \begin{align} QFT_{af} &= QFT_{ano}^{(n)} \boxtimes_{\eM^{(n)}} \t\cR^{(n)} , \nonumber\\ \eM^{(1)} &< \eM^{(2)} < \eM^{(3)} < \cdots.
\end{align} In this case, we may use the sequence of finite \textsf{categorical symmetries}, $\eM^{(n)}$, to describe an emergent continuous \textsf{categorical symmetry}, as we have done in the last few subsections. In the above decomposition, $QFT_{ano}^{(n)}$ and $\t\cR^{(n)}$ are two boundaries of $\eM^{(n)}$. We have assumed that the bulk $\eM^{(n)}$ and the boundary $\t\cR^{(n)}$ have an infinite energy gap, while all the excitations on the boundary $QFT_{ano}^{(n)}$ have finite energy. Within such a setting, we cannot arbitrarily enlarge the \textsf{categorical symmetry}\ $\eM^{(n)}$ by stacking a topological order onto it. This would enlarge the finite energy excitations beyond what is contained in $QFT_{af}$. \section{Computing \textsf{categorical symmetry}\ using symmetry twists}\label{SymmTwist} Very often, we can identify various symmetries and dual symmetries in a gapless theory. We know that those symmetries and dual symmetries combine into a \textsf{categorical symmetry}, \ie a topological order in one higher dimension. In this section, we will explore a calculation that turns the symmetries and dual symmetries into a topological order in one higher dimension, \ie a \textsf{categorical symmetry}. \subsection{Symmetry, dual symmetry, and patch operators} We have mentioned that a symmetry (or a low energy emergent symmetry) is more fully described by a \textsf{categorical symmetry}\ (which is synonymous with topological order in one higher dimension). In this section, we will compute the \textsf{categorical symmetry}\ (\ie the topological order in one higher dimension) via multi-component partition functions from symmetry twists (\ie topological defect lines).\cite{JW191213492,CW220303596} To understand symmetry twists, we first discuss patch operators. In \Rf{JW191213492}, it was pointed out that the symmetry, anomaly-free or anomalous, in a local system can be described by patch symmetry transformation operators. It turns out that the patch symmetry transformation operators carry more information about the symmetry than the global symmetry transformation operators and allow us to compute the \textsf{categorical symmetry}. As an example, let us consider the 1+1D Ising model of size $L$ with a periodic boundary condition: \begin{align} \label{HIs} &H=-\sum_{I=1}^L \left(BX_I+JZ_{I}Z_{I+1}\right), \end{align} where \begin{align} X=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\ \ \ \ Z=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},\ \ \ \ Y=\begin{pmatrix} 0 & -\ii \\ \ii & 0 \end{pmatrix}. \end{align} The $\Z_2$ symmetry is generated by \begin{align} &U_{\Z_2}= \prod_{I=1,2,\cdots L}X_I . \end{align} This leads to the patch operators that generate the $\Z_2$ symmetry \begin{align} W^m(I,J) = \prod_{I+\frac12<K<J+\frac12} X_K. \end{align} We can use the patch operators to select the so-called \emph{local symmetric operators} $O_K$ via \begin{align} W^m(I,J) O_K &= O_K W^m(I,J), \nonumber\\ \text{ for } &K \text{ far away from } I,J, \end{align} where $O_K$ acts on sites near site $K$. The Hamiltonian is a sum of local symmetric operators. This is how the patch operators $W^m(I,J)$ impose the $\Z_2$ symmetry. We also have a trivial operator $W^\one(I,J)$ which is a product of identity operators \begin{align} W^\one (I,J) = \prod_{I+\frac12<K<J+\frac12} \mathrm{id}_K.
\end{align} To see the dual symmetry $\t\Z_2$ explicitly, we make a Kramers-Wannier duality transformation \begin{align} X_I\rightarrow \tilde{X}_{I-\frac12}\tilde{X}_{I+\frac12},~~~ Z_{I}Z_{I+1}\rightarrow \tilde{Z}_{I+\frac12}, \end{align} which transforms the Ising model to the dual Ising model with dual spins living on the links labeled by $I+\frac12$: \begin{align} \label{HDW} &H=-\sum_{I} \left(B\tilde X_{I-\frac12}\tilde{X}_{I+\frac12}+J\tilde{Z}_{I+\frac12}\right). \end{align} We see a dual $\t\Z_2$ symmetry generated by \begin{align} &U_{\t\Z_2}= \prod_{I}\t Z_{I+\frac12} . \end{align} This gives us patch operators for the dual $\t\Z_2$ symmetry \begin{align} W^e(I,J) = \prod_{I<K+\frac12<J}\t Z_{K+\frac12} . \end{align} In the original basis, the patch operators for $\t\Z_2$ are given by \begin{align} W^e(I,J) = Z_I Z_J . \end{align} From the two symmetries, $\Z_2$ and $\t \Z_2$, we can construct the patch operators of a third symmetry $\Z_2^f$, given by \begin{align} W^f(I,J)=W^m(I,J) W^e(I,J). \end{align} This allows us to find all three sets of patch operators, generating the $\Z_2$, $\t\Z_2$, and $\Z_2^f$ symmetries: \begin{align} W^m(I,J) &= \hskip -1em \prod_{I+\frac12<K<J+\frac12 } \hskip -1em X_K, \ \ \ W^e(I,J) = Z_I Z_J , \nonumber\\ W^f(I,J) &= Z_I\Big(\prod_{I+\frac12<K<J+\frac12 } \hskip -1em X_K \Big) Z_J. \end{align} With these explicit formulas for the patch operators, we can compute the associated \textsf{categorical symmetry}, as outlined in \Rf{CW220303596}. Here we will treat these operators from a slightly different point of view. Since these operators implement their corresponding symmetry on a finite patch, their boundaries realize symmetry twists. From the form of the patch operators, we can identify how the symmetry twists can be implemented in concrete lattice models. In order to study the Ising critical point, as mentioned above, we would first like to transform the above discussion into the language of a Majorana fermion model. At the same time, it is instructive to obtain the form of the patch operators in terms of the Majorana variables. To achieve this, we use the Jordan-Wigner (JW) transformation. Our goal is to obtain a Majorana representation of the patch operators that create pairs of $ \Z_2 $ domain walls and $ \Z_2 $ charges. As a starting point for this transformation, we work with the \emph{dual} Ising variables, as in \eqn{HDW}. The JW transformation on these variables is implemented as \begin{equation}\label{JW} \begin{split} \tilde{Z}_{J+\frac{1}{2}} &=1-2f_{J+\frac{1}{2}}^\dagger f_{J+\frac{1}{2}}\\ \tilde{\sigma}^+_{J+\frac{1}{2}} &=f_{J+\frac{1}{2}}^\dagger \prod_{I<J}(1-2f_{I+\frac{1}{2}}^\dagger f_{I+\frac{1}{2}})\\ \tilde{\sigma}^-_{J+\frac{1}{2}} &=f_{J+\frac{1}{2}} \prod_{I<J}(1-2f_{I+\frac{1}{2}}^\dagger f_{I+\frac{1}{2}}) \end{split} \end{equation} where the $f$ operators satisfy canonical fermionic anti-commutation relations $\{f_{I+\frac{1}{2}},f_{J+\frac{1}{2}}^\dagger\}=\delta_{IJ}$, $\{f_{I+\frac{1}{2}},f_{J+\frac{1}{2}}\}=0=\{f_{I+\frac{1}{2}}^\dagger,f_{J+\frac{1}{2}}^\dagger\} $. Let us define Majorana fermions, \begin{equation}\label{Maj} \lambda_{J}=f_{J+\frac{1}{2}}^\dagger+f_{J+\frac{1}{2}}, \quad \lambda_{J+\frac{1}{2}}=\ii(f_{J+\frac{1}{2}}^\dagger-f_{J+\frac{1}{2}}) \end{equation} which satisfy $\{\lambda_i,\lambda_j\}=2\delta_{ij}$ and $\lambda_i^\dagger=\lambda_i$.
In the Majorana representation, we have \begin{equation}\label{Num} \begin{split} f_{J+\frac{1}{2}}^\dagger f_{J+\frac{1}{2}} &= \frac{1}{2}(\lambda_{J}- \ii \lambda_{J+\frac{1}{2}})\cdot \frac{1}{2}(\lambda_{J}+ \ii \lambda_{J+\frac{1}{2}})\\ &= \frac{1}{2}(1+\ii \lambda_{J}\lambda_{J+\frac{1}{2}}). \end{split} \end{equation} Under the JW transformation (\eqn{JW}), the Pauli operators $\tilde{X}_{I+\frac12}$ and $\tilde{Z}_{I+\frac12}$ transform as follows: \begin{align} \tilde{X}_{I+\frac{1}{2}} \equiv \tilde{\sigma}_{I+\frac{1}{2}}^+ + \tilde{\sigma}_{I+\frac{1}{2}}^- &\xrightarrow{\text{JW}} \lambda_{I} \prod_{J<I}\left(-\ii \lambda_{J}\lambda_{J+\frac{1}{2}}\right)\\ \tilde{Z}_{I+\frac{1}{2}} & \xrightarrow{\text{JW}} -\ii \lambda_{I}\lambda_{I+\frac{1}{2}} \end{align} We can now transform the patch symmetry operators to the Majorana representation. The patch operator corresponding to the $\Z_2^f$-symmetry transforms as \begin{align}\label{WfMaj} W^f(I,J) =& \tilde{X}_{I+\frac12} \left( \prod_{I<K+\frac12<J}\t Z_{K+\frac12}\right) \tilde{X}_{J+\frac12} \notag\\ \xrightarrow{\text{JW}} &\lambda_{I} \prod_{K<I}\left(-\ii \lambda_{K}\lambda_{K+\frac{1}{2}}\right) \prod_{I<K+\frac12<J}\left(-\ii \lambda_{K}\lambda_{K+\frac12}\right) \notag\\ & \ \ \ \prod_{K<J}\left(-\ii \lambda_{K}\lambda_{K+\frac{1}{2}}\right) \lambda_{J} = \lambda_{I} \lambda_{J} \end{align} The patch operator for the $ \Z_2 $ symmetry transforms as \begin{align}\label{WmMaj} W^m(I,J) &= \tilde{X}_{I+\frac12} \tilde{X}_{J+\frac12} \xrightarrow{\text{JW}} \la_I \prod_{I<K+\frac12 < J}\left(-\ii \lambda_{K}\lambda_{K+\frac12}\right) \lambda_{J}\notag\\ &=\prod_{I<K-\frac12<J } \left(-\ii \la_{K-\frac12}\la_K\right) \end{align} Lastly, the patch operator for the dual $ \tilde{\Z}_2 $ symmetry transforms, term by term via $\t Z_{K+\frac12} \xrightarrow{\text{JW}} -\ii\la_K\la_{K+\frac12}$, as \begin{align}\label{WeMaj} W^e(I,J) &=\prod_{I<K+\frac12<J}\t Z_{K+\frac12} \xrightarrow{\text{JW}} \prod_{I<K+\frac12<J } \left(-\ii \la_K\la_{K+\frac12}\right) \end{align} For completeness, let us transform the Hamiltonian \eqn{HDW} to the Majorana representation as well, \begin{equation}\label{HMaj} H_\text{Maj}=\sum_{I}\ii(B \lambda_{I-\frac{1}{2}}\lambda_{I} + J \lambda_{I}\lambda_{I+\frac{1}{2}}) \end{equation} which at the Ising critical point $ B=J=1 $ becomes \begin{equation} H_\text{Maj}=\sum_{j\in \frac12 \Z} \ii\lambda_{j}\lambda_{j+\frac12} \end{equation} This is the Majorana model describing a $ \Z_2 $-symmetry-breaking critical point, defined on an infinite chain. For convenience of notation, we may equivalently relabel the half-integer sites by integers and write this as \begin{equation} \label{HMajC} H_\text{Maj}=\sum_{j\in \Z} \ii\lambda_{j}\lambda_{j+1} \end{equation} With this concrete model of the $ \Z_2^m $-symmetry-breaking critical point (a.k.a. the Ising critical point) in hand, we would now like to proceed with computing its partition function in the presence of various symmetry twists. In particular, we want to uncover the maximal \textsf{categorical symmetry}\ of this theory. From \eqn{WmMaj} and \eqn{WeMaj}, we see that the lattice translation $j \to j+1$ in \eqn{HMajC} exchanges $W^e$ and $W^m$. Thus the emergent $ e $-$m $ exchange symmetry at the critical point, $\Z_2^{em}$, is realized by the translation $j\to j+1$. On the other hand, the $ \Z_2^m $ symmetry of the Ising model translates into the fermion parity of the Majorana model.
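The JW identities above can be verified by brute force on a small open chain. The following numpy sketch (a minimal illustration with our own conventions: dual spins are labeled $0,\dots,n-1$, and the Majorana pair $(\lambda_J,\lambda_{J+\frac12})$ becomes \texttt{lam[j]}, \texttt{lamp[j]}) checks the Majorana algebra, \eqn{Num}, and the equality of the open-chain versions of \eqn{HDW} and \eqn{HMaj} as matrices:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])

n = 4  # number of dual spins (the links I+1/2 in the text)

def site(op, j):
    # op acting on dual spin j, identity elsewhere
    return reduce(np.kron, [op if i == j else I2 for i in range(n)])

def string(j):
    # Jordan-Wigner string: product of Z on dual spins i < j
    return reduce(np.kron, [Z if i < j else I2 for i in range(n)])

# Majorana pair attached to dual spin j, cf. Eqs. (JW) and (Maj)
lam  = [string(j) @ site(X, j) for j in range(n)]   # lambda_J
lamp = [string(j) @ site(Y, j) for j in range(n)]   # lambda_{J+1/2}

majoranas = [m for pair in zip(lam, lamp) for m in pair]
dim = 2**n
for a, A in enumerate(majoranas):      # {l_a, l_b} = 2 delta_ab
    for b, B in enumerate(majoranas):
        assert np.allclose(A @ B + B @ A, 2 * np.eye(dim) * (a == b))

# Eq. (Num): 1 - 2 f^dag f = -i lambda_J lambda_{J+1/2} = Z on dual spin j
for j in range(n):
    assert np.allclose(-1j * lam[j] @ lamp[j], site(Z, j))

# open-chain version of Eq. (HDW) with B = J = 1 versus Eq. (HMaj)
H_spin = -sum(site(X, j) @ site(X, j + 1) for j in range(n - 1)) \
         - sum(site(Z, j) for j in range(n))
H_maj  = sum(1j * lamp[j] @ lam[j + 1] for j in range(n - 1)) \
         + sum(1j * lam[j] @ lamp[j] for j in range(n))
assert np.allclose(H_spin, H_maj)
print("JW identities verified on", n, "dual spins")
\end{verbatim}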
\subsection{Symmetry twists} \label{symmtw} The patch operators discussed above are very closely related to the notion of a disorder operator.\cite{KC3113918} When a symmetry transformation is restricted to a finite patch in one spatial dimension, each of the two endpoints represents a disorder operator, which implements a symmetry twist. The disorder operator has associated fusion rules, which are particularly simple when the symmetry involved is $ \Z_2 $. In this case, two symmetry twists become equivalent to no twist. From this discussion, we see that the patch operators discussed in previous sections can tell us how to implement spatial symmetry twists on the Hilbert space. In particular, each endpoint of a patch symmetry operator describes a symmetry twist in the space direction. On the other hand, symmetry twists in the time direction are implemented by applying the global symmetry transformation on the entire Hilbert space of states. In an operational sense, this is implemented by inserting the symmetry transformation operator into the partition function. For the $ \Z_2^m $ symmetry, a non-trivial spatial symmetry twist amounts to introducing antiperiodic boundary conditions for the Majorana degrees of freedom, \[ \la_{j+N}=-\la_j . \] In the time direction, a non-trivial $ \Z_2^m $ twist corresponds to \emph{periodic} temporal boundary conditions for the Majorana fermions. In other words, the untwisted case corresponds to antiperiodic boundary conditions along the time direction of the spacetime torus. Time antiperiodicity is automatic from the definition of fermion path integrals, which is why the non-trivial symmetry twist corresponds to periodic and not anti-periodic temporal boundary conditions. This is in contrast to the bosonic $ \Z_2 $ boundary conditions discussed above in \eqn{Z2bc} (cf. \Rf{CFT12}, pp.346-347). For the $ \Z_2^{em} $ symmetry, a non-trivial spatial symmetry twist corresponds to considering a Majorana chain with an odd number of sites. A non-trivial temporal symmetry twist is obtained by inserting into the partition function an operator that translates the system by a single lattice site. We will find it useful to represent this operator in terms of momentum space variables. \subsection{Multi-component partition function from symmetry twists of $\Z_2^m$ and $\Z_2^{em}$ symmetries}\label{Z2mZ2emZ} In this section we compute the partition functions of the 1+1D critical Ising theory in the presence of various symmetry twists of $ \Z_2^m $ and $ \Z_2^{em} $. Since each of the two symmetries can be twisted or untwisted along each of the space and time directions, there are 4 possible twist combinations along each direction, and we should expect a total of 16 possible symmetry twist combinations. Recall the Majorana representation of the critical Ising Hamiltonian, defined on a lattice of size $N$, \begin{equation}\tag{\ref{HMajC}} H_\text{Maj}=\sum_{j=1}^N \ii\lambda_{j}\lambda_{j+1} \end{equation} where we have left the boundary conditions unspecified for now. We can define Fourier transformed Majorana operators as \begin{equation}\label{MajFT} \lambda_j = \sqrt{\frac{2}{N}}\sum_k \tilde{\lambda}_k \ee^{2\pi \ii kj/N} \end{equation} In terms of these momentum space variables, we have $ k\in \Z_N $ for periodic boundary conditions and $ k\in \frac{1}{2}+\Z_N $ for antiperiodic boundary conditions.
The inverse Fourier transformation reads \begin{equation}\label{invFT} \tilde{\lambda}_k = \frac{1}{\sqrt{2N}}\sum_{j=1}^N \lambda_j \ee^{-2\pi \ii kj/N} \end{equation} The $k$-space Majorana operators satisfy the following properties: \begin{equation}\label{MajK} \tilde{\lambda}_k^\dagger = \tilde{\lambda}_{N-k},\quad \{\tilde{\lambda}_k,\tilde{\lambda}_q^\dagger\} =\delta_{k,q} \end{equation} where the Kronecker delta is to be understood in a modulo $ N $ sense. In terms of the $k$-space Majorana modes, the Hamiltonian becomes \begin{equation}\label{HMajFT} H=\sum_{\om_k>0} \omega_k \tilde{\lambda}_k^\dagger \tilde{\lambda}_k + E_0 \end{equation} where the "zero-point energy" is given by \begin{equation}\label{E0} E_0= \frac{1}{2}\sum_{\om_k<0}\omega_k \end{equation} with $ \omega_k = -v\sin{\frac{2\pi k}{N}} $ and $ v=4 $. \subsubsection*{Even $N$ with periodic boundary conditions} First, let us consider the case of even $N$. We choose the set of independent $k$-states as $ F_{E,P}=\{k\in \Z |-\frac{N}{4}\leq k\leq 0 \text{ or } \frac{N}{2}\leq k<\frac{3N}{4} \} $. The subscripts indicate $ E $ for \textit{even} $ N $ and $ P $ for \textit{periodic} b.c. The modes with $\omega_k\neq 0$ are described by canonical fermion operators. In addition to these, we find two zero mode operators which do not appear in the Hamiltonian in \eqn{HMajFT}, $ \tilde{\lambda}_0 $ and $ \tilde{\lambda}_{N/2} $, which satisfy $ \tilde{\lambda}_0^2=\frac{1}{2}=\tilde{\lambda}_{N/2}^2 $, $ \tilde{\lambda}_0^\dagger=\tilde{\lambda}_0 $, and $ \tilde{\lambda}_{N/2}^\dagger=\tilde{\lambda}_{N/2} $. \textit{Hilbert space}--- We can combine the above-mentioned zero modes into a single fermionic operator \begin{equation}\label{EPZM} c=\frac{1}{\sqrt{2}}( \tilde{\lambda}_0+ \ii \tilde{\lambda}_{N/2}), \quad c^\dagger=\frac{1}{\sqrt{2}}( \tilde{\lambda}_0-\ii \tilde{\lambda}_{N/2}) \end{equation} Then the ground state $ \ket{0} $ is defined by \begin{equation}\label{EPgrnd} \tilde{\lambda}_k\ket{0} = 0 \quad \forall k \in F_{E,P}', \text{ and } c\ket{0}=0 \end{equation} where $ F_{E,P}'=F_{E,P}\setminus \{0,\frac{N}{2}\} $. Excited states are created by the action of $ \tilde{\lambda}_k^\dagger $ ($ \forall k\in F_{E,P}' $) and $ c^\dagger $ on the ground state. \textit{Fermion number operator}--- We define \begin{equation}\label{EPFNum} F =c^\dagger c+ \sum_{k\in F_{E,P}'} \tilde{\lambda}_k^\dagger \tilde{\lambda}_k \end{equation} which counts the occupied non-zero modes as well as the fermion created from the two zero modes. Note that the Hilbert space states mentioned above are all eigenstates of the fermion number parity operator $ (-1)^F $. \textit{Translation operator}--- In real space, lattice translation is defined by \begin{equation}\label{EPTreal} T \lambda_j T^\dagger = \lambda_{j+1} \end{equation} which leads to the momentum space relation \begin{equation}\label{EPTk} T \tilde{\lambda}_k T^\dagger = \ee^{\frac{2\pi \ii k}{N}} \tilde{\lambda}_k \end{equation} to be satisfied for all $ k\in F_{E,P} $. It can be checked that the following definition works \begin{equation}\label{EPTkDef} T=\ii\sqrt{2}\tilde{\lambda}_0 \exp\left[\ii K_0+\sum_{k\in F_{E,P}'}\ii\left(\frac{2\pi k}{N}+\pi \right) \tilde{\lambda}_k^\dagger \tilde{\lambda}_k \right] \end{equation} where $ K_0 $ is a yet-undetermined real number which we interpret as the "ground state momentum". The translation operator $ T $ is related to the momentum operator $ K $ as $ T=\ee^{\ii K} $.
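The zero-mode algebra can likewise be checked by brute force on a small chain. A minimal numpy sketch (our own conventions; $N=4$ Majorana sites represented on two qubits via a standard JW construction) verifies $\tilde\lambda_0^2 = \tilde\lambda_{N/2}^2 = \frac12$ and the Hermiticity relation in \eqn{MajK}:
\begin{verbatim}
import numpy as np
from functools import reduce

I2, X = np.eye(2), np.array([[0., 1.], [1., 0.]])
Y, Z = np.array([[0., -1j], [1j, 0.]]), np.diag([1., -1.])

N = 4            # even number of Majorana sites lambda_j, j = 1..N
nq = N // 2      # represented on N/2 qubits

def op(ops):
    return reduce(np.kron, [ops.get(i, I2) for i in range(nq)])

lam = []
for j in range(nq):
    zstring = {i: Z for i in range(j)}
    lam.append(op({**zstring, j: X}))
    lam.append(op({**zstring, j: Y}))

# Fourier modes, Eq. (invFT); the site index j runs 1..N as in the text
lamk = {k: sum(np.exp(-2j * np.pi * k * (j + 1) / N) * lam[j]
               for j in range(N)) / np.sqrt(2 * N)
        for k in range(N)}

eye = np.eye(2**nq)
assert np.allclose(lamk[0] @ lamk[0], eye / 2)           # zero mode, k = 0
assert np.allclose(lamk[N // 2] @ lamk[N // 2], eye / 2) # zero mode, k = N/2
for k in range(N):
    assert np.allclose(lamk[k].conj().T, lamk[(N - k) % N])  # Eq. (MajK)
print("zero-mode normalization and Eq. (MajK) verified for N =", N)
\end{verbatim}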
\textit{Partition function}--- The partition function is defined as \begin{equation}\label{Zdef} Z(\beta) = \text{Tr}\ee^{-\beta H} \end{equation} We can introduce a $ \Z_2^{em} $ twist in the time direction by inserting the operator $ T=\ee^{\ii K} $ in the partition function above. To that end, we define \begin{equation}\label{genZ} Z(\beta,X) = \text{Tr}\left[\ee^{-\beta H+\ii X K} \right] \end{equation} where setting $ X $ to be even or odd corresponds to trivial and non-trivial insertion of the $ \Z_2^{em} $ symmetry transformation respectively. In particular, odd $ X $ implements a non-trivial $ \Z_2^{em} $ twist in the time direction. For \underline{odd $ X $}, we can see that $ T^X \equiv \ee^{\ii X K}$ and $ (-1)^F $ anticommute. This means that this choice of boundary conditions (odd $ X $, even $ N $, periodic) is not consistent with the notion of independent temporal and spatial symmetry twists, so this partition function is not allowed. Also because of this anticommutation, a brute force calculation of the partition function yields 0 anyway, so we can consistently drop it from consideration. For \underline{even $ X $}, $ T^X $ and $ (-1)^F $ commute, so the above subtlety disappears. By linearizing the Hamiltonian near $ k=0 $ and $ k=N/2 $, and taking $ N\to\infty $, we find the following partition function: \begin{align}\label{EPZ} Z(\beta,X) &= 2\ee^{-\beta E_0+\ii X\left(K_0+\frac{\pi}{2}\right)} \sum_{\{n_k\}} \ee^{ \sum_k \left(-\beta\omega_k + \ii X \frac{2\pi k}{N} \right)n_k}\notag \\ &\approx 2 \ee^{\beta \frac{2N}{\pi}-\beta\frac{2\pi v}{12N}} \prod_{k\in\N} \left(1+ \ee^{ -\beta v\frac{2\pi k}{N} - \ii X \frac{2\pi k}{N}} \right) \notag \\ &\qquad \prod_{k\in\N} \left(1+ \ee^{ -\beta v\frac{2\pi k}{N} + \ii X \frac{2\pi}{N}\left(\frac{N}{2}+k\right)} \right) \end{align} where $ \N $ denotes the set of all positive integers. In the second expression above, we have used \eqn{wLin} and $ E_0 \approx -\frac{2N}{\pi}+\frac{2\pi v}{12N} +\mathcal{O}\left(\frac{1}{N^3}\right)$, with $ v=4 $. This expression for $ E_0 $ is obtained by computing the sum in \eqn{E0} in the limit of $ N\to \infty $. We have also made the choice of $ K_0 = -\frac{\pi}{2} $ which will ensure good modular transformation properties. After some algebra, we find \begin{equation}\label{EPZfin} Z_{EP}^{++} = \ee^{\frac{2N\beta}{\pi}} \left|\frac{\theta_2(\tau)}{\eta(\tau)}\right| \end{equation} where $ \tau = \frac{X+\ii\beta v}{N} $ is the modular parameter. The first sign in the superscript indicates even $ X $ (untwisted $ \Z_2^{em} $) and the second one stands for untwisted $ \Z_2^m $ (antiperiodic) temporal boundary conditions. Due to the state-operator correspondence, the total energy and the total momentum of the ground state on a ring are related to the total central charge $c+\bar c$ and the total scaling dimension $h+\bar h$: \begin{align} E_0 &= \# N + (-\frac{c+\bar c}{24} +h+\bar h) v\frac{2\pi}{N} + o(N^{-1}), \nonumber\\ K_0 &= \# N + (-\frac{c-\bar c}{24} +h-\bar h) \frac{2\pi}{N} + o(N^{-1}). \end{align} where $v$ is the velocity and $N$ is the length of the ring, so that $2\pi/N$ is the momentum quantum. The sector with the lowest energy has $h=\bar h=0$, whose $E_0$ and $K_0$ allow us to determine the central charges $c$ and $\bar c$. From the $E_0$ and $K_0$ of other sectors, we can determine the scaling dimensions $h,\bar h$ of the operator that maps the ground state sector to the other sectors.
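For the reader's convenience, we note that the sum in \eqn{E0} for this sector can in fact be carried out in closed form (this small calculation, supplied here, is left implicit in the text): the negative-energy modes are $k=1,\dots,\frac{N}{2}-1$, so that
\begin{align*}
E_0 = -\frac{v}{2}\sum_{k=1}^{N/2-1}\sin\frac{2\pi k}{N} = -\frac{v}{2}\cot\frac{\pi}{N} = -\frac{vN}{2\pi}+\frac{v\pi}{6N}+\mathcal{O}(N^{-3}),
\end{align*}
which for $v=4$ reproduces the expansion $E_0\approx -\frac{2N}{\pi}+\frac{2\pi v}{12N}$ quoted above. The coefficient of $\frac{2\pi v}{N}$ is $+\frac{1}{12}=-\frac{c+\bar c}{24}+h+\bar h$ with $c=\bar c=\frac12$ and $h=\bar h=\frac{1}{16}$, consistent with the identification of this sector with $|\chi^\text{Is}_{1/16}|^2$ made below.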
\textit{Fermion twisted sector}--- Inserting $ (-1)^F $ into the partition functions above, we get their fermion parity twisted versions. Since $ Z_{EP}^{-+} $ is ill-defined as discussed above, its fermion parity twisted partner $ Z_{EP}^{--} $ is similarly afflicted. On the other hand, the fermion parity twisted partner of $ Z_{EP}^{++} $ is well-defined but evaluates to zero because of the presence of a zero mode, i.e. $ Z_{EP}^{+-}=0 $. \subsubsection*{Even $N$ with antiperiodic boundary conditions} For antiperiodic b.c., we choose the set of independent $ k $-states as $ F_{E,A}=\{k\in \Z +\frac{1}{2} |-\frac{N}{4}\leq k\leq 0 \text{ or } \frac{N}{2}\leq k<\frac{3N}{4} \} $. The subscripts indicate $ E $ for \textit{even} $ N $ and $ A $ for \textit{antiperiodic} b.c. Note that $ F_{E,A} $ does not contain any zero modes; there are exactly $ N/2 $ dynamical modes. \textit{Hilbert space}--- The ground state $ \ket{0} $ is defined by \begin{equation}\label{EAgrnd} \tilde{\lambda}_k\ket{0} = 0 \quad \forall k \in F_{E,A} \end{equation} Excited states are created by the action of $ \tilde{\lambda}_k^\dagger $ ($ \forall k\in F_{E,A} $) on the ground state. \textit{Fermion number operator}--- \begin{equation}\label{EAFNum} F =\sum_{k\in F_{E,A}} \tilde{\lambda}_k^\dagger \tilde{\lambda}_k \end{equation} Unlike in the periodic case, here there is no zero mode contribution to the fermion number. \textit{Translation operator}--- In real space, lattice translation is defined as in the periodic case by \begin{equation}\label{EATreal} T \la_j T^\dagger = \la_{j+1} \end{equation} with the understanding that $ \lambda_{N+1} =-\lambda_1$ due to the boundary condition. This leads to the momentum space relation as before: \begin{equation}\label{EATk} T \tilde{\lambda}_k T^\dagger = \ee^{\frac{2\pi \ii k}{N}} \tilde{\lambda}_k \end{equation} for all $ k\in F_{E,A} $. It can be checked that the following definition works \begin{equation}\label{EATkDef} T = \exp\left[\ii K_0+ \sum_{k\in F_{E,A}}\ii \frac{2\pi k}{N} \tilde{\lambda}_k^\dagger \tilde{\lambda}_k \right] \end{equation} \textit{Partition function}--- \begin{align}\label{EAZ} Z(\beta,X) &= \ee^{-\beta E_0+\ii X K_0} \sum_{\{n_k\}} \ee^{ \sum_k \left(-\beta\omega_k + \ii X \frac{2\pi k}{N} \right)n_k}\notag \\ &\approx \ee^{\beta \frac{2N}{\pi}+\beta\frac{2\pi v}{24N}} \prod_{k\in \N -\frac{1}{2}} \left(1+ \ee^{ -\beta v\frac{2\pi k}{N} - \ii X \frac{2\pi k}{N}} \right) \notag \\ &\qquad \prod_{k\in \N -\frac{1}{2}} \left(1+ \ee^{ -\beta v\frac{2\pi k}{N} + \ii X \frac{2\pi}{N}\left(\frac{N}{2}+k\right)} \right) \end{align} where $ \N $ denotes the set of all positive integers. In the second expression above, we have used \begin{equation}\label{wLin} \omega_k \approx \begin{cases} -\frac{2\pi vk}{N} &\text{ for } k \lesssim 0\\ \frac{2\pi v}{N}\left(k-\frac{N}{2}\right) &\text{ for } k \gtrsim \frac{N}{2} \end{cases} \end{equation} and $ E_0 \approx -\frac{2N}{\pi}-\frac{2\pi v}{24N} +\mathcal{O}\left(\frac{1}{N^3}\right)$, with $ v=4 $. This expression for $ E_0 $ is obtained by computing the sum in \eqn{E0} in the limit of $ N\to \infty $. In \eqn{EAZ}, we also chose $ K_0 = 0$. Simplifying this expression, we find \begin{align}\label{EAZfin} Z_{EA}^{++}&=\ee^{\frac{2N\beta}{\pi}}\left|\frac{\theta_3(\tau)}{\eta(\tau)}\right|\\ Z_{EA}^{-+}&=\ee^{\frac{2N\beta}{\pi}} \frac{\sqrt{\overline{\theta_3(\tau)} \theta_4(\tau)}}{|\eta(\tau)|} \end{align} for even $ X $ and odd $ X $ respectively (reflected by the first sign in the superscript). 
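As an independent numerical check of $Z_{EA}^{++}$ in \eqn{EAZfin} (our own sketch; the values $N=200$ and $\tau=\frac{\ii}{2}$ are arbitrary choices), one can evaluate the exact finite-$N$ partition function at $X=0$ and compare it with $\ee^{2N\beta/\pi}\,\theta_3(\tau)/\eta(\tau)$:
\begin{verbatim}
from mpmath import mp, mpf, exp, log, pi, sin, jtheta, qp

mp.dps = 30
N, v = 200, 4
tau2 = mpf(1) / 2            # tau = i*tau2; then beta = N*tau2/v
beta = N * tau2 / v

# antiperiodic b.c.: half-integer momenta k (mod N)
ks = [mpf(2 * m + 1) / 2 for m in range(N)]
omega = [-v * sin(2 * pi * k / N) for k in ks]
E0 = sum(w for w in omega if w < 0) / 2          # the "zero energy" sum
logZ = -beta * E0 + sum(log(1 + exp(-beta * w)) for w in omega if w > 0)

q = exp(-2 * pi * tau2)                          # q = e^{2 pi i tau}, real here
eta = q ** (mpf(1) / 24) * qp(q)                 # Dedekind eta
theta3 = jtheta(3, 0, exp(-pi * tau2))           # jtheta takes the nome e^{i pi tau}

# the two quantities below agree up to small finite-N corrections
print(logZ - 2 * N * beta / pi)
print(log(theta3 / eta))
\end{verbatim}
The agreement improves as $N$ grows at fixed $\tau$, as expected for lattice corrections to the CFT result.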
\textit{Fermion twisted sector}--- Inserting $ (-1)^F $ in the partition functions above, we get the following \begin{align}\label{EAZF} Z_{EA}^{+-}&=\ee^{\frac{2N\beta}{\pi}}\left|\frac{\theta_4(\tau)}{\eta(\tau)}\right|\\ Z_{EA}^{--}&=\ee^{\frac{2N\beta}{\pi}} \frac{\sqrt{\overline{\theta_4(\tau)} \theta_3(\tau)}}{|\eta(\tau)|} \end{align} \subsubsection*{Odd $N$ with periodic boundary conditions} Now let us consider the case of odd $N$. As before, we choose the set of independent $k$-states as $ F_{O,P}=\{k\in \Z |-\frac{N}{4}\leq k\leq 0 \text{ or } \frac{N}{2}\leq k<\frac{3N}{4} \} $, for periodic b.c. The subscripts indicate $ O $ for \textit{odd} $ N $ and $ P $ for \textit{periodic} b.c. $ F_{O,P} $ contains one zero mode, corresponding to $ \tilde{\lambda}_0 $ which does not appear in the Hamiltonian (\eqn{HMajFT}). The remaining modes can be described by canonical fermion operators. \textit{Hilbert space}--- An odd number of Majorana modes is unphysical in and of itself. To define the Hilbert space, we need to introduce an extra "ghost" Majorana fermion $ \tilde{\lambda}_{gh} $. This can be interpreted as the Majorana mode present in the bulk in a topologically non-trivial superselection sector of the 2+1D theory underlying this discussion of the gapless boundary theory. We define a new zero mode operator using this ghost mode and the zero mode $ \tilde{\lambda}_0 $, \begin{equation}\label{OPZM} c=\frac{1}{\sqrt{2}}( \tilde{\lambda}_0+ \ii\tilde{\lambda}_{gh}), \quad c^\dagger=\frac{1}{\sqrt{2}}( \tilde{\lambda}_0-\ii \tilde{\lambda}_{gh}) \end{equation} where $ \tilde{\lambda}_{gh} $ satisfies $ \{ \tilde{\lambda}_{gh}, \tilde{\lambda}_k \}=0 \quad \forall k\in F_{O,P} $, $ \tilde{\lambda}_{gh}^\dagger = \tilde{\lambda}_{gh} $, and $ \tilde{\lambda}_{gh}^2=\frac{1}{2} $. Then $ c $ and $ c^\dagger $ behave like canonical fermion operators. The ground state $ \ket{0} $ is defined by \begin{equation}\label{OPgrnd} \tilde{\lambda}_k\ket{0} = 0 \quad \forall k \in F_{O,P}', \text{ and } c\ket{0}=0 \end{equation} where $ F_{O,P}'=F_{O,P}\setminus \{0\} $. Excited states are created by the action of $ \tilde{\lambda}_k^\dagger $ ($ \forall k\in F_{O,P}' $) and $ c^\dagger $ on the ground state. \textit{Fermion number operator}--- We define the fermion number operator to include the zero mode operator, \begin{equation}\label{OPFNum} F =\sum_{k\in F_{O,P}'} \tilde{\lambda}_k^\dagger \tilde{\lambda}_k+c^\dagger c \end{equation} \textit{Translation operator}--- Using the real space definition, lattice translation is defined in the momentum space by \begin{equation}\label{OPTk} T \tilde{\lambda}_k T^\dagger = \ee^{\frac{2\pi \ii k}{N}} \tilde{\lambda}_k \end{equation} for all $ k\in F_{O,P} $.
Additionally we postulate for the ghost Majorana, \begin{equation}\label{OPTkZM} T \tilde{\lambda}_{gh} T^\dagger = \tilde{\lambda}_{gh} \end{equation} It can be checked that the following definition satisfies the above properties \begin{equation}\label{OPTkDef} T = \exp\left[\ii K_0+ \sum_{k\in F_{O,P}'}\ii \frac{2\pi k}{N} \tilde{\lambda}_k^\dagger \tilde{\lambda}_k \right] \end{equation} \textit{Partition function}--- \begin{align}\label{OPZ} Z(\beta,X) &= 2 \ee^{-\beta E_0+\ii X K_0} \sum_{\{n_k\}} \ee^{ \sum_k \left(-\beta\omega_k + \ii X \frac{2\pi k}{N} \right)n_k}\notag \\ &\approx 2 \ee^{\beta \frac{2N}{\pi}-\beta\frac{2\pi v}{48N}+\ii X K_0} \prod_{k\in \N} \left(1+ \ee^{ -\beta v\frac{2\pi k}{N} - \ii X \frac{2\pi k}{N}} \right) \notag \\ &\qquad \prod_{k\in \N -\frac{1}{2}} \left(1+ \ee^{ -\beta v\frac{2\pi k}{N} + \ii X \frac{2\pi}{N}\left(\frac{N}{2}+k\right)} \right) \end{align} where the factor of 2 is due to the zero mode degeneracy. Setting $ K_0 = -\frac{\pi}{8N}$ and using $ E_0 \approx -\frac{2N}{\pi}+\frac{2\pi v}{48N}+\mathcal{O}\left(\frac{1}{N^3}\right)$, we find \begin{align}\label{OPZfin} Z_{OP}^{++}&=\ee^{\frac{2N\beta}{\pi}}\frac{\sqrt{2\overline{\theta_2(\tau)} \theta_3(\tau)}}{|\eta(\tau)|}\\ Z_{OP}^{-+}&=\ee^{\frac{2N\beta}{\pi}} \frac{\sqrt{2\overline{\theta_2(\tau)} \theta_4(\tau)}}{|\eta(\tau)|} \end{align} for even $ X $ and odd $ X $ respectively. \textit{Fermion twisted sector}--- Inserting $ (-1)^F $ in the partition functions above, we get 0 because of the zero mode, i.e. $ Z_{OP}^{+-}=Z_{OP}^{--}=0$. \subsubsection*{Odd $N$ with antiperiodic boundary conditions} In this case we have the set of independent $k$-states given by $F_{O,A}=\{k\in \Z +\frac{1}{2} |-\frac{N}{4}\leq k\leq 0 \text{ or } \frac{N}{2}\leq k<\frac{3N}{4} \} $. The subscripts indicate $ O $ for \textit{odd} $ N $ and $ A $ for \textit{antiperiodic} b.c. $ F_{O,A} $ contains one zero mode, corresponding to $ \tilde{\lambda}_{N/2} $ which does not appear in the Hamiltonian (\eqn{HMajFT}). The remaining modes can be described by canonical fermion operators. \textit{Hilbert space}--- As in the periodic case, to define the Hilbert space, we need to introduce an extra "ghost" Majorana fermion $ \tilde{\lambda}_{gh} $. We define a new zero mode operator using this ghost mode and the zero mode $ \tilde{\lambda}_{N/2} $, \begin{equation}\label{OAZM} c=\frac{1}{\sqrt{2}}(\tilde{\lambda}_{gh} + \ii \tilde{\lambda}_{N/2}), \quad c^\dagger=\frac{1}{\sqrt{2}}( \tilde{\lambda}_{gh}-\ii \tilde{\lambda}_{N/2}) \end{equation} where $ \tilde{\lambda}_{gh} $ satisfies $ \{ \tilde{\lambda}_{gh}, \tilde{\lambda}_k \}=0 \quad \forall k\in F_{O,A} $, $ \tilde{\lambda}_{gh}^\dagger = \tilde{\lambda}_{gh} $, and $ \tilde{\lambda}_{gh}^2=\frac{1}{2} $. Then $ c $ and $ c^\dagger $ behave like canonical fermion operators. The ground state $ \ket{0} $ is defined by \begin{equation}\label{OAgrnd} \tilde{\lambda}_k\ket{0} = 0 \quad \forall k \in F_{O,A}', \text{ and } c\ket{0}=0 \end{equation} where $ F_{O,A}'=F_{O,A}\setminus \{\frac{N}{2}\} $. Excited states are created by the action of $ \tilde{\lambda}_k^\dagger $ ($ \forall k\in F_{O,A}' $) and $ c^\dagger $ on the ground state. \textit{Fermion number operator}--- Similar to the periodic case, we define \begin{equation}\label{OAFNum} F =\sum_{k\in F_{O,A}'} \tilde{\lambda}_k^\dagger \tilde{\lambda}_k +c^\dagger c \end{equation} \textit{Translation operator}--- We need to satisfy \eqn{OPTk} for all $ k\in F_{O,A} $.
Additionally we postulate for the ghost Majorana, \begin{equation}\label{OATkZM} T \tilde{\lambda}_{gh} T^\dagger =- \tilde{\lambda}_{gh} \end{equation} It can be checked that the following definition satisfies the above properties \begin{align}\label{OATkDef} T &= 2\ii \tilde{\lambda}_{N/2} \tilde{\lambda}_{gh} \exp\left[\ii K_0+ \sum_{k\in F_{O,A}'}\ii \frac{2\pi k}{N} \tilde{\lambda}_k^\dagger \tilde{\lambda}_k \right]\notag \\ &=(-1)^{c^\dagger c} \exp\left[\ii K_0+ \sum_{k\in F_{O,A}'}\ii \frac{2\pi k}{N} \tilde{\lambda}_k^\dagger \tilde{\lambda}_k \right] \end{align} \textit{Partition function}--- For \underline{odd $ X $}, due to the $ (-1)^{c^\dagger c} $ factor in $ T\equiv \ee^{\ii K} $, the partition function simply evaluates to 0. For \underline{even $ X $}, the zero mode gives a factor of 2 instead of 0, and the partition function is given by \begin{align}\label{OAZ} Z(\beta,X) &= 2\ee^{-\beta E_0+\ii X K_0} \sum_{\{n_k\}} \ee^{ \sum_k \left(-\beta\omega_k + \ii X \frac{2\pi k}{N} \right)n_k}\notag \\ &\approx 2 \ee^{\beta \frac{2N}{\pi}-\beta\frac{2\pi v}{48N}+\ii X K_0} \prod_{k\in \N -\frac{1}{2}} \left(1+ \ee^{ -\beta v\frac{2\pi k}{N} - \ii X \frac{2\pi k}{N}} \right) \notag \\ &\qquad \prod_{k\in \N } \left(1+ \ee^{ -\beta v\frac{2\pi k}{N} + \ii X \frac{2\pi}{N}\left(\frac{N}{2}+k\right)} \right) \end{align} Setting $ K_0 = \frac{\pi}{8N}$ and using $ E_0 \approx -\frac{2N}{\pi}+\frac{2\pi v}{48N}+\mathcal{O}\left(\frac{1}{N^3}\right)$, we find \begin{align}\label{OAZfin} Z_{OA}^{++}&=\ee^{\frac{2N\beta}{\pi}}\frac{\sqrt{2\overline{\theta_3(\tau)} \theta_2(\tau)}}{|\eta(\tau)|}\\ Z_{OA}^{-+}&=0 \end{align} for even $ X $ and odd $ X $ respectively. \textit{Fermion twisted sector}--- Inserting $ (-1)^F $ in the partition functions above compensates for the $ (-1)^{c^\dagger c} $ factor for the odd $ X $ case, while it produces a factor of 0 in the even $ X $ case due to the new factor of $ -1 $ from the fermion twist operator. Therefore we have \begin{align}\label{OAZF} Z_{OA}^{+-}&=0\\ Z_{OA}^{--}&=\ee^{\frac{2N\beta}{\pi}} \frac{\sqrt{2\overline{\theta_4(\tau)} \theta_2(\tau)}}{|\eta(\tau)|} \end{align} for even and odd $ X $ respectively. In the above calculation, we made some ad hoc choices for the way the translation operator acts on the momentum space Majorana modes and consequently for the values of $ K_0$ (ground state momentum) in the various Hilbert space sectors. In a more systematic calculation, we would start with a real space translation operator and derive its form in momentum space. Our only explanation for these choices at the moment is post-hoc, \ie these choices give us nice modular transformation properties of the multi-component partition function. 
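Finally, we record closed-form evaluations of the remaining "zero energies" used above (a small calculation supplied here as a consistency check, analogous to the one given for the even-$N$ periodic sector):
\begin{align*}
E_0^{(E,A)} &= -\frac{v}{2}\,\frac{1}{\sin\frac{\pi}{N}} = -\frac{vN}{2\pi}-\frac{v\pi}{12N}+\mathcal{O}(N^{-3}),\\
E_0^{(O,P)} = E_0^{(O,A)} &= -\frac{v}{4}\cot\frac{\pi}{2N} = -\frac{vN}{2\pi}+\frac{v\pi}{24N}+\mathcal{O}(N^{-3}),
\end{align*}
which for $v=4$ reproduce the values $-\frac{2N}{\pi}-\frac{2\pi v}{24N}$ and $-\frac{2N}{\pi}+\frac{2\pi v}{48N}$ used above. In particular, the two odd-$N$ sectors share the same Casimir coefficient $+\frac{1}{48}=-\frac{c+\bar c}{24}+\frac{1}{16}$, as expected for sectors carrying a single chiral or anti-chiral weight $\frac{1}{16}$, in agreement with the characters found in the next subsection.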
\subsection{Modular transformation properties of the multi-component partition function} \subsubsection*{Symmetry Twist Basis} Let's summarize the 16-component partition function obtained above, \begingroup \allowdisplaybreaks \begin{equation}\label{Zst} \begin{split} Z_{EP}^{++} &= 2|\chi^\text{Is}_{\frac{1}{16}}|^2 \\ Z_{EP}^{-+} &= {\color{violet} N/A} \\ Z_{EP}^{+-} &= 0 \\ Z_{EP}^{--} &= {\color{violet} N/A} \\ Z_{EA}^{++} &= |\chi^\text{Is}_0+\chi^\text{Is}_{\frac{1}{2}}|^2 \\ Z_{EA}^{-+} &= (\chi^\text{Is}_0-\chi^\text{Is}_{\frac{1}{2}})(\bar\chi^\text{Is}_0+\bar\chi^\text{Is}_{\frac{1}{2}}) \\ Z_{EA}^{+-} &= |\chi^\text{Is}_0-\chi^\text{Is}_{\frac{1}{2}}|^2 \\ Z_{EA}^{--} &= (\chi^\text{Is}_0+\chi^\text{Is}_{\frac{1}{2}})(\bar\chi^\text{Is}_0-\bar\chi^\text{Is}_{\frac{1}{2}}) \\ Z_{OP}^{++} &= 2\bar{\chi}^\text{Is}_{\frac{1}{16}}(\chi^\text{Is}_0+\chi^\text{Is}_{\frac{1}{2}}) \\ Z_{OP}^{-+} &= 2\bar\chi^\text{Is}_{\frac{1}{16}}(\chi^\text{Is}_0-\chi^\text{Is}_{\frac{1}{2}}) \\ Z_{OP}^{+-} &= 0 \\ Z_{OP}^{--} &= 0\\ Z_{OA}^{++} &= 2(\bar{\chi}^\text{Is}_0+\bar{\chi}^\text{Is}_{\frac{1}{2}}) \chi^\text{Is}_{\frac{1}{16}}\\ Z_{OA}^{-+} &= 0\\ Z_{OA}^{+-} &= 0 \\ Z_{OA}^{--} &= 2(\bar\chi^\text{Is}_0-\bar\chi^\text{Is}_{\frac{1}{2}})\chi^\text{Is}_{\frac{1}{16}} \end{split} \end{equation} \endgroup The 16-component partition function is expressed in terms of Ising CFT characters. The subscripts include $ E $ and $ O $ for even and odd numbers of lattice sites, and $ A $ and $ P $ for antiperiodic and periodic boundary conditions respectively. The superscripts have two $ \pm $ signs, the first of which indicates whether $ X $ is even or odd by $ + $ and $ - $ respectively, while the second indicates periodic or antiperiodic temporal b.c. by $ - $ and $ + $ respectively. "N/A" stands for "not allowed", indicating that the corresponding spatial and temporal boundary conditions are incompatible. In the following, we will sometimes also refer to even and odd $ X $ by $ E^X $ and $ O^X $ respectively. Similarly, we will also refer to $ \Z_2^m $ untwisted, \ie antiperiodic, temporal b.c. by $ A^f $ and $ \Z_2^m $ twisted, \ie periodic, temporal b.c. by $ P^f $. In \eqn{Zst}, we have dropped the $ \mathcal{O}(\ee^N) $ factor $ \ee^{\frac{2N\beta}{\pi}} $ from each of the partition function components. \Eqn{Zst} describes the multi-component partition function of the Ising critical point in the so-called symmetry twist basis. Using the known modular transformation properties of the Ising characters, we find that the nine non-zero components transform into each other under modular transformations, but the $ S $ matrix is not unitary. To get a unitary $ S $ matrix in the symmetry twist basis, we need to strip off a factor of $ \sqrt{2} $ from the partition function components corresponding to odd $ N $. This can be interpreted as the quantum dimension of the ghost Majorana degree of freedom; we put in the ghost fermion by hand, so it only makes sense to take off the extra factor from the partition function. One can understand this in the same spirit as regulators used in quantum field theory calculations. This issue was also discussed in \Rf{DGG210102218} where the authors found that an odd number of Majorana fermions does not admit a well-defined graded Hilbert space. We approach this issue differently --- we add in a ghost Majorana fermion in order that the Hilbert space may be well-defined, with the "ghost" being interpreted as an insertion of a quasiparticle in the bulk 2+1D topological order.
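Before proceeding, we record the dictionary between the theta functions of the previous subsection and the Ising characters used here. It rests on the standard free-fermion product identities $\chi^\text{Is}_0\pm\chi^\text{Is}_{1/2}=q^{-1/48}\prod_{n\geq 1}(1\pm q^{n-1/2})$ and $\chi^\text{Is}_{1/16}=q^{1/24}\prod_{n\geq 1}(1+q^n)$, with $q=\ee^{2\pi\ii\tau}$. A short numerical confirmation (our own sketch; the purely imaginary value of $\tau$ is an arbitrary choice that avoids branch ambiguities):
\begin{verbatim}
from mpmath import mp, mpf, exp, pi, jtheta, qp

mp.dps = 25
tau2 = mpf(7) / 10                   # tau = i*tau2
q = exp(-2 * pi * tau2)
nome = exp(-pi * tau2)               # jtheta's nome is e^{i pi tau}
eta = q ** (mpf(1) / 24) * qp(q)

def prod(f, nmax=400):
    out = mpf(1)
    for n in range(1, nmax + 1):
        out *= f(n)
    return out

chi_sum = q**(-mpf(1)/48) * prod(lambda n: 1 + q**(n - mpf(1)/2))  # chi_0 + chi_{1/2}
chi_dif = q**(-mpf(1)/48) * prod(lambda n: 1 - q**(n - mpf(1)/2))  # chi_0 - chi_{1/2}
chi_16  = q**(mpf(1)/24)  * prod(lambda n: 1 + q**n)               # chi_{1/16}

assert abs(chi_sum**2  - jtheta(3, 0, nome) / eta) < mpf(10)**-20
assert abs(chi_dif**2  - jtheta(4, 0, nome) / eta) < mpf(10)**-20
assert abs(2 * chi_16**2 - jtheta(2, 0, nome) / eta) < mpf(10)**-20
print("character/theta dictionary verified")
\end{verbatim}
In particular, $|\chi^\text{Is}_0\pm\chi^\text{Is}_{1/2}|^2=|\theta_{3,4}/\eta|$ and $2|\chi^\text{Is}_{1/16}|^2=|\theta_2/\eta|$, matching the identifications made in \eqn{Zst}.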
We dub this basis, in which $S$ and $T$ matrices are unitary, the "unitary symmetry twist" (UST) basis. In this basis, the $ 9\times 9 $ modular $S$ and $T$ matrices are found to be given by \begingroup \allowdisplaybreaks \begin{align} \label{MajST1a} S &=\begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix} \\ \label{MajST1b} T &=\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \ee^{-\ii \frac{\pi}{8}} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \ee^{-\ii \frac{\pi}{8}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ee^{\ii \frac{\pi}{8}} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ee^{\ii \frac{\pi}{8}} & 0 \end{pmatrix} \end{align} \endgroup \subsubsection*{Quasiparticle Basis} The partition functions in \eqn{Zst}, when expanded in terms of $ q\equiv \ee^{2\pi\ii\tau} $ and $ \bar{q} \equiv \ee^{-2\pi\ii\bar\tau} $, do not all have positive integer coefficients. In the so-called "quasiparticle basis", however, these coefficients indicate the degeneracy of different excited states in the spectrum, and hence must be positive integer valued. In order to convert from the symmetry twist to the quasiparticle basis, we must take suitable linear combinations of the different temporal boundary conditions so as to project to different symmetry charge sectors. Each of the partition function components in the symmetry-twist basis has the general form of $ Z = Z_{00} - Z_{10} - Z_{01} + Z_{11} $ where $ Z_{00}, Z_{01}, Z_{10}, Z_{11} $ are polynomials in $ q,\bar q $ with positive integer coefficients. $ Z_{00} $ collects the terms without $ \Z_2^{em} $ twist and $ (-1)^F $ insertion, $ Z_{10} $ those with only $ \Z_2^{em} $ twist, $ Z_{01} $ those with a negative contribution due to only $ (-1)^F $ insertion, and $ Z_{11} $ collects the remaining terms getting a negative sign from both (hence has a positive sign). The four new $ Z $'s can be interpreted as the components of the partition function in the quasiparticle basis. The general prescription to extract them is given by the following formulas (cf. excitation basis of $ \Z_2 $ topological order in \Rf{JW190513279}) \begin{align}\label{STtoQP} \begin{split} Z^{00} &= \frac{Z^{++}+Z^{-+}+Z^{+-}+Z^{--}}{4}\\ Z^{10} &= \frac{Z^{++}-Z^{-+}+Z^{+-}-Z^{--}}{4}\\ Z^{01} &= \frac{Z^{++}+Z^{-+}-Z^{+-}-Z^{--}}{4}\\ Z^{11} &= \frac{Z^{++}-Z^{-+}-Z^{+-}+Z^{--}}{4} \end{split} \end{align} where the superscripts on the r.h.s. indicate the symmetry twist in the time direction for the $ \Z_2^{em} $ and $ \Z_2^m $ symmetries, as in \eqn{Zst}. The subscript labels are suppressed since we apply this formula separately for each of the four spatial symmetry twists. There is, however, a subtlety with applying this definition to the $ EP $ sector of \eqn{Zst}. Since two of the components in this sector, labeled "N/A", correspond to disallowed boundary conditions, we define $ Z^{00} = \frac{1}{2}(Z^{++}+Z^{+-}) $ and $ Z^{01} = \frac{1}{2}(Z^{++}-Z^{+-}) $ for this column, while leaving "N/A" labels for $ Z^{10}, Z^{11} $.
The 16-component partition function in this new basis is given by \begin{equation}\label{Zqp} \begin{split} Z_{EP}^{00} &= |\chi^\text{Is}_{\frac{1}{16}}|^2 \\ Z_{EP}^{10} &= {\color{violet} N/A} \\ Z_{EP}^{01} &= {\color{violet} |\chi^\text{Is}_{\frac{1}{16}}|^2 } \\ Z_{EP}^{11} &= {\color{violet} N/A} \\ Z_{EA}^{00} &= |\chi^\text{Is}_0|^2 \\ Z_{EA}^{10} &=|\chi^\text{Is}_{\frac{1}{2}}|^2 \\ Z_{EA}^{01} &= \bar\chi^\text{Is}_{\frac{1}{2}} \chi^\text{Is}_0 \\ Z_{EA}^{11} &=\chi^\text{Is}_{\frac{1}{2}}\bar\chi^\text{Is}_0 \\ Z_{OP}^{00} &= \bar{\chi}^\text{Is}_{\frac{1}{16}}\chi^\text{Is}_0 \\ Z_{OP}^{10} &= \bar\chi^\text{Is}_{\frac{1}{16}}\chi^\text{Is}_{\frac{1}{2}} \\ Z_{OP}^{01} &= {\color{violet} \bar{\chi}^\text{Is}_{\frac{1}{16}}\chi^\text{Is}_0 }\\ Z_{OP}^{11} &= {\color{violet} \bar{\chi}^\text{Is}_{\frac{1}{16}}\chi^\text{Is}_{\frac12} } \\ Z_{OA}^{00} &= \bar{\chi}^\text{Is}_0 \chi^\text{Is}_{\frac{1}{16}}\\ Z_{OA}^{10} &= \bar\chi^\text{Is}_{\frac{1}{2}}\chi^\text{Is}_{\frac{1}{16}}\\ Z_{OA}^{01} &= {\color{violet} \bar\chi^\text{Is}_{\frac{1}{2}}\chi^\text{Is}_{\frac{1}{16}}} \\ Z_{OA}^{11} &= {\color{violet} \bar\chi^\text{Is}_0\chi^\text{Is}_{\frac{1}{16}}} \end{split} \end{equation} We note that the nine distinct partition functions seen here can be interpreted as anomalous partition functions corresponding to the appropriate defect lines inserted into the bulk double Ising topological order, \ie to fusions of chiral $ h=0,\frac{1}{2},\frac{1}{16} $ and anti-chiral $ \bar h=0,\frac{1}{2},\frac{1}{16} $ excitations.\cite{JW190513279} It turns out that the modular $S$ and $T$ matrices in this basis are unitary if we \emph{don't} strip off the factor of $ \sqrt{2} $, unlike in the UST basis above. We can interpret this peculiarity as follows. In the symmetry twist basis, we focused on the 1+1D CFT without considering the bulk topological order, hence the bulk/ghost Majorana should not be included in the partition function calculation. However, in the quasiparticle basis, we are computing the partition function for the boundary along with the 2+1D bulk, \ie with insertion of defect lines in the bulk topological order. For consistency with that description, the bulk/ghost Majorana must not be factored out if we are to retain a unitary description of the non-invertible gravitational anomaly of the 1+1D CFT.
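Since the quasiparticle basis is labeled by pairs $(h,\bar h)$ of Ising conformal weights, the modular matrices presented next can be assembled from the chiral Ising data. The following sketch (our own check, with the $(h,\bar h)$ ordering read off from the component list $\mathbf{Z}^{QP}$ given after \eqn{MajST2} below) builds $S=S^\text{Is}\otimes S^\text{Is}$ and $T=T^\text{Is}\otimes\bar T^\text{Is}$, verifies the modular-group relations, and can be compared entry by entry with \eqn{MajST2}:
\begin{verbatim}
import numpy as np

# chiral Ising modular data in the ordering h = (0, 1/2, 1/16)
s2 = np.sqrt(2)
S_Is = 0.5 * np.array([[1, 1, s2], [1, 1, -s2], [s2, -s2, 0]])
h = np.array([0, 1/2, 1/16])
T_Is = np.diag(np.exp(2j * np.pi * (h - 1/48)))   # includes the c/24 phase, c = 1/2

# (h, hbar) pairs in the ordering of Z^{QP}; 0, 1, 2 label h = 0, 1/2, 1/16
pairs = [(2, 2), (0, 0), (1, 1), (0, 1), (1, 0), (0, 2), (1, 2), (2, 0), (2, 1)]
S = np.array([[S_Is[a, c] * S_Is[b, d] for (c, d) in pairs] for (a, b) in pairs])
T = np.diag([T_Is[a, a] * np.conj(T_Is[b, b]) for (a, b) in pairs])

I9 = np.eye(9)
assert np.allclose(S @ S.conj().T, I9) and np.allclose(S, S.T)   # unitary, symmetric
assert np.allclose(S @ S, I9)                                    # S^2 = C = 1 here
assert np.allclose(np.linalg.matrix_power(S @ T, 3), I9)         # (ST)^3 = S^2
# spot checks against the matrices displayed below
assert np.isclose(S[0, 1], 0.5) and np.isclose(S[0, 3], -0.5)
assert np.isclose(T[5, 5], np.exp(-1j * np.pi / 8))              # spin of (0, 1/16)
\end{verbatim}
Note that the $\ee^{-2\pi\ii c/24}$ phases cancel between the chiral and anti-chiral factors, so the diagonal of $T$ directly displays the topological spins $\ee^{2\pi\ii(h-\bar h)}$.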
The explicit expressions of the $S$ and $T$ matrices in the quasiparticle basis are given by \begin{equation}\label{MajST2} \begin{split} &S=\begin{pmatrix} 0 & \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & 0 & 0 & 0 & 0 \\ \frac{1}{2} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} \\ \frac{1}{2} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & -\frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} \\ -\frac{1}{2} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & -\frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} \\ -\frac{1}{2} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} \\ 0 & \frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} & 0 & 0 & \frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} & 0 & 0 & -\frac{1}{2} & \frac{1}{2} \\ 0 & \frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & \frac{1}{2} & -\frac{1}{2} & 0 & 0 \\ 0 & \frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & \frac{1}{\sqrt{8}} & -\frac{1}{\sqrt{8}} & -\frac{1}{2} & \frac{1}{2} & 0 & 0 \end{pmatrix} \\ &T=\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \ee^{-\frac{\ii \pi}{8}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\ee^{-\frac{\ii \pi}{8}} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ee^{\frac{\ii \pi}{8}} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\ee^{\frac{\ii \pi}{8}} \end{pmatrix} \end{split} \end{equation} Here, we leave out the disallowed (labeled by N/A) components of \eqn{Zqp} and average over the redundant ones, so that we have nine distinct components of the partition function, \begin{multline} \mathbf{Z}^{QP}= \Bigg(\frac{Z_{EP}^{00}+Z_{EP}^{01}}{2}, Z_{EA}^{00}, Z_{EA}^{10}, Z_{EA}^{01}, Z_{EA}^{11} ,\\ \frac{Z_{OP}^{00}+Z_{OP}^{01}}{2}, \frac{Z_{OP}^{10}+Z_{OP}^{11}}{2}, \frac{Z_{OA}^{00}+Z_{OA}^{11}}{2}, \frac{Z_{OA}^{10} +Z_{OA}^{01}}{2} \Bigg) \end{multline} The $T$ matrix is diagonal in the quasiparticle basis, with the diagonal elements indicating the topological spins of the corresponding excitations. We also note that both $S$ and $T$ matrices are unitary and symmetric, as expected from the properties of minimal models. In particular, \eqn{MajST2} exactly matches the modular transformation matrices of a theory defined by the direct product of left- and right-moving Ising characters. \subsubsection*{Relating the Different Bases} To make the above basis changes and projections more systematic, we look for linear transformations between the symmetry-twist (ST) basis, the unitary symmetry twist (UST) basis, and the quasiparticle (QP) basis. The $S$ and $T$ matrices in the symmetry-twist basis have the general form given in \eqn{ZpropG}. For $ \Z_2 \times \Z_2 $ symmetry twists, this gives us $ 16\times 16 $ $S$ and $T$ matrices. To connect these to the $ 9\times 9 $ matrices in the UST basis displayed in \eqns{MajST1a} and \eqref{MajST1b}, we project onto the relevant subspace of non-zero components of the partition function. Moreover, the $T$ matrix gets some complex phase factors, which can be interpreted as anomalies of the partition function.
The $ 16\times 16 $ $S$ and $T$ matrices (with the appropriate complex phase factors plugged into the $T$ matrix) can also be transformed into the quasiparticle basis directly by a change of basis combined with a projection. These two transformations, therefore, take us from the appropriately modified \eqn{ZpropG} to \eqns{MajST1a}-\eqref{MajST1b} and \eqn{MajST2} directly. In the Majorana model discussed here, the symmetry $ \Z_2^m \times \Z_2^{em} $ can also be interpreted as a product of two fermion parity $ \Z_2 $ groups, one each for left and right movers. We denote this group as $\Z_2^L\times\Z_2^R= \{++,+-,-+,--\} $, where $ + $ is for periodic and $ - $ for antiperiodic. As explained below \eqn{EPZfin}, the presence and absence of the fermion parity operator in the partition function correspond respectively to periodic and antiperiodic boundary conditions in the time direction. In the space direction, periodic and antiperiodic b.c. correspond to integer and half-integer momenta, as explained above. With this in mind, let's map the superscript and subscript labels on the l.h.s. of \eqn{Zst} to $ \Z_2^L\times\Z_2^R $ elements: \begin{equation}\label{STinZ2Z2} \begin{alignedat}{2} EP &\rightarrow ++ \qquad && E^XA^f \rightarrow --\\ EA &\rightarrow -- \qquad && O^XA^f \rightarrow -+\\ OP &\rightarrow +- \qquad && E^XP^f \rightarrow ++\\ OA &\rightarrow -+ \qquad && O^XP^f \rightarrow +- \end{alignedat} \end{equation} In light of these new symmetry labels, let us re-write \eqn{Zst} and also incorporate the division by $ \sqrt{2} $ for the cases with an odd number of lattice sites, as explained above. The result of these operations is shown in \eqn{Zst2}. \begingroup \allowdisplaybreaks \begin{align} Z_{++}^{--} &= 2|\chi^\text{Is}_{\frac{1}{16}}|^2 \nonumber\\ Z_{++}^{-+} &= {\color{violet} N/A} \nonumber\\ Z_{++}^{++} &= 0 \nonumber\\ Z_{++}^{+-} &= {\color{violet} N/A} \nonumber \\ Z_{--}^{--} &= |\chi^\text{Is}_0+\chi^\text{Is}_{\frac{1}{2}}|^2 \label{Zst2} \\ Z_{--}^{-+} &= (\chi^\text{Is}_0-\chi^\text{Is}_{\frac{1}{2}})(\bar\chi^\text{Is}_0+\bar\chi^\text{Is}_{\frac{1}{2}}) \nonumber\\ Z_{--}^{++} &= |\chi^\text{Is}_0-\chi^\text{Is}_{\frac{1}{2}}|^2 \nonumber\\ Z_{--}^{+-} &= (\chi^\text{Is}_0+\chi^\text{Is}_{\frac{1}{2}})(\bar\chi^\text{Is}_0-\bar\chi^\text{Is}_{\frac{1}{2}}) \nonumber \\ Z_{+-}^{--} &= 2\bar{\chi}^\text{Is}_{\frac{1}{16}}(\chi^\text{Is}_0+\chi^\text{Is}_{\frac{1}{2}}) \nonumber\\ Z_{+-}^{-+} &= 2\bar\chi^\text{Is}_{\frac{1}{16}}(\chi^\text{Is}_0-\chi^\text{Is}_{\frac{1}{2}}) \nonumber\\ Z_{+-}^{++} &= 0 \nonumber\\ Z_{+-}^{+-} &= 0 \nonumber \\ Z_{-+}^{--} &= 2(\bar{\chi}^\text{Is}_0+\bar{\chi}^\text{Is}_{\frac{1}{2}}) \chi^\text{Is}_{\frac{1}{16}} \nonumber\\ Z_{-+}^{-+} &= 0 \nonumber\\ Z_{-+}^{++} &= 0 \nonumber\\ Z_{-+}^{+-} &= 2(\bar\chi^\text{Is}_0-\bar\chi^\text{Is}_{\frac{1}{2}})\chi^\text{Is}_{\frac{1}{16}} \nonumber \end{align} \endgroup The appropriately modified form of \eqn{ZpropG}, incorporating the phase factors in the modular $ T $ transformation, is \begin{equation}\label{ZstST} \begin{split} Z_{g',h'}(-1/\tau) &= S_{(g',h'),(g,h)} Z_{g,h}(\tau),\\ Z_{g',h'}(\tau+1) &= T_{(g',h'),(g,h)} Z_{g,h}(\tau),\\ Z_{g',h'}(\tau) &= R_{(g',h'),(g,h)}(u) Z_{g,h}(\tau),\\ S_{(g',h'),(g,h)} &= \del_{(g',h'),(h,g)},\\ T_{(g',h'),(g,h)} &= \begin{cases} \ee^{-\frac{\ii \pi}{8}} \del_{(g',h'),(g,hg)} &\text{ for } g = +-\\ \ee^{\frac{\ii \pi}{8}} \del_{(g',h'),(g,hg)} &\text{ for } g=-+\\ \del_{(g',h'),(g,hg)} &\text{ otherwise } \end{cases},\\ R &= 1 , \end{split}
\end{equation} where $ g,h\in \Z_2\times\Z_2 $. These $ S $ and $ T $ matrices seemingly also act on the disallowed components of the partition function. However, this is not really an issue because we are aiming to extract the physically meaningful components of the 16-component partition function by suitable projection and/or change of basis. In this spirit, the 9-component partition function in the UST basis can be obtained by a projection to the subspace of the 9 non-zero components of \eqn{Zst2}, described by the $ 9\times 16 $ matrix, \begin{align} \label{STtoUST} M=\begin{pmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} , \end{align} which satisfies $ M M^\dag =I_{9\times 9} $ (\ie it is an isometry). Acting on the 16$ \times $16 $S$ and $T$ matrices described by \eqn{ZstST} with $ M $ (by conjugation), we derive the $ 9\times 9 $ $S$ and $T$ matrices in UST basis given in \eqns{MajST1a} and \eqref{MajST1b}, \begin{equation}\label{USTproj} S^{UST}=MS^{ST}M^\dag, \qquad T^{UST}=MT^{ST}M^\dag \end{equation} where the $S$, $T$ matrices on the l.h.s. stand for those in \eqns{MajST1a} and \eqref{MajST1b} and those on the r.h.s. stand for the ones in \eqn{ZstST}. The next task is to find a linear transformation to go from \eqn{ZstST} to \eqn{MajST2}. We do this in two parts, first a change of basis going from the partition function in \eqn{Zst2} to that in \eqn{Zqp}, and then an appropriate projection onto the 9 independent components of the quasiparticle basis. The change of basis is described by the following block-diagonal matrix, \begin{align} \label{STtoQPmat} N=\begin{pmatrix} A & O & O & O \\ O & \sqrt{2}B & O & O \\ O & O & \sqrt{2} B & O \\ O & O & O & B \end{pmatrix}, \end{align} where \begin{equation}\label{Nsubmat} A=\begin{pmatrix} \frac{1}{2} & 0 & 0 & \frac{1}{2} \\ 0 & 1 & 0 & 0 \\ -\frac{1}{2} & 0 & 0 & \frac{1}{2} \\ 0 & 0 & 1 & 0 \end{pmatrix}, \ \ B=\begin{pmatrix} \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4}\\ \frac{1}{4} & -\frac{1}{4} & -\frac{1}{4} & \frac{1}{4}\\ -\frac{1}{4} & -\frac{1}{4} & \frac{1}{4} & \frac{1}{4}\\ -\frac{1}{4} & \frac{1}{4} & -\frac{1}{4} & \frac{1}{4} \end{pmatrix}, \end{equation} and $ O$ is the $ 4\times 4 $ null matrix. This change of basis is essentially identical to \eqn{STtoQP}, with the extra factors of $ \sqrt{2} $ simply accounting for the fact that we removed a factor of the quantum dimension of the ghost Majorana mode in defining the partition function in the symmetry-twist basis, which we must restore when we go to the quasiparticle basis (as argued above). Also note that we have an identity matrix in the subspace of the two components that are not allowed because of the incompatible space and time direction symmetry twists (labeled "N/A" in \eqns{Zst} and \eqref{Zst2}).
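The isometry property $MM^\dag=I_{9\times 9}$, as well as the right-inverse identity $4PN(PN)^\dag=I_{9\times 9}$ quoted in the next paragraph, are mechanical to verify. The sketch below (our own; it hard-codes $M$ and $N$ from above together with the projection matrix $P$ displayed just below) checks both claims:
\begin{verbatim}
import numpy as np

A = np.array([[ 0.5, 0, 0, 0.5],
              [ 0,   1, 0, 0  ],
              [-0.5, 0, 0, 0.5],
              [ 0,   0, 1, 0  ]])
B = 0.25 * np.array([[ 1,  1,  1, 1],
                     [ 1, -1, -1, 1],
                     [-1, -1,  1, 1],
                     [-1,  1, -1, 1]])
O4 = np.zeros((4, 4))
Nmat = np.block([[A, O4, O4, O4],
                 [O4, np.sqrt(2) * B, O4, O4],
                 [O4, O4, np.sqrt(2) * B, O4],
                 [O4, O4, O4, B]])

def selector(groups, n=16):
    # rows averaging the listed 1-based columns
    out = np.zeros((len(groups), n))
    for r, cols in enumerate(groups):
        for c in cols:
            out[r, c - 1] = 1 / len(cols)
    return out

M = selector([(4,), (7,), (8,), (10,), (12,), (13,), (14,), (15,), (16,)])
P = selector([(1, 3), (5, 7), (6, 8), (9, 12), (10, 11),
              (13,), (14,), (15,), (16,)])

assert np.allclose(M @ M.T, np.eye(9))          # M is an isometry
PN = P @ Nmat
assert np.allclose(4 * PN @ PN.T, np.eye(9))    # 4 (PN)^dag is a right inverse
print("projection identities verified")
\end{verbatim}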
The action of matrix $ N $ on the 16-component partition function in the symmetry-twist basis produces the 16-component partition function in \eqn{Zqp}, \begin{equation}\label{STtoQP16} \mathbf Z^{QP16} = N \mathbf Z^{ST}, \end{equation} where $ Z^{ST} $ is the 16-component partition function displayed in \eqn{Zst2} and $ Z^{QP16} $ is the 16-component partition function shown in \eqn{Zqp}. From $ Z^{QP16} $, we extract the independent components to form the 9-component partition function $ Z^{QP} $ corresponding to the double Ising quasiparticle basis, i.e. labeled by $ h,\bar h\in\{0,\frac12,\frac1{16}\} $. This is achieved by the action of the $ 9\times 16 $ matrix, \begin{align} P=\begin{pmatrix} \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 0 & 0 & \frac{1}{2} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} , \end{align} where we have taken averages over the duplicate components in $ Z^{QP16} $ (cf. \eqn{Zqp}) so as to put them on an equal footing. In equations, the above transformations from the 16-component $ Z^{ST} $ to the 16-component $ Z^{QP16} $ and on to the 9-component $ Z^{QP} $ are summarized as \begin{equation}\label{QP16toQP} \mathbf Z^{QP}=P \mathbf Z^{QP16}=PN \mathbf Z^{ST} \end{equation} It turns out that $ 4 (PN)^\dag $ is the right inverse of the matrix $ PN $, i.e. $ 4 P N (PN)^\dag =I_{9\times 9} $. In terms of these matrices, we can define a transformation from the $S$, $T$ matrices in \eqn{ZstST} to those in \eqn{MajST2}, \begin{equation}\label{QPproj} S^{QP} = 4 PN S^{ST} N^\dag P^\dag, \qquad T^{QP} = 4 PN T^{ST} N^\dag P^\dag \end{equation} where the $S$, $T$ matrices on the l.h.s. stand for those in \eqn{MajST2} and those on the r.h.s. stand for the ones in \eqn{ZstST}. \Eqn{USTproj} and \eqn{QPproj} are thus the desired transformations that convert $ S $ and $ T $ matrices from the $ \Z_2 \times \Z_2 $ symmetry twist (ST) basis to the UST and QP bases respectively. Therefore, we may view the critical point of the $\Z_2$-symmetry-breaking transition as a $ \one $-condensed boundary of the 2+1D double-Ising topological order $\eM_\text{dIs}$. It is described by a nine-component partition function labeled by a pair $(h,\bar h)$ \begin{equation} \label{IsIs} Z_{h,\bar h}(\tau,\bar \tau) = \chi^\text{Is}_h(\tau) \bar\chi^\text{Is}_{\bar h}(\bar\tau),\ \ \ \ h,\bar h =0,\frac12,\frac1{16}. \end{equation} A modular invariant partition function is obtained by stacking with the $\eM_\text{dIs}$ bulk and a gapped boundary obtained by condensing $ \one \oplus\si\bar\si\oplus\psi\bar\psi $, as shown in Fig. \ref{CFTdIs}.
\begin{align*}\label{IsModInv} Z^{af}_{Is} (\tau,\bar\tau) &= Z^{\eM_\text{dIs}}_{\one\text{-cnd};(\one,\bar \one)} (\tau,\bar\tau) + Z^{\eM_\text{dIs}}_{\one\text{-cnd};(\si,\bar \si)} (\tau,\bar\tau) \\ & \qquad \qquad \qquad \qquad + Z^{\eM_\text{dIs}}_{\one\text{-cnd};(\psi,\bar \psi)} (\tau,\bar\tau) \\ &=|\chi^\text{Is}_0(\tau)|^2 + |\chi^\text{Is}_\frac12(\tau)|^2 + |\chi^\text{Is}_\frac{1}{16}(\tau)|^2 \end{align*} \smallskip \section{Conclusion} In this paper, we have used a decomposition picture to reveal the emergent symmetry in a quantum field theory: $QFT_{af} = QFT_{ano} \boxtimes_{\eM} \t\cR$ (see Fig. \ref{QFTR}). The decomposition $QFT_{af} = QFT_{ano} \boxtimes_{\eM} \t\cR$ means that the partition function of $QFT_{af}$ is reproduced by the composition system $QFT_{ano} \boxtimes_{\eM} \t\cR$, where the bulk $\eM$ and the boundary $\t\cR$ are assumed to have an infinite energy gap. Through the decomposition we can see the emergent \textsf{categorical symmetry}\ and the emergent symmetry $\cR$, where $\cR$ is the dual of $\t\cR$. Using such a decomposition picture, we define a notion of \emph{maximal \textsf{categorical symmetry}}. We believe that the maximal \textsf{categorical symmetry}\ is a very detailed characterization of a gapless state. We argue that it largely characterizes and determines the gapless state. In other words, just knowing the maximal \textsf{categorical symmetry}\ may allow us to determine low-energy dynamical properties, with just a few ambiguities. This may open up a new direction to handle gapless states. ~ We acknowledge helpful discussions with Michael Demarco, Ho Tat Lam, Salvatore Pace, and Carolyn Zhang. This work is partially supported by NSF DMR-2022428 and by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651446, XGW).
\section{Introduction} Crowdsourcing platforms such as Amazon Mechanical Turk (AMT) provide online marketplaces for task-based microwork \cite{barr2006ai, irani2015cultural}. AMT in particular has been widely used for research and business purposes \cite{smith_gig_2016}, including tasks for natural language processing, image annotation, online experiments, and user studies (e.g., \cite{callison-burch_creating_2010, nowak_how_2010, grady_crowdsourcing_2010, paolacci_running_2010, kittur_crowdsourcing_2008}). Existing studies have estimated over 100K active workers on AMT and more than 2K working at any given time (e.g., \cite{difallah_demographics_2018}). Two primary classes of users engage in economic exchanges on the platform: {\em Requesters} who distribute tasks and crowd workers who complete tasks for payment. A decade ago, \cite{silberman2010ethics} called for more worker-centered research in this domain. Since then, researchers have studied AMT worker demographics \cite{ipeirotis2010demographics, ross_who_2010}, motivation \cite{antin_social_2012,brewer_why_2016,gray_ghost_2019,kaplan_striving_2018,kaufmann_more_2011,martin_being_2014}, pay rates \cite{chilton_task_2010,ipeirotis2010demographics,horton_labor_2010,mcinnis_taking_2016,silberman2018responsible,hara_data-driven_2018, hara_worker_2019}, working experience \cite{gupta_turk-life_2014,martin_being_2014,gray_crowd_2016,martin_turking_2016,mcinnis_taking_2016}, and their collective actions \cite{bederson_web_2011,salehi_we_2015}. While researchers have employed a wide range of methods to understand AMT workers and their work, there has been a focus on eliciting workers' own perspectives and voices, such as observing worker interactions in their online community forums \cite{martin_being_2014,martin_turking_2016}, collecting worker responses and accounts through surveys and interviews \cite{gupta_turk-life_2014, martin_turking_2016, whiting_fair_2019}, and gathering requirements directly from workers when building tools to support \say{Turking}, usually via \say{HITs} (i.e., units of work called \say{Human Intelligence Tasks} on AMT) \cite{irani_turkopticon_2013,whiting_fair_2019, salehi_we_2015}. We seek to build on this line of research by further enhancing the understanding of why it can be so challenging to devise effective worker-centered support mechanisms for crowd work. While there certainly have been many valuable works in this vein, detailed reflections on the friction involved are less common. Through online observation in AMT worker community forums, we note a tension between workers and their media exposure. This tension prompted us to analyze worker interactions and narratives posted in online discussion forums as publicly available content. Our qualitative analysis sheds further light upon the causes and mechanisms that render a worker-centered approach to crowd work difficult. At its core, it is the challenge of effectively eliciting and representing a plurality of diverse workers' voices and perspectives \cite{cooper_representing_1995, haraway_modest_witnesssecond_millenium_1997} that makes it difficult to speak for these workers or to design interventions that they appreciate. Because our findings are centered on crowd workers on AMT, we use the term \say{Turkers} to refer to these workers, consistent with how they often identify themselves, as we have observed in their online community forums. \section{Related Work} A variety of research has focused on learning about and improving the Turking experience.
Researchers who rely on the principles of ethnomethodology have investigated how Turkers arrange their day-to-day work activities in interaction with other geographically distributed workers, Requesters, and the AMT platform as both an organizational entity and a task marketplace \cite{martin_being_2014, martin_turking_2016}. \cite{martin_being_2014} were among the very first to analyze the publicly available conversations in Turkers' online forums. They characterized Turkers as economic actors operating in an online labor market, focusing on workplace issues such as labor relations, wages, and work ethics. In \cite{martin_turking_2016}'s later study, they compared US-based vs.\ India-based Turkers to better understand differences in working experience across geographical boundaries. Similarly, \cite{gupta_turk-life_2014} provides a detailed description of Indian Turkers' working lives, in contrast with those of US-based workers, based on surveys and interviews. Some specific aspects of Turking in India include adapting to time zone differences and viewing Turking work as an international professional experience in individual career profiles. Critical designers \cite{irani_turkopticon_2013} have sought to support Turkers via system building efforts, eliciting Turker voices for design ideation. \cite{irani_turkopticon_2013} conducted an open-ended survey as a HIT on AMT, getting input from Turkers for compiling a \say{Workers' Bill of Rights}. They asked about daily work concerns and then developed a Requester rating system, \emph{Turkopticon}, based on Turker responses. \cite{irani_stories_2016} helped to introduce Turkers into HCI conference venues to improve their visibility. They also reflect on their experience of constructing and maintaining the technological system as well as building relations with Turkers. Related to our work, they note that the lack of unity in worker voices in online forums makes it challenging to form collective voices and actions, reflecting the difficulty of worker-centered design and advocacy. \cite{salehi_we_2015} sought to raise Turkers' voices by organizing collective actions within the Turker community. With the help of the researchers, Turkers collectively authored action guidelines for academic Requesters and a letter to Amazon that advocated fairness and openness toward Turkers \cite{noauthor_guidelines_2017}. Both efforts were campaigns mobilized following ideation by the Dynamo Turker community. Other researchers \cite{Mankar17-hcomp,whiting_fair_2019} have formulated technological interventions to help workers earn a minimum hourly wage. Both research groups also surveyed workers via HITs on AMT, e.g., regarding their expectations for fair pay. More recent studies of the Turking experience, and designs for supporting Turking, rely predominantly on Turkers' online comments and HIT surveys \cite{hanrahan2021expertise, saito2019predicting, toxtli2020meta, flores2020challenges}. Prior worker-centered studies range from observation of Turkers' online conversations to surveying and interviewing Turkers directly and using their responses to design and develop supportive tools. In particular, active critical designers seek to \say{interrupt worker invisibility} \cite{irani_turkopticon_2013}, helping workers to form voices, be heard, and be seen. As all these works have explored channels for eliciting Turkers' voices and experiences, it is not surprising that it is often the invisibility of Turking that motivates these studies and shapes their interpretations.
Similarly, \cite{gray_ghost_2019} characterize invisible crowd work as \say{ghost work} through five years of study. Our work builds upon such past scholarship in seeking to provide explanations for the ways in which invisibility operates to make a worker-centered approach to crowd work particularly challenging. Regarding the common popular press narrative of exploited Turkers, \cite{moss2020ethical} presents a data-driven rebuttal that resonates with our own analysis. \section{Methods} Our work was prompted by initial observations in Turkers' discussion forums. We began to frequent several known online gathering spaces of crowd workers -- keeping up with the recent updates and reading through historical discussion threads -- to gain a broader understanding of Turkers' online activities and discourse. Those sites include Turker Nation\footnote{\url{https://www.reddit.com/r/TurkerNation/}}, TurkerView\footnote{\url{https://forum.turkerview.com/}}, and several Subreddits\footnote{\url{https://www.reddit.com/reddits/}} dedicated to discussions about Turking or crowd work of all sorts. We began to notice a set of similar discussion threads in Subreddit {\tt r/mturk}\footnote{\url{https://www.reddit.com/r/mturk/}}, an AMT forum (with more than 80K members as of September 2021) in which Turkers discuss their daily Turking experiences. Topics span issues such as how to make sense of posted HITs, how to understand and manage rejections and get payments transferred to a bank account, etc. The Subreddit thus serves as a gathering place for Turkers to help each other navigate the dynamics of AMT work. The set of discussion threads we focused on involves conversations between self-identified journalists and Turkers. Threads were initiated by a journalist, or occasionally a researcher, asking whether Turkers in the forum would participate in an interview to discuss their Turking experiences, for a news article or research on crowd work. The first reply in these threads frequently conveyed some degree of resistance to being interviewed. We began to read through and interpret these discussion threads as recurring events observed in the field \cite{spradley2016participant}. We also further searched the Subreddit using \say{journalist} or \say{interview} as keywords, and our review of search results identified more such threads. As our observation and qualitative analysis went on, we further noticed that community members routinely noted and discussed news media portrayals of them. Many discussion threads stemmed from the sharing of a news story covering Turkers, followed by discussion. As in the threads in which journalists sought Turkers to interview, these discussions also contained Turker reactions to news media publicizing them and their work. Searching through the historical postings about news coverage of Turkers in the Subreddit, we identified another set of 12 threads containing substantial discussions of how Turkers themselves make sense of popular narratives about their images, conditions, and experiences. Overall, we identified 26 threads posted across the past six years, with the latest posted within the last year. Most are substantial discussions; the shortest has three comments while the longest has eighty-nine. These discussion threads evidence similar events: Turkers' reactions to potential or ongoing publicity about them.
These 26 threads were initiated by 21 registered Reddit user accounts; in 5 cases, an interview request was posted more than once as separate threads, but with different conversations unfolding. The discussion in each thread involved an almost entirely different set of community members. Only five user accounts, by our count, appeared in more than one discussion thread. We analyzed these discussion threads based on the principles of thematic analysis, without any preceding epistemological commitment, coding each thread inductively \cite{braun_using_2006, braun2012thematic}. We treated each thread as one conversation and paid close attention to the role and stance of each participant involved. We analyzed each comment in the thread and interpreted it in the context of the conversation. We then memo'ed each comment with its interpreted meaning as an initial code. When a comment was sufficiently long to carry complex meanings, we further decomposed it and memo'ed the parts with multiple codes accordingly. Next, we sought converging themes across these codes and threads. As we organized coded data under different themes, we reviewed the whole coded data set in relation to identified themes to ensure mutual fit. A few codes were dropped in this phase: they carried some distinctive meaning but were not connected to any theme, or did not support a theme-level finding given their sparse presence in the data. \section{Voices of Workers} \label{sec:voices} In this section, we discuss our thematic findings from the data identified in the field. For privacy, we anonymize the usernames used in Subreddit discussions. \subsection{Conflicting voices} \paragraph{Telling stories is less attractive than making money.} As previously mentioned, in the first set of postings we identified, each discussion thread started with a request from a journalist or researcher for an interview with Turkers. The first reply from Turkers to each request seemed to be not just subtly uncooperative, but also to convey a primary concern with financial compensation when workers were asked to perform, for free, a task they might otherwise earn money from. Some examples of such first replies include: \begin{quote} \hspace{1em}\emph{“Want work from a turker? Post a HIT on the platform.”} [Turker1] \hspace{1em}\emph{“If you really want turker's opinions, you could set up a hit for it I'm sure.”} [Turker2] \hspace{1em}\emph{“How much does this HIT pay?} [Turker3] \hspace{1em}\emph{“Yeah, a lot of us posted wanting to know what you're paying. We measure our lives by the minute.”} [Turker4] \hspace{1em}\emph{“What's the pay/time commitment? I used to be a decent guy until I realized that an hour of time can mean the difference between being late on the electric bill.”} [Turker5] \end{quote} These initial responses showed no interest in sharing their working stories. Instead, they seemed to view such requests as labor and equivalent to the paid tasks they perform on AMT. Turker4 and Turker5 put their financial considerations in tangible terms: for them, spending time to respond to such a request would take time away from completing more HITs. To some extent, their comments also make a case for the immediate financial gain of microwork -- "an hour of time" can make a difference in their earnings. In contrast, an opportunity to share their stories appears uncompelling, at least in the context of these specific requests and their intended usage.
On various occasions, we observed some Turkers sharing their prior experience with such interview requests, suggesting that such requests are not uncommon in this community. \begin{quote} \hspace{1em}\emph{“Some reporter shows up in this sub}[Reddit] \emph{about once every 3-4 months soliciting workers for free info. If you want info about our work, make a HIT and pay us for our time. You're literally coming to our forum and asking us to do the type of thing we do on MTurk, but for free.”} [Turker6] \hspace{1em}\emph{“Bummer, I just did a 70 minute interview with the New York Times for an article that's coming out at the end of the week. I didn't get paid for it though.”} [Turker7] \end{quote} Across the threads we analyzed, we rarely found Turkers showing enthusiasm or expressing positive reactions to the interview requests. Instead, we consistently observed \say{negative} Turker experiences with these interview requests in the forum: workers had done an interview but did not feel they were adequately compensated, or they had seen these requests often and were generally not satisfied with being asked to perform labor without compensation for the time involved. \paragraph{Distrust of media.} While compensation could be one cause for the rarity of positive reactions from Turkers to these interview requests, some comments reveal more aspects of their negative experiences with interviews: \begin{quote} \hspace{1em}\emph{“Twice I've done interviews with a journalist about MTurk and both times they misrepresented what I said. Never again”} [Turker8] \hspace{1em}\emph{"...I have yet to see an article that doesn't quote people out of context and/or distort the facts entirely just to suit the title of the article."}[Turker9] \hspace{1em}\emph{"With all due respect, the last few times there have been magazine articles, it has not gone well. There is always a promise to tell our side, and "give us exposure". The reality ends up being a smear campaign. Here are some examples..."}[Turker10] \hspace{1em}\emph{"Be careful what you write my friend. With the declining state of print media and paid journalism you'll be one of us before too long."}[Turker11] \end{quote} These accounts reflect Turkers' self-awareness of how they are represented in media narratives. In the elided portion of the quotation above, Turker10 listed three articles from well-known media sources as examples of what they meant by \say{a promise to tell our side} that ended up \say{being a smear campaign}. In that particular thread, the journalist followed up, referring to existing academic work that demonstrates that some visibility is needed to improve Turkers' working conditions. However, Turkers in that thread did not seem to appreciate the work. One Turker asserted that they would trust their own experience rather than a theory written by others. Another Turker, more pointedly, alleged that both academic work and journalistic articles \emph{\say{utilized the lowest tier of workers to create some sort of sob story that all mturk workers are being taken advantage of}}, and these works are \emph{\say{poorly perceived in the turking community}}. Turker11 wrote with a sarcastic tone. We observe distrust toward the journalists and researchers who post, due to Turkers being disappointed by prior interviews and/or news articles and concerned about distortion and misrepresentation.
In Turkers' reactions to the news articles we reviewed, this sentiment seems salient: they tend to express low expectations for any coverage up-front and only react more positively when they find the article does not paint a hopeless picture of them. For instance, in reaction to recent coverage from a TV channel, one Turker posted: \begin{quote} \hspace{1em}\emph{“At least her team did their study calling us Turkers.”} [Turker12] \end{quote} Turker12 here does not comment on the news coverage itself, but notes that it is not that bad because they were at least being addressed in the right way, revealing no greater expectation for the reporting. Indeed, in some threads of interview requests, when the journalist or researcher addressed Turkers with alternative labels such as \say{MTurks} and \say{Amazonians}, Turkers reacted with a stronger tone, trying to establish their identity as \say{workers} and sometimes \say{Turkers}. \begin{quote} \hspace{1em}\emph{“First things first: we're not MTurks. MTurk is the platform. We are workers.”} [Turker13] \hspace{1em}\emph{“"Mturks": the FIRST CLUE you've never even so much as looked at the platform, because if you did, you'd know we're WORKERS (meaning, people).”} [Turker14] \end{quote} In another discussion thread, where Turkers commented on a recent media article about them, the concerns expressed showed that workers were apprehensive even before reading the article, or became worried when the headline or the first few statements of the article solely emphasized low pay. However, after reading the full article and finding that the news story did not just present a one-sided account of crowd work, workers often expressed relief. \paragraph{Fallacies of representation.} Another Turker had more specific comments in the aforementioned discussion about the TV coverage of Turking, pointing to two potential fallacies in news representations of Turkers: first, who from the Turker population was being interviewed; and second, whether the news producers had formulated a narrative beforehand that could lead to selective reporting. \begin{quote} \hspace{1em}\emph{“The problem with the articles she cited is that they're either interviewing people who are ridiculously new to the platform and just view it as "\$5/day beer money" or whatever, or they're stacked by people who have an agenda and seek to push a specific narrative...”} [Turker15] \end{quote} In the comments on different media articles, we found similar voices alleging that there is a pre-set agenda in news reporting about Turkers, particularly about their status of being \say{underpaid and overworked and exploited}. Some Turkers in the forum expressed fatigue regarding this recurrent narrative. \begin{quote} \hspace{1em}\emph{“...the very studies that push the "underpaid and overworked and exploited" were all written with an agenda in mind. Instead of researching what would be a fair pay for turkers, some requesters just up and leave or post their work on other platforms.”} [Turker16] \hspace{1em}\emph{“...Most MTurk communities have seen this article rehashed time and time again, and we all knew what this was going to say long before it came out.”} [Turker17] \end{quote} As we examined the original media coverage discussed in our identified forum threads, we also noticed common analogies repeatedly used to describe crowd work.
For example, in one discussion thread, Turkers reacted to a media article \cite{lim_why_2018} that refers to gig work as \say{digital slavery} and crowd work as \say{click farms}. Turkers expressed their unease: \begin{quote} \hspace{1em}\emph{“I find the "slavery" analogies offensive…”} [Turker18] \end{quote} Under another posted article that frames crowd work platforms as \say{virtual sweatshops}, one Turker replied sarcastically: \begin{quote} \hspace{1em}\emph{“"digital underclass" is the best description I've heard of turking so far.”} [Turker19] \end{quote} \paragraph{Anxiety about publicity.} When Turkers exhibit resistance to interview requests and passivity toward media coverage, they demonstrate unease with possible or actual publicity. For example, when expressing dissatisfaction with some media reporting, a Turker voiced clear opposition to such publicity: \begin{quote} \hspace{1em}\emph{“Below, you will find three examples of why this will be a hard pass from almost everyone who has turked for a while. We are fine without extra "help" or "publicity”} [Turker20] \end{quote} Turker20 then commented on three articles that had given unfavorable publicity to Turkers. Turker20's reasoning was that portraying Turkers as \say{exploited} by digital platforms could turn task Requesters away, because crowd work would be perceived as \say{unethical}. However, Turker20 also acknowledged that even positive publicity about Turking would just lead to more workers coming to the platform, competing with existing Turkers for limited task offerings. Similar voices abound: \begin{quote} \hspace{1em}\emph{“Please don't give mturk any more publicity. There is a limited amount of work available, and if people learn about mturk and decide to join, it's less money for us”} [Turker21] \hspace{1em}\emph{“Publicity really isn't appreciated at this juncture. We're just coming away from a rough NYT article and I think most turkers wish the 4th estate} [press and news media] \emph{would forget about us for a little while.”} [Turker22] \hspace{1em}\emph{“Talking to journalists is literally putting your income at risk for no reason.”} [Turker23] \end{quote} Essentially, Turkers would rather keep their Turking opportunities secure than gain visibility that could jeopardize their opportunities for earning income. Their sentiment expresses risk aversion: negative publicity could also drive some current Turkers away from the platform (leaving fewer competitors for those who remain), and positive publicity could bring more Requesters to the platform, yet the potential downsides occupy their focus. \subsection{Diverse voices} \paragraph{The omitted diversity.} In one case when a journalist tried to recruit workers to interview, the journalist even promised that \say{this isn't going to be another digital sweatshop story}. One worker still responded: \begin{quote} \hspace{1em}\emph{“The problem with these journalists is that they just go around and PM random users on this sub}[reddit] \emph{and just get totally wildly different answers. Mturk is very subjective, some people are terrible at it and/or don[']t use scripts and might only make \$4 a day and meanwhile someone who really gets Mturk and/or uses scripts can make \$40 a day in the same amount of time as someone who doesn[']t use scripts.”} [Turker24] \end{quote} This quotation points to some of the reasons why Turkers are not properly represented in media coverage.
The experience of Turking varies wildly from person to person. Excess attention to a universal, powerless image of Turkers in media narratives hides those Turkers who are \say{motivated self-starters}, as one Turker in the forum put it. This image overshadows the fact that microwork also requires skills, expertise, and perseverance. While existing academic work has highlighted skills and strategies for Turking (e.g., \cite{savage_becoming_2020, kaplan_striving_2018}), here we see, as one Turker noted: \begin{quote} \hspace{1em}\emph{“But I will say this, the majority of high earners usually don't browse this sub or bother with this subreddit because they are too busy working on making \$100+ a day and just see the people here as fools (for the lack of a better word).”} [Turker25] \end{quote} It is thus possible that high-earning Turkers are less visible in this Subreddit, from which our analysis is drawn. In the same thread, another Turker noted that success in Turking requires dedication and perseverance. These qualities go toward learning the nitty-gritty of the platform, closely observing and acting on the dynamics of the task marketplace, and strategically making use of Turking tools. Prior work \cite{gupta_turk-life_2014,savage_becoming_2020, hanrahan2021expertise} has noted that there is a ladder of experience levels among Turkers, from \say{novice} to \say{expert}. It is likely that these aspects are given far less attention in the popular narratives about Turking \cite{chris-turkerview}. \paragraph{Diverse motivations for Turking.} Apart from varying experience levels, motivations for Turking also vary greatly. We observed that some Turkers did speak out about their own conditions and experiences. Consistent with the findings above, these were neither early responses to interview requests nor reactions to media articles. Rather, we saw that after a post develops into a longer discussion, some Turkers begin to open up more about their individual circumstances. In such accounts, across threads, Turkers shared their diverse motivations for Turking. Some described themselves as socially anxious, autistic, or lacking the competence to thrive in a regular labor market; Turking was characterized as providing them with an alternative mode of work. Some workers expressed appreciation for the flexibility of working outside an office or simply the pleasure of incremental achievements. Some Turkers were explicit about doing a \say{side gig}, making use of their idle time to earn \say{guilt-free money} to treat themselves. \begin{quote} \hspace{1em}\emph{“I don't accept HITs that pay that low. I make around \$10/hour and I use that money for my bubble tea habit, Christmas presents for family and generally being able to treat myself occasionally.”} [Turker26] \hspace{1em}\emph{“Sometimes when I drop my daughter off for her 3-4 hour volleyball practices, rather than go home and sit around I go to Starbucks and turk over a cup of coffee or two.”} [Turker27] \hspace{1em}\emph{“I’ve paid entirely for a high end camera with Turk money, and this has become my favorite hobby and returned priceless memories for my family.”} [Turker28] \hspace{1em}\emph{“My ability to do Turking adds \$400-\$600 a month without taking me out of my primary role, which means nearly the same amount of income as her previous position with much more family time.”} [Turker29] \end{quote} In these posts, Turking sounds rewarding and enjoyable.
In contrast, some workers reported relying on Turking for their primary income: \begin{quote} \hspace{1em}\emph{“I do it because I'm not healthy enough to handle a "real" job right now. Turking lets me work from home whenever I feel able and take as many breaks as I need.”} [Turker30] \hspace{1em}\emph{“I am a mom of three children and am trying to bring in extra money with MTurk. I have a degree but am unable to work outside of the home because two of my children have significant special needs.”} [Turker31] \hspace{1em}\emph{“I'm physically not able to go out to work right now, but I still need food and shelter. I'm not officially disabled (no health insurance = no doctor) so I'm not getting any money from the government. I'm doing as many online jobs as I can right now, but those aren't so plentiful. I turk to survive.”} [Turker32] \end{quote} Prior work \cite{antin_social_2012, ipeirotis2010demographics, kaufmann_more_2011, kaplan_striving_2018} has documented a wide range of motivations for Turking across nationality and geography. From the accounts we collected in the forum, Turkers' motivations and perceptions of Turking vary according to their life conditions. One is naturally less likely to view Turking as entertainment when it serves as one's primary source of income. \paragraph{Diverse career perspectives.} Turkers come to MTurk with different life conditions and attitudes. As we saw in the Subreddit, some workers do not consider Turking a \say{job} or a \say{career}, bringing correspondingly lower expectations to the work. In contrast, other dedicated Turkers in the forum contend otherwise: \begin{quote} \hspace{1em}\emph{“I treat my work on here like a real job and I guess many other}[s] \emph{do not.”} [Turker33] \hspace{1em}\emph{“If you look at it like a job...then you are getting "training" or learning what that "boss" wants. It isn't worth it sometimes if there is only one hit...but if it's something you can bank like you just mentioned, it's worth it.”} [Turker34] \hspace{1em}\emph{“I don[']t even try and argue with them anymore. More work for me and others who diligently read when they cry and complain.”} [Turker35] \end{quote} Turker33 believes that the view of Turking as \say{a real job} is not common among Turkers, while Turker34 adopts a serious attitude toward Turking as a job and recognizes the personal growth gained through it. One Turker from the same thread comments that \say{the learning curve is steep} for crowd work, while another affirms that making the platform work for oneself requires \say{time, experience, and a certain mindset}. Turker35 expresses a very practical attitude: it is not worthwhile to get upset or argue with task Requesters who reject one's work, even though outrage regarding unfair rejection often features prominently in popular press articles about Turking. As we understand from these online discussions, recognizing Turking as formal career experience seems to help breed resilience. These accounts suggest that Turking is seen as a formal career option by some workers, in contrast with the popular media portrayal of an exploited workforce. \section{Discussion} Based on our analysis of online discussion data, we have seen that in the online Turker discourse community, Turkers tend to be disinclined to accept interview requests. Specifically, Turkers are resistant to the idea of being interviewed and sharing their stories, as they fear that publicity will make their work environment more competitive.
They do not applaud media narratives that portray them as a powerless and exploited underclass, even if their working conditions are not ideal. In general, Turkers do not seem to regard media coverage with a more \say{activist} tone as a preferable pathway to improving their working status. They express a strong attitude of self-determination and self-reliance regarding their working decisions, whether anything needs to be rectified, and how to go about it if so. They emphasize a simple desire for the secure income available from Turking. Many researchers have discussed the invisibility of crowd workers (e.g., \cite{irani_turkopticon_2013, gray_ghost_2019}), and how this can lead to misrepresentation of the work involved and increase the precarity of the work \cite{suchman_making_1995, star_layers_1999}. Many academic efforts seek to probe invisible crowd work and gain a deeper understanding of it. While online Turker forums could serve as a channel for understanding these communities from outside an opaque platform, there exist additional layers of friction in eliciting Turkers' voices and accounts: Turkers do not simply tell their stories on request. If we wish to devise methods that better support Turkers and their work, we first need to understand and respect their experiences and wishes. Well-meaning but naive or poorly-executed interventions can cause harm and provoke anger from the very people whom such interventions seek to help. While various studies have identified problems with worker invisibility, simply giving Turkers more exposure can induce stress and resistance. Turkers largely do not seem to want attention that may disrupt their work and work environment, though they also express a desire for the platform and its income opportunities to be further secured or bolstered. One-sided media (or research) accounts of Turkers' stories seem particularly problematic, as they distort the public understanding of and narrative around crowd work; other researchers have noted this as well \cite{moss2020ethical}. Many Turkers in the online forum are aware of the simplistic caricatures of crowd work drawn in popular narratives. In contrast, their own accounts of Turking are often a diverse and mixed bag of upsides and downsides, not necessarily converging into one universal image. There is a clear gap between typical media portrayals of crowd work and workers' own perceptions of Turking. The use of terminology such as \say{digital slavery} is particularly problematic: it is misleading, demeaning, and racially charged. Turking itself requires special skills, and many HITs are also knowledge-intensive. Turkers' work and professionalism merit our respect. When it is difficult to access an opaque platform and elicit Turkers' voices, journalistic practices are particularly susceptible to the trap of \say{parachute journalism} \cite{wizda_parachute_1997}. This refers to the production of quick, one-dimensional news pieces that merely capture clich\'es and stereotypes, often with stories pre-defined before the reporter hits the ground in the locale to be reported on. Conditions such as insufficient local knowledge and tight deadlines provide fertile ground for \say{parachute journalism}. Rather than questioning professional ethics, we suggest that more attention be given to the nature of crowd work as a complex sociotechnical issue. Less visible online work requires updated practices to learn about its operation in depth.
Merely interviewing a few crowd workers is unlikely to capture the rich diversity of lives and lived experiences surrounding crowd work. Several times in worker discussions we saw the critique of whether a journalist's couple of weeks of experience on AMT is sufficient for them to understand Turking, let alone depict it accurately to their readers. When we write \say{Turkers}, it is tempting to assume this name envelops a homogeneous group, rather than recognizing it as an umbrella term encompassing a rich diversity of underlying humanity and experiences. Beyond well-known geographic diversity, there is diversity in a multitude of other respects, including living conditions, motivations, valuations of Turking, skill and experience levels, and career visions \cite{andy_baio_faces_2008, oppenlaender2020crowd}. Consequently, it is challenging to effectively support this diverse workforce via any one-size-fits-all intervention, policy, or regulation. Because our findings are drawn from online observations in Turker forums and their discussion data, our analysis is limited by that observed data. We miss the voices of Turkers who do not post in online forums at all, who do not post in the forum we selected, or whose posts fall in threads excluded from our sample. That said, our findings complement prior research by shedding further light upon \emph{why} understanding and representing crowd workers is so difficult. The size, scale, diversity, invisibility, and resistance to publicity of this population make it particularly difficult to access and risk bias-prone representations. These constitute clear challenges to recognize in pursuing a worker-centered agenda to support crowd work. It is also critical to strive to understand the constituency of Turkers we wish to aid in such work. For a worker-centered research agenda, what approaches would best enable us to understand the shifting and diverse population of Turkers? How should we respond to the invisibility of the workforce when they are resistant to publicity? How could we achieve a worker-centered approach to crowd work when it is inherently hard to realize with traditional approaches? The voices of Turkers we have heard lead us to these questions to help guide future research. \section*{Acknowledgments} We thank all those who provided feedback to us in preparing this manuscript. This work was supported in part by Good Systems\footnote{\url{https://goodsystems.utexas.edu}}, a UT Austin Grand Challenge to develop responsible AI technologies. Matthew Lease is also supported as an Amazon Scholar. Our opinions are entirely our own and do not reflect those of the sponsoring agencies. \bibliographystyle{unsrt}
\section{Introduction} The particle spectrum of the Standard Model (SM) is deemed complete following the discovery of a Higgs
boson~\cite{Chatrchyan:2012xdj,Aad:2012tfa} at the Large Hadron Collider (LHC). Additionally, the interaction strengths of the Higgs with the SM fermions and gauge bosons are in good agreement with the SM predictions. Despite such triumphs of the SM, longstanding issues on both theoretical and experimental fronts have long pointed to additional dynamics beyond the SM (BSM). Such issues include a non-zero neutrino mass, the existence of dark matter (DM), the observed imbalance between matter and antimatter in the universe, and the instability (or metastability) of the electroweak (EW) vacuum~\cite{EliasMiro:2011aa,Bezrukov:2012sa,Degrassi:2012ry,Buttazzo:2013uya} in the SM. Interestingly, extensions of the SM Higgs sector can serve as powerful prototypes of BSM physics that can potentially solve the aforesaid issues. Apart from these longstanding issues, some recent experimental observations have shed fresh light on what the nature of such additional dynamics could be. One example is the value of the $W$-boson mass recently reported by the CDF collaboration, which deviates from the SM prediction~\cite{Blum:2013xva,RBC:2018dos,Keshavarzi:2018mgv,Davier:2019can,Aoyama:2020ynm,Colangelo:2018mtw,Hoferichter:2019mqg,Melnikov:2003xd,Hoferichter:2018kwz,Blum:2019ugy,ParticleDataGroup:2020ssz} by 7.2$\sigma$: \begin{eqnarray} M^{\text{CDF}}_W &=& 80.4335~\text{GeV} \pm 6.4~\text{MeV}~(\text{stat.}) \pm 6.9~\text{MeV}~(\text{syst.}). \end{eqnarray} The origin of this deviation is suspected to be some New Physics (NP). The second observation is the excess in the anomalous magnetic moment of the muon reported by FNAL~\cite{Muong-2:2021ojo,Muong-2:2021vma}, concurring with the earlier BNL result~\cite{Muong-2:2006rrc}. The combined result is quoted as \begin{eqnarray} \Delta a_\mu = (2.51 \pm 0.59) \times 10^{-9}. \end{eqnarray} A Two-Higgs-doublet model (2HDM)~\cite{Branco:2011iw,Deshpande:1977rw} with a Type-X texture for the Yukawa interactions has long been known to address the muon $g-2$ excess. The scalar sector of a 2HDM comprises the CP-even neutral scalars $h,H$, the CP-odd neutral scalar $A$, and a singly charged scalar $H^+$, where $h$ denotes the SM-like Higgs with mass 125 GeV. The vacuum expectation values of the two doublets are $v_1$ and $v_2$, with tan$\beta = \frac{v_2}{v_1}$. Demanding invariance under a $\mathbb{Z}_2$ symmetry, with the aim of annulling flavour-changing neutral currents (FCNCs), leads to several variants of the 2HDM, a particular one of which is the Type-X. This variant features enhanced leptonic Yukawas of $H$ and $A$, and sizeable contributions to muon $g-2$ are thereby introduced via two-loop Barr-Zee (BZ) amplitudes. A resolution of the anomaly thus becomes possible for a light $A$ ($M_A \lesssim$ 100 GeV) and high tan$\beta$ ($\gtrsim 20$)~\cite{Broggio:2014mna,Cao:2009as,Wang:2014sda,Ilisie:2015tra,Abe:2015oca,Chun:2016hzs,Cherchiglia:2016eui,Dey:2021pyn}. The 2HDM framework can also accommodate $M_W^{\text{CDF}}$ \cite{Lee:2022gyf,Song:2022xts,Bahl:2022xzi,Babu:2022pdn,Ahn:2022xax,Han:2022juu,Arcadi:2022dmt,Ghorbani:2022vtv,Benbrik:2022dja,Botella:2022rte,Kim:2022xuo,Kim:2022hvh,Appelquist:2022qgl,Atkinson:2022qnl,Hessenberger:2022tcx,Kim:2022nmm,Arco:2022jrt,Kang:2022mdy,Jung:2022prq}. However, stringent constraints coming from lepton flavour universality in $\tau$ decays restrict large tan$\beta$.
Also, recent LHC searches in the $h \to AA \to 4\tau, 2\tau 2\mu$~\cite{CMS:2018qvj} channels rule out a large $h \to A A$ branching ratio. Such experimental results restrict to a great extent the parameter space of the Type-X that favours an explanation of muon $g-2$. A possible way to enlarge the viable parameter space is to introduce additional scalar degrees of freedom, so that additional BZ amplitudes are induced. An interesting extension of the SM involves a scalar multiplet transforming as $(\mathbf{8},\mathbf{2},1/2)$~\cite{Manohar:2006ga} under the SM gauge group. Such a scenario is motivated by minimal flavour violation (MFV), which assumes that all breaking of the underlying approximate flavour symmetry of the SM is proportional to the up- or down-quark Yukawa matrices. It has been shown in \cite{Manohar:2006ga} that the only scalar representations under the SM gauge group complying with MFV are $(\mathbf{1},\mathbf{2},1/2)$ and $(\mathbf{8},\mathbf{2},1/2)$. The colored scalars emerging from this multiplet are the CP-even $S_R$, the CP-odd $S_I$ and the singly charged $S^+$. In addition, a color-octet can also stem from Grand Unification~\cite{Popov:2005wz,Dorsner:2007fy,FileviezPerez:2008ib,Perez:2008ry}, topcolor models~\cite{Hill:1991at} and extra-dimensional scenarios~\cite{Dobrescu:2007xf,Dobrescu:2007yp}. Important phenomenological consequences of such a construct were studied in \cite{Carpenter:2011yj,Enkhbat:2011qz,Arnold:2011ra,Kribs:2012kz,Cao:2013wqa,Ding:2016ldt,Cao:2015twy,Gerbush:2007fe}. In fact, a scenario augmenting a 2HDM with a color-octet isodoublet has also been discussed in \cite{Cheng:2016tlc,Cheng:2017tbn}, where the Type-I and Type-II variants were employed. Important exclusion limits on such a framework were deduced in \cite{Miralles:2019uzg}, and the radiatively generated $H^+ W^- Z(\gamma)$ vertex was studied in \cite{Chakrabarty:2020msl}. In this work, we extend the Type-X 2HDM by a color-octet isodoublet. Taking into account the various constraints on this setup, we first identify the parameter region that accounts for $M^{\text{CDF}}_W$. We subsequently demonstrate how the parameter space accommodating $\Delta a_\mu$ expands \emph{w.r.t.} the pure Type-X on account of the additional BZ amplitudes stemming from the colored scalars. The given framework is thus shown to address the two anomalies simultaneously. We also propose the collider signal $p p \to S_R \to S_I A,~S_I \to b \overline{b},~A \to \tau^+ \tau^-$ for a hadron collider. Such a final state carries information about both the colorless and the colored scalars involved in the cascade. In addition to conventional cut-based methods, we also employ more modern multivariate techniques for the analysis. The study is organised as follows. We introduce the Type-X 2HDM plus color-octet framework in section \ref{model}. In section \ref{constraints}, we list the important constraints on this model from theory and experiment. The resolution of the $W$-mass and muon $g-2$ anomalies is detailed in section \ref{anomalies}. A detailed analysis of the proposed LHC signature is presented in section \ref{collider}, employing both cut-based as well as multivariate techniques. Finally, the study is concluded in section \ref{conclusions}. Important formulae are given in the Appendix. \section{The Type-X + color octet framework}\label{model} The scalar sector of the framework consists of two color-singlet $SU(2)_L$ scalar doublets $\Phi_{1,2}$ and one color-octet $SU(2)_L$ scalar $S$.
The multiplets are parametrised as: \begin{eqnarray} \Phi_i = \begin{pmatrix} \phi_i^+ \\ \frac{1}{\sqrt{2}} (v_i + h_i + i z_i) \end{pmatrix} , (i = 1,2) ,~ S = \begin{pmatrix} S^+ \\ \frac{1}{\sqrt{2}} (S_R + i S_I) \end{pmatrix}. \end{eqnarray} The electroweak gauge group $SU(2)_L \times U(1)_Y$ is spontaneously broken to $U(1)_Q$ when $\Phi_i$ receives a vacuum expectation value (VEV) $v_i$, with $v^2 = v_1^2 + v_2^2 = (246 ~{\rm GeV})^2$. That the multiplet $S$ receives no VEV averts a spontaneous breakdown of $SU(3)_c$. The most generic scalar potential consistent with the gauge symmetry consists of a part containing the interactions among $\Phi_{1,2}$ only ($V_a(\Phi_{1},\Phi_{2})$), a part containing only $S$ ($V_b(S)$) and a part containing the interactions among all of $\Phi_{1},\Phi_{2},S$ ($V_c(\Phi_{1},\Phi_{2},S)$). The scalar potential therefore reads~\cite{Cheng:2016tlc} \begin{eqnarray} V (\Phi_{1},\Phi_{2},S) &=& V_a (\Phi_{1},\Phi_{2}) + V_b(S) + V_c(\Phi_{1},\Phi_{2},S), \end{eqnarray} where \begin{eqnarray} V_a (\Phi_{1},\Phi_{2})&=& m_{11}^2 \Phi_1^\dag \Phi_1 + m_{22}^2 \Phi_2^\dag \Phi_2 - m_{12}^2 \left( \Phi_1^\dag \Phi_2 + \Phi_2^\dag \Phi_1 \right) \nonumber \\ && + \frac{\lambda_1}{2} \left( \Phi_1^\dag \Phi_1 \right)^2 + \frac{\lambda_2}{2} \left( \Phi_2^\dag \Phi_2 \right)^2 + \lambda_3 \left( \Phi_1^\dag \Phi_1 \right) \left( \Phi_2^\dag \Phi_2 \right) + \lambda_4 \left( \Phi_1^\dag \Phi_2 \right) \left( \Phi_2^\dag \Phi_1 \right) \nonumber \\ && + \left[ \frac{\lambda_5}{2} \left( \Phi_1^\dag \Phi_2 \right)^2 + \lambda_6 \left( \Phi_1^\dag \Phi_1 \right) \left( \Phi_1^\dag \Phi_2 \right) + \lambda_7 \left( \Phi_2^\dag \Phi_2 \right) \left( \Phi_1^\dag \Phi_2 \right)+ {\rm H.c.}\right], \label{pot-1} \end{eqnarray} \begin{eqnarray} V_b(S) &=& 2m_S^2 {\rm Tr}S^{\dag i}S_i + \mu_1 {\rm Tr}S^{\dag i}S_i S^{\dag j}S_j + \mu_2 {\rm Tr} S^{\dag i}S_j S^{\dag j}S_i + \mu_3 {\rm Tr} S^{\dag i}S_i {\rm Tr}S^{\dag j} S_j\nonumber\\ & +& \mu_4 {\rm Tr}S^{\dag i}S_j {\rm Tr}S^{\dag j}S_i + \mu_5 {\rm Tr}S_i S_j{\rm Tr} S^{\dag i}S^{\dag j} + \mu_6 {\rm Tr}S_i S_j S^{\dag j}S^{\dag i} \,, \label{pot-2} \end{eqnarray} \begin{eqnarray} V_c(\Phi_{1},\Phi_{2},S) &=& \nu_1 \Phi_1^{\dag i}\Phi_{1i}{\rm Tr}S^{\dag j}S_j + \nu_2 \Phi_1^{\dag i}\Phi_{1j} {\rm Tr}S^{\dag j}S_i\nonumber\\ & +& \left( \nu_3 \Phi_1^{\dag i}\Phi_1^{\dag j}{\rm Tr}S_i S_j + \nu_4 \Phi_1^{\dag i}{\rm Tr} S^{\dag j}S_j S_i + \nu_5 \Phi_1^{\dag i}{\rm Tr}S^{\dag j}S_i S_j + {\rm h.c.} \right) \nonumber \\ &+& \omega_1 \Phi_2^{\dag i}\Phi_{2i}{\rm Tr}S^{\dag j}S_j + \omega_2 \Phi_2^{\dag i}\Phi_{2j} {\rm Tr}S^{\dag j}S_i\nonumber\\ & +& \left( \omega_3 \Phi_2^{\dag i}\Phi_2^{\dag j}{\rm Tr}S_i S_j + \omega_4 \Phi_2^{\dag i}{\rm Tr} S^{\dag j}S_j S_i + \omega_5 \Phi_2^{\dag i}{\rm Tr}S^{\dag j}S_i S_j + {\rm h.c.} \right) \nonumber\\ &+& \left( \kappa_1 \Phi_1^{\dag i}\Phi_{2i}{\rm Tr}S^{\dag j}S_j + \kappa_2 \Phi_1^{\dag i}\Phi_{2j}{\rm Tr}S^{\dag j}S_i + \kappa_3 \Phi_1^{\dag i}\Phi_2^{\dag j}{\rm Tr}S_j S_i + {\rm h.c.} \right). \label{pot-3} \end{eqnarray} Here, $i,j$ denote fundamental $SU(2)$ indices. One can write $S_i = S_i^B T^B$, with $T^B$ the $SU(3)$ generators and $B$ the $SU(3)$ adjoint index; the traces in Eq.(\ref{pot-2}) and Eq.(\ref{pot-3}) are taken over the color indices. We mention here that we do not impose any ad hoc discrete symmetry to restrict the scalar potential; rather, we are guided purely by MFV \cite{Manohar:2006ga}.
One clearly identifies $V_a(\Phi_{1},\Phi_{2})$ with the generic scalar potential of a two-Higgs-doublet model (2HDM). An important 2HDM parameter is $\tan \beta = \frac{v_2}{v_1}$. We take the VEVs and all model parameters to be real in order to avoid $\text{CP}$-violation. The scalar spectrum expectedly consists of both color-singlet and color-octet particles. The color-singlet scalar mass spectrum, comprising the $\text{CP}$-even $h,H$, the $\text{CP}$-odd $A$ and a charged Higgs $H^+$, coincides with that of a 2HDM. Of these, $h$ is identified with the discovered scalar with mass 125 GeV. The expressions for the physical masses of the colorless sector in terms of the couplings and the mixing angles $\beta$ and $\alpha$\footnote{$\alpha$ is the mixing angle in the $\text{CP}$-even sector.} can be found in \cite{Branco:2011iw}. On the other hand, the masses of the neutral ($S_R,S_I$) and charged ($S^+$) mass eigenstates of the color-octet can be expressed in terms of the quartic couplings $\omega_i, \kappa_i, \nu_i$ and the mixing angle $\beta$ as \cite{Cheng:2016tlc}: \begin{subequations} \begin{eqnarray} M_{S_R}^2 &=& m_S^2 + \frac{1}{4} v^2 (\cos ^2 \beta (\nu_1 + \nu_2 + 2 \nu_3)+\sin 2 \beta (\kappa_1 + \kappa_2 + \kappa_3) \nonumber \\ && + \sin ^2 \beta (\omega_1 + \omega_2 + 2 \omega_3)) \,, \\ M_{S_I}^2 &=& m_S^2 + \frac{1}{4} v^2 (\cos ^2 \beta (\nu_1 + \nu_2 - 2 \nu_3)+\sin 2 \beta (\kappa_1 + \kappa_2 - \kappa_3) \nonumber \\ && + \sin ^2 \beta (\omega_1 + \omega_2 - 2 \omega_3)) \,, \\ M_{S^+}^2 &=& m_S^2 + \frac{1}{4} v^2 (\nu_1 \cos ^2 \beta + \kappa_1 \sin 2 \beta + \omega_1 \sin ^2 \beta ). \end{eqnarray} \end{subequations} The Yukawa interactions in this framework are discussed next. For the interactions involving $\Phi_1$ and $\Phi_2$, we adopt the Type-X 2HDM Lagrangian, in which the quarks get their masses from $\Phi_2$ and the leptons from $\Phi_1$. That is, \begin{eqnarray} -\mathcal{L}^{\text{2HDM}}_Y &=& \Big[ y_u \overline{Q_L} \tilde{\Phi}_2 u_R + y_d \overline{Q_L} \Phi_2 d_R + y_\ell \overline{L_L} \Phi_1 \ell_R \Big] + \text{h.c.}, \end{eqnarray} with $\tilde{\Phi}_2 = i \sigma_2 \Phi_2^*$. The lepton Yukawa interactions in terms of the physical scalars then read \begin{eqnarray} \mathcal{L}^\text{2HDM}_Y &=& \sum_{\ell=e,\mu,\tau} \frac{m_\ell}{v} \bigg(\xi_\ell^h h \overline{\ell} \ell + \xi_\ell^H H \overline{\ell} \ell - i \xi_\ell^A A \overline{\ell} \gamma_5 \ell + \Big[ \sqrt{2} \xi^A_\ell H^+ \overline{\nu_\ell} P_R \ell + \text{h.c.} \Big] \bigg). \end{eqnarray} The various $\xi_\ell$ factors are tabulated in the Appendix. The Yukawa interactions of the colored scalars can be expressed as \cite{Manohar:2006ga} \begin{eqnarray} -\mathcal{L}^{\text{col. oct.}}_Y &=& \sum_{p,q=1,2,3} \Big[Y^{pq}_u~\overline{Q_{Lp}} \tilde{S} u_{Rq} + Y^{pq}_d~\overline{Q_{Lp}} S d_{Rq} + \text{h.c.} \Big]. \end{eqnarray} In compliance with MFV, we take $Y_{u}^{pq} = \eta_U \frac{\sqrt{2}m_{u}}{v} \delta^{pq}$ and $Y_{d}^{pq} = \eta_D \frac{\sqrt{2}m_{d}}{v} \delta^{pq}$; we refer to \cite{Manohar:2006ga} for further details. The scaling constants $\eta_U$ and $\eta_D$ are complex in general; however, they are taken to be real in this study for simplicity.
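For orientation, the short Python sketch below evaluates the colored-scalar mass relations above numerically; it is our own illustration, and all coupling values chosen in it are hypothetical.
\begin{verbatim}
# Illustrative evaluation of the color-octet mass relations above;
# all coupling values below are hypothetical.
import math

V = 246.0  # GeV

def octet_masses(mS, beta, nu, om, ka):
    """nu=(nu1,nu2,nu3), om=(w1,w2,w3), ka=(k1,k2,k3)."""
    cb2, sb2, s2b = math.cos(beta)**2, math.sin(beta)**2, math.sin(2*beta)
    n1, n2, n3 = nu
    w1, w2, w3 = om
    k1, k2, k3 = ka
    pref = 0.25 * V**2
    m2_SR = mS**2 + pref*(cb2*(n1+n2+2*n3) + s2b*(k1+k2+k3) + sb2*(w1+w2+2*w3))
    m2_SI = mS**2 + pref*(cb2*(n1+n2-2*n3) + s2b*(k1+k2-k3) + sb2*(w1+w2-2*w3))
    m2_Sp = mS**2 + pref*(n1*cb2 + k1*s2b + w1*sb2)
    return tuple(math.sqrt(m2) for m2 in (m2_SR, m2_SI, m2_Sp))

# tan(beta) = 50 and O(0.1) couplings, purely for illustration:
print(octet_masses(800.0, math.atan(50.0),
                   (0.5, 0.5, 0.2), (0.5, 0.5, 0.2), (0.1, 0.1, 0.1)))
\end{verbatim}
At large tan$\beta$, $\sin^2\beta \to 1$ while $\cos^2\beta$ and $\sin 2\beta$ are suppressed, so the $\omega_i$ terms dominate the splittings among $S_R$, $S_I$ and $S^+$.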
\section{Constraints applied}\label{constraints} The 2HDM plus color-octet setup is subject to various restrictions from theory and experiment. We discuss them below. \subsection{Theoretical constraints} Perturbativity demands that the magnitudes of the scalar quartic couplings be $\leq 4\pi$. Next, tree-level unitarity demands that the matrices constructed out of the tree-level $2 \to 2$ scattering amplitudes involving the various scalar states of the model have eigenvalues of magnitude $\leq 8\pi$. The following unitarity conditions can be derived for the present framework~\cite{Cheng:2016tlc}: \begin{subequations} \begin{eqnarray} && \left[\frac{3}{2} (\lambda_1 + \lambda_2) \pm \sqrt{\frac{9}{4} (\lambda_1 - \lambda_2)^2 + (2 \lambda_3 + \lambda_4)^2}\right] \leq 8 \pi, \label{uni_a} \\ && \left[\frac{1}{2} (\lambda_1 + \lambda_2) \pm \sqrt{\frac{1}{4} (\lambda_1 - \lambda_2)^2 + \lambda_4^2} \right] \leq 8 \pi, \label{uni_b} \\ && \left[\frac{1}{2} (\lambda_1 + \lambda_2) \pm \sqrt{\frac{1}{4} (\lambda_1 - \lambda_2)^2 + \lambda_5^2} \right] \leq 8 \pi, \label{uni_c} \\ && (\lambda_3 + 2 \lambda_4 - 3 \lambda_5)\leq 8 \pi, \label{uni_d} \\ && (\lambda_3 - \lambda_5)\leq 8 \pi, \label{uni_e} \\ && (\lambda_3 + \lambda_4 )\leq 8 \pi, \label{uni_f} \\ && (\lambda_3 + 2 \lambda_4 + 3 \lambda_5)\leq 8 \pi, \label{uni_g} \\ && (\lambda_3 + \lambda_5)\leq 8 \pi, \label{uni_h} \\ &&|\nu_1| \leq 2 \sqrt{2} \pi , ~ |\nu_2| \leq 4 \sqrt{2} \pi, ~|\nu_3| \leq 2 \sqrt{2} \pi, \\ &&|\omega_1| \leq 2 \sqrt{2} \pi , ~ |\omega_2| \leq 4 \sqrt{2} \pi, ~|\omega_3| \leq 2 \sqrt{2} \pi, \\ &&|\kappa_1| \leq 2 \pi , ~ |\kappa_2| \leq 4 \pi, ~|\kappa_3| \leq 4 \pi, \\ &&|17 \mu_3 + 13 \mu_4 + 13 \mu_6| \leq 16 \pi, \label{uni_w} \\ &&|2 \mu_3 + 10 \mu_4 + 7 \mu_6| \leq 32 \pi, \label{uni_x} \\ &&|\nu_4 + \nu_5| \lesssim \frac{32 \pi}{\sqrt{15}}, \label{uni_y} \\ &&|\omega_4 + \omega_5| \lesssim \frac{32 \pi}{\sqrt{15}}. \label{uni_z}\label{unitarity_cond} \end{eqnarray} \end{subequations} Thus, unitarity restricts the magnitudes of the quartic couplings of the model. Eqs.(\ref{uni_a})-(\ref{uni_h}) correspond to the unitarity limits of a pure two-Higgs-doublet scenario \cite{Ginzburg:2005dt,Kanemura:1993hm,Akeroyd:2000wc,Horejsi:2005da,Grinstein:2015rtl, Cacchio:2016qyh,Gorczyca:2011he}. We refer to \cite{He:2013tla,Cheng:2016tlc} for more details. Finally, the conditions ensuring a bounded-from-below scalar potential along different directions in field space are~\cite{Cheng:2018mkc}: \begin{subequations} \begin{eqnarray} &&\mu = \mu_1 + \mu_2 + \mu_6 + 2(\mu_3 + \mu_4 + \mu_5) > 0, \label{vsca}\\ &&\mu_1 + \mu_2 + \mu_3 + \mu_4 > 0, \label{vscb} \\ && 14(\mu_1 + \mu_2) + 5\mu_6 + 24(\mu_3 + \mu_4) - 3|2(\mu_1 + \mu_2) - \mu_6| > 0, \label{vscc}\\ && 5(\mu_1 + \mu_2 + \mu_6) + 6(2\mu_3 + \mu_4 + \mu_5) - |\mu_1 + \mu_2 + \mu_6| > 0, \label{vscd}\\ && \lambda_1 \geq 0 ,~ \lambda_2 \geq 0 ,~ \lambda_3 \geq - \sqrt{\lambda_1 \lambda_2}, \label{vsc1} \\ && \lambda_3 + \lambda_4 - |\lambda_5| \geq - \sqrt{\lambda_1 \lambda_2}, \label{vsc2} \\ && \nu_1 \geq -2 \sqrt{\lambda_1 \mu}, \label{vsc3}\\ &&\omega_1 \geq -2 \sqrt{\lambda_2 \mu}, \label{vsc4} \\ &&\nu_1 + \nu_2 - 2 |\nu_3| \geq -2 \sqrt{\lambda_1 \mu}, \label{vsc6}\\ &&\omega_1 + \omega_2 - 2 |\omega_3| \geq -2 \sqrt{\lambda_2 \mu}, \label{vsc7} \\ &&\lambda_1 + \frac{\mu}{4} + \nu_1 + \nu_2 + 2\nu_3 - \frac{1}{\sqrt{3}}|\nu_4 + \nu_5| > 0, \label{vsc8} \\ &&\lambda_2 + \frac{\mu}{4} + \omega_1 + \omega_2 + 2\omega_3 - \frac{1}{\sqrt{3}}|\omega_4 + \omega_5| > 0. \label{vsc9}\label{stability_cond} \end{eqnarray} \end{subequations} Among the above, Eqs.(\ref{vsc1}) and (\ref{vsc2}) correspond to the pure 2HDM. The rest of the conditions ensure positivity of the scalar potential in a hyperspace spanned by both colorless and colored fields.
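Since these inequalities must be imposed point-by-point in parameter scans, a compact implementation is convenient. The following Python sketch transcribes the boundedness conditions above into a single boolean test; the argument ordering is our own convention, and the sample couplings are illustrative only.
\begin{verbatim}
# Boolean check of the boundedness-from-below conditions above.
import math

def is_bounded(lam, mu, nu, om):
    """lam=(l1..l5), mu=(mu1..mu6), nu=(nu1..nu5), om=(w1..w5)."""
    l1, l2, l3, l4, l5 = lam
    m1, m2, m3, m4, m5, m6 = mu
    n1, n2, n3, n4, n5 = nu
    w1, w2, w3, w4, w5 = om
    MU = m1 + m2 + m6 + 2 * (m3 + m4 + m5)
    if MU <= 0 or l1 < 0 or l2 < 0:
        return False
    root12, r1, r2 = math.sqrt(l1 * l2), math.sqrt(l1 * MU), math.sqrt(l2 * MU)
    return all([
        m1 + m2 + m3 + m4 > 0,
        14*(m1+m2) + 5*m6 + 24*(m3+m4) - 3*abs(2*(m1+m2) - m6) > 0,
        5*(m1+m2+m6) + 6*(2*m3 + m4 + m5) - abs(m1+m2+m6) > 0,
        l3 >= -root12,
        l3 + l4 - abs(l5) >= -root12,
        n1 >= -2*r1, w1 >= -2*r2,
        n1 + n2 - 2*abs(n3) >= -2*r1,
        w1 + w2 - 2*abs(w3) >= -2*r2,
        l1 + MU/4 + n1 + n2 + 2*n3 - abs(n4 + n5)/math.sqrt(3) > 0,
        l2 + MU/4 + w1 + w2 + 2*w3 - abs(w4 + w5)/math.sqrt(3) > 0,
    ])

# Example with O(0.1) couplings (illustrative only):
print(is_bounded((0.26, 0.26, 0.1, 0.1, 0.1),
                 (0.1,)*6, (0.1,)*5, (0.1,)*5))
\end{verbatim}
An analogous transcription of the unitarity conditions Eqs.(\ref{uni_a})-(\ref{uni_z}) completes the theoretical filter.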
\subsection{Higgs signal strengths} The model also faces restrictions from signal-strength measurements in the different decay modes of the 125 GeV Higgs. Denoting by $\mu_i$ the signal strength for the channel $p p \to h, ~h \to i$, we define \begin{eqnarray} \mu_i = \frac{\sigma^{\rm{BSM}}(pp \rightarrow h)~ {\rm BR^{BSM}}(h \rightarrow i)}{\sigma^{\rm{SM}}(pp \rightarrow h)~ {\rm BR^{SM}}(h \rightarrow i)}. \label{sig-str-1} \end{eqnarray} We take $g g \to h$ as the production process at the partonic level. Its cross section can be expressed as \begin{eqnarray} \sigma(gg \rightarrow h) = \frac{\pi^2}{8 M_h} \Gamma (h \rightarrow gg)~ \delta(\hat{s} - M_h^2), \label{xsec:gg-h} \end{eqnarray} $\sqrt{\hat{s}}$ being the partonic centre-of-mass energy. Further, expressing the branching fractions in terms of the decay widths, one rewrites Eq.(\ref{sig-str-1}) as \begin{eqnarray} \mu_i &=& \frac{\Gamma^{\rm{BSM}}_{h \rightarrow gg}}{\Gamma^{\rm{SM}}_{h \rightarrow gg}} ~\frac{\Gamma_i^{\rm{BSM}}}{\Gamma_{\rm{tot}}^{\rm{BSM}}} ~\frac{\Gamma_{\rm{tot}}^{\rm{SM}}}{\Gamma_i^{\rm{SM}}}. \label{sig-str-3} \end{eqnarray} The \emph{alignment limit}, {\em i.e.}, $\alpha = \beta - \frac{\pi}{2}$, is strictly imposed throughout the analysis; in this limit the $h \to WW,ZZ,\tau^+\tau^-$ decay widths at leading order are identical to the corresponding SM values. Therefore, the signal strengths in these channels deviate from the SM predictions only on account of the additional contribution to the $g g \to h$ amplitude coming from the colored scalars. This is not the case for the $h \to g g, \gamma\gamma$ signal strengths, where the charged and colored scalars contribute additionally. We refer to \cite{Cheng:2016tlc,Cheng:2017tbn,Chakrabarty:2020msl} for the relevant decay-width formulae in this framework. The latest data on the Higgs signal strengths for $g g \to h$ are summarised in Table \ref{ss}. We combine the ATLAS and CMS data using $\frac{1}{\sigma^2} = \frac{1}{\sigma^2_{\text{ATLAS}}} + \frac{1}{\sigma^2_{\text{CMS}}}$ and $\frac{\mu}{\sigma^2} = \frac{\mu_{\text{ATLAS}}}{\sigma^2_{\text{ATLAS}}} + \frac{\mu_{\text{CMS}}}{\sigma^2_{\text{CMS}}}$. The resulting data are used at 2$\sigma$ in our analysis. \begin{table}[htpb!] \centering \begin{tabular}{|c c c|} \hline $\mu_i$ & ATLAS & CMS \\ \hline $ZZ$ & $1.20^{+0.16}_{-0.15}$\cite{ATLAS:2018bsg} & $0.94^{+0.07}_{-0.07}(\text{stat.})^{+0.08}_{-0.07}(\text{syst.})$\cite{CMS:2019chr}\\ \hline $W^+ W^-$ & $2.5^{+0.9}_{-0.8}$ \cite{Aad:2019lpq} & $1.28^{+0.18}_{-0.17}$\cite{Sirunyan:2018egh}\\ \hline $\gamma\gamma$ & $0.99 \pm 0.14$\cite{Aaboud:2018xdt} & $1.18^{+0.17}_{-0.14}$\cite{Sirunyan:2018ouh}\\ \hline $\tau\overline{\tau}$ & $1.09^{+0.18}_{-0.17}(\text{stat.})^{+0.27}_{-0.22}(\text{syst.})^{+0.16}_{-0.11} (\text{theo.~syst.})$\cite{ATLAS:2018lur} & $1.09^{+0.27}_{-0.26}$\cite{Sirunyan:2017khh}\\ \hline $b\overline{b}$ & $2.5^{+1.4}_{-1.3}$\cite{Aaboud:2018gay} & $1.3^{+1.2}_{-1.1}$\cite{CMS:2016mmc}\\ \hline \end{tabular} \caption{Latest measurements of the $h$ signal strengths.} \label{ss} \end{table}
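As an illustration of the combination just described, the sketch below applies inverse-variance weighting to the $ZZ$ entries of Table \ref{ss}. Symmetrising the quoted asymmetric errors, and adding the CMS statistical and systematic components in quadrature, are our simplifying assumptions here.
\begin{verbatim}
# Sketch of the inverse-variance combination described in the text.
import math

def combine(mu_a, sig_a, mu_c, sig_c):
    w_a, w_c = 1.0 / sig_a**2, 1.0 / sig_c**2
    mu = (mu_a * w_a + mu_c * w_c) / (w_a + w_c)
    return mu, 1.0 / math.sqrt(w_a + w_c)

# ZZ channel: ATLAS 1.20 +/- ~0.155 (symmetrised); CMS 0.94 +/- ~0.10
# (stat. and syst. added in quadrature).
mu, sig = combine(1.20, 0.155, 0.94, 0.10)
print(f"mu_ZZ (combined) = {mu:.2f} +/- {sig:.2f}")
\end{verbatim}
The same weighting applied to the other rows yields the combined values used at 2$\sigma$.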
\subsection{Direct search} Searches for an $H^+$ in the $e^+ e^- \longrightarrow H^+ H^-$ channel at LEP~\cite{Abbiendi:2013hk} have led to the bound $M_{H^+} > 100$ GeV for all Types of the 2HDM. As for the Type-X, the various exclusion limits are rather weak (compared to Type-II, for instance) owing to the suppressed Yukawa couplings of $H,A,H^+$ to the quarks~\cite{Chowdhury:2017aav}. We take $M_H$ = 150 GeV and $M_{H^+} \geq M_H$ to comply with the exclusion constraints. Looking ahead, we shall also adhere to $M_A > M_h/2$ to evade the limit on BR($h \to A A$) that can be derived from BR($h_{125} \to AA \to 4\tau, 2\tau 2\mu$)~\cite{CMS:2018qvj}. We now discuss exclusion constraints on the color-octet mass scale. Color-octet resonances have been searched for at the LHC in the $pp \to S \to j j$ \cite{Aad:2014aqa,ATLAS:2016lvi,Khachatryan:2016ecr,Khachatryan:2015dcf} and $pp \to S \to t \overline{t}$ \cite{Aad:2015fna,CMS:2016zte,CMS:2016ehh} channels. Reference \cite{Miralles:2019uzg} recast the LHC searches for colored scalars in the Manohar-Wise scenario, taking the lightest colored scalar to be $S_R$. Since the colored scalars have Yukawa interactions with the quarks, the exclusion limits on the color-octet mass scale can depend on the strength of such couplings. Reference \cite{Miralles:2019uzg} reported that no clear constraints could be derived from the $p p \to S_R \to t \overline{t}$ channel. As for $p p \to S_R t \overline{t} \to t \overline{t} t \overline{t}$, a bound $M_{S_R} \gtrsim$ 1 TeV can be derived for $\eta_U \sim \mathcal{O}(1)$; this bound is therefore expected to relax upon lowering $\eta_U$. Another channel is $p p \to S^+ t \overline{b} \to t \overline{b} t \overline{b}$, which leads to a bound of 800 GeV for $\eta_D \neq 0$, irrespective of the value of $\eta_U$. These bounds should apply to $S_I$, the lightest colored scalar in our case. We take $\eta_U \ll \eta_D$ = 1 in our study, in which case maintaining $M_{S_I} \geq$ 800 GeV complies with the direct-search constraints. \subsection{Lepton flavour universality} Enhanced Yukawa couplings of the $\tau$-lepton can modify the $\tau \to \ell \nu \overline{\nu}$ decay rate by virtue of additional contributions stemming from the 2HDM scalars at both tree and loop level. This is particularly pronounced in the lepton-specific case at high $\tan\beta$; we refer to \cite{Chun:2016hzs}, where this has been studied extensively. Following \cite{Chun:2016hzs}, we have restricted $\tan\beta < 60$ throughout the analysis to comply with lepton flavour universality. \section{The CDF II and muon $g-2$ excesses}\label{anomalies} This section discusses how the measured values of the $W$-mass and the muon anomalous magnetic moment can be realised in the 2HDM + color-octet setup. The $W$-mass predicted by a new-physics framework can be expressed in terms of its contributions to the oblique parameters $\Delta S$, $\Delta T$ and $\Delta U$ as~\cite{Maksymyk:1993zm} \begin{eqnarray} M^2_W &=& M^2_{W,\text{SM}} \bigg[1 + \frac{\alpha_{em}}{c^2_W - s^2_W} \bigg( -\frac{\Delta S}{2} + c^2_W \Delta T + \frac{c^2_W - s^2_W}{4 s^2_W} \Delta U \bigg) \bigg], \end{eqnarray} where $M_{W,\text{SM}}$ denotes the SM prediction, while $c_W$ ($s_W$) and $\alpha_{em}$ respectively denote the cosine (sine) of the Weinberg angle and the fine-structure constant. We list below the contributions from the colorless and colored sectors to the $T$-parameter \cite{Peskin:1991sw,Grimus:2008nb} in the alignment limit.
\begin{subequations} \begin{eqnarray} \Delta T_{\text{2HDM}} &=& \frac{1}{16 \pi s^2_W M^2_W}\Big[F(M^2_{H^+},M^2_{H}) + F(M^2_{H^+},M^2_{A}) - F(M^2_{H},M^2_{A})\Big] \,, \label{T2HDM} \\ \Delta T_S &=& \frac{N_S}{16 \pi s^2_W M^2_W}\Big[F(M^2_{S^+},M^2_{S_R}) + F(M^2_{S^+},M^2_{S_I}) - F(M^2_{S_R},M^2_{S_I})\Big] \,, \label{TS} \end{eqnarray} \end{subequations} where \begin{eqnarray} F(x,y) &=& \frac{x+y}{2} - \frac{xy}{x-y}~{\rm ln} \bigg(\frac{x}{y}\bigg)~~~ {\rm for} ~~~x \neq y \,, \nonumber \\ &=& 0~~~ {\rm for} ~~~ x = y. \end{eqnarray} Similarly, the corresponding contributions to the $S$-parameter read \begin{subequations} \begin{eqnarray} \Delta S_{\text{2HDM}} &=& \frac{1}{2\pi} \Big[\frac{1}{6}\text{log}\Big(\frac{M^2_H}{M^2_{H^+}}\Big) - \frac{5}{108} \frac{M^2_H M^2_A}{(M^2_A - M^2_H)^2} \nonumber \\ && + \frac{1}{6}\frac{M^4_A(M^2_A - 3 M^2_H)}{(M^2_A - M^2_H)^3}\text{log}\Big(\frac{M^2_A}{M^2_{H}}\Big) \Big], \\ \Delta S_S &=& \frac{N_S}{2\pi} \Big[\frac{1}{6}\text{log}\Big(\frac{M^2_{S_R}}{M^2_{S^+}}\Big) - \frac{5}{108} \frac{M^2_{S_R} M^2_{S_I}}{(M^2_{S_I} - M^2_{S_R})^2} \nonumber \\ && + \frac{1}{6}\frac{M^4_{S_I}(M^2_{S_I} - 3 M^2_{S_R})}{(M^2_{S_I} - M^2_{S_R})^3}\text{log}\Big(\frac{M^2_{S_I}}{M^2_{S_R}}\Big) \Big]. \end{eqnarray} \end{subequations} Here, $N_S$ denotes the color multiplicity of the octet scalars. The total oblique parameters in the present setup are given by the sums of the colorless and colored components, i.e., $\Delta S = \Delta S_{\text{2HDM}} + \Delta S_S$ and $\Delta T = \Delta T_{\text{2HDM}} + \Delta T_S$. The $M_W$ value reported by CDF II can be accommodated by the following ranges~\cite{Asadi:2022xiy,Lu:2022bgw} of $\Delta S$ and $\Delta T$ for $\Delta U=0$: \begin{eqnarray} \Delta S = 0.15 \pm 0.08,~~\Delta T = 0.27 \pm 0.06,~~\rho_{ST} = 0.93. \label{oblique} \end{eqnarray} \begin{figure} \centering \includegraphics[height = 8 cm, width = 8 cm]{deltaM_deltaM.png} \caption{Parameter points in the $M_{S^+} - M_{S_I}$ versus $M_{H^+} - M_H$ plane compatible with the observed $M_W$ and the various constraints.} \label{f:mass_splitting} \end{figure} In the above, $\rho_{ST}$ denotes the correlation coefficient. The impact of the stipulated ranges for the oblique parameters is expected to be reflected in the scalar mass splittings. To explore this, we fix $M_H$ = 150 GeV and $M_{S_I}$ = 800 GeV and vary 0 $ < M_{H^+} - M_H < $ 300 GeV, $\frac{M_h}{2} < M_A <$ 200 GeV, 0 $ < M_{S^+} - M_{S_I} < $ 100 GeV and 0 $ < M_{S_R} - M_{S_I} < $ 100 GeV. We plot the parameter points predicting $\Delta S$ and $\Delta T$ in the aforesaid ranges in the $M_{H^+} - M_H$ vs $M_{S^+} - M_{S_I}$ plane in Fig.\ref{f:mass_splitting}. An inspection of the figure immediately shows that the point $(M_{H^+} - M_H,\,M_{S^+} - M_{S_I})=(0,0)$ is excluded by the CDF data. This is expected, since $M_H = M_{H^+}$ and $M_{S_I} = M_{S^+}$ respectively lead to $\Delta T_{\text{2HDM}}$ = 0 and $\Delta T_{S}$ = 0 for all $M_{A}$ and $M_{S_R}$, and a vanishing $\Delta T$ does not suffice to predict the observed $M_W$.
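To make the chain from scalar masses to $M_W$ concrete, the sketch below evaluates $\Delta T$, $\Delta S$ and the resulting $M_W$ for one illustrative (non-benchmark) mass point. It takes $N_S = 8$ for the color octet and assumes $\alpha_{em} = 1/128$, $s^2_W = 0.231$ and $M_{W,\text{SM}} = 80.357$ GeV as numerical inputs; none of these values is fixed by the text above.
\begin{verbatim}
# Oblique parameters and the W mass, following the formulae above.
# Assumed inputs: N_S = 8, alpha_em = 1/128, sW^2 = 0.231,
# M_W(SM) = 80.357 GeV.
import math

NS, ALPHA, SW2 = 8.0, 1.0 / 128.0, 0.231
CW2, MW_SM = 1.0 - SW2, 80.357  # GeV

def F(x, y):
    """F(x,y) of the text; x and y are squared masses."""
    if math.isclose(x, y):
        return 0.0
    return 0.5 * (x + y) - x * y / (x - y) * math.log(x / y)

def delta_T(mc2, mn2a, mn2b, pref):
    """T contribution of one (charged, neutral, neutral) scalar set."""
    return pref * (F(mc2, mn2a) + F(mc2, mn2b) - F(mn2a, mn2b)) \
        / (16.0 * math.pi * SW2 * MW_SM**2)

def delta_S(mR2, mI2, mc2, pref):
    """S contribution, transcribing the expressions quoted above."""
    r = mI2 - mR2  # assumed non-degenerate
    return pref / (2.0 * math.pi) * (math.log(mR2 / mc2) / 6.0
        - 5.0 / 108.0 * mR2 * mI2 / r**2
        + mI2**2 * (mI2 - 3.0 * mR2) / (6.0 * r**3) * math.log(mI2 / mR2))

sq = lambda m: m * m
# Illustrative point: MH=150, MA=100, MH+=250; MSR=880, MSI=800, MS+=820.
dT = delta_T(sq(250), sq(150), sq(100), 1.0) \
   + delta_T(sq(820), sq(880), sq(800), NS)
dS = delta_S(sq(150), sq(100), sq(250), 1.0) \
   + delta_S(sq(880), sq(800), sq(820), NS)
MW = MW_SM * math.sqrt(1.0 + ALPHA / (CW2 - SW2)
                       * (-0.5 * dS + CW2 * dT))  # Delta U = 0
print(f"Delta T = {dT:.3f}, Delta S = {dS:.3f}, M_W = {MW:.4f} GeV")
\end{verbatim}
Scanning such an evaluation over the mass splittings, and retaining points whose $(\Delta S,\Delta T)$ fall within Eq.(\ref{oblique}), is how allowed regions such as that of Fig.\ref{f:mass_splitting} can be mapped out.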
We now discuss muon $g-2$ in the given setup. Elaborate discussions of the Type-X 2HDM contributions to $\Delta a_\mu$ are skipped here for brevity; we focus on the contributions coming from the colored scalars. Since the color-octet does not couple to the leptons at tree level, it does not contribute to muon $g-2$ at one loop. The color-octet sector contributes to the muon anomalous magnetic moment through the two-loop BZ amplitudes shown in Fig.\ref{f:BZ_col_oct_diag}. The diagram in the left panel is a two-loop topology involving an effective $\phi \gamma \gamma$ ($\phi=h,H$) vertex that is generated at one loop via $S^\pm$ running in the loop. The corresponding BZ amplitude can be expressed as \begin{figure} \centering \includegraphics[height = 8 cm, width = 8 cm]{fig2b_twoloop.png}~~~ \includegraphics[height = 8 cm, width = 8 cm]{fig6b_twoloop.png}~~~ \caption{Two-loop BZ contributions to $\Delta a_\mu$ involving the color octet.} \label{f:BZ_col_oct_diag} \end{figure} \begin{eqnarray} {\Delta a_\mu}_{\{S^+,~\phi\gamma\gamma\}}^{\text{BZ}} &=& \sum_{\phi = h,H} \frac{N_S\alpha M_\mu^2}{8 \pi^3 M_{\phi}^2}~ y_l^{\phi}~ \lambda_{\phi S^+ S^-}\mathcal{F}\left(\frac{M_{S^+}^2}{M_{\phi}^2}\right). \label{bz1} \end{eqnarray} Similarly, the diagram in the right panel involves an $H^+ W^- \gamma$ vertex that is generated at one loop. The amplitudes stemming from $S_R$ and $S_I$ in the loops are given by \begin{subequations} \begin{eqnarray} {\Delta a_\mu}_{\{S_R,~H^+ W^-\gamma\}}^{\text{BZ}} &=& \frac{N_S \alpha M_\mu^2 }{64 \pi^3 s_w^2 (M_{H^+}^2 - M_W^2)} \zeta_l ~ \lambda_{ H^+ S^- S_R} \int_{0}^{1} dx~x^2 (x-1) \nonumber \\ &&\times \left[\mathcal{G}\left(\frac{M_{S^+}^2}{M_{H^+}^2},\frac{M_{S_R}^2}{M_{H^+}^2}\right) - \mathcal{G}\left(\frac{M_{S^+}^2}{M_W^2},\frac{M_{S_R}^2}{M_W^2}\right)\right], \label{bz2} \\ {\Delta a_\mu}_{\{S_I,~H^+ W^-\gamma\}}^{\text{BZ}} &=& \frac{N_S\alpha M_\mu^2 }{64 \pi^3 s_w^2 (M_{H^+}^2 - M_W^2)} \zeta_l ~ \lambda_{ H^+ S^- S_I} \int_{0}^{1} dx~x^2 (x-1) \nonumber \\ &&\times \left[\mathcal{G}\left(\frac{M_{S^+}^2}{M_{H^+}^2},\frac{M_{S_I}^2}{M_{H^+}^2}\right) - \mathcal{G}\left(\frac{M_{S^+}^2}{M_W^2},\frac{M_{S_I}^2}{M_W^2}\right)\right]. \label{bz3} \end{eqnarray} \end{subequations} The subscripts in Eqs.(\ref{bz1}), (\ref{bz2}) and (\ref{bz3}) refer to the one-loop effective vertex and the circulating colored scalar. The expressions for the trilinear couplings $\lambda_{\phi S^+ S^-},\lambda_{ H^+ S^- S_R},\lambda_{ H^+ S^- S_I}$ and the functions $\mathcal{F}(z)$ and $\mathcal{G}(z^a,z^b,x)$ are given in the Appendix. To gauge the magnitudes of the three Barr-Zee contributions, we choose tan$\beta$ = 50, $M_{H}$ = 100 GeV, $M_{H^+}$ = 250 GeV, $M_{S_I}$ = 800 GeV and $M_{S^+}$ = 805, 810 and 820 GeV. The values taken for tan$\beta$ and $M_{S_I}$ are allowed by the lepton-flavour-universality and direct-search constraints respectively. In addition, the $M_{H^+}-M_H$ and $M_{S^+}-M_{S_I}$ mass differences are compatible with $M_W^\text{CDF}$, as can be checked from Fig.\ref{f:mass_splitting}. As for the values of the trilinear couplings, one derives for $\alpha = \beta - \frac{\pi}{2}$ that $\lambda_{H S^+ S^-} = -\frac{1}{2}\big((\nu_1 - \omega_1)c_\beta s_\beta + \kappa_1 s_{2\beta}\big) \simeq -\frac{\kappa_1}{2}$ for large tan$\beta$. Since $\kappa_1$ is a priori a free parameter of the theory, $|\lambda_{H S^+ S^-}|$ can be as large as 2$\pi$. It similarly follows that $|\lambda_{H^+ S^- S_R}|$ and $|\lambda_{H^+ S^- S_I}|$ are $\lesssim \pi$.
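The loop function $\mathcal{F}$ is relegated to the Appendix, so the numerical sketch below should be read with care: it assumes the standard charged-scalar Barr-Zee function, $\mathcal{F}(z) = \frac{1}{2}\int_0^1 dx\, \frac{x(1-x)}{z - x(1-x)}\,\ln\frac{x(1-x)}{z}$, and approximates $y_\ell^H \sim \tan\beta$ for the Type-X at large tan$\beta$; both are our assumptions rather than the paper's exact conventions.
\begin{verbatim}
# Hypothetical sketch of the {S^+, phi-gamma-gamma} amplitude, Eq. (bz1).
# The loop function below is the standard charged-scalar Barr-Zee
# function (an assumption; the Appendix fixes the actual convention).
import math
from scipy.integrate import quad

ALPHA, M_MU, NS = 1.0 / 137.0, 0.105658, 8.0  # m_mu in GeV

def F_bz(z):
    f = lambda x: x * (1.0 - x) / (z - x * (1.0 - x)) \
        * math.log(x * (1.0 - x) / z)
    return 0.5 * quad(f, 0.0, 1.0)[0]

def damu_bz(m_phi, m_sp, y_l, lam):
    """Eq. (bz1) for a single phi = h or H."""
    return NS * ALPHA * M_MU**2 / (8.0 * math.pi**3 * m_phi**2) \
        * y_l * lam * F_bz(m_sp**2 / m_phi**2)

# Sample point of the text: tan(beta)=50, MH=100 GeV, MS+=805 GeV,
# lambda_{H S+ S-} = -2*pi; y_l^H ~ tan(beta) (our approximation).
print(f"{damu_bz(100.0, 805.0, 50.0, -2.0 * math.pi):.2e}")
\end{verbatim}
The overall size of the result depends on the Appendix conventions for $\mathcal{F}$ and the Yukawa factors, so the sketch is meant to expose the structure of Eq.(\ref{bz1}) rather than reproduce Fig.\ref{f:BZ_col_oct}.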
\begin{figure}[htpb!] \centering \includegraphics[height = 8 cm, width = 8 cm]{gmt_col_oct_MSp_805.png}~~~ \includegraphics[height = 8 cm, width = 8 cm]{gmt_col_oct_MSp_810.png}\\ \includegraphics[height = 8 cm, width = 8 cm]{gmt_col_oct_MSp_820.png} \caption{Variation of the different BZ contributions involving colored scalars for $M_{S^+}$ = 805 GeV (top left), 810 GeV (top right) and 820 GeV (bottom).} \label{f:BZ_col_oct} \end{figure} We plot the individual BZ amplitudes versus $M_{S_R}$ in Fig.\ref{f:BZ_col_oct} for tan$\beta$ = 50, $\lambda_{H S^+ S^-} = -2\pi$ and $\lambda_{H^+ S^- S_R}=\lambda_{H^+ S^- S_I} = -\pi$. We find that they can be $\mathcal{O}(10^{-10})$, the largest being ${\Delta a_\mu}_{\{S_R,~H^+ W^-\gamma\}}^{\text{BZ}}$. These sizeable magnitudes can be understood from the fact that the products $\lambda_{H S^+ S^-}\times \tan\beta$, $\lambda_{H^+ S^- S_R}\times\tan\beta$ and $\lambda_{H^+ S^- S_I}\times\tan\beta$ are $\mathcal{O}(100)$ numbers. The variations introduced by the stated changes of $M_{S^+}$ are small and do not change the ballpark contributions to $\Delta a_\mu$. Retaining the same values for the scalar masses as in Fig.\ref{f:BZ_col_oct}, we perform the following scan over the rest of the parameters: \begin{eqnarray} 20~\text{GeV} < M_A < 200~\text{GeV},~0 < m_{12} < 100~\text{GeV}, \nonumber \\ 10 < \tan\beta < 100,~|\omega_1|,|\kappa_1|,|\kappa_2|,|\kappa_3|, |\nu_1|,|\nu_2|,|\nu_3| < 2\pi. \end{eqnarray} Parameter points that negotiate all constraints successfully and are consistent with the observed muon $g-2$ and $M_W$ at $2\sigma$ and $3\sigma$ respectively are plotted in the $M_A-\tan\beta$ ($M_A-M_{S_R}$) plane in the left (right) panels of Fig.\ref{f:param_col_oct}. One sees in this figure that, on account of the color-octet contributions, an $A$ compatible with $\Delta a_\mu$ can now be much heavier than in the pure Type-X 2HDM. To elucidate, the enlarged parameter space now includes $M_A \lesssim 180$ GeV for tan$\beta$ around 50, for all three $M_{S^+}$ values taken. A lower bound $M_A \gtrsim 80$ GeV is seen for $M_{S^+}$ = 805 GeV; this is a consequence of demanding $\Delta T$ and $\Delta S$ in the stated ranges (Eq.(\ref{oblique})) so as to comply with the observed $M_W$. We also show the subregions where $M_{S_R} > M_{S_I} + M_A$, in anticipation of the $S_R \to S_I A$ decay signal studied below. Such a requirement restricts $M_A \lesssim$ 140 GeV, 110 GeV and 85 GeV for $M_{S^+}$ = 805 GeV, 810 GeV and 820 GeV respectively. \begin{figure}[htpb!] \centering \includegraphics[height = 7 cm, width = 7 cm]{MA_tanbeta_100_5.png} \includegraphics[height = 7 cm, width = 7 cm]{MA_MSR_100_5.png} \\ \includegraphics[height = 7 cm, width = 7 cm]{MA_tanbeta_100_10.png} \includegraphics[height = 7 cm, width = 7 cm]{MA_MSR_100_10.png} \\ \includegraphics[height = 7 cm, width = 7 cm]{MA_tanbeta_100_20.png} \includegraphics[height = 7 cm, width = 7 cm]{MA_MSR_100_20.png} \\ \caption{Parameter points satisfying all constraints and compatible with $\Delta a_\mu$ and $M^{\text{CDF}}_W$, shown in the $M_A$-tan$\beta$ (left) and $M_A$-$M_{S_R}$ (right) planes for $M_{S^+}$ = 805 GeV (top), 810 GeV (middle) and 820 GeV (bottom).} \label{f:param_col_oct} \end{figure} \section{Collider Analysis} \label{collider} Having discussed the features of the multi-dimensional parameter space validated by the theoretical and experimental constraints, in this section we analyse a promising signature involving the non-standard colored scalars at the high-luminosity (HL) 14 TeV LHC.
The signal topology allows for single production of $S_R$, dominantly through gluon-gluon and quark fusion, followed by the decay of $S_R$ into $S_I$ and $A$. Finally, the colored scalar $S_I$ decays into two $b$-jets and $A$ decays to $\tau^+ \tau^-$. In short, we shall be analysing the following signal:
\begin{eqnarray}
p p \to S_R \to S_I A , S_I \to b \overline{b}, A \to \tau^+ \tau^- \,.
\label{signal-proc}
\end{eqnarray}
Depending on the visible decay products of the $\tau^+$ and $\tau^-$ in the final state, there are three different final states:
\begin{itemize}
\item Both $\tau$ leptons decay leptonically, leading to the final state $2 \tau_\ell + 2b + \not \! E_T $ with $\tau_\ell = \tau_e, \tau_\mu$. In short, we shall denote this case as ``DL''.
\item One of the two $\tau$s decays leptonically, whereas the second one decays hadronically. This semi-leptonic decay topology gives rise to the $1 \tau_\ell + 1 \tau_h + 2b + \not \! E_T $ final state. For convenience, this case will be denoted as ``SL''.
\item Both $\tau$ leptons decay hadronically{\footnote{The visible decay product of a hadronically decaying $\tau$-lepton is identified as a $\tau$-jet.}}, giving rise to the $2 \tau_h + 2 b + \not \! E_T $ final state. Since there is no lepton in the final state, this case will be denoted by ``NoL''.
\end{itemize}
To make the $S_R \to S_I A$ decay mode of Eq.(\ref{signal-proc}) kinematically open, we need $M_{S_R} > M_{S_I} + M_A$. Throughout the collider analysis we take $\eta_D=1$ and $\eta_U \ll \eta_D$. We also fix $M_{S_I} = 800$ GeV, which is compatible with the direct search constraints discussed in section \ref{constraints}. Next, we choose five benchmark points (BP1-BP5) characterized by low, medium and high masses of $A$, ranging from 66 GeV to 147 GeV. All the benchmarks are not only allowed by the theoretical and experimental constraints, but also accommodate the muon anomalous magnetic moment within the $3 \sigma$ band about the central value and address the $W$-mass anomaly simultaneously. For the chosen benchmarks, the masses of the other scalars $H^+, S^+$, the branching ratios of the processes $S_R \to S_I A, ~S_I \to b \overline{b},~ A \to \tau^+ \tau^-$, along with the corresponding values of $\Delta a_\mu$ and $(M_W^{\rm CDF}- 80.000)$, are tabulated in Table \ref{bsm}. BR$(S_R \to S_I A)$ is $\sim 99\%$ for BP1 and BP2. As the mass splitting $(M_{S_R}-M_{S_I})$ increases from BP3 to BP5, the $S_R \to S_I Z$ and $S_R \to S^\pm W^\mp$ decay modes open up, leading to a decrease in BR$(S_R \to S_I A)$. For all benchmarks, BR$(A \to \tau^+ \tau^-)$ is almost $\sim 99 \%$. Lastly, our choice of $\eta_{U,D}$ ensures that $S_I \to b \overline{b}$ is the dominant decay mode.
\begin{table}[htpb!]
\begin{center}
\resizebox{16cm}{!}{
\begin{tabular}{ |c | c | c | c | c | c | c | c | c | c | c| c| }
\hline
& tan$\beta$ & $M_A$ (GeV)& $M_{H^+}$ (GeV) & $M_{S_R}$ (GeV) & $M_{S_I}$ (GeV) & $M_{S^+}$ (GeV) & BR$(S_R \to S_I A)$ & BR$(S_I \to b \overline{b})$ & BR$(A \to \tau^+ \tau^-)$ & $\Delta a_\mu \times 10^9$ & $(M^{\text{CDF}}_W - 80.000)$ (MeV) \\
\hline
BP1 & 43.264 & 66.39 & 250.0 & 876.994 & 800.0 & 820.0 & 0.998653 & 0.866694 & 0.996484 & 0.77824 (3$\sigma$) & 433.573 \\
\hline
BP2 & 56.075 & 80.093 & 250.0 & 882.644 & 800.0 & 820.0 & 0.994456 & 0.866694 & 0.996488 & 0.74883 (3$\sigma$) & 417.401 \\
\hline
BP3 & 55.565 & 100.314 & 250.0 & 909.707 & 800.0 & 810.0 & 0.791145 & 0.866694 & 0.996489 & 0.77966 (3$\sigma$) & 418.839 \\
\hline
BP4 & 54.48 & 121.11 & 250.0 & 938 & 800 & 805 & 0.484672 & 0.866694 & 0.99649 & 0.77224 (3$\sigma$) & 423.641 \\
\hline
BP5 & 58.7 & 147.0 & 250.0 & 950.3 & 800 & 800 & 0.157716 & 0.866694 & 0.996491 & 0.75824 (3$\sigma$) & 444.802 \\
\hline
\end{tabular}}
\caption{Benchmarks compatible with $M^{\text{CDF}}_W$ and the observed $\Delta a_\mu$.}
\label{bsm}
\end{center}
\end{table}
Next we discuss the relevant backgrounds corresponding to the signals mentioned earlier. The dominant contributions to the backgrounds come from $p p \to Z \to \tau^+ \tau^- + jets$, $p p \to t \overline{t} \to 1 \ell + jets$ and $p p \to t \overline{t} \to 2 \ell + jets$ \footnote{All the background samples having jets in the final state are generated by matching the samples up to two jets.}. The first background mimics the signal if light jets are mis-tagged as $b$-jets. In the second background, if one of the light jets is mis-tagged as a $\tau$-jet and two of the light jets are faked as $b$-jets, then the final state resembles $1 \tau_\ell + 1 \tau_h + 2b + \not \! E_T $. For the third background, in addition to the conditions mentioned for the second one, one of the two leptons must be missed in order to reproduce the same signal topology. Apart from these, there are several sub-dominant background processes like $tW,~ WZ \to 2 \ell 2q,~ WZ \to 3 \ell \nu + jets$ etc. A complete list of the backgrounds considered can be found in Table \ref{tab:xsecs}. The particle interactions relevant for the collider analysis are first implemented in \texttt{FeynRules} \cite{Alloul:2013bka}, from which the Universal FeynRules Output (UFO) file is generated. The signal and background cross sections are calculated at leading order (LO) through \texttt{MG5aMC@NLO} \cite{Alwall:2014hca} using the aforesaid UFO file. Showering and hadronization are performed with \texttt{Pythia8} \cite{Sjostrand:2014zea}. To incorporate detector effects we use the default CMS detector simulation card included in \texttt{Delphes-3.4.1}~\cite{deFavereau:2013fsa}, and jets are reconstructed with the anti-$k_t$ jet-clustering algorithm \cite{Cacciari:2008gp}. We shall analyse the signal using both a traditional cut-based method and more sophisticated machine-learning techniques, expecting an improvement in the results from the latter. The signal significance $\mathcal{S}$ can be calculated in terms of the number of signal ($S$) and background ($B$) events left after imposing the relevant cuts as $\mathcal{S} = \frac{S}{\sqrt{B}}$. After taking into account a $\theta \%$ systematic uncertainty on the background, the significance becomes $\mathcal{S} =\frac{S}{\sqrt{B+ (\theta B/100)^2}}$ \cite{Cowan:2010js}.
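To make this estimate concrete, here is a minimal Python sketch of the significance formula above; the function name and the example numbers are purely illustrative and are not taken from the analysis:
\begin{verbatim}
import math

def significance(s, b, theta=0.0):
    # S / sqrt(B + (theta*B/100)^2): signal significance with a
    # theta-percent systematic uncertainty on the background.
    return s / math.sqrt(b + (theta * b / 100.0) ** 2)

# Example: 1000 signal events over a 40000-event background
print(significance(1000.0, 40000.0))       # statistical only
print(significance(1000.0, 40000.0, 5.0))  # with 5% systematics
\end{verbatim}
Note how quickly a percent-level systematic term $(\theta B/100)^2$ dominates over $B$ once the background is large, which is precisely the behaviour seen in the significance tables below.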
While evaluating the cross sections, for some of the backgrounds (indicated in Table \ref{tab:xsecs}) we apply the acceptance cuts (tabulated in Table \ref{tab:objSel} and described in item C0 below) at the generation level. For the other backgrounds, we impose similar cuts at the detector level to keep all the event samples on the same footing. The LO cross sections of some of the backgrounds are multiplied by the relevant $k$-factors to obtain the higher-order cross sections. The signal and background cross sections are tabulated in Table \ref{tab:xsecs}.
\begin{table}[htpb!]
\begin{center}
{\footnotesize
\begin{tabular}{|l|c|}
\hline
Process & cross section (pb) \\
\hline\hline
\multicolumn{2}{|l|}{\texttt{Signal benchmarks}} \\
\hline\hline
\texttt{BP1} & $0.0431$ \\
\texttt{BP2} & $0.0429$ \\
\texttt{BP3} & $0.0342$ \\
\texttt{BP4} & $0.0209$ \\
\texttt{BP5} & $0.0068$ \\
\hline\hline
\multicolumn{2}{|l|}{\texttt{SM Backgrounds}} \\
\hline\hline
\texttt{$t\overline{t}\,\to\,2\ell\,+\,jets$} & $107.65$ [NNLO] \\
\texttt{$t\overline{t}\,\to\,1\ell\,+\,jets$} & $437.14$ [NNLO] \\
\texttt{$tW$} & $34.81$ [LO] \\
\texttt{$Z\,\to\,\tau^+\tau^-\,+\,jets$} & $803$ [NLO] \\
\texttt{$t\overline{t}W\,\to\,\ell\nu\,+\,jets$} & $0.25$ [NLO] \\
\texttt{$t\overline{t}W\,\to\,qq$} & $0.103$ $^1$ [LO] \\
\texttt{$t\overline{t}Z\,\to\,\ell^+\ell^-\,+\,jets$} & $0.24$ [NLO] \cite{Kardos:2011na}\\
\texttt{$t\overline{t}Z\,\to\,qq$} & $0.206$ $^1$ [NLO] \cite{Kardos:2011na} \\
\texttt{$WZ\,\to\,3\ell\nu\,+\,jets$} & $2.27$ [NLO] \cite{Campbell:2011bn} \\
\texttt{$WZ\,\to\,2\ell\,2q$} & $4.504$ [NLO] \cite{Campbell:2011bn} \\
\texttt{$ZZ\,\to\,4\ell$} & $0.187$ [NLO] \cite{Campbell:2011bn} \\
\texttt{$t\overline{t}h\,\to\,\tau^+\,\tau^-$} & $0.006$ $^1$ [LO] \\
\texttt{$b\overline{b}\tau^+\tau^-$} & $0.114$ $^1$ [LO] \\
\texttt{$WWW$} & $0.236$ [NLO] \\
\texttt{$WWZ$} & $0.189$ [NLO] \\
\texttt{$WZZ$} & $0.064$ [NLO] \\
\texttt{$ZZZ$} & $0.016$ [NLO] \\
\hline
\end{tabular}
}
\end{center}
\footnotesize{ $^1$ Some selections are applied at the generation (i.e.\ MadGraph) level: $p_T$ of jets ($j$) and $b$ quarks ($b$) $>\,20$ GeV, $p_T$ of leptons ($\ell$) $>\,10$ GeV, $|\eta|_{j/b}\,<\,5$, $|\eta|_\ell\,<\,2.5$ and $\Delta R_{jj/\ell\ell/j\ell/b\ell}\,>\,0.4$. \\ }
\caption{Cross sections of the signal benchmark points and the relevant SM backgrounds.}
\label{tab:xsecs}
\end{table}
The subsequent discussion is divided into two subsections, containing the cut-based and multivariate analyses, respectively.
\subsection{Cut-based analysis}
We first apply a few pre-selection cuts (C0-C4), which serve as the baseline selection criteria, and then perform both cut-based and multivariate analyses to estimate the signal sensitivity. Below we describe the baseline selection criteria in detail.
\begin{itemize}
\item[C0:] A few basic selection criteria are applied to select the $e, \mu, \tau$ and jet candidates in the final state. We construct the following set of kinematic variables for both leptons and jets: $(a)$ the transverse momentum $p_T$, $(b)$ the pseudo-rapidity $\eta$, and $(c)$ the separation between the $i$-th and $j$-th objects, $\Delta R_{ij}\,=\,\sqrt{(\Delta \eta_{ij})^2 + (\Delta \Phi_{ij})^2}$, defined in terms of their azimuthal angular separation $(\Delta \Phi_{ij})$ and pseudo-rapidity difference $(\Delta \eta_{ij})$. The chosen threshold values of these variables are quoted in Table~\ref{tab:objSel}.
\begin{table}[htpb!]
\begin{center}
\footnotesize\setlength{\extrarowheight}{2pt}
\begin{tabular}{|l|l|}
\hline
Objects & Selection cuts \\
\hline
\texttt{$e$} & $p_{T} > 10$~{\rm GeV}, $~|\eta| < 2.5$ \\
\texttt{$\mu$} & $p_{T} > 10$~{\rm GeV}, $~|\eta| < 2.4$, $~\Delta R_{\mu e} > 0.4$ \\
\texttt{$\tau_{h}$} & $p_{T} > 20$~{\rm GeV}, $~|\eta| < 2.4$, $~\Delta R_{\tau_h e/\mu} > 0.4$ \\
\texttt{$b\,jets$} & $p_{T} > 20$~{\rm GeV}, $~|\eta| < 2.5$, $~\Delta R_{{\rm b\,jet}~ e/\mu} > 0.4$ \\
\hline
\end{tabular}
\footnotesize
\end{center}
\caption{Summary of the acceptance cuts used to select analysis-level objects.}
\label{tab:objSel}
\end{table}
\item[C1:] Next we ensure that the final state has the correct lepton multiplicity; by lepton, here we mean $\mu$ and $e$ only. We demand one and zero leptons for the SL and NoL channels, respectively.
\item[C2:] As expected from the topology of the signals, we require two $\tau$-jets for the NoL channel and one $\tau$-jet for the SL channel.
\item[C3:] Since the lepton and the $\tau$-jet (two $\tau$-jets) originate from two oppositely charged $\tau$-leptons in the SL (NoL) channel, the decay products in both cases must possess opposite charges. We impose this condition on the charges of the final-state particles.
\item[C4:] Since the signals in both channels include two $b$-jets in the final state coming from the $S_I$ decay, we demand two $b$-jets in the final state for both channels.
\end{itemize}
The baseline selection criteria are thus mainly used to ensure the presence of the correct final-state particles in signal and background events. As can be seen from Table \ref{tab:Yields}, at an integrated luminosity ${\cal L}\,=\,3000\,{\rm fb^{-1}}$, after applying the cuts C0-C4 the signal-to-background ratio for each benchmark turns out to be small. Achieving a good signal significance is therefore quite challenging if we only use C0-C4. However, a few kinematic variables have good discriminating power between signal and background events, as shown in Fig.\ref{fig:features-1} and Fig.\ref{fig:features-2}. Let us provide a brief description of these variables and impose appropriate cuts (C5-C9) on them to maximize the signal significance.
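As an aside, the $\Delta R$ separation that enters both the object selection and several of the cuts below is simple to compute; a minimal sketch follows (the function and variable names are illustrative):
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Delta R between two objects; the azimuthal difference is
    # wrapped into [-pi, pi] before combining with Delta eta.
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    deta = eta1 - eta2
    return math.sqrt(deta**2 + dphi**2)

# e.g. a C7-style requirement on the two b-jets:
# keep_event = delta_r(eta_b1, phi_b1, eta_b2, phi_b2) > 2.0
\end{verbatim}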
\begin{figure}[htpb!]{\centering
\subfigure[SL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{pt_bjet1_taultauh.png}}
\subfigure[NoL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{pt_bjet1_tauhtauh.png}} \\
\subfigure[SL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{pt_bjet2_taultauh.png}}
\subfigure[NoL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{pt_bjet2_tauhtauh.png}} \\
\subfigure[SL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{dr_bjet1_bjet2_taultauh.png}}
\subfigure[NoL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{dr_bjet1_bjet2_tauhtauh.png}}
}
\hspace{0.01\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{Legend_new.png}
\caption{ Distributions of some kinematic variables: (a,b) distribution of the leading $b$-jet $p_T$, (c,d) distributions of the sub-leading $b$-jet $p_T$, (e,f) $\Delta R$ between the two $b$-jets, for the SL and NoL channels respectively.}
\label{fig:features-1}
\end{figure}
\begin{figure}[htpb!]{\centering
\subfigure[SL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{dr_lep_tauh_taultauh.png}}
\subfigure[NoL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{dr_tauh1_tauh2_tauhtauh.png}} \\
\subfigure[SL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{smin_taultauh.png}}
\subfigure[NoL]{
\includegraphics[height = 4.5 cm, width = 8.0 cm]{smin_tauhtauh.png}}
}
\hspace{0.01\textwidth}
\centering
\includegraphics[width=0.7\textwidth]{Legend_new.png}
\caption{ Distributions of some kinematic variables: (a,b) $\Delta R$ between the decay products of $A$, (c,d) $\sqrt{\hat{s}_{min}}$, for the SL and NoL channels respectively.}
\label{fig:features-2}
\end{figure}
\begin{itemize}
\item[C5:] We depict the normalized distributions of the transverse momentum of the leading $b$-jet ($p_T^{b_1}$) for all benchmarks and the dominant backgrounds for the SL and NoL channels in Fig.\ref{fig:features-1}(a) and \ref{fig:features-1}(b) respectively. Since the $b$-jets originate from the decay chain of the heavy $S_R$, with mass $\sim 870-950$ GeV, the corresponding distributions of $p_T^{b_1}$ for the signal are harder than those of the backgrounds. We therefore demand $p_T^{b_1} > 200$ GeV to suppress the backgrounds relative to the signals.
\item[C6:] Similarly, for the sub-leading $b$-jet, the distributions of $p_T^{b_2}$ for the signals and backgrounds are plotted in Fig.\ref{fig:features-1}(c) and \ref{fig:features-1}(d) respectively for the SL and NoL channels. The shape of the distributions can be explained by the same logic as in C5. In this case, an efficient discrimination of signals and backgrounds requires $p_T^{b_2} > 100$ GeV.
\item[C7:] The normalized distributions of $\Delta R_{b_1 b_2}$ for signal and backgrounds in the SL and NoL channels are drawn in Fig.\ref{fig:features-1}(e) and \ref{fig:features-1}(f). In both channels the two $b$-jets originate from the massive particle $S_I$, which, being close in mass to its parent $S_R$, is not boosted enough to keep its decay products collimated. Thus the distributions of $\Delta R_{b_1 b_2}$ for the signal peak at higher values than for the backgrounds in both channels. Therefore, to achieve maximum significance, we impose a lower cut: $\Delta R_{b_1 b_2} > 2.0$.
\item[C8:] Another important variable with reasonable distinguishing power between the signal and backgrounds is $\Delta R_{\ell \tau_h}$ ($\Delta R_{\tau_{h_1} \tau_{h_2}}$) for the SL (NoL) channel. The corresponding normalized distributions are shown in Fig.\ref{fig:features-2}(a) and \ref{fig:features-2}(b) respectively for the SL and NoL channels.
The visible decay products of the $\tau^+ \tau^-$ pair in the semi-leptonic and fully hadronic decay modes originate from the much lighter pseudoscalar $A$, with mass $\sim 66-147$ GeV, which is therefore boosted. Thus the final-state lepton and $\tau$-jet (two $\tau$-jets) in the SL (NoL) channel become collimated, setting $\Delta R_{\ell \tau_h}$ ($\Delta R_{\tau_{h_1} \tau_{h_2}}$) to smaller values for the signal compared to the backgrounds. We therefore apply an upper cut $\Delta R_{\ell \tau_h}$ ($\Delta R_{\tau_{h_1} \tau_{h_2}}$) $< 1.8$ to reject the backgrounds.
\item[C9:] Finally, we use the {\em minimum parton-level centre-of-mass energy} ($\sqrt{\hat{s}_{min}}$), which has the highest discriminating power between the signal and backgrounds. This is a global, inclusive variable for determining the mass scale of new physics in the presence of missing energy in the final state. The normalized distributions for both channels, for signal and backgrounds, are depicted in Fig.\ref{fig:features-2}(c) and \ref{fig:features-2}(d). Since this variable plays a significant role in wiping out the backgrounds, the signal significance is expected to be sensitive to it. Hence, instead of applying a fixed cut on this variable, we tune $\sqrt{\hat{s}_{min}}$ over a suitable range to maximize the significance; accordingly, we do not include this cut (C9) in the cut-flow Table \ref{tab:Yields}. Table \ref{tab:smincut} shows the variation of the signal significance with $\sqrt{\hat{s}_{min}}$. For example, for BP2 the significance increases by 20$\%$ (14.8$\%$) in the SL (NoL) channel after using this variable.
\end{itemize}
\begin{table}[htpb!]
\begin{center}
{\footnotesize \centering
\setlength{\tabcolsep}{0.7em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Processes} & Events & \multicolumn{5}{c|}{Events after cuts} \\
\cline{3-7}
& produced & C0-C4 & C5 & C6 & C7 & C8 \\
\hline
\multicolumn{7}{|c|}{\texttt{Signal Benchmarks}} \\
\hline
\multirow{2}{*}{BP1} & \multirow{2}{*}{$129300$} & $3371$ & $2763$ & $2377$ & $2221$ & $1842$ \\
& & $4097$ & $3332$ & $2791$ & $2564$ & $1994$ \\
\hline
\multirow{2}{*}{BP2} & \multirow{2}{*}{$128700$} & $3892$ & $3171$ & $2714$ & $2518$ & $1924$ \\
& & $4604$ & $3750$ & $3134$ & $2870$ & $2036$ \\
\hline
\multirow{2}{*}{BP3} & \multirow{2}{*}{$102600$} & $3658$ & $3024$ & $2586$ & $2389$ & $1608$ \\
& & $4184$ & $3443$ & $2889$ & $2640$ & $1649$ \\
\hline
\multirow{2}{*}{BP4} & \multirow{2}{*}{$62700$} & $2520$ & $2095$ & $1793$ & $1652$ & $974$ \\
& & $2764$ & $2288$ & $1931$ & $1762$ & $971$ \\
\hline
\multirow{2}{*}{BP5} & \multirow{2}{*}{$20400$} & $905$ & $756$ & $645$ & $593$ & $293$ \\
& & $977$ & $812$ & $682$ & $622$ & $282$ \\
\hline
\multicolumn{7}{|c|}{\texttt{Standard Model Backgrounds with Major Contributions}} \\
\hline
\multirow{2}{*}{$t\overline{t}\,\to\,2\ell\,+\,jets$} & \multirow{2}{*}{$3.23\times 10^8$} & $7343240$ & $564720$ & $287951$ & $261605$ & $54530$ \\
& & $723852$ & $66376$ & $33086$ & $29546$ & $6348$ \\
\hline
\multirow{2}{*}{$t\overline{t}\,\to\,1\ell\,+\,jets$} & \multirow{2}{*}{$1.31\times 10^9$} & $4773602$ & $469033$ & $229027$ & $187641$ & $52153$ \\
& & $1119938$ & $125423$ & $59333$ & $47860$ & $12950$ \\
\hline
\multirow{2}{*}{$tW$} & \multirow{2}{*}{$1.03\times 10^8$} & $2658814$ & $126566$ & $64578$ & $59989$ & $12302$ \\
& & $234436$ & $13484$ & $7001$ & $6378$ & $1368$ \\
\hline
\multirow{2}{*}{$t\overline{t}Z\,\to\,\ell^+\ell^-\,+\,jets$} & \multirow{2}{*}{$720000$} & $12956$ & $2285$ & $1171$ & $930$ & $480$ \\
& & $7637$ &
$1405$ & $694$ & $541$ & $362$ \\
\hline
\multirow{2}{*}{$WZ\,\to\,2\ell\,2q$} & \multirow{2}{*}{$1.35\times 10^7$} & $3550$ & $687$ & $283$ & $223$ & $136$ \\
& & $3130$ & $556$ & $229$ & $169$ & $131$ \\
\hline
\multirow{2}{*}{$t\overline{t}W\,\to\,\ell \nu\,+\,jets$} & \multirow{2}{*}{$762000$} & $7703$ & $1321$ & $635$ & $467$ & $128$ \\
& & $1144$ & $213$ & $100$ & $73$ & $22$ \\
\hline
\hline
\end{tabular}}}
\end{center}
\caption{Event yields of the signal and SM background processes after the baseline selection (C0-C4) and after each successive selection cut (C5-C8) of the cut-based analysis at the 14\,TeV LHC for ${\cal L}\,=\,3000\,{\rm fb^{-1}}$. Each row is divided into two subrows that contain the information for the SL and NoL channels, respectively.}
\label{tab:Yields}
\end{table}
In Table \ref{tab:Yields} we tabulate the signal (BP1-BP5) and background yields at an integrated luminosity of 3000 fb$^{-1}$ after imposing the baseline selection cuts (C0-C4) and the successive cuts on the relevant kinematic variables (C5-C8). Looking at the signal significances in Table \ref{tab:smincut}, one can conclude that the NoL channel is the more promising of the two at the 14 TeV HL-LHC. In the same table we also switch on a $5 \%$ systematic uncertainty and evaluate the reduced signal significance. Owing to the huge background contribution, a $5\%$ systematic uncertainty on the background degrades the signal significance by a large margin. We therefore proceed to a more sophisticated multivariate analysis to achieve a better signal significance.
\begin{table}[htpb!]
\begin{center}
{\footnotesize \centering
\setlength{\tabcolsep}{0.7em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}{*}{Benchmark} & Cut on & \multicolumn{2}{c|}{Remaining events} & \multicolumn{2}{c|}{Significance} \\
\cline{3-6}
& $\sqrt{\hat{s}_{min}}$ & Signal & Background & $\theta\,=\,0\%$ & $\theta\,=\,5\%$ \\
\hline
\multirow{2}{*}{BP1} & $718$ & $1568$ & $60639$ & $6.37$ & $0.51$ \\
& $682$ & $1835$ & $13639$ & $15.7$ & $2.65$ \\
\hline
\multirow{2}{*}{BP2} & $718$ & $1658$ & $60640$ & $6.73$ & $0.54$ \\
& $694$ & $1867$ & $13316$ & $16.2$ & $2.76$ \\
\hline
\multirow{2}{*}{BP3} & $742$ & $1388$ & $55728$ & $5.88$ & $0.49$ \\
& $742$ & $1463$ & $11910$ & $13.4$ & $2.42$ \\
\hline
\multirow{2}{*}{BP4} & $766$ & $834$ & $51065$ & $3.69$ & $0.32$ \\
& $742$ & $883$ & $11910$ & $8.09$ & $1.46$ \\
\hline
\multirow{2}{*}{BP5} & $790$ & $250$ & $46768$ & $1.15$ & $0.11$ \\
& $742$ & $259$ & $11910$ & $2.37$ & $0.43$ \\
\hline
\end{tabular}}}
\end{center}
\caption{Best cut on $\sqrt{\hat{s}_{min}}$ and the corresponding signal and background yields for the five signal benchmark points. The last two columns show the signal significance values at ${\cal L}\,=\,3000\,{\rm fb^{-1}}$ for systematic uncertainties $(\theta)$ of 0 and 5$\%$, respectively.}
\label{tab:smincut}
\end{table}
\subsection{Multivariate analysis}
We use a deep neural network (DNN) \cite{lecun2015deep} to perform the multivariate analysis (MVA), following a supervised-learning approach to binary classification. The basic workflow of a DNN is the following: a DNN has more than one hidden layer with multiple nodes (neurons), fully connected to the nodes of the consecutive layers via different weights. The input to each node of layer $n$ is a linear superposition of the outputs of all the nodes in layer $(n-1)$. A nonlinear activation function is then applied on each node of layer $n$.
The final layer of the network is the output layer, whose output is an estimated probability that is a function of all the weights of the network. The difference between the true output and the predicted one defines the loss function. The loss function is then minimized using the stochastic gradient descent method to extract the best values of the weights. These optimised weights represent a suitable nonlinear boundary in the space of the input features that can classify the signal and background events. For all five signal benchmarks, we train different networks with the same architecture and the same set of input variables. We use a residual-network (ResNet)-like architecture rather than a simple feed-forward network to perform this study. A ResNet has shortcut connections between multiple layers, which prevent the gradients used in the minimization from vanishing; this kind of structure makes it easier to train a deeper network. The detailed concept of the ResNet can be found in Reference \cite{DBLP:journals/corr/HeZRS15}. We use $80\%$ of the whole dataset ({\it i.e.} signal and background combined) as the training set, and keep the remaining events as the test dataset to evaluate the performance of the corresponding models. The input variables used for training are described in Table \ref{tab:features}.
\begingroup
\setlength{\tabcolsep}{7pt}
\renewcommand{\arraystretch}{1}
\begin{table}[htpb!]
\begin{center}
\footnotesize\setlength{\extrarowheight}{1pt}
\begin{tabular}{|l|c|c|c|}
\hline
\multirow{2}{*}{No.} & \multicolumn{2}{|c|}{Variables} & Description \\
\cline{2-3}
& \texttt{SL} & \texttt{NoL} & SL\,(NoL) \\
\hline
1 & \multicolumn{2}{c|}{\texttt{$p_T^{b_1}$}} & $p_T$ of leading $b$-jet \\
2 & \multicolumn{2}{c|}{\texttt{$p_T^{b_2}$}} & $p_T$ of sub-leading $b$-jet \\
3 & \multicolumn{2}{c|}{\texttt{$|\eta^{b_1}|$}} & $|\eta|$ of leading $b$-jet \\
4 & \multicolumn{2}{c|}{\texttt{$|\eta^{b_2}|$}} & $|\eta|$ of sub-leading $b$-jet \\
5 & \multicolumn{2}{c|}{\texttt{$\not \! E_T $}} & Missing transverse energy \\
\cline{2-3}
6 & \texttt{$p_T^{\tau_h}$} & \texttt{$p_T^{\tau_h^1}$} & $p_T$ of leading $\tau$-jet \\
7 & \texttt{$|\eta^{\tau_h}|$} & \texttt{$|\eta^{\tau_h^1}|$} & $|\eta|$ of leading $\tau$-jet \\
8 & \texttt{$p_T^{\ell}$} & \texttt{$p_T^{\tau_h^2}$} & $p_T$ of lepton\,(sub-leading $\tau$-jet) \\
9 & \texttt{$|\eta^{\ell}|$} & \texttt{$|\eta^{\tau_h^2}|$} & $|\eta|$ of lepton\,(sub-leading $\tau$-jet) \\
10 & \texttt{$\Delta R_{\ell, \tau_h}$} & \texttt{$\Delta R_{\tau_h^1, \tau_h^2}$} & $\Delta R$ between lepton-$\tau_h$ ($\tau_h^1$-$\tau_h^2$) coming from $A$ \\
11 & \texttt{$\Delta \phi_{\ell,\, \not \! E_T }$} & \texttt{$\Delta \phi_{\tau_h^2, \not \! E_T }$} & $|\Delta \phi|$ between lepton-$\not \! E_T $ ($\tau_h^2$-$\not \! E_T $) \\
12 & \texttt{$\Delta R_{\tau_h, A}$} & -- & $\Delta R$ between $\tau_h$ and reconstructed $A$ \\
13 & \texttt{$\Delta R_{\tau_h, ssr}$} & \texttt{$\Delta R_{\tau_h^1, ssr}$} & $\Delta R$ between $\tau_h$\,($\tau_h^1$) and the reconstructed $ssr$ {\it i.e.} $b\overline{b}$ system \\
14 & \texttt{$\Delta R_{\ell, \tau_h} \times p_T^A$} & \texttt{$\Delta R_{\tau_h^1, \tau_h^2} \times p_T^A$} & No.\ 10 $\times~ p_T^A$ \\
\cline{2-3}
15 & \multicolumn{2}{c|}{\texttt{$\Delta R_{b_1, b_2}$}} & $\Delta R$ between leading and sub-leading $b$-jet \\
16 & \multicolumn{2}{c|}{\texttt{$\Delta R_{b_1, b_2} \times p_T^{ssr}$}} & No.\ 15 $\times~ p_T^{ssr}$ \\
\cline{2-3}
17 & \texttt{$\Delta R_{\ell, b_1}$} & \texttt{$\Delta R_{\tau_h^1, b_1}$} & $\Delta R$ between lepton\,($\tau_h^1$) and leading $b$-jet \\
18 & \texttt{$\Delta R_{\ell, b_2}$} & \texttt{$\Delta R_{\tau_h^1, b_2}$} & $\Delta R$ between lepton\,($\tau_h^1$) and sub-leading $b$-jet \\
19 & \texttt{$\Delta R_{\tau_h, b_1}$} & \texttt{$\Delta R_{\tau_h^2, b_1}$} & $\Delta R$ between $\tau_h$\,($\tau_h^2$) and leading $b$-jet \\
20 & \texttt{$\Delta R_{\tau_h, b_2}$} & \texttt{$\Delta R_{\tau_h^2, b_2}$} & $\Delta R$ between $\tau_h$\,($\tau_h^2$) and sub-leading $b$-jet \\
\cline{2-3}
21 & \multicolumn{2}{c|}{\texttt{$\Delta \phi_{b_1,\,\not \! E_T }$}} & $|\Delta \phi|$ between leading $b$-jet and $\not \! E_T $ \\
22 & \multicolumn{2}{c|}{\texttt{$\Delta \phi_{b_2,\,\not \! E_T }$}} & $|\Delta \phi|$ between sub-leading $b$-jet and $\not \! E_T $ \\
23 & \multicolumn{2}{c|}{\texttt{$\Delta R_{b_1, A}$}} & $\Delta R$ between leading $b$-jet and reconstructed $A$ \\
24 & \multicolumn{2}{c|}{\texttt{$\Delta R_{min}^{jets}$}} & Minimum $\Delta R$ between all jets \\
25 & \multicolumn{2}{c|}{\texttt{$\sqrt{\hat{s}_{min}}$}} & Minimum parton-level centre-of-mass energy \\
26 & \multicolumn{2}{c|}{\texttt{$n-Jets$}} & Number of jets \\
\hline
\end{tabular}
\footnotesize
\end{center}
\caption{ Input variables used for the DNN.}
\label{tab:features}
\end{table}
We choose the most important input features by estimating the F-score using permutation importance \cite{Breiman2001} for each analysis channel and signal benchmark. The input layer of the DNN is thus equipped with the 26 (25) features for the SL (NoL) channel described in Table \ref{tab:features}. The ResNet structure then follows: an initial hidden layer with $n_0=512$ nodes is connected to five shortcut hops ($i$), each consisting of two layers with an equal number of nodes that decreases by a factor of $0.5$ from hop to hop, {\it i.e.} $n_i = 0.5 \times n_{i-1}$. Finally, two more hidden layers with 8 and 4 nodes are followed by the output layer, whose two nodes represent signal and background. The activation function used in every hidden layer is the rectified linear unit (``relu''), and a ``sigmoid'' function is used at the output to obtain the classification probabilities for signal and background events. The rest of the model parameters are described in Table \ref{tab:dnnparams}.
\begin{table}[htpb!]
\begin{center}
\footnotesize\setlength{\extrarowheight}{1pt}
\begin{tabular}{|l|c|c|}
\hline
Parameters & Description & Values/Choices \\
\hline
\texttt{loss\_function} & Function to be minimised to obtain the optimum model parameters & $binary\_crossentropy$ \\
\texttt{optimiser} & Performs gradient descent and backpropagation & $Adam$ \\
\texttt{eta} & Learning rate & $0.001$ \\
\texttt{batch\_len} & Number of events in each mini batch & $5000$ \\
\texttt{batch\_norm} & Normalisation of the activation outputs & $True$ \\
\texttt{dropout} & Fraction of nodes randomly dropped & $20\%$ \\
\texttt{L2-Regularizer} & Regularizes the loss to prevent over-fitting & $0.0001$ \\
\hline
\end{tabular}
\footnotesize
\end{center}
\caption{Details of the DNN parameters.}
\label{tab:dnnparams}
\end{table}
After training, we check the performance of the respective DNN models on the test dataset. Figure \ref{fig:ROCs} shows the receiver operating characteristic (ROC) curves and the corresponding area-under-the-curve (AUC) values for BP1. The performance of an MVA technique increases with increasing AUC, and the other benchmarks exhibit a similar behaviour.
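For illustration, a minimal Keras/TensorFlow sketch of the ResNet-like architecture described above is given below. The layer structure follows the text and the training parameters follow Table \ref{tab:dnnparams}; the exact placement of the shortcut, batch-norm and dropout layers, and all names and data handling, are our assumptions rather than the exact implementation used here:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

def shortcut_hop(x, n_nodes):
    # Two dense layers with batch-norm and dropout, plus a projected
    # shortcut connection added back in (one "hop").
    shortcut = layers.Dense(n_nodes)(x)
    for _ in range(2):
        x = layers.Dense(n_nodes, activation="relu",
                         kernel_regularizer=regularizers.l2(1e-4))(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.2)(x)
    return layers.Add()([x, shortcut])

n_features = 26                      # 25 for the NoL channel
inputs = layers.Input(shape=(n_features,))
x = layers.Dense(512, activation="relu")(inputs)
n = 512
for _ in range(5):                   # five hops, nodes halved each time
    n = int(0.5 * n)
    x = shortcut_hop(x, n)
x = layers.Dense(8, activation="relu")(x)
x = layers.Dense(4, activation="relu")(x)
outputs = layers.Dense(2, activation="sigmoid")(x)  # signal / background

model = Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["AUC"])
# model.fit(X_train, y_train, batch_size=5000, ...)
\end{verbatim}
\begin{figure}[htpb!]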
\centering
\subfigure[SL (BP1)]{
\includegraphics[scale=0.32]{ROC_SL_resnet_66.png}}
\subfigure[NoL (BP1)]{
\includegraphics[scale=0.32]{ROC_NoL_resnet_66.png}}
\caption{ROCs of the DNN trained for $M_A\,=\,66.39$ GeV in the (a) SL and (b) NoL channels. The true positive rate and false positive rate describe the signal efficiency and background efficiency, respectively.}
\label{fig:ROCs}
\end{figure}
The models are trained in a stochastic approach; therefore, with an increasing number of iterations the loss is expected to decrease, as the network learns the nature of signal and background from the distributions of the input features. We observe similar behaviour of the loss and the ROC for both training and test data, which indicates negligible over-training. Based on that, we proceed to use the respective models to evaluate the significance of the signal benchmarks. We also consider a $5\%$ linear-in-background systematic uncertainty on the background contribution to assess its effect on the signal significance values.
\begin{table}[h!]
\begin{center}
{\footnotesize \centering
\setlength{\tabcolsep}{0.7em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}{*}{Benchmark} & Cut on & \multicolumn{2}{c|}{Remaining events} & \multicolumn{2}{c|}{Significance} \\
\cline{3-6}
& DNN response & Signal & Background & $\theta\,=\,0\%$ & $\theta\,=\,5\%$ \\
\hline
\multirow{2}{*}{BP1} & $0.91$ & $1201$ & $6228$ & $15.2$ & $3.74$ \\
& $0.67$ & $2470$ & $12806$ & $21.8$ & $3.80$ \\
\hline
\multirow{2}{*}{BP2} & $0.89$ & $1442$ & $7852$ & $16.3$ & $3.58$ \\
& $0.68$ & $2663$ & $13526$ & $22.9$ & $3.88$ \\
\hline
\multirow{2}{*}{BP3} & $0.95$ & $1400$ & $9084$ & $14.7$ & $3.02$ \\
& $0.67$ & $2137$ & $11328$ & $20.1$ & $3.71$ \\
\hline
\multirow{2}{*}{BP4} & $0.88$ & $641$ & $3926$ & $10.2$ & $3.11$ \\
& $0.88$ & $1194$ & $6643$ & $14.6$ & $3.49$ \\
\hline
\multirow{2}{*}{BP5} & $0.97$ & $167$ & $3542$ & $2.80$ & $0.89$ \\
& $0.91$ & $264$ & $2431$ & $5.36$ & $2.02$ \\
\hline
\end{tabular}}}
\end{center}
\caption{ Best cut on the DNN response and the corresponding signal and background yields for the five signal benchmark points. The last two columns show the signal significance values at ${\cal L}\,=\,3000\,{\rm fb^{-1}}$ for systematic uncertainties $(\theta)$ of 0 and 5$\%$, respectively.}
\label{tab:dnncut}
\end{table}
Table \ref{tab:dnncut} lists the best cut on the DNN response for each signal benchmark, keeping $B \geq 5 \times S$ \cite{Cowan:2010js} to make sure that everything remains in the asymptotic regime. The last column of Table \ref{tab:dnncut} shows the effect of a $5\%$ linear-in-background systematic uncertainty on the signal significance. The DNN is observed to perform better than the cut-based method; for instance, the statistical significance for BP5 improves by a factor $\gtrsim$ 2 upon switching to the DNN. In the absence of systematic uncertainties, this makes it possible to discover a pseudoscalar of mass 147 GeV (BP5) at 5$\sigma$.
\section{Summary and conclusions}\label{conclusions}
The recently reported discrepancy between the measured value of $M_W$ and its SM prediction has stirred up fresh hopes of having observed BSM phenomena. At the same time, the lingering excess in the anomalous magnetic moment of the muon has also opened the door to model building using BSM physics. In this study, we have proposed a solution to the twin anomalies in a framework comprising both color-singlet as well as color-octet scalars.
More precisely, the well-known Type-X 2HDM was augmented with a color-octet isodoublet. Particular emphasis has been laid on the role of the colored scalars in this context. That is, the virtual contributions of the colored scalars to the oblique parameters help uplift the predicted $W$-mass to the observed value. At the same time, two-loop Barr-Zee contributions induced by the colored scalars extend the parameter region compatible with muon $g-2$ relative to what is found for the pure Type-X 2HDM. We have proposed the $p p \to S_R \to S_I A \to b \overline{b} \tau^+ \tau^-$ signal in this work to look for the various scalars involved, both colorless as well as colored. The ensuing $b\overline{b}\tau^+\tau^-$ final state is attractive from the perspective of collider experiments. This signal has been analysed at the 14 TeV LHC using both cut-based as well as multivariate techniques, in particular deep neural networks. We have found that the observability of the framework improves appreciably upon incorporating the DNN. One must also note that the impact of systematics on the statistical significances is sizeable, owing to the large background contamination. Several sources of systematics have not been taken into account, such as jet-to-$\tau_h$ fakes, lepton-to-jet fakes, PDF uncertainties, and various normalisation and shape-based scale-factor templates. With a proper implementation of all the experimental details, such signal topologies have the potential to unravel the presence of both colorless as well as color-octet scalars at the HL-LHC.
\acknowledgements{IC acknowledges support from the Department of Science and Technology, Govt.\ of India, under grant number IFA18-PH214 (INSPIRE Faculty Award). NC acknowledges support from the Department of Science and Technology, Govt.\ of India, under grant number IFA19-PH237 (INSPIRE Faculty Award). The authors also acknowledge the support of the computing facilities of the Indian Association for the Cultivation of Science, the Saha Institute of Nuclear Physics and the Indian Institute of Technology Kanpur.}
\section{Appendix}\label{appendix}
\subsection{Yukawa scale factors}
\begin{table}[htpb!]
\centering
\begin{tabular}{ |c c c c c c c c c| }
\hline
$\xi^h_e$ & $\xi^h_\mu$ & $\xi^h_\tau$ & $\xi^H_e$ & $\xi^H_\mu$ & $\xi^H_\tau$ & $\xi^A_e$ & $\xi^A_\mu$ & $\xi^A_\tau$ \\
\hline
$-\frac{\text{sin}\alpha}{\text{cos}\beta}$ & $-\frac{\text{sin}\alpha}{\text{cos}\beta}$ & $-\frac{\text{sin}\alpha}{\text{cos}\beta}$ & $\frac{\text{cos}\alpha}{\text{cos}\beta}$ & $\frac{\text{cos}\alpha}{\text{cos}\beta}$ & $\frac{\text{cos}\alpha}{\text{cos}\beta}$ & tan$\beta$ & tan$\beta$ & tan$\beta$ \\
\hline
\end{tabular}
\caption{Various Yukawa scale factors for the lepton-specific case.}
\label{tab:xi}
\end{table}
\subsection{Functions in the two-loop BZ amplitudes}
\begin{subequations}
\begin{eqnarray}
\mathcal{F}(z) = \frac{1}{2} \int_{0}^{1} dx \frac{x(1-x)}{z-x(1-x)} ~{\rm ln} \left(\frac{z}{x(1-x)}\right), \\
\mathcal{G}(z^a,z^b,x) = \frac{{\rm ln} \left(\frac{z^a x + z^b (1-x)}{x(1-x)}\right)}{x(1-x) - z^a x - z^b (1-x)}.
\end{eqnarray}
\end{subequations}
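As a cross-check, the loop functions above are straightforward to evaluate numerically. A minimal Python sketch follows; the argument value in the example is illustrative, built from the benchmark masses used earlier:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def F(z):
    # F(z) = 1/2 * int_0^1 dx  x(1-x)/(z - x(1-x)) * ln(z / (x(1-x)))
    integrand = lambda x: x*(1-x)/(z - x*(1-x)) * np.log(z/(x*(1-x)))
    return 0.5 * quad(integrand, 0.0, 1.0)[0]

def G(za, zb, x):
    # G(z^a, z^b, x) as defined above; it is integrated over x in the
    # Barr-Zee amplitudes of the H+ W- gamma type.
    return np.log((za*x + zb*(1-x))/(x*(1-x))) / (x*(1-x) - za*x - zb*(1-x))

# e.g. z = M_{S+}^2 / M_H^2 for M_{S+} = 805 GeV and M_H = 100 GeV
print(F((805.0/100.0)**2))
\end{verbatim}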
\section{Introduction}
The brightest of the five main spiral galaxies that form the Sculptor group, NGC 300 is a fairly typical late-type galaxy \citep{1988ngc..book.....T} at a distance of $\sim 2.1$ Mpc \citep{1992ApJ...396...80F}. Most of the measurements of the distance to this galaxy are based on the luminosity of its Cepheid variable population. Based on near-infrared H-band observations of two long-period Cepheid variables, \citet{1987ApJ...320...26M} reported a distance modulus $(m-M)_0=26.35 \pm 0.25$. The distance was slightly revised by \citet{1988PASP..100..949W}, who derived a distance modulus $(m-M)_0=26.4 \pm 0.2$. Additional photometry of the same sample of variables by \citet{1992ApJ...396...80F} resulted in the already quoted distance $(m-M)_0=26.66 \pm 0.10$, subsequently revised to $(m-M)_0=26.63 \pm 0.06$ in \citet{2004ApJ...608...42S}. More recently, NGC 300 has been selected as a key target for the Araucaria Project\footnote{http://ifa.hawaii.edu/$\sim$bresolin/Araucaria}. \citet{2002AJ....123..789P} presented an extensive characterization of 117 Cepheid variables, most of which were new discoveries, observed with the 2.2m ESO/MPI telescope at La Silla, Chile. Additional V and I data were obtained by \citet{2004AJ....128.1167G} at Las Campanas and Cerro Tololo. Deep, near-infrared J and K band observations were obtained with the ESO VLT using the ISAAC camera, resulting in a final distance modulus $(m-M)_0=26.37 \pm 0.05 \,{\rm (random)} \pm 0.03 \,{\rm (systematic)}$ \citep{2005ApJ...628..695G}. The superb angular resolution offered by the Hubble Space Telescope has recently opened the possibility of determining the distance to NGC 300 using the Tip of the Red Giant Branch (TRGB). A set of HST WFPC2 fields was analyzed by \citet{2004ApJ...608...42S} and \citet{2004AJ....127.1472B}, and more recently by \citet{2005A&A...431..127T}. The derived distance moduli are $(m-M)_0=26.65 \pm 0.09$, $(m-M)_0=26.56 \pm 0.07 \pm 0.13$, and $(m-M)_0=26.50 \pm 0.15$, respectively. In this paper, we present the first TRGB distance based on deep ACS observations of NGC 300. These data are the deepest ever obtained for this galaxy, and they sample both the inner bulge and the outer disk. The paper is organized as follows: Section \ref{data} presents the data, the reduction techniques we adopted, and the resulting color-magnitude diagrams (CMDs). We describe the TRGB method and its application to NGC 300 in Section \ref{distance}. We discuss our results in Section \ref{discussion} and a brief summary is presented in Section \ref{conclusions}.
\section{Observations, Data Reduction, and Color-Magnitude Diagrams}
\label{data}
The ACS observations used to derive a new TRGB distance to NGC 300 were obtained during HST Cycle 11, as part of program GO-9492 (PI: Bresolin), from July 2002 to December 2002. The main purpose of these observations was to complement the extensive ground-based CCD photometry of Cepheid variable stars and blue supergiant stars collected in the framework of the Araucaria project. Two-orbit HST visits allowed us to obtain deep photometry in the F435W, F555W (1080 seconds), and F814W (1440 seconds) filters. A total of six fields were observed. Stellar photometry was performed with the DOLPHOT (version 1.0) package, an adaptation of HSTphot \citep{2000PASP..112.1383D} to ACS images. Pre-computed point spread functions were adopted, and the final calibrated photometry was then transformed to the standard BVI system using the equations provided by \citet{2005astro.ph..7614S}.
The transformation from one photometric system to another inevitably introduces additional uncertainties, but it seems necessary given that most of the calibrations of the absolute magnitude of the TRGB are in the I band. For a more extended discussion of the issues related to the calibration, see \citet{Bresolin:rb}. As an example of the quality of the results, the final calibrated CMDs are shown in Figures \ref{cmd_1.ps} and \ref{cmd_3.ps} for Fields 1 and 3, respectively. Field 1 is situated close to the eastern outer edge, while Field 3 is centered on the nucleus of the galaxy \citep[see][for a map of the observed Fields]{Bresolin:rb}. All the CMDs show a very well pronounced sequence of blue young stars, reaching down to the lower age limit of the isochrone sets \citep[$\sim 60$ Myr,][]{2000A&AS..141..371G}. Blue-loop stars occupy the central region of the diagrams, and a well defined red giant branch (RGB) extends from I $\sim 22$ down to the photometric detection limit, I $\sim 26$. A full discussion of the CMD features, along with a reconstruction of the star formation history, will be presented in a forthcoming paper.
\begin{figure}
\plotone{f1.eps}
\caption{(V-I,I) color-magnitude diagram for Field 1 of NGC 300. The Field is situated close to the eastern edge of the galaxy.}
\label{cmd_1.ps}
\end{figure}
\begin{figure}
\plotone{f2.eps}
\caption{(V-I,I) color-magnitude diagram for Field 3 of NGC 300. The Field is situated on top of the nucleus of the galaxy.}
\label{cmd_3.ps}
\end{figure}
\section{The Distance to NGC 300}
\label{distance}
\subsection{Detection of the tip}
The distance estimates based on the RGB tip rest on a solid physical basis: low-mass stars reach the end of their ascent along the RGB with a degenerate helium core, and they ignite helium burning within a very narrow range of luminosity \citep[][and references therein]{2002PASP..114..375S}. The potential of the method was revealed in a seminal paper by Lee and collaborators \citep{1993ApJ...417..553L}, along with a first attempt to objectively estimate the position of the tip on the CMD, based on a digital edge-detection (ED) filter of the form [-2,0,2] applied to the I-band luminosity function. This filter effectively responds to changes in the slope of the luminosity function and displays a peak corresponding to the TRGB. A refined version of this method was presented in \citet{1996ApJ...461..713S}. More recently, a different approach was suggested by \citet{2002AJ....124..213M}. To avoid problems related to binning, this method uses a maximum-likelihood (ML) analysis to obtain the best fit of a parametric RGB luminosity function to the observed one. Each of these methods has advantages and disadvantages. ED methods are quite sensitive to binning, but they do not require any {\it a priori} assumption on the shape of the RGB luminosity function. ML methods use much more information, because every star of the sample contributes to the probability distribution, but they require a theoretical luminosity function as an input. In this work, we use both approaches and we discuss the different results. Whenever color information is available, it is advisable to restrict the analysis of the luminosity function to a suitable region carefully chosen to represent the RGB. To perform this selection, we took into account the available calibrations of the absolute magnitude of the RGB tip.
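To make the edge-detection step concrete, a minimal sketch of the [-2,0,2] filter applied to a binned I-band luminosity function is given below; the bin width and function name are illustrative, not the exact values used in this work:
\begin{verbatim}
import numpy as np

def trgb_edge_detect(i_mag, bin_width=0.05):
    # Bin the I-band luminosity function of the RGB sample.
    bins = np.arange(i_mag.min(), i_mag.max() + bin_width, bin_width)
    lf, edges = np.histogram(i_mag, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # [-2, 0, 2] filter: response[i] ~ 2*(lf[i+1] - lf[i-1]), i.e. a
    # discrete slope estimate; np.correlate avoids the kernel flip
    # that np.convolve would introduce.
    response = np.correlate(lf, np.array([-2, 0, 2]), mode="same")
    # The strongest positive edge marks the TRGB candidate.
    return centers[np.argmax(response)]
\end{verbatim}
In practice the detection is restricted to stars pre-selected to lie on the RGB and, as discussed below, the associated uncertainty can be estimated by bootstrap resampling of that sample.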
As discussed in Section \ref{calib}, one of the most reliable calibrations available to date is based on the absolute magnitude of the RGB tip measured on a large sample of stars of the globular cluster $\omega$ Centauri \citep{2001ApJ...556..635B}, at a metallicity of $\rm{[Fe/H]} \sim -1.7$. To be able to apply this calibration to our data, we selected our RGB stars using the ridge line of $\omega$ Centauri, keeping stars in a narrow ($\sim$0.1 mag) range on both sides of it. Section \ref{calib} will present a discussion of the implications of this choice. The upper panels of Figures \ref{trgb_1.ps} - \ref{trgb_6.ps} show the detection of the RGB tip using the ML approach presented by \citet{2002AJ....124..213M}. The continuous line shows the observed RGB luminosity function, while the best fit is shown by a dashed line. The results of the detection are presented in columns 4 and 5 of Table \ref{tab1}. The lower panels of the same set of Figures show the detection of the RGB tip using the ED filter, in a version similar to the one presented in \citet{1996ApJ...461..713S}. The continuous line shows the response of the ED filter, while the vertical line indicates the position of the center of the highest peak. The results of the measurements are reported in columns 2 and 3 of Table \ref{tab1}. The discontinuity in the luminosity function due to the RGB tip is conspicuous in most cases, although a significant amount of contamination from AGB stars affects Fields 2 and 3, producing a rather smooth slope at the level of the RGB tip. The effect of AGB contamination has been investigated in many studies \citep[e.g., see][]{makarov:co,2004ApJ...606..869B}, with the conclusion that in most cases the RGB tip detection is quite insensitive to this contamination. This is further confirmed by the results presented here: the RGB tip positions measured in Fields 2 and 3 do not significantly differ from the positions measured in any other field. To estimate the errors connected with the detection of the RGB tip, we adopted a bootstrap resampling strategy similar to the one presented in \citet{2002AJ....124..213M}. The sample of stars chosen to represent the RGB was resampled 500 times, and the RGB tip was measured for each realization. The r.m.s. of the results is quoted in columns 3 and 5 of Table \ref{tab1}, for the ED and ML methods, respectively.
\begin{table}
\begin{tabular}{c|cc|cc}
\tableline
\tableline
& \multicolumn{2}{c|}{Edge detector} & \multicolumn{2}{c}{Maximum likelihood} \\
Field & $I_{TRGB}$ & $\sigma$ & $I_{TRGB}$ & $\sigma$ \\
\tableline
1 & 22.48 & 0.09 & 22.50 & 0.03 \\
2 & 22.40 & 0.03 & 22.48 & 0.02\\
3 & 22.48 & 0.06 & 22.50 & 0.02\\
4 & 22.42 & 0.16 & 22.48 & 0.06\\
5 & 22.50 & 0.10 & 22.50 & 0.02\\
6 & 22.39 & 0.12 & 22.45 & 0.08\\
\tableline
\end{tabular}
\caption{Results of the measurements of the magnitude of the RGB tip.\label{tab1}}
\end{table}
\begin{figure}
\plotone{f3.eps}
\caption{Upper panel: Detection of the TRGB using the ML method applied to Field 1. Lower panel: Detection of the TRGB using the ED method applied to Field 1.}
\label{trgb_1.ps}
\end{figure}
\begin{figure}
\plotone{f4.eps}
\caption{Upper panel: Detection of the TRGB using the ML method applied to Field 2. Lower panel: Detection of the TRGB using the ED method applied to Field 2.}
\label{trgb_2.ps}
\end{figure}
\begin{figure}
\plotone{f5.eps}
\caption{Upper panel: Detection of the TRGB using the ML method applied to Field 3.
Lower panel: Detection of the TRGB using the ED method applied to Field 3.}
\label{trgb_3.ps}
\end{figure}
\begin{figure}
\plotone{f6.eps}
\caption{Upper panel: Detection of the TRGB using the ML method applied to Field 4. Lower panel: Detection of the TRGB using the ED method applied to Field 4.}
\label{trgb_4.ps}
\end{figure}
\begin{figure}
\plotone{f7.eps}
\caption{Upper panel: Detection of the TRGB using the ML method applied to Field 5. Lower panel: Detection of the TRGB using the ED method applied to Field 5.}
\label{trgb_5.ps}
\end{figure}
\begin{figure}
\plotone{f8.eps}
\caption{Upper panel: Detection of the TRGB using the ML method applied to Field 6. Lower panel: Detection of the TRGB using the ED method applied to Field 6.}
\label{trgb_6.ps}
\end{figure}
\subsection{Distance modulus}
\label{calib}
The first calibration of the absolute magnitude of the RGB tip dates back to the early 1990s. \citet{1993ApJ...417..553L} defined the distance modulus based on the RGB tip as $$(m-M)_I=I_{TRGB}+BC_I-M_{bol,TRGB}$$ where $BC_I$ is the bolometric correction to the I magnitude, and $M_{bol,TRGB}$ is the bolometric magnitude of the TRGB. $BC_I$ and $M_{bol,TRGB}$ are given in \citet{1990AJ....100..162D} as $\rm{BC_I}=0.881-0.243(V-I)_0$ and $M_{bol}=-0.19\rm{[Fe/H]}-3.81$. These calibrations are based on the distance scale of \cite{1990ApJ...350..155L}, in which the magnitude of RR Lyrae stars is $M_V(RR)=0.82 + 0.17 \rm{[Fe/H]}$. All these relations are based on a small sample of RGB stars observed in a few template globular clusters, and they only cover the range $-2.17 < \rm{[Fe/H]} < -0.71$. An extensive set of computer simulations was performed by \citet{1995AJ....109.1645M} to test for possible systematic effects on the detection of the RGB tip. The authors found that a reasonable lower limit to the number of stars within 1 magnitude of the tip is 50; below this level, strong biases can affect the determined magnitude of the tip. Note that in the sample of \citet{1990AJ....100..162D} the number of stars within 1 magnitude of the tip is never larger than 20, and can be as low as 2. A significant improvement on this situation was presented by \citet{2001ApJ...556..635B}. In their work, the authors derive a new calibration of the magnitude of the tip in the form $M_I^{TRGB}=0.14\rm{[Fe/H]}^2 + 0.48\rm{[Fe/H]} -3.66$. The result is based on an extensive sample of stars observed in different bands, including the near-IR, presented in \citet{1999AJ....118.1738F,2000AJ....119.1282F}. Although based on a larger sample of stars than the one presented in \citet{1993ApJ...417..553L}, this calibration still does not meet the completeness criteria established by \citet{1995AJ....109.1645M}. In addition, both this calibration and the one by \citet{1993ApJ...417..553L} require a knowledge of the metallicity of the underlying population, either measured independently or deduced from the color of the RGB, iterating through measurements of the distance and the metallicity. The only calibration based on a sufficient number of stars is the one derived for $\omega$ Centauri by \citet{2001ApJ...556..635B}. According to this calibration, the absolute magnitude of the RGB tip is $M_I^{TRGB}=-4.04 \pm 0.12$ at a metallicity of ${\rm [Fe/H]} \sim -1.7$. This value is tied to the distance of the eclipsing binary OGLEGC 17 in $\omega$ Centauri \citep{2001AJ....121.3089T}, and it is completely independent of any other optical RR Lyrae distances.
A possible source of uncertainty associated with this calibration is the wide and complex color/metallicity distribution observed in $\omega$ Centauri, but several studies have shown that the dominant population is rather metal-poor, and that the peak of the metallicity distribution is at ${\rm [Fe/H]} \sim -1.7$ \citep{2000ApJ...534L..83P,1996AJ....111.1913S}. In this work, we adopt the value $M_I^{TRGB}=-4.04 \pm 0.12$. We note that this assumption is the reason behind the selection criteria we adopted to define the RGB sample, as can be verified in Figure \ref{omegacen.ps}. The left panel of Figure \ref{omegacen.ps} shows the CMD of NGC 300, Field 2; only 20\% of the stars are plotted, for easier reading. The right panel shows the CMD of $\omega$ Centauri from \citet{2000A&AS..145..451R,2000A&AS..144....5R}. Horizontal and vertical lines show the position of the RGB tip as measured in NGC 300. It is evident that it is possible to define in NGC 300 a sample of RGB stars that perfectly overlaps with the RGB of $\omega$ Centauri. Assuming $E(B-V)=0.096 \pm 0.008$ \citep{2005ApJ...628..695G}, we derived distance moduli with both the ED and ML methods for the six ACS Fields. The results are presented in Table \ref{tab3}. To estimate the errors attached to these measurements, we separate the errors connected with the detection of the tip and the photometric calibration ({\em internal} errors) from the errors due to the extinction correction and the calibration of the absolute magnitude of the tip ({\em external} errors). The errors due to the detection of the tip have already been discussed earlier in this Section. The error connected with the conversion from the ACS photometric system to the BVI system can be quantified as 0.02 mag \citep{2005astro.ph..7614S}. The error attached to the $E(B-V)$ measurement provided by \citet{2005ApJ...628..695G} is 0.006 mag, which translates into a total of 0.01 mag on $A_I$. Finally, the error in the absolute calibration is 0.12 mag \citep{2001ApJ...556..635B}, and it is essentially determined by the uncertainty in the distance to $\omega$ Centauri \citep{2001AJ....121.3089T}. The total {\em internal} errors attached to the distance moduli computed for the six Fields are reported in columns 3 and 5 of Table \ref{tab3}, for the ED and ML methods, respectively.
\begin{table}
\begin{tabular}{c|cc|cc}
\tableline
\tableline
& \multicolumn{2}{c|}{Edge detector} & \multicolumn{2}{c}{Maximum likelihood} \\
Field & $(m-M)_0$ & $\sigma$ & $(m-M)_0$ & $\sigma$ \\
\tableline
1 & 26.35 & 0.09 & 26.37 & 0.03 \\
2 & 26.26 & 0.04 & 26.35 & 0.03\\
3 & 26.35 & 0.06 & 26.37 & 0.03\\
4 & 26.28 & 0.16 & 26.35 & 0.06\\
5 & 26.37 & 0.10 & 26.37 & 0.03\\
6 & 26.26 & 0.13 & 26.32 & 0.08\\
\tableline
\end{tabular}
\caption{Results of the measurements of the distance modulus. \label{tab3}}
\end{table}
To derive our final distance moduli, we computed a weighted mean of the measurements in the six Fields. The results are:
$$(m-M)_0=26.30 \pm 0.03 \pm 0.12 (ED)$$
and
$$(m-M)_0=26.36 \pm 0.02 \pm 0.12 (ML).$$
\begin{figure}
\plotone{f9.eps}
\caption{ Left panel shows the CMD of NGC 300, right panel shows the CMD of $\omega$ Centauri. Vertical and horizontal lines indicate the color and the magnitude of the TRGB as measured in NGC 300.}
\label{omegacen.ps}
\end{figure}
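For transparency, the ML result can be reproduced with a few lines of code; the sketch below assumes an inverse-variance weighting of the per-field internal errors of Table \ref{tab3}, with the common extinction error folded into the final internal uncertainty (the external calibration error of 0.12 mag is quoted separately):
\begin{verbatim}
import numpy as np

# Per-field (m-M)_0 and internal errors, ML method (Table 3)
mu  = np.array([26.37, 26.35, 26.37, 26.35, 26.37, 26.32])
sig = np.array([0.03, 0.03, 0.03, 0.06, 0.03, 0.08])

w    = 1.0 / sig**2                 # inverse-variance weights
mean = np.sum(w * mu) / np.sum(w)   # weighted mean
err  = 1.0 / np.sqrt(np.sum(w))
err  = np.hypot(err, 0.01)          # fold in the common A_I error
print(f"(m-M)_0 = {mean:.2f} +/- {err:.2f}")   # ~ 26.36 +/- 0.02
\end{verbatim}
\section{Discussion}
\label{discussion}
Our selection of the sample of stars representing the RGB is entirely motivated by our choice of the absolute calibration of the RGB tip.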
This approach effectively limits the analysis to about 20\% of the total number of available RGB stars. As an alternative approach, one could choose to adopt a much larger sample of RGB stars, reaching the high-metallicity edge of the RGB. We argue that this alternative approach would provide consistent results, but with a lower precision. This is shown in Figure \ref{ferraro2.ps}. In this Figure we plot the CMD of NGC 300, Field 2, in the absolute plane, using the distance and the reddening provided by \citet{2005ApJ...628..695G}. The continuous line shows the color dependence of the RGB tip according to \citet{2001ApJ...556..635B}. It is evident that the slope of the relation $M_I^{TRGB}$ {\it vs.} $(V-I)_0$ reproduces very closely the observed data. On the other hand, using the high-metallicity part of the CMD would introduce additional errors due to the still uncertain slope of the high-metallicity extension of the calibration.
\begin{figure}
\plotone{f10.eps}
\caption{CMD of NGC 300 in the absolute plane. The continuous line shows the color dependence of the TRGB according to \citet{2001ApJ...556..635B}}
\label{ferraro2.ps}
\end{figure}
Another issue that deserves attention is the age of the underlying population used to define the RGB sample. Whenever the RGB tip technique is applied to a composite stellar population, the possibility of biases arises, because the presence of a well-developed and populated RGB does not necessarily imply the presence of a globular cluster-like population, while the calibration of the absolute magnitude of the RGB tip relies completely on a sample of globular clusters. \citet{2004ApJ...606..869B} reported that the RGB distances are rather insensitive to the stellar populations, provided that most of the stars are more metal-poor than $\rm{[Fe/H]}=-0.3$ and that there was no strong star formation burst between 1 and 2 Gyr. \citet{2005MNRAS.357..669S} extended this analysis to real cases, and showed that applying the standard technique for RGB tip distances to the LMC and the SMC could result in significant deviations from the real value, due to the underestimation of the correct metallicity. We argue that the TRGB method can be safely applied to NGC 300 without introducing age- or metallicity-related biases. Indeed, \citet{2004AJ....127.1472B} have shown that the star formation history of this galaxy has been rather uniform throughout its life, and they found no indication of an increased star formation rate at young ages, except for a possible final burst at 200-100 Myr. Besides, both \citet{2004AJ....127.1472B} and the results of the Araucaria project \citep{2002ApJ...567..277B, 2003ApJ...584L..73U} show that the metallicity of NGC 300 has probably been lower than ${\rm [Fe/H]}=-0.5$ for the whole life of the galaxy. The result presented in this paper is fully consistent with the results recently derived by \citet{2005ApJ...628..695G}, based on the luminosity of Cepheid variable stars. Our distance modulus is also consistent with the one derived by \citet{2004AJ....127.1472B}, provided the difference in the adopted reddening correction is taken into account. Indeed, the observed magnitude of the RGB tip that we derived is consistent within the errors with the value $I_{TRGB}=22.52 \pm 0.02$ measured by \citet{2004AJ....127.1472B}, but those authors then apply a reddening correction $E(B-V)=0.013$ \citep{1998ApJ...500..525S}, which is much lower than the value adopted in this paper, resulting in a distance modulus $(m-M)_0=26.56 \pm 0.07 \pm 0.13$.
Similar considerations apply to the results published by \citet{2005A&A...431..127T}, although in this case we do not know the adopted calibration of the absolute magnitude of the RGB tip or the reddening correction applied. On the other hand, the results presented here show a significant discrepancy with the measurements of \citet{2004ApJ...608...42S}, who published a distance modulus $(m-M)_0=26.65 \pm 0.09$. The total difference between this value and ours is $\sim 0.3$ magnitudes. Half of this difference can be explained by the different assumption on the reddening, as in the case of the distance presented by \citet{2004AJ....127.1472B}, but a further difference of $\sim 0.16$ remains to be explained. It appears that this difference can be accounted for by the difference in the estimated level of the RGB tip, measured at $I_{TRGB}=22.49 \pm 0.01$ in this paper, and at $I_{TRGB}=22.62 \pm 0.07$ by \citet{2004ApJ...608...42S}. It is difficult to provide an explanation for this difference, but a value of $I_{TRGB}=22.60$ is not compatible with our data. Besides, it is interesting to note that the data analyzed by \citet{2004ApJ...608...42S} were also analyzed by \citet{2004AJ....127.1472B}, where they are indicated as field F3. Both groups determined the RGB tip to be around 22.6, but they also warned the reader that the field analyzed was poorly populated, and that the determination could be uncertain. Indeed, \citet{2004AJ....127.1472B} rejected the result derived from this WFPC2 field as unreliable. \citet{2004AJ....127.1472B} also analyzed an additional field, indicated as field F1, and for that field they derived the already quoted value of $I_{TRGB}=22.52 \pm 0.02$, in agreement with our determination. Our conclusion is that WFPC2 and ACS measurements agree within the errors when a sufficient number of stars is used, as is the case for field F1 of \citet{2004AJ....127.1472B}. Finally, \citet{2004ApJ...608...42S} also reported that the Cepheid distance to NGC 300, based on the measurements of \citet{1992ApJ...396...80F}, is $(m-M)_0=26.63 \pm 0.06$, but using the calibration of \citet{1999AcA....49..201U} the distance would be $(m-M)_0=26.53 \pm 0.05$, which would be in agreement with our determination if our value for the reddening were used. \section{Conclusions} \label{conclusions} We have presented a new measurement of the distance to NGC 300 based on the deepest available photometric catalog, obtained with the Advanced Camera for Surveys on board the Hubble Space Telescope. We have used both edge-detection and maximum likelihood methods, and we have applied the methods independently to six different ACS Fields. All the Fields give consistent results. We have also discussed the possibility of biases in our results related to the application of the TRGB method to a composite stellar population, and we have concluded that NGC 300 is likely to be a case in which this distance estimator can be safely applied. Our result is fully consistent with the recent distance determination from near-infrared photometry of Cepheid variables \citep{2005ApJ...628..695G}. Since their result is tied to an assumed LMC distance modulus of 18.50, our independent TRGB distance determination of NGC 300 supports an LMC distance modulus of, or very close to, 18.50.
The distance modulus we derive is also consistent with other recent determinations based on the TRGB \citep{2005A&A...431..127T,2004AJ....127.1472B} if our reddening value is used in these studies; however, our present determination has succeeded in reducing the internal errors of the result by a factor of $\sim 3$. \acknowledgements{WP and GP gratefully acknowledge support for this work from the Chilean FONDAP Center for Astrophysics 15010003 and the Polish KBN grant No.\ 2P03D02123. Support for program \# GO-9492 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. We would like to thank the referee for useful suggestions and comments that helped improve this paper.}
\section{Introduction} We are engaged in a large scale project to find additional extremely metal poor stars in the halo of our galaxy. The major existing survey for very metal-poor stars is the HK survey described in detail by Beers, Preston \& Shectman (1985, 1992). The stellar inventory of this survey has been scrutinized with considerable care over the past decade, but, as summarized by \cite{beers98}, only roughly 100 stars are believed to be extremely metal poor (henceforth EMP), with [Fe/H] $\le -3.0$ dex\footnote{The standard nomenclature is adopted; the abundance of element $X$ is given by $\epsilon(X) = N(X)/N(H)$ on a scale where $N(H) = 10^{12}$ H atoms. Then [X/H] = log$_{10}$[N(X)/N(H)] $-$ log$_{10}$[N(X)/N(H)]\mbox{$_{\odot}$}, and similarly for [X/Fe].}. We are therefore exploiting the database of the Hamburg/ESO Survey (HES) for this purpose. The HES is an objective prism survey from which it is possible to efficiently select EMP stars \citep{christlieb03}. The existence of a new list of candidates for EMP stars with [Fe/H] $< -3$ dex selected in an automated and unbiased manner from the HES, coupled with the very large collecting area and efficient high resolution echelle spectrographs of the new generation of large telescopes, offers the possibility of a large increase in the number of EMP stars known and in our understanding of their properties. We have obtained and analyzed spectra with HIRES \citep{vogt94} at the Keck I Telescope of a large number of EMP candidates selected from the HES. The normal procedures outlined by \cite{christlieb03} to isolate EMP stars from the candidate lists produced by the HES were followed. In an effort to avoid selection biases, these differ from the criteria adopted by the HK Survey; see \cite{christlieb03} for details. Candidate EMP stars selected from the HES were vetted via moderate resolution spectroscopy at large telescopes to eliminate the numerous higher abundance interlopers. Most of the follow-up spectra for the stars discussed here were obtained with the Double Spectrograph \citep{dbsp} on the Hale Telescope at Palomar Mountain, denoted P200 (a few are from the Boller and Chivens spectrograph on the Clay and Baade Telescopes at the Las Campanas Observatory), during the period from 2001 to the present. We intend to observe all candidates to the magnitude limit of the HES (B $\sim$ 17.5) in our fields; observations are now complete in $\sim$990 deg$^2$, complete to B=16.5 in an additional $\sim$700 deg$^2$, and approaching completion in the remaining fields. These follow-up spectra are used to determine a measure of the metallicity of the star that is far more accurate than can be derived from the low resolution objective prism spectra of the HES itself. This is accomplished via a combination of the strength of absorption in H$\delta$ (determining $T_{eff}$) and in the Ca~II line at 3933~\AA\ (the KP index, which determines [Fe/H], once $T_{eff}$\ and hence log($g$)\ are specified). A calibration between the strength of the indices and metallicity is required, and is generally derived from literature searches for high resolution abundance studies of relevant stars. We denote the resulting metallicity value as [Fe/H](HES). The specific algorithm adopted by the HES is described in \cite{beers99} and is essentially identical to that used by the HK Survey until recently; the latest updates to the algorithm as used by the HK Survey are described in \cite{rossi05}.
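To make the footnote's bracket notation concrete, the following minimal sketch implements it (the solar value $\log\epsilon({\rm Fe})_\odot = 7.50$ used in the example is an illustrative assumption, not a value taken from this paper):
\begin{verbatim}
# [X/H] = log10(N(X)/N(H))_star - log10(N(X)/N(H))_Sun, on the
# scale where log eps(H) = 12 (see the footnote above).
def bracket(log_eps_star, log_eps_sun):
    return log_eps_star - log_eps_sun

# A star with log eps(Fe) = 4.5 and an assumed solar log eps(Fe)
# of 7.50 has [Fe/H] = -3.0, i.e. it qualifies as an EMP star.
print(bracket(4.5, 7.50))  # -3.0
\end{verbatim}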
\section{Systematic Calibration Problems in the Metallicity Scale of the HES and HK Survey} Stars were chosen for observation at high resolution with HIRES primarily on the basis of low predicted metallicity; every star with [Fe/H](HES) $\le -2.9$ dex north of $\delta = -25^{\circ}$ was put on the HIRES observing list. Spectra have now been obtained for more than 55 EMP candidates from the HES. In all cases the stellar parameters have been determined by JGC from broad band (V-I, V-J and V-K) photometry and theoretical isochrones with no reference to the spectra themselves. Insofar as possible, the procedures, the codes, the model atmospheres \citep[we use those of][]{kurucz93} and the atomic data used to reduce the HIRES echelle spectra and to carry out the detailed abundance analysis are identical, and the analyses are thus as homogeneous as possible. \cite{cohen04} present full details of the analysis and results for a large sample of EMP dwarfs from the HES, while \cite{cohen05b} will present abundance analyses for 13 of the 16 known carbon stars from this HES sample; two of these have not yet been observed with HIRES. Fifteen C-normal giants have been analyzed to date, and these will appear in a future publication \citep{cohen05d}. Our operational definition of a carbon star (C-star) is a star whose spectrum shows bands of C$_2$. The P200 DBSP spectra mostly extend to 5300~\AA, hence the prominent C$_2$\ band at 5160~\AA\ is included. We are reasonably certain, as will be discussed in \cite{cohen05c}, through inspection of the regions of both C$_2$\ and CH in these follow-up spectra, that there are no additional giant C-stars in the Palomar sample. If no C$_2$\ bands are detected, but [C/Fe]$ > 1$ dex, we denote a star as C-enhanced. Also, we denote stars with $T_{eff}$\ $ > 6000$~K as ``dwarfs'', while all cooler stars are called ``giants''. The difference between the [Fe/H] derived from analysis of the HIRES spectrum and that obtained by applying the algorithm of \cite{beers99} to the moderate resolution follow-up spectra, for the set of 497 candidate EMP stars with P200 spectra, is shown in Fig.~\ref{fig_delta_feh}. The giants with normal C and the warmer C-giants show good agreement; [Fe/H](HES) inferred from the moderate resolution spectra is a reliable indicator of the Fe-metallicity found from analysis of HIRES spectra. Getting the dwarfs correct is harder, as metal lines become weaker at their higher $T_{eff}$. Also, we have adopted a $T_{eff}$\ scale for them which is hotter than that used in most earlier analyses which provided the calibration of the \cite{beers99} relation for EMP dwarfs \citep*[see, e.g.][]{norris96,ryan96}. On average, [Fe/H](HES) systematically underestimates our HIRES-based Fe-metallicity by 0.37 dex for EMP dwarfs, corresponding to a systematic difference in adopted $T_{eff}$\ of $\sim$400~K. However, Fig.~\ref{fig_delta_feh} shows that for the cooler C-giants ($T_{eff}$\ $\lesssim 5200$~K), [Fe/H](HES) substantially underestimates our HIRES-based Fe-metallicity by $\sim$1 dex. \cite{preston01} suggested the presence of a systematic error of comparable size (up to 1 dex) for the sample of C-stars they analyzed from the HK Survey. This is a very large systematic error, much too large to be caused by problems in the $T_{eff}$\ scale, and so we attempt to understand what might be causing it. In Fig.~\ref{fig_spec} we show sections of the HIRES spectra of three EMP candidates from the HES shifted into the rest frame.
These stars all have $T_{eff}$ $\sim 5150$~K. Two are C-stars, the third is a genuine EMP giant with weak CH. Each of these stars has [Fe/H](HES) $< -3.2$ dex. The red and blue continuum bandpasses and the feature bandpass are shown for the KP and the HP2 indices used to determine [Fe/H](HES), with a feature bandpass 12~\AA\ wide for each; see \cite{beers99} for details of the index definitions. The figure clearly shows the source of the problem afflicting the C-stars -- the ``continuum'' bands are full of strong molecular absorption, particularly the red continuum band for the HP2 index. If the HP2 index is underestimated because the continuum is depressed, then the star is assumed to be cooler than it actually is, and the resulting [Fe/H](HES) for a fixed KP index will be too low. Furthermore, the blue continuum region of the KP index also shows strong CH absorption (the big chunks missing from the spectra of the C-stars in the left column blueward of the 3933~\AA\ Ca~II line), and hence the abundance indicator KP will also be underestimated. The derived [Fe/H](HES) obtained using the calibration of \cite{beers99} will thus be substantially reduced below its true value for such C-stars. Because the absorption in the relevant spectral regions arises from both CN and CH, the magnitude of this effect depends on additional factors such as the C-enhancement and the C/N ratio as well as on $T_{eff}$. Several tests have been performed to verify this. First, we checked that the measured KP and HP2 indices for C-stars (and for C-normal stars) can be reproduced to within their uncertainties from the much more precise HIRES spectra. We also checked that adding back the missing flux removed from the continuum by absorption in the sidebands significantly increases the KP and particularly the HP2 indices above the measured values, by factors of two or more. Finally, we checked that by so altering the KP and HP2 indices we derive a significantly higher value for [Fe/H](HES), which is much closer to that obtained by the detailed abundance analyses for the cooler C-stars. \section{Discussion and Implications} An underestimate of a factor of $\sim$1 dex in the deduced value of [Fe/H](HES) for the cooler C-stars will have significant effects. Fig.~\ref{fig_feh_vmk} shows [Fe/H](HES) versus V-K for a sample of 489 EMP candidates from the HES with moderate resolution spectra from the Double Spectrograph at the Hale Telescope \citep[details will appear in][]{cohen05c}. The 10 known C-stars and the 1 C-enhanced star with HIRES analyses\footnote{The C-enhanced star is HE0024--2523, see \cite{lucatello03} and references therein.} from this sample are indicated. In the upper panel, these stars are plotted at their [Fe/H](HES) values, while in the lower panel they are plotted at their [Fe/H](HIRES) as determined from detailed abundance analyses\footnote{Two of the C-stars in the P200 sample have not been observed with HIRES; they are shown with the appropriate offset determined from Fig.~\ref{fig_delta_feh}.}. Although at their nominal Fe-metallicities the C-stars dominate the population of the giants below [Fe/H](HES) $= -3$ dex, using the results from analysis of high resolution spectra in the lower panel, the frequency of C-stars among the most metal poor EMP stars is considerably reduced.
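The continuum-depression bias described above can be illustrated with a deliberately schematic toy model (this is not the actual KP/HP2 pipeline, and the flux levels are invented for illustration): depressing the sideband pseudo-continuum by molecular absorption cuts the measured index roughly in half, consistent with the factors of two found in the tests just described.
\begin{verbatim}
# Toy line index: fractional flux deficit in the feature band
# relative to the continuum estimated from the sidebands.
def line_index(f_feature, f_sidebands):
    return 1.0 - f_feature / f_sidebands

f_feature = 0.40        # flux in the feature bandpass (made-up units)
print(line_index(f_feature, 1.00))  # 0.60 with a clean continuum
print(line_index(f_feature, 0.60))  # 0.33 with CH/CN-depressed sidebands
\end{verbatim}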
It is our contention that, as shown in Fig.~\ref{fig_feh_vmk}, this underestimate of the metallicity of the cooler C-stars by the algorithm of \cite{beers99}, used by both the HES and in the past by the HK Survey, produces a spuriously high frequency of C-stars among EMP stars (see Fig.~\ref{fig_delta_feh}). Using the [Fe/H](HES) values for the C-stars would yield an apparent C-star fraction of 33\% for [Fe/H] $\le -3$ dex, while using the HIRES Fe-metallicities, a value a factor of 2.4 smaller is obtained. There are 122 HES giants with [Fe/H](HES) $< -2.0$ dex in our sample, suggesting a C-star frequency of 7.4$\pm2.9$\% for EMP stars. Adding in the fraction of C-enhanced stars among giants with [Fe/H](HES) $< -2.0$ dex found by \cite{cohen05a}, 6.5$\pm2.7$\%, one obtains a total fraction of C-rich stars with [C/Fe] $> 1.0$ dex of 14$\pm4$\% among the giants in our HES EMP sample with [Fe/H](HES) $< -2.0$ dex, smaller than the value of 25\% for stars with [Fe/H] $< -2.5$ dex given by \cite{marsteller}. (We have derived this fraction for the C-enhanced stars among the dwarfs in our sample with [Fe/H](HES) $< -2.0$ dex; it has the same value, as will be reported in Cohen et al.\ 2006a.) We are currently analyzing larger samples to refine this fraction. It will probably be necessary to include an indicator of the strength of the molecular bands \citep[most easily the G band of CH, i.e.\ the GP index already introduced in][]{bee85} with the standard KP and HP2 indices in the calibration algorithm to obtain valid Fe-abundances and maximum information from the HES and HK Survey samples, and/or to replace the HP2 index with a combination of V from the HES and J or K from 2MASS; the latter was not available at the time the HK Survey defined their metallicity calibration algorithms \citep*[see, e.g.][]{rossi05}. The metallicity distribution function of halo stars declines very sharply at the lowest Fe-metallicities. Thus the systematic errors we have found in the calibration of the HES, and by inference in the HK metallicity scale of \cite{beers99}, at least until quite recently \citep*[see][]{rossi05}, will also lead directly to systematic overestimates of the number of EMP stars and of the yield of EMP stars by these two major surveys. We are currently evaluating in detail the impact of these calibration errors on such issues. \acknowledgements The entire Keck/HIRES user community owes a huge debt to the many other people who have worked to make the Keck Telescope and HIRES a reality and to operate and maintain the Keck Observatory. JGC and JM are grateful for partial support from NSF grant AST-0205951. JGC is grateful for support from the Ernest Fullam Award of the Dudley Observatory, which helped initiate this work. The work of N.C. and FJZ is supported by Deutsche Forschungsgemeinschaft (grants Ch~214-3 and Re~353/44). N.C. acknowledges support through a Henri Chretien International Research Grant administered by the American Astronomical Society.
\section{Introduction} Dirac's equation describes the behavior of particles with mass and spin and how they couple to the electromagnetic field. The usual form of Dirac's equation is \[ (\imath\gamma^{\mu}\partial_{\mu}-m)\Psi(x)=0 \] The electromagnetic field is introduced by the minimal coupling prescription\cite{ref:Peskin} $\partial_{\mu}\rightarrow D_{\mu}$, with \[D_{\mu}=\partial_{\mu}+\imath A_{\mu}(x) \] where $A_{\mu}$ is the electromagnetic vector potential. Dirac's equation can be further coupled to gravity (at the classical level) using the prescription\cite{ref:Brill_Wheeler} \[ \partial_\mu\rightarrow\partial_\mu-\Gamma_\mu \] and the equation then takes the form\cite{ref:Finster,ref:Brill_Wheeler,ref:Brill_Cohen,ref:Smoller_Finster,ref:Smoller_Finster2} \begin{equation} \label{eq:full_dirac} \tilde{\gamma}^{\mu}[\imath\partial_{\mu}-\imath\Gamma_{\mu}-A_\mu]\Psi(x)-m\Psi(x)=0 \end{equation} where $\Gamma_\mu$ is known as the spin connection, $A_\mu$ is the electromagnetic vector potential, and $m$ is the mass. The gravitational coupling enters through the modified Dirac matrices $\tilde{\gamma}_{\mu}$, which satisfy the anticommutation relation \[\{\tilde{\gamma}^{\mu},\tilde{\gamma}^{\nu}\}=I g^{\mu\nu}, \] and the operator $\partial_{\mu}-\Gamma_{\mu}$ is (in the absence of electromagnetic interactions) the covariant derivative for spinor fields in a curved space\cite{ref:Brill_Wheeler}. The above form of Dirac's equation describes the dynamics of the spinor field $\Psi$ when coupled to the field $A_{\mu}$ and gravity. There are two additional equations which describe the dynamics of $A_{\mu}$ and $g_{\mu\nu}$; these are the Einstein field equations \begin{equation} \label{eq:einstein_field} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=T_{\mu\nu} \end{equation} and Maxwell's equations \begin{equation} \label{eq:maxwell} \nabla_\mu F^{\mu\nu}=4\pi e\bar{\Psi}\gamma^\nu\Psi \end{equation} Equations~(\ref{eq:full_dirac}), (\ref{eq:einstein_field}) and (\ref{eq:maxwell}) are collectively known as the Einstein-Dirac-Maxwell equations\cite{ref:Smoller_Finster,ref:Smoller_Finster2,ref:Krori}. The subject of this paper is Equation~(\ref{eq:full_dirac}). We will show that the equations of motion of an elastic solid have the same form as Equation~(\ref{eq:full_dirac}), with the mass and electromagnetic terms emerging naturally from the formalism. \section{Elasticity Theory} \label{sec:elasticity_theory} The theory of elasticity is usually concerned with the infinitesimal deformations of an elastic body\cite{ref:Love,ref:Sokolnikoff,ref:Landau_Lifshitz,ref:Green_Zerna,ref:Novozhilov}. We assume that the material points of a body are continuous and can be assigned a unique label $\vec{a}$. For definiteness the elastic body can be taken to be a three dimensional object, so each point of the body may be labeled with three coordinate numbers $a_{i}$ with $i=1,2,3$. If this three dimensional elastic body is placed in a large ambient three dimensional space, then the material coordinates $a_{i}$ can be described by their positions in the 3-D fixed space coordinates $x_{i}$ with $i=1,2,3$. In this description the material points $a_{i}(x_1,x_2,x_3)$ are functions of $\vec{x}$. A deformation of the elastic body results in displacements of these material points. If before deformation a material point $a_0$ is located at fixed space coordinates $x_1,x_2,x_3$, then after deformation it will be located at some other coordinates $x'_1,x'_2,x'_3$.
The deformation of the medium is characterized at each point by the displacement vector \[u_i=x'_i-x_i \] which measures the displacement of each point in the body after deformation. It is the aim of this paper to take this model of an elastic medium and derive from it equations of motion that have the same form as Dirac's equation. We first consider the effect of a deformation on the measurement of distance. After our elastic body is deformed, the distances between its points change as measured with the fixed space coordinates. If two points which are very close together are separated by a radius vector $dx_i$ before deformation, these same two points are separated by a vector $dx'_i=dx_i+du_i$ after deformation. The squared distance between the points before deformation is then $ds^2=dx_1^2+dx_2^2+dx_3^2$. Since these coincide with the material points in the undeformed state, this can be written $ds^2=da_1^2+da_2^2+da_3^2$. The squared distance after deformation can be written\cite{ref:Landau_Lifshitz} $ds'^{2}=dx_1'^2+dx_2'^2+dx_3'^2=\sum_i dx_i'^2=\sum_i(da_i+du_i)^2$. The differential element $du_i$ can be written as $du_i=\sum_k\frac{\partial u_i}{\partial a_k}da_k$, which gives for the distance between the points \begin{eqnarray*} ds'^2&=&\sum_i\left(da_i + \sum_k\frac{\partial u_i}{\partial a_k}da_k\right) \left(da_i + \sum_l\frac{\partial u_i}{\partial a_l}da_l\right)\\ &=&\sum_i\left(da_i da_i + \sum_k\frac{\partial u_i}{\partial a_k}da_i da_k+ \sum_l\frac{\partial u_i}{\partial a_l}da_i da_l + \sum_k\sum_l\frac{\partial u_i}{\partial a_k}\frac{\partial u_i}{\partial a_l}da_k da_l\right)\\ &=&\sum_i\sum_k\left(\delta_{ik}+\left(\frac{\partial u_i}{\partial a_k}+\frac{\partial u_k}{\partial a_i}\right)+\sum_l\frac{\partial u_l}{\partial a_i}\frac{\partial u_l}{\partial a_k}\right) da_i da_k\\ &=&\sum_{ik}\left(\delta_{ik}+2\epsilon'_{ik}\right)da_i da_k \end{eqnarray*} where $\epsilon'_{ik}$ is \begin{equation} \label{eq:strain_tensor} \epsilon'_{ik}=\frac{1}{2}\left(\frac{\partial u_i}{\partial a_k}+\frac{\partial u_k}{\partial a_i}+\sum_l \frac{\partial u_l}{\partial a_i}\frac{\partial u_l}{\partial a_k}\right) \end{equation} The quantity $\epsilon'_{ik}$ is known as the strain tensor. It is fundamental in the theory of elasticity. In most treatments of elasticity it is assumed that the displacements $u_i$ as well as their derivatives are infinitesimal, so the last term in Equation~(\ref{eq:strain_tensor}) is dropped. This is an approximation that we will not make in this derivation. The quantity \begin{eqnarray} \label{eq:metric} g_{ik}&=&\delta_{ik}+\frac{\partial u_i}{\partial a_k}+\frac{\partial u_k}{\partial a_i}+\sum_l \frac{\partial u_l}{\partial a_i}\frac{\partial u_l}{\partial a_k}\\ &=&\delta_{ik}+2\epsilon'_{ik}\nonumber \end{eqnarray} is the metric for our system and determines the distance between any two points.
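The exact relation between the metric and the strain tensor can be verified symbolically. The following sympy sketch (a verification aid, not part of the derivation) builds $g=J^TJ$ for a generic displacement field and confirms that it equals $\delta_{ik}+2\epsilon'_{ik}$ with the quadratic term retained:
\begin{verbatim}
import sympy as sp

a = sp.symbols('a1 a2 a3')
u = [sp.Function('u%d' % (i + 1))(*a) for i in range(3)]
x = [a[i] + u[i] for i in range(3)]                    # x'_i = a_i + u_i

J = sp.Matrix(3, 3, lambda i, j: sp.diff(x[i], a[j]))  # J_ij = dx'_i/da_j
g = J.T * J                                            # exact metric

eps = sp.Matrix(3, 3, lambda i, k: sp.Rational(1, 2) * (
    sp.diff(u[i], a[k]) + sp.diff(u[k], a[i])
    + sum(sp.diff(u[l], a[i]) * sp.diff(u[l], a[k]) for l in range(3))))

print((g - sp.eye(3) - 2 * eps).applyfunc(sp.expand))  # zero matrix
\end{verbatim}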
That this metric is simply the result of a coordinate transformation from the flat space metric can be seen by writing the metric in the form\cite{ref:Millman_Parker} \[ g_{\mu\nu}= \left( \begin{array}{lll}{\displaystyle \frac{\partial x'_1}{\partial a_1}}& {\displaystyle\frac{\partial x'_2}{\partial a_1}} & {\displaystyle\frac{\partial x'_3}{\partial a_1}}\\[15pt] {\displaystyle\frac{\partial x'_1}{\partial a_2}}& {\displaystyle\frac{\partial x'_2}{\partial a_2}} & {\displaystyle\frac{\partial x'_3}{\partial a_2}}\\[15pt] {\displaystyle\frac{\partial x'_1}{\partial a_3}}& {\displaystyle\frac{\partial x'_2}{\partial a_3}} & {\displaystyle\frac{\partial x'_3}{\partial a_3}} \end{array} \right) \left(\begin{array}{lll} {\displaystyle 1}& {\displaystyle 0} & {\displaystyle 0}\\[15pt] {\displaystyle 0}& {\displaystyle 1}& {\displaystyle 0}\\[15pt] {\displaystyle 0}& {\displaystyle 0} & {\displaystyle 1} \end{array} \right) \left(\begin{array}{lll} {\displaystyle \frac{\partial x'_1}{\partial a_1}}& {\displaystyle\frac{\partial x'_1}{\partial a_2}} & {\displaystyle \frac{\partial x'_1}{\partial a_3}}\\[15pt] {\displaystyle\frac{\partial x'_2}{\partial a_1}}& {\displaystyle\frac{\partial x'_2}{\partial a_2}} & {\displaystyle\frac{\partial x'_2}{\partial a_3}}\\[15pt] {\displaystyle\frac{\partial x'_3}{\partial a_1}}& {\displaystyle\frac{\partial x'_3}{\partial a_2}} & {\displaystyle\frac{\partial x'_3}{\partial a_3}} \end{array} \right) \] \[ =J^TIJ \] where \[ \frac{\partial x'_\mu}{\partial a_\nu}=\delta_{\mu\nu}+\frac{\partial u_\mu}{\partial a_\nu} \] and $J$ is the Jacobian of the transformation, $J_{\mu\nu}=\partial x'_\mu/\partial a_\nu$. Later in section \ref{sec:fourier_transform} we will show that the metric for the Fourier modes of our system is not a simple coordinate transformation. The inverse matrix $(g^{ik})=(g_{ik})^{-1}$ is given by $(g^{ik})=(J^{-1})(J^{-1})^T$ where \begin{equation} J^{-1}=\left(\begin{array}{lll} {\displaystyle\frac{\partial a_1}{\partial x'_1}}& {\displaystyle\frac{\partial a_1}{\partial x'_2}} & {\displaystyle\frac{\partial a_1}{\partial x'_3}}\\[15pt] {\displaystyle\frac{\partial a_2}{\partial x'_1}}& {\displaystyle\frac{\partial a_2}{\partial x'_2}} & {\displaystyle\frac{\partial a_2}{\partial x'_3}}\\[15pt] {\displaystyle\frac{\partial a_3}{\partial x'_1}}& {\displaystyle\frac{\partial a_3}{\partial x'_2}} & {\displaystyle\frac{\partial a_3}{\partial x'_3}} \end{array} \right) \end{equation} This yields for the inverse metric \begin{eqnarray} g^{ik}&=&\delta_{ik}-\frac{\partial u_i}{\partial x_k}-\frac{\partial u_k}{\partial x_i}+\sum_l \frac{\partial u_l}{\partial x_i}\frac{\partial u_l}{\partial x_k}\\ &=&\delta_{ik}-2\epsilon_{ik}\nonumber \end{eqnarray} where $\epsilon_{ik}$ is defined by \[ \epsilon_{ik}=\frac{1}{2}\left(\frac{\partial u_i}{\partial x_k}+\frac{\partial u_k}{\partial x_i}-\sum_l \frac{\partial u_l}{\partial x_i}\frac{\partial u_l}{\partial x_k}\right) \] We see that the metric components involve derivatives of the displacement vector with respect to the internal coordinates, while the inverse metric involves derivatives with respect to the fixed space coordinates. \section{Equations of Motion} \label{sec:EOM} In the following we will use the notation \[ u_{\mu\nu}=\frac{\partial u_\mu}{\partial x_\nu} \] so that the inverse strain tensor is \[ \epsilon_{\mu\nu}=\frac{1}{2}\left(u_{\mu\nu}+u_{\nu\mu}-\sum_\beta u_{\beta \mu}u_{\beta\nu}\right). \] We will use the Lagrangian method to derive the equations of motion for our system.
Our model consists of an elastic solid embedded in a $3$ dimensional Euclidean space. In the following we work in the fixed space coordinates and take the strain energy as the Lagrangian density of our system. This approach leads to the usual equations of equilibrium in elasticity theory\cite{ref:Love,ref:Novozhilov}. The strain energy is quadratic in the strain tensor $\epsilon_{\mu\nu}$ and can be written as \[ E=\sum_{\mu \nu\alpha\rho} C_{\mu \nu\alpha\rho}\, \epsilon_{\mu\nu} \epsilon_{\alpha\rho} \] The quantities $C_{\mu \nu\alpha\rho}$ are known as the elastic stiffness constants of the material\cite{ref:Sokolnikoff}. For an isotropic space most of the coefficients are zero, and in $3$ dimensions the Lagrangian density reduces to \begin{equation} \label{eq:lagrangian_3D} L=(\lambda + 2\mu)\left[\epsilon_{11}^2+\epsilon_{22}^2+\epsilon_{33}^2\right] + 2 \lambda \left[\epsilon_{11} \epsilon_{22}+ \epsilon_{11} \epsilon_{33} + \epsilon_{22}\epsilon_{33}\right] + 4\mu \left[\epsilon_{12}^2 + \epsilon_{13}^2 + \epsilon_{23}^2\right] \end{equation} where $\lambda$ and $\mu$ are known as Lam\'e constants\cite{ref:Sokolnikoff}. The usual Lagrange equations, \[ \sum_\nu\frac{d}{dx_\nu}\left(\frac{\partial L}{\partial u_{\rho \nu}}\right) - \frac{\partial L}{\partial u_\rho}=0, \] apply with each component of the displacement vector treated as an independent field variable. Since our Lagrangian contains no terms in the field $u_\rho$, Lagrange's equations reduce to \[ \sum_\nu\frac{d}{dx_\nu}\left(\frac{\partial L}{\partial u_{\rho \nu}}\right)=0. \] The quantity \[ V_\rho=\sum_\nu\frac{d}{dx_\nu}\left(\frac{\partial L}{\partial u_{\rho \nu}}\right) \] is a vector, and as such can always be written as the sum of the gradient of a scalar and the curl of a vector, \[ \vec{V}=\nabla\phi+\nabla\times \vec{A}. \] Since $\vec{V}$ vanishes by the equations of motion, taking the divergence of this decomposition (and noting that $\nabla\cdot\nabla\times\vec{A}=0$) immediately yields \begin{equation} \label{eq:Laplaces_equation} \nabla^2 \phi=0 \end{equation} We see therefore that the scalar quantity $\phi$ in the medium obeys Laplace's equation. \subsection{Physical Interpretation of $\phi$} To understand the physical origin of $\phi$ we derive its form in the usual infinitesimal theory of elasticity. The advantage of the infinitesimal theory is that an explicit form of the vector $\vec{V}$ may be obtained. In the infinitesimal theory of elasticity the strain components $u_{\mu\nu}$ are assumed to be small quantities; the quadratic terms in the strain tensor are therefore dropped and the strain tensor reduces to\cite{ref:Landau_Lifshitz} \[ \epsilon_{\mu\nu}=\frac{1}{2}\left(u_{\mu\nu} + u_{\nu\mu}\right) \] Using the above Lagrangian we obtain the explicit form \[ V_\rho=\sum_\nu\frac{d}{dx_\nu}\left(\frac{\partial L}{\partial u_{\rho \nu}}\right)=(2\mu+2\lambda)\frac{\partial \sigma}{\partial x_\rho} + 2\mu \nabla^2 u_\rho=0, \] where $\sigma=u_{11}+u_{22}+u_{33}\equiv \nabla\cdot\vec{u}$. Finally, taking the divergence of $\vec{V}$ yields $(2\lambda+4\mu)\nabla^2\sigma=0$, and hence \[ \nabla^2\sigma=0 \] From this we see that the scalar in the infinitesimal theory is the divergence of the displacement field, $\sigma=\nabla \cdot \vec{u}$. It is an invariant with respect to change of coordinates and in general varies from point to point in the medium. This exercise exhibits the physical origin of $\phi$, which to lowest order in the strain components is the divergence of the displacement field. In this work, however, we will not make the infinitesimal approximation, and we will work with the scalar $\phi$ and not $\sigma$.
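The divergence bookkeeping of the infinitesimal theory is easy to verify symbolically; the sympy sketch below (a check under the stated infinitesimal assumptions, with $V_\rho$ copied from the expression above) confirms that $\nabla\cdot\vec{V}=(2\lambda+4\mu)\nabla^2\sigma$, so that $V_\rho=0$ forces $\nabla^2\sigma=0$:
\begin{verbatim}
import sympy as sp

X = sp.symbols('x1 x2 x3')
lam, mu = sp.symbols('lam mu')
u = [sp.Function('u%d' % (i + 1))(*X) for i in range(3)]

sigma = sum(sp.diff(u[i], X[i]) for i in range(3))       # div u
lap = lambda f: sum(sp.diff(f, xi, 2) for xi in X)

V = [(2*mu + 2*lam) * sp.diff(sigma, X[r]) + 2*mu*lap(u[r])
     for r in range(3)]
divV = sum(sp.diff(V[r], X[r]) for r in range(3))

print(sp.expand(divV - (2*lam + 4*mu) * lap(sigma)))     # 0
\end{verbatim}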
In most of what follows, the exact form of $\phi$ is not important. It is only important that such a quantity exists and obeys Laplace's equation. \subsection{Internal Coordinates} The central results of this work will be given in sections \ref{sec:internal_coordinates} and \ref{sec:fourier_transform}, where we will take one of our internal coordinates to be periodic and we will Fourier transform all quantities in that coordinate. We therefore need to translate the equations of motion $\nabla^2\phi=0$ from the fixed space coordinates to the internal coordinates. For clarity, in the remainder of this text we change notation slightly and write the internal coordinates not as $a_i$ but as $x_i'$; the fixed space coordinates will remain unprimed and denoted $x_i$. Now using $u_i=x_i'-x_i$ we can write \begin{eqnarray} \label{eq:coordinate_change} \frac{\partial}{\partial x_i}&=&\sum_j\frac{\partial x'_j}{\partial x_i}\frac{\partial}{\partial x'_j} \nonumber \\ &=& \sum_j\left(\frac{\partial x_j}{\partial x_i}+\frac{\partial u_j}{\partial x_i}\right)\frac{\partial}{\partial x'_j}\nonumber\\ &=& \sum_j\left(\delta_{ij}+\frac{\partial u_j}{\partial x_i}\right)\frac{\partial}{\partial x'_j} \end{eqnarray} Equation~(\ref{eq:coordinate_change}) relates derivatives in the fixed space coordinates $x_i$ to derivatives in the material coordinates $x'_i$. As mentioned earlier, in the standard treatment of elastic solids the displacements $u_i$ as well as their derivatives are assumed to be infinitesimal, so the second term in Equation~(\ref{eq:coordinate_change}) is dropped and no distinction is made between the $x_i$ and the $x'_i$ coordinates. In this paper we will keep the nonlinear terms in Equation~(\ref{eq:coordinate_change}) when changing coordinates. Hence we will make a distinction between the two sets of coordinates, and this will be pivotal in the derivations to follow. We will now demonstrate that Laplace's equation (\ref{eq:Laplaces_equation}) implies Dirac's equation. \section{Cartan's Spinors} \label{sec:Cartan} The concept of spinors was introduced by \'Elie Cartan in 1913\cite{ref:Cartan}. In Cartan's original formulation spinors were motivated by the study of isotropic vectors, which are vectors of zero length. In three dimensions the equation of an isotropic vector is \begin{equation} \label{eq:isotropic_vector} x_1^2 + x_2^2 + x_3^2=0 \end{equation} for complex quantities $x_i$. A closed form solution to this equation is realized as \begin{equation} \label{eq:Cartan_spinor_solution} \begin{array}{lccr} {\displaystyle x_1 =\xi_0^2-\xi_1^2,\ } & {\displaystyle x_2=i(\xi_0^2+\xi_1^2),}&\ \mathrm{and}\ & {\displaystyle x_3=-2\xi_0\xi_1} \end{array} \end{equation} where the two quantities $\xi_i$ are \[ \begin{array}{lcr} {\displaystyle\xi_0=\pm\sqrt{\frac{x_1-\imath x_2}{2}}}& \ \mathrm{and} \ & {\displaystyle \xi_1=\pm\sqrt{\frac{-x_1-\imath x_2}{2}}} \end{array}. \] That the two component object $\xi=(\xi_0,\xi_1)$ is a spinor\cite{ref:Cartan} can be seen by considering a rotation of the quantities $v_1=x_1-\imath x_2$ and $v_2=-x_1-\imath x_2$. If $v_i$ is rotated by an angle $\alpha$, \[v_i\rightarrow v_i \exp(\imath\alpha) \] then the spinor component $\xi_0$ is rotated by $\alpha/2$. It is clear that the spinor is not periodic in $2\pi$ but in $4\pi$. A quantity of this type is a spinor, and any equation of the form (\ref{eq:isotropic_vector}) has a spinor solution.
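Cartan's closed-form solution can be verified in two lines; the following sympy sketch (purely a verification of the algebraic identity) confirms that the parametrization above always produces an isotropic vector:
\begin{verbatim}
import sympy as sp

xi0, xi1 = sp.symbols('xi0 xi1')
x1 = xi0**2 - xi1**2
x2 = sp.I * (xi0**2 + xi1**2)
x3 = -2 * xi0 * xi1

print(sp.expand(x1**2 + x2**2 + x3**2))  # 0 for all xi0, xi1
\end{verbatim}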
Laplace's equation \[\left(\frac{\partial^2}{\partial x_1^2}+ \frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial x_3^2}\right)\phi=0 \] can be viewed as an isotropic vector in the following way. The components of the vector are the partial derivative operators $\partial/\partial x_i$ acting on the quantity $\phi$. As long as the partial derivatives are restricted to acting on the scalar field $\phi$, it has a spinor solution given by \begin{equation} \label{eq:spinor0} \hat{\xi}_0^2=\frac{1}{2}\left(\frac{\partial}{\partial x_3}-i\frac{\partial}{\partial x_2}\right)=\frac{\partial}{\partial z_0} \end{equation} and \begin{equation} \label{eq:spinor1} \hat{\xi}_1^2=-\frac{1}{2}\left(\frac{\partial}{\partial x_3}+i\frac{\partial}{\partial x_2}\right)=\frac{\partial}{\partial z_1} \end{equation} where \[ \begin{array}{lcr} {\displaystyle z_0=x_3+ix_2}& \ \mathrm{and}\ & {\displaystyle z_1=-x_3+ix_2} \end{array} \] and the ``hat'' notation indicates that the quantities $\hat{\xi}$ are operators. The equations \[ \hat{\xi}_0^2=\frac{\partial}{\partial z_0} \] and \[ \hat{\xi}_1^2=\frac{\partial}{\partial z_1} \] are equations of fractional derivatives of order $1/2$, denoted $\hat{\xi}_0=D^{1/2}_{z_0}$ and $\hat{\xi}_1=D^{1/2}_{z_1}$. Fractional derivatives have the property that\cite{ref:Miller_Ross} \[ D^{1/2}_{z}D^{1/2}_{z}=\frac{\partial}{\partial z} \] and solutions for these fractional derivatives can be written\cite{ref:Miller_Ross} \begin{equation} D^{\frac{1}{2}}_z \phi=\frac{1}{\Gamma \left(\frac{1}{2}\right)}\frac{\partial}{\partial z}\int^z_0 (z-t)^{-\frac{1}{2}}\phi(t)dt \end{equation} The exact form of these fractional derivatives, however, is not important here. The important thing to note is that a solution to Laplace's equation can be written in terms of spinors which are fractional derivatives. If we assume that the fractional derivatives $\hat{\xi}_0$ and $\hat{\xi}_1$ commute, then we also have \begin{eqnarray*} (\hat{\xi}_0\hat{\xi}_1)^2&=&\hat{\xi}_0\hat{\xi}_0\hat{\xi}_1\hat{\xi}_1 \nonumber \\ &=&\frac{\partial}{\partial z_0}\frac{\partial}{\partial z_1}\nonumber \\ &=&-\frac{1}{4}\left(\frac{\partial}{\partial x_3}- \imath\frac{\partial}{\partial x_2}\right) \left(\frac{\partial}{\partial x_3}+ \imath\frac{\partial}{\partial x_2}\right)\nonumber \\ &=&-\frac{1}{4}\left(\frac{\partial ^2}{\partial x_2^2}+ \frac{\partial ^2}{\partial x_3^2}\right)\nonumber \\ &=&\frac{1}{4}\frac{\partial^2}{\partial x_1^2}\nonumber \end{eqnarray*} where the last equality holds when acting on $\phi$, by Laplace's equation. Using this result combined with Equations~(\ref{eq:spinor0}) and (\ref{eq:spinor1}) we may write for the components of our vector \begin{equation} \label{eq:derivative1_solution} \frac{\partial}{\partial x_1}= -2\hat{\xi}_0\hat{\xi}_1 \end{equation} \begin{equation} \label{eq:derivative2_solution} \frac{\partial}{\partial x_2}=\imath(\hat{\xi}_0^2 + \hat{\xi}_1^2) \end{equation} and \begin{equation} \label{eq:derivative3_solution} \frac{\partial}{\partial x_3}=\hat{\xi}_0^2 - \hat{\xi}_1^2. \end{equation} This result gives the explicit solution of our vector quantities $\frac{\partial}{\partial x_i}$ in terms of the spinor quantities $\hat{\xi}_i$.
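The half-derivative property invoked above can be checked on a monomial using the Riemann--Liouville formula quoted in the text; the sympy sketch below (a spot check, not a general proof) applies the operator twice to $z^3$ and recovers the ordinary derivative $3z^2$:
\begin{verbatim}
import sympy as sp

z, t = sp.symbols('z t', positive=True)

def half_derivative(phi):
    # D^{1/2} phi = (1/Gamma(1/2)) d/dz Int_0^z (z-t)^{-1/2} phi(t) dt
    integral = sp.integrate(phi.subs(z, t) / sp.sqrt(z - t), (t, 0, z))
    return sp.simplify(sp.diff(integral, z) / sp.sqrt(sp.pi))

once = half_derivative(z**3)
print(once)                                # 16*z**(5/2)/(5*sqrt(pi))
print(sp.simplify(half_derivative(once)))  # 3*z**2, i.e. d/dz z**3
\end{verbatim}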
\subsection{Matrix Form} It can be readily verified that our spinors satisfy the following equations \begin{eqnarray*} \left[\hat{\xi}_0 \frac{\partial}{\partial x_1}+ \hat{\xi}_1 \left(\frac{\partial}{\partial x_3}-i\frac{\partial}{\partial x_2}\right)\right]\phi=0\\ \left[\hat{\xi}_0\left(\frac{\partial}{\partial x_3} + i\frac{\partial}{\partial x_2}\right)-\hat{\xi}_1\frac{\partial}{\partial x_1}\right]\phi=0 \end{eqnarray*} and in matrix form \begin{equation} \label{eq:dirac_matrix} \left( \begin{array}{lr} {\displaystyle\frac{\partial}{\partial x_1}} & {\displaystyle\frac{\partial}{\partial x_3}-i\frac{\partial}{\partial x_2}} \\[15pt] {\displaystyle\frac{\partial}{\partial x_3}+i\frac{\partial}{\partial x_2}} & {\displaystyle -\frac{\partial}{\partial x_1}} \end{array} \right) \left(\begin {array}{c} {\displaystyle\hat{\xi}_0 }\\[20pt] {\displaystyle\hat{\xi}_1} \end{array} \right) \phi=0 \end{equation} The matrix \[ X=\left(\begin{array}{lr} {\displaystyle\frac{\partial}{\partial x_1}} & {\displaystyle\frac{\partial}{\partial x_3}-i\frac{\partial}{\partial x_2}} \\[15pt] {\displaystyle\frac{\partial}{\partial x_3}+i\frac{\partial}{\partial x_2}} & {\displaystyle -\frac{\partial}{\partial x_1}} \end{array} \right) \] is equal to the dot product of the vector $\partial_\mu\equiv\partial/\partial x_\mu$ with the Pauli spin matrices \[ X=\frac{\partial}{\partial x_1}\gamma^1 + \frac{\partial}{\partial x_2}\gamma^2 + \frac{\partial}{\partial x_3}\gamma^3 \] where \[ \begin{array}{ccc} \gamma_1=\left(\begin{array}{ll} 1 & 0\\ 0 & -1 \end{array} \right),& \gamma_2=\left(\begin{array}{ll} 0 & -i\\ i & 0 \end{array} \right),& \gamma_3=\left(\begin{array}{ll} 0 & 1\\ 1 & 0 \end{array} \right) \end{array} \] are the Pauli matrices. So Equation~(\ref{eq:dirac_matrix}) can be written \begin{equation} \label{eq:dirac_unstrained} \sum_{\mu=1}^3\partial_\mu\gamma^\mu\xi=0. \end{equation} where we have used the notation $\xi\equiv \hat{\xi}\phi$. This equation has the form of Dirac's equation in 3 dimensions. It describes a spin $1/2$ particle of zero mass that is free of interactions. \subsection{Relation to the Dirac Decomposition} The fact that Laplace's equation and Dirac's equation are related is not new. However, the decomposition used here is not the same as that used by Dirac. In the usual method, starting with the Dirac equation $(i\gamma^\mu\partial_\mu)\Psi=0$ and operating with $-\imath\gamma^\mu\partial_\mu$ yields Laplace's equation for each component of the spinor field. In other words, this method results not in one Laplace equation but in several (one for each component of the spinor). Conversely, if one starts with Laplace's equation and tries to recover Dirac's equation, one must start with $2$ independent scalars ($4$ in the usual $4$ dimensional case) in order to derive the two component spinor equation (\ref{eq:dirac_unstrained}). What has been demonstrated in the preceding sections is that, starting with only one scalar quantity satisfying Laplace's equation, Dirac's equation for a two component spinor may be derived. Furthermore, any medium (such as an elastic solid) that has a single scalar satisfying Laplace's equation must have a spinor that satisfies Dirac's equation, and such a derivation necessitates the use of fractional derivatives. The form of Equation~(\ref{eq:dirac_unstrained}) is relevant for a massless, non-interacting spin 1/2 particle.
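The equivalence with Laplace's equation rests on the operator $X$ squaring to the Laplacian. A short sympy check (with commuting symbols $p_\mu$ standing in for the derivatives $\partial/\partial x_\mu$) makes this explicit:
\begin{verbatim}
import sympy as sp

p1, p2, p3 = sp.symbols('p1 p2 p3')   # stand-ins for d/dx_mu
g1 = sp.Matrix([[1, 0], [0, -1]])
g2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
g3 = sp.Matrix([[0, 1], [1, 0]])

X = p1*g1 + p2*g2 + p3*g3
print((X*X).applyfunc(sp.expand))     # (p1**2+p2**2+p3**2) times identity
\end{verbatim}
Acting on $\xi=\hat{\xi}\phi$, this is why each component of a solution of Equation~(\ref{eq:dirac_unstrained}) is tied back to Laplace's equation.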
We will now demonstrate that if one of our internal coordinates is taken to be periodic, a mass term as well as gravitational and electromagnetic interaction terms appear in Dirac's equation. \section{Transformation to Internal Coordinates} \label{sec:internal_coordinates} In section \ref{sec:fourier_transform} we will take the $x_3^\prime$ coordinate to be periodic and we will derive equations for the Fourier components of our fields. Since the elastic solid is assumed to be periodic in the internal coordinates, we need to translate our equations of motion from fixed space coordinates to internal coordinates. Using Equation~(\ref{eq:coordinate_change}) we can rewrite Equation~(\ref{eq:dirac_unstrained}) as \begin{equation} \label{eq:dirac_before_FT} \sum_{\mu=1}^3\gamma^\mu\left(\partial_\mu'+\sum_\nu\frac{\partial u_\nu}{\partial x_\mu} \partial_\nu'\right)\xi=0 \end{equation} or \[ \sum_{\mu=1}^3\gamma'^\mu\partial'_\mu\xi=0 \] where $\partial'_\mu=\partial/\partial x'_\mu$ and $\gamma'^\mu$ is given by \begin{equation} \label{eq:modified_gamma_matrices} \gamma'^\mu=\gamma^\mu+\sum_\alpha\frac{\partial u_\mu}{\partial x_\alpha}\gamma^\alpha. \end{equation} The anticommutator of these matrices is \begin{eqnarray*} \{\gamma'^\mu,\gamma'^\nu\}&=& \{\gamma^\mu+\sum_\alpha\frac{\partial u_\mu}{\partial x_\alpha}\gamma^\alpha,\gamma^\nu+\sum_\beta\frac{\partial u_\nu}{\partial x_\beta}\gamma^\beta\}\\ &=&\{\gamma^\mu,\gamma^\nu\}+\sum_\beta u_{\nu\beta}\{\gamma^\mu,\gamma^\beta\} + \sum_\alpha u_{\mu\alpha}\{\gamma^\alpha,\gamma^\nu\}+ \sum_{\alpha\beta}u_{\mu\alpha}u_{\nu\beta}\{ \gamma^\alpha,\gamma^\beta\}\\ &=&\delta_{\mu\nu}+\sum_\beta u_{\nu\beta}\delta_{\mu\beta}+ \sum_\alpha u_{\mu\alpha}\delta_{\alpha\nu}+ \sum_\alpha\sum_\beta u_{\mu\alpha}u_{\nu\beta} \delta_{\alpha\beta}\\ &=&\delta_{\mu\nu}+u_{\mu\nu}+u_{\nu\mu}+\sum_\alpha u_{\mu\alpha}u_{\nu\alpha}\\ &\equiv& g^{\mu\nu} \end{eqnarray*} This shows that the gamma matrices have the form of the usual Dirac matrices in a curved space\cite{ref:Brill_Wheeler}. To further develop the form of Equation~(\ref{eq:dirac_before_FT}) we have to transform the spinor properties of $\xi$. As currently written, $\xi$ is a spinor with respect to the $x_i$ coordinates, not the $x'_i$ coordinates. To transform its spinor properties we use a similarity transformation and write $\xi=S\tilde{\xi}$, where $S$ is a similarity transformation that takes our spinor in $x_\mu$ to a spinor in $x'_\mu$. We will not attempt to give an explicit form for $S$; we simply assume (similar to reference \cite{ref:Brill_Wheeler}) that this transformation can be effected by a real similarity transformation. We then have \[ \partial'_\mu\xi=(\partial'_\mu S)\tilde{\xi}+S\partial'_\mu\tilde{\xi}. \] Equation~(\ref{eq:dirac_before_FT}) then becomes \[ \begin{array}{lcl} \gamma'_\mu [S\partial'_\mu\tilde{\xi}+(\partial'_\mu S)\tilde{\xi}]&=&0\\ \mbox{} &=&\gamma'_\mu S[\partial'_\mu\tilde{\xi}+S^{-1}(\partial'_\mu S)\tilde{\xi}]\\ \mbox{} &=&S^{-1}\gamma'_\mu S[\partial'_\mu\tilde{\xi}+S^{-1}(\partial'_\mu S)\tilde{\xi}] \end{array} \] Using $(\partial'_\mu S^{-1}) S=-S^{-1}(\partial'_\mu S)$, this can finally be written as \begin{equation} \label{eq:dirac_curved_space} \tilde{\gamma}_\mu [\partial'_\mu-\Gamma_\mu]\tilde{\xi}=0 \end{equation} where $\Gamma_\mu=(\partial'_\mu S^{-1})S$ and $\tilde{\gamma}_\mu=S^{-1}\gamma'_\mu S$. Equation~(\ref{eq:dirac_curved_space}) has the form of the Einstein-Dirac equation in 3 dimensions for a free particle of zero mass.
The quantity $\partial'_\mu-\Gamma_\mu$ is the covariant derivative for an object with spin in a curved space\cite{ref:Brill_Wheeler}. In order to make this identification, the field $\Gamma_\mu$ must satisfy the additional equation\cite{ref:Brill_Wheeler,ref:Brill_Cohen} \begin{equation} \label{eq:auxiliary_equation} \frac{\partial \tilde{\gamma}^\mu}{\partial x'^\nu}+\tilde{\gamma}^\beta\Gamma^\mu_{\beta\nu}-\Gamma_\nu\tilde{\gamma}^\mu+\tilde{\gamma}^\mu\Gamma_\nu=0 \end{equation} where $\Gamma^\mu_{\beta\nu}$ is the usual Christoffel symbol. We will now show that this equation does hold for this form of $\Gamma$. \subsection{Spin Connection} To show that Equation~(\ref{eq:auxiliary_equation}) holds, we consider the equation $\partial_\nu\vec{\gamma}=0$ where the vector $\vec{\gamma}$ is \[ \vec{\gamma}=\sum_{\mu=1}^3\gamma^\mu \vec{e_\mu} \] and $\vec{e_\mu}$ is a unit vector in the $x_\mu$ direction. Since $\vec{\gamma}$ is a vector, the quantity $\partial_\nu\vec{\gamma}=0$ is a tensor equation. Therefore, in the primed coordinate system we can immediately write \[ \sum_{\mu=1}^3 \left(\partial'_\nu \gamma'^\mu+\gamma'^\beta\Gamma^\mu_{\beta\nu}\right)\vec{e_\mu}=0 \] where $\gamma'^\mu=\gamma^\mu+\sum_\alpha\frac{\partial u_\mu}{\partial x_\alpha}\gamma^\alpha$ is the expression of $\gamma^\mu$ in the primed coordinate system (summation over the repeated index $\beta$ is implied). Using $\gamma'^\mu=S\tilde{\gamma}^\mu S^{-1}$, we have \[ \sum_{\mu=1}^3 \left(\partial'_\nu (S\tilde{\gamma}^\mu S^{-1})+(S\tilde{\gamma}^\beta S^{-1})\Gamma^\mu_{\beta\nu}\right)\vec{e_\mu} =0 \] or \[\sum_{\mu=1}^3 \left((\partial'_\nu S)\tilde{\gamma}^\mu S^{-1}+ S(\partial'_\nu \tilde{\gamma}^\mu) S^{-1}+ S \tilde{\gamma}^\mu (\partial'_\nu S^{-1})+(S\tilde{\gamma}^\beta S^{-1})\Gamma^\mu_{\beta\nu}\right)\vec{e_\mu}=0. \] Multiplying by $S^{-1}$ on the left and $S$ on the right yields \[ \sum_{\mu=1}^3 \left(S^{-1}(\partial'_\nu S)\tilde{\gamma}^\mu + (\partial'_\nu \tilde{\gamma}^\mu) + \tilde{\gamma}^\mu (\partial'_\nu S^{-1})S+\tilde{\gamma}^\beta \Gamma^\mu_{\beta\nu}\right)\vec{e_\mu} =0 \] Finally, using $\Gamma_\nu=(\partial'_\nu S^{-1})S$ and again noting that $(\partial'_\nu S^{-1})S=-S^{-1}(\partial'_\nu S)$, we have \[ \tilde{\gamma}^\mu\Gamma_\nu-\Gamma_\nu\tilde{\gamma}^\mu +\left(\partial'_\nu \tilde{\gamma}^\mu +\tilde{\gamma}^\beta \Gamma^\mu_{\beta\nu}\right)=0 \] We have just demonstrated that, in the internal coordinates, the equations of motion of an elastic medium have the same form as the free-field Einstein-Dirac equation for a massless particle in three dimensions. \subsection{Physical Content} Thus far all of the transformations that have been obtained are ``trivial'' in the sense that they only result from changing coordinates from the unprimed coordinates $x_\mu$ to the primed coordinates $x_\mu'$. Changes of coordinates of course do not result in any new physical content. In particular, the metric derived in Equation~(\ref{eq:metric}) does not lead to a curved space: the Riemann curvature tensor calculated from Equation~(\ref{eq:metric}) is identically zero. Likewise, the spin connection $\Gamma_\mu$ is due solely to a gauge transformation $\xi\rightarrow S\xi'$, and as such contains no physical content, since it can be removed by transforming $\xi'\rightarrow S^{-1}\xi$. What we will demonstrate in the following sections is that for a system where one coordinate is periodic, the resulting $2$ dimensional quantities are {\em not} trivial.
In other words, the metric that determines the dynamics of the Fourier components of $\xi$ does in fact lead to a curved space, and the spin connection cannot be removed by a gauge transformation. Furthermore, the introduction of the Fourier components will generate extra terms in Equation~(\ref{eq:dirac_before_FT}) that imply a series of equations relevant for particles with mass coupled to fields that can be associated with electromagnetism. We will show that in the low energy approximation (i.e., a system in which only the lowest few modes are present) the equations of motion are identical in form to Equation~(\ref{eq:full_dirac}). \section{Interacting Particles with Mass} \label{sec:fourier_transform} In this section we again consider a three dimensional elastic solid, but we take the third internal dimension to be compact with the topology of a circle. All variables then become periodic functions of $x'_3$ and can be Fourier transformed. In preparation for Fourier transforming, we isolate the terms involving $x'_3$ and rewrite Equation~(\ref{eq:dirac_before_FT}) as \begin{equation} \label{eq:dirac_curved_space_separated} \sum_{\mu=1}^2\gamma^\mu \left(\partial_\mu'+\sum_{\nu=1}^2 \frac{\partial u_\nu}{\partial x_\mu} \partial_\nu' + \frac{\partial u_3}{\partial x_\mu} \partial_3' \right)\xi + \gamma^3\left(\partial_3'+\sum_{\nu=1}^2 \frac{\partial u_\nu}{\partial x_3} \partial_\nu' + \frac{\partial u_3}{\partial x_3} \partial_3' \right)\xi =0 \end{equation} We first transform the partial derivatives of the $u_\nu$ in Equation~(\ref{eq:dirac_curved_space_separated}) to obtain \[ u_{\nu\mu}\equiv\frac{\partial u_\nu}{\partial x_\mu}=\sum_ku_{\nu\mu,k}e^{ikx_3'} \] where $u_{\nu\mu,k}$ is the $k^{th}$ Fourier mode of $\partial u_\nu/\partial x_\mu$; here $k$ runs over integer multiples of $2\pi/a$, with $a$ the circumference of the circle formed by the elastic solid in the $x_3'$ direction. Equation~(\ref{eq:dirac_curved_space_separated}) now becomes \begin{eqnarray*} \lefteqn{\sum_k e^{ikx_3'}\left[\sum_{\mu=1}^2\gamma^\mu\left(\partial_\mu'\delta_{k,0}+ \sum_{\nu=1}^2 u_{\nu\mu,k} \partial_\nu' + u_{3\mu,k}\partial_3' \right)\xi \right.}\hspace{1.5in}\\ & & \mbox{}+\left.\gamma^3\left(\partial_3'\delta_{k,0}+\sum_{\nu=1}^2 u_{\nu 3,k} \partial_\nu' + u_{33,k}\partial_3' \right)\xi\right]=0 \end{eqnarray*} Next we transform the spinor (noting that, being a spinor, it returns to itself only after two full circuits of the circle, so that it has period $2a$ in $x_3'$), \[ \xi=\sum_q \xi_{q/2} e^{i\frac{q}{2}x_3'} \] with $q$ likewise running over integer multiples of $2\pi/a$. This yields \begin{eqnarray*} \lefteqn{\sum_k\sum_q e^{ix_3'(k+q/2)}\left[\sum_{\mu=1}^2\gamma^\mu\left(\partial_\mu'\delta_{k,0}+\sum_{\nu=1}^2 u_{\nu\mu,k} \partial_\nu' + i(q/2) u_{3\mu,k} \right)\xi_{q/2}\right.}\hspace{1.5in}\\ & & \left.\mbox{}+ \gamma^3\left(i(q/2)\delta_{k,0}+\sum_{\nu=1}^2 u_{\nu 3,k} \partial_\nu' + i(q/2) u_{33,k} \right)\xi_{q/2}\right]=0. \end{eqnarray*} This equation is independently true for each distinct value of $k+q/2=m/2$, i.e., $2k+q=m$, where $k$, $q$ and $m$ label the modes as integer multiples of $2\pi/a$. Writing $q=m-2k$ finally yields \begin{eqnarray} \label{eq:dirac_eq_all_modes} \lefteqn{\sum_k \left[\sum_{\mu=1}^2\gamma^\mu\left(\partial_\mu'\delta_{k,0}+\sum_{\nu=1}^2 u_{\nu\mu,k} \partial_\nu' + i\frac{(m-2k)}{2} u_{3\mu,k} \right)\xi_{(m-2k)/2}\right.}\hspace{1.5in}\\ & & \left.\mbox{}+ \gamma^3\left(i\frac{(m-2k)}{2}\delta_{k,0}+\sum_{\nu=1}^2 u_{\nu 3,k} \partial_\nu' + i\frac{(m-2k)}{2} u_{33,k} \right)\xi_{(m-2k)/2}\right]=0 \nonumber \end{eqnarray} This is an infinite series of equations describing the dynamics of the fields $\xi_{m/2}$.
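The mode-matching used in the last step can be illustrated numerically: multiplying an integer-mode series by a half-integer-mode series and projecting out a given $e^{\imath mx_3'/2}$ component reproduces the convolution with the constraint $q=m-2k$. The sketch below uses arbitrary random coefficients and sets $a=2\pi$ (both choices are assumptions made purely for the demonstration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
ks = list(range(-3, 4))                 # integer modes of u
qs = list(range(-3, 4))                 # half-integer spinor modes q/2
u_k = rng.normal(size=7) + 1j * rng.normal(size=7)
xi_q = rng.normal(size=7) + 1j * rng.normal(size=7)

m = 1
conv = sum(u_k[i] * xi_q[qs.index(m - 2 * k)]
           for i, k in enumerate(ks) if (m - 2 * k) in qs)

x = np.linspace(0.0, 4 * np.pi, 4096, endpoint=False)   # spinor period
u = sum(c * np.exp(1j * k * x) for c, k in zip(u_k, ks))
xi = sum(c * np.exp(1j * q * x / 2) for c, q in zip(xi_q, qs))
proj = np.mean(u * xi * np.exp(-1j * m * x / 2))
print(np.allclose(conv, proj))                           # True
\end{verbatim}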
This set of equations describes the dynamics of our elastic solid and contains the same information as Laplace's equation. So far no approximations have been made. In the next section we will demonstrate that if only the lowest modes are present, this reduces to an equation that is identical in form to Equation~(\ref{eq:full_dirac}). \subsection{Spectrum of Lowest Modes} \label{sec:lowest_modes} We now consider a theory in which only the lowest few modes in Equation~(\ref{eq:dirac_eq_all_modes}) are present. We therefore keep only the modes $\xi_0$ and $\xi_{\pm1/2}$ and obtain the following three equations: \begin{equation} \label{eq:mode_0} \sum_{\mu=1}^2\gamma^\mu\left(\partial'_\mu+\sum_{\nu=1}^2u_{\nu\mu,0}\partial_\nu'\right)\xi_0 + \sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\gamma^3\xi_0 =0 \end{equation} \begin{eqnarray} \label{eq:mode_1} \lefteqn{\sum_{\mu=1}^2\gamma^\mu\left(\partial'_\mu+\sum_{\nu=1}^2u_{\nu\mu,0}\partial_\nu'+\imath m_{1/2}u_{3\mu,0}\right)\xi_{1/2} + \gamma^3\imath m_{1/2}(1+u_{33,0})\xi_{1/2}}\hspace{1.0in} \nonumber \\ & & \hbox{}+ \gamma^3\sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\xi_{1/2} + \gamma^3\sum_{\nu=1}^2 u_{\nu 3,1}\partial'_\nu \xi_{-1/2}=0 \end{eqnarray} \begin{eqnarray} \label{eq:mode_-1} \lefteqn{\sum_{\mu=1}^2\gamma^\mu\left(\partial'_\mu+\sum_{\nu=1}^2u_{\nu\mu,0}\partial_\nu'+\imath m_{-1/2}u_{3\mu,0}\right)\xi_{-1/2} + \gamma^3\imath m_{-1/2}(1+u_{33,0})\xi_{-1/2}}\hspace{1.0in}\nonumber\\ & & \hbox{} + \gamma^3\sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\xi_{-1/2} + \gamma^3\sum_{\nu=1}^2 u_{\nu 3,-1}\partial'_\nu \xi_{1/2}=0 \end{eqnarray} where $m_j=2\pi j/a$, with $j$ a half integer, denotes the wavenumber of the corresponding Fourier mode. These equations describe the dynamics of three fields: $\xi_0$ and the coupled fields $\xi_{1/2}$ and $\xi_{-1/2}$. The first equation (the $\xi_0$ mode) describes the dynamics of a massless, free particle. We will not attempt to identify this mode with any physical particle, but we simply note that in this approximation this equation is completely uncoupled from the $\xi_{\pm1/2}$ modes; its dynamics are therefore independent and have no effect on these other modes. We now examine the equations describing $\xi_{1/2}$ and $\xi_{-1/2}$. These two equations can be combined by noting that for real fields, $u_{\mu\nu,k}=u^\ast_{\mu\nu,-k}$. The $\xi_{\pm 1/2}$ modes can now be combined into the single equation \begin{eqnarray} \label{eq:Psi} \lefteqn{\sum_{\mu=1}^2\gamma^\mu\left(\partial'_\mu+\sum_{\nu=1}^2u_{\nu\mu,0}\partial_\nu'+\imath m_{1/2}u_{3\mu,0}\right)\Psi + \gamma^3\imath m_{1/2}(1+u_{33,0})\Psi}\hspace{1.0in} \nonumber \\ & & \hbox{}+ \gamma^3\sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\Psi + \gamma^3\sum_{\nu=1}^2 u_{\nu 3,1}\partial'_\nu \Psi^\ast=0, \end{eqnarray} where $\Psi=\xi_{1/2}+\xi^\ast_{-1/2}$. To put this equation into a more recognizable form we multiply Equation~(\ref{eq:Psi}) by $\gamma^3$ from the left and define \begin{equation} \label{eq:dirac_matrices_FT} \gamma'^\mu=\gamma^3\gamma^\mu+\sum_{\beta=1}^2\gamma^3\gamma^\beta u_{\mu\beta,0}. \end{equation} These matrices with $\mu=1,2$ and $\nu=1,2$ satisfy the anticommutation relations \[ \left\{\gamma'^\mu,\gamma'^\nu\right\}= \delta_{\mu\nu} + (u_{\mu\nu,0}+u_{\nu\mu,0})+\sum_{\beta=1}^2u_{\mu\beta,0}u_{\nu\beta,0}. \] If we insist that our new matrices satisfy $\left\{\gamma^\mu,\gamma^\nu\right\}=g^{\mu\nu}$, then we are led to define \begin{equation} \label{eq:commutation_relations_FT} g^{\mu\nu}\equiv\delta_{\mu\nu} + (u_{\mu\nu,0}+u_{\nu\mu,0})+\sum_{\beta=1}^2u_{\mu\beta,0}u_{\nu\beta,0}.
\end{equation} This is the metric for our two dimensional subspace, and it does not have the form of a simple coordinate transformation on a flat space metric like that of section \ref{sec:elasticity_theory}. Equation~(\ref{eq:Psi}) can now be rewritten as \begin{eqnarray} \label{eq:dirac_recognizable_form} \lefteqn{\sum_{\mu=1}^2\gamma'^\mu\left(\partial'_\mu+\imath m_{1/2}u_{3\mu,0}\right)\Psi -\imath m_{1/2}\sum_{\mu=1}^2 u_{3\mu,0}\sum_{\beta=1}^2\gamma^3\gamma^\beta u_{\mu\beta,0}\Psi+\imath m_{1/2}(1+u_{33,0})\Psi}\hspace{2.5in} \nonumber \\ & & \hbox{}+ \sum_{\nu=1}^2u_{\nu3,0}\partial'_\nu\Psi +\sum_{\nu=1}^2 u_{\nu 3,1}\partial'_\nu \Psi^\ast=0. \end{eqnarray} As we did for Equation~(\ref{eq:dirac_curved_space_separated}), in going from the $x$ to the $x'$ coordinates we assume that the spinor properties of $\xi_{1/2}$ may be transformed using a real similarity transformation, writing $\xi_{1/2}=S\tilde{\xi}_{1/2}$. Transforming $\Psi$ in this way and multiplying on the left by $S^{-1}$ gives us the following form for the $\xi_{\pm1/2}$ modes: \begin{eqnarray} \label{eq:dirac_final_form} \lefteqn{\sum_{\mu=1}^2 S^{-1}\gamma'^\mu S \left(\partial'_\mu+\imath m_{1/2}u_{3\mu,0}+ S^{-1}(\partial'_\mu S)\right)\tilde{\Psi} -\imath m_{1/2}\sum_{\mu=1}^2 u_{3\mu,0}\sum_{\beta=1}^2 S^{-1}\gamma^3\gamma^\beta S\, u_{\mu\beta,0}\tilde{\Psi}} \hspace{.1in}\\ & & \mbox{}+ \imath m_{1/2}(1+u_{33,0})\tilde{\Psi}+ \sum_{\nu=1}^2 u_{\nu3,0}\left(\partial'_\nu +S^{-1}(\partial'_\nu S)\right)\tilde{\Psi} +\sum_{\nu=1}^2 u_{\nu 3,1}\left(\partial'_\nu+S^{-1}(\partial'_\nu S)\right) \tilde{\Psi}^\ast=0\nonumber \end{eqnarray} We now examine each quantity in Equation~(\ref{eq:dirac_final_form}). As before, we identify $\tilde{\gamma}^\mu=S^{-1}\gamma'^\mu S$ with the transformed gamma matrices and $\Gamma_\mu=(\partial'_\mu S^{-1})S$ with the spin connection. We also would like to identify the quantity $A_\mu=m_{1/2}u_{3\mu,0}$ in the first term with the electromagnetic potential, and the third term in Equation~(\ref{eq:dirac_final_form}) as a mass term with $m=m_{1/2}(1+u_{33,0})$, which implies that the compact dimension, together with the field $u_{33,0}$, provides a mass for our $\Psi$ particle. Let us further assume that the quantities $u_{\mu\nu}$ are small compared to unity, so that the second term may be neglected as being of order $u^2_{\mu\nu}$ (i.e., we are now assuming that our medium undergoes only small deformations). If these identifications are made we can write Equation~(\ref{eq:dirac_final_form}) in the final form \begin{eqnarray} \label{eq:dirac_final_form2} \lefteqn{\sum_{\mu=1}^2 \tilde{\gamma}^\mu \left(\imath\partial'_\mu- \imath\Gamma_\mu- A_\mu\right)\tilde{\Psi} - m\tilde{\Psi} } \hspace{1.5in}\\ & & \mbox{}+ \imath\sum_{\nu=1}^2u_{\nu3,0}(\partial'_\nu-\Gamma_\nu)\tilde{\Psi} + \imath\sum_{\nu=1}^2 u_{\nu 3,1}(\partial'_\nu-\Gamma_\nu) \tilde{\Psi}^\ast=0\nonumber \end{eqnarray} Notice the formal similarity of this equation to Equation~(\ref{eq:full_dirac}). The first two terms have exactly the form of Dirac's equation for a spin $1/2$ particle of mass $m$ in curved space interacting with the electromagnetic vector potential $A_\mu$. Note that the mass term and the electromagnetic potentials were not added by hand but emerged naturally from the formalism. The nature of the last two terms in Equation~(\ref{eq:dirac_final_form2}) is unknown; they do not appear in the usual statement of Dirac's equation, and their implications remain to be explored. Equation~(\ref{eq:dirac_final_form2}) is the central result of this work.
We have not yet shown that the dynamics of the fields $A_\mu$ are consistent with their identification as the electromagnetic vector potential. To truly claim that the quantity $u_{\mu 3,0}$ is the electromagnetic potential, it must be shown to satisfy Maxwell's equations. We believe, however, that the formal correspondence between Equation~(\ref{eq:dirac_final_form2}) and Dirac's equation is significant in its own right, and we will not, in this paper, pursue the question of whether Maxwell's equations or the Einstein field equations are satisfied. Before concluding we note that although our derivation assumed that we were working in three dimensional space, the formalism extends to any number of dimensions\cite{ref:Cartan,ref:Brauer_Weyl}. The major difference is that in the three dimensional case we were able to find an explicit solution for the components of a spinor in terms of the components of the vector $\partial _\mu$. An explicit solution might not exist in general. Nevertheless it can be shown\cite{ref:Cartan} that the quadratic form in Laplace's equation implies the existence of a multicomponent spinor $\xi$ satisfying a Dirac-like equation in any dimension. \section{Conclusions} We have taken a model of an elastic medium and derived an equation of motion that has the same form as Dirac's equation in the presence of electromagnetism and gravity. We derived our equation by using the formalism of Cartan to reduce the quadratic form of Laplace's equation to the linear form of Dirac's equation. We further assumed that one coordinate was compact, and upon Fourier transforming this coordinate we obtained, in a natural way, a mass term and an electromagnetic interaction term in the equations of motion.
\section{Introduction} The particle lateral distribution of extensive air showers (EAS) is the key quantity for cosmic ray ground observations, from which most shower observables are derived. The interaction cascade, which is initiated by a high energy cosmic ray particle in the atmosphere, creates a multitude of secondary particles, which arrive nearly at the same time but are distributed over a large area perpendicular to the direction of the original particle. The disc of secondary particles may extend over several hundred meters from the shower axis, with the maximum density in the center of the disc, which is called the shower core. Apart from the arrival times, the density distribution of particles within the shower disc contains all information on the primary particle that is left after it has undergone a millionfold multiplication process in the atmosphere. However, it is this multiplication process that offers the chance to observe cosmic rays in the ultra and very high energy region at all: due to their low flux, measurements at ground level, carried out with large arrays of individual detectors which take samples of the shower disc at several locations, are still the only possible way to study these high-energy cosmic particles \cite{haungs}. \\ The lateral distributions of electrons and muons in EAS not only contain information on the nature of the primary cosmic ray particle, which is related to astrophysical questions, but they also carry information relevant to particle physics. While the electromagnetic interactions are thought to be well understood, this is not true for the high energy hadronic interactions. The energy range and the kinematical region in which the first hadronic interactions of the shower development occur are far beyond the accessible realm of today's accelerator experiments. Uncertainties in the description of hadronic interactions therefore imply uncertainties in the prediction of the shape of the lateral distributions \cite{drescher}. \\ A parameter commonly used to describe the form of the lateral density distribution is the lateral form parameter in the Nishimura-Kamata-Greisen (NKG) function \cite{nishi,kamat,greis}, usually called \it age. \rm The name expresses the relation between the lateral shape of the electron distribution and the height of the shower maximum. Due to the statistical nature of shower development, the height of the shower maximum is subject to strong fluctuations. Showers which have started high in the atmosphere show a flat lateral electron distribution, as their electrons have suffered more from multiple scattering processes. Such showers are called old and are characterised by a large value of the age parameter. Young showers have started deeper in the atmosphere and had their maximum closer to the observation level. This results in a steeper lateral electron distribution, which corresponds to a smaller value of the age parameter. Apart from fluctuations, the height of the shower maximum depends on the energy and mass of the shower-initiating primary. Therefore, the lateral shape parameter is also sensitive to the mass of the primary.\\ The mutual interrelation of the several independent parameters on which the lateral shape depends makes it a delicate task to draw unique conclusions from the results of the measurements. Moreover, the interpretation of the measured raw data requires a profound understanding of the details of the detector response functions.
Sophisticated simulations of the whole event chain, which is initiated by the first collision of an ultrahigh energy cosmic ray particle with an air nucleus in the upper atmosphere and ends with the registration of electronic signals in the various detector components, are a prerequisite to any reliable analysis of the lateral distributions of all particle components. \\ This view encourages measuring the secondary particle components separately, which from an experimental point of view requires several detector components to be operated simultaneously. The detector array of the multi-detector setup KASCADE (Karlsruhe shower core and array detector) \cite{kas} is designed to disentangle the electromagnetic, the hadronic, and the muon component of the shower disc. Lateral distributions of electrons, hadrons, and muons (for different muon threshold energies) in the primary energy range $5\cdot 10^{14}\mbox{eV} < E < 10^{17}\mbox{eV}$ as measured with KASCADE have already been presented in a previous paper \cite{lat}. In this paper, the lateral distributions of electrons and muons in EAS events as measured with the KASCADE array detectors will be compared with the predictions of detailed Monte Carlo calculations, which comprise the simulation of the full cascading process of EAS, the simulation of the array detector response, and the final data reconstruction mechanisms. Whereas in \cite{lat} the parameterisations of the lateral distributions were analysed for mean values only, here the reconstruction is also performed on a single event basis. Special emphasis is given to investigations of the shape of the lateral distributions, the so-called 'lateral age', and its dependence on the primary energy and mass of the cosmic rays. Contrary to \cite{lat}, the hadronic component, measured with the KASCADE central hadron calorimeter, will not be considered in the present analysis, nor will measurements from the additional KASCADE muon devices.\\ The paper is organised as follows: After a brief description of the experimental setup, an overview of the simulation methods is given. Then we briefly outline the data reconstruction scheme. A more detailed explanation is given of the method we use for the reconstruction of electron numbers and of the function used to describe the measured and simulated lateral shapes. This is followed by the presentation of mean lateral distributions for muons and electrons, as measured with the KASCADE array, and a comparison with the results from the simulations. Then we show the results for the lateral shape of individual showers and its dependence on the shower observables electron and muon number. The simulation results with which the data are compared are mostly based on the hadronic interaction model QGSJet \cite{qgsjet}, but simulations with lower statistics based on the SIBYLL model \cite{sibyll} have also been performed. Therefore, results based on the SIBYLL model and the differences in the predictions of both models are discussed at the end of the paper. \section{The KASCADE experiment} The KASCADE experiment is located at the site of the Forschungszentrum Karlsruhe at an altitude of 110~m above sea level. A central hadron calorimeter is surrounded by a rectangular array of 252 scintillation detector stations, equally spaced by 13~m and covering an area of $200\times200$~m$^2$. In addition, there is a muon tracking detector with an effective area of 128~m$^2$.
The experiment measures the hadronic, muonic and electromagnetic components of extensive air showers in the energy range of $5\cdot 10^{14}~$eV up to $10^{17}~$eV of the primary particles. A detailed description of the experiment can be found in \cite{kas}.\\ The 252 detector stations of the KASCADE detector array are organized in 16 electronically independent clusters. Each cluster consists of 16 stations, except for the inner four clusters, where one station per cluster had to give way to the central detector. The stations of the inner four clusters contain four liquid scintillation detectors, each with an area of 0.8~m$^2$ read out by one photomultiplier. The stations of the outer clusters contain two such detectors with 1.6~m$^2$ total area. All photomultiplier signals of a detector station are added, and the integrated charge of the signal is recorded, together with the time of the earliest detector hit by a shower particle. These detectors are designed to measure the arrival times and energy deposits of the electromagnetic component of the showers and are therefore referred to as $e/\gamma$-detectors here.\\ Additionally, the stations of the 12 outer clusters house 3.2~m$^2$ plastic scintillation detectors below a shielding of 10~cm of lead and 4~cm of iron, which gives a 0.23~GeV threshold for muons. Each detector is read out by 4 photomultiplier devices and in turn yields time and energy deposit information. Again, the sum of the multiplier signals is recorded together with the hit pattern and the time of the earliest detector hit. These detectors measure the muon component and are referred to as muon-detectors here.\\ The shower observables which are reconstructed from the KASCADE array data are the core position, the shower direction and the lateral distributions of electrons and muons. From these, the shower size, expressed as the total number of electrons $N_e$ above 3~MeV, and a lateral shape parameter are derived. For the muon component only the total number of muons $N_{\mu}$ above 100~MeV can be estimated. Due to the low muon densities, a reliable determination of the lateral shape parameter is in general not possible in a single event analysis. \section{Monte Carlo simulation} A reliable interpretation of the data requires a detailed understanding of the physics of shower development, as well as a detailed knowledge of the detector response. The whole event chain, starting with the primary interaction in the upper atmosphere, followed by the cascading of the shower particles through the air, up to the response of the detectors at KASCADE ground level, has been simulated carefully.\\ The simulation of extensive air showers is performed with the program CORSIKA (version 6.156 and higher) \cite{corsika}. For the high energy hadronic interactions, the models QGSJet (version 01) \cite{qgsjet} and SIBYLL (version 2.1) \cite{sibyll} are used. Hadronic interactions with energies below 200~GeV are treated with the FLUKA code \cite{fluka}, and the electromagnetic component is treated with the EGS4 package \cite{egs4}. The showers were simulated in the energy range from $1 \cdot 10^{14}~$eV to $1 \cdot 10^{17}$~eV. To save computing time, the distribution in energy was chosen to follow a power law with spectral index $\gamma=-2$. To represent different primary masses, the set contains equal numbers of showers for five different primary types, namely protons, helium, carbon, silicon and iron. The positions of the shower cores were distributed randomly over the whole array.
For the shower directions an isotropic distribution was chosen. The output of the program is a list of all particles reaching KASCADE ground level together with their coordinates, arrival times, 3-momenta and particle types.\\ These data are input to the KASCADE simulation program, which is based on the GEANT3 package \cite{geant}. The simulation covers the whole experiment, with all its detectors modelled in great detail. All particles from a CORSIKA simulated shower are tracked through the detectors, the surrounding air and the absorber materials. Secondaries created in interactions with the detector materials are likewise followed. In the case of the array stations, energy deposits and timing information are gathered during the tracking step and converted into a photomultiplier signal and a signal time, taking into account the light collecting geometry of the detector as well as the specific properties of the scintillation material. The output of this program, concerning the array part, consists of arrival times and multiplier signals for $e/\gamma$- and muon-detectors in exactly the same format that is written by the real experiment after the calibration procedure, i.e. the simulated data can be analysed with the standard KASCADE reconstruction software. A more detailed description of the KASCADE array simulation can be found in \cite{MCS-Rec}. \section{Reconstruction of particle density distributions and shower parameters} Detector simulations applied to CORSIKA shower events have also been used extensively for the development and testing of the array data reconstruction algorithms and procedures. As the KASCADE array reconstruction scheme has already been described in several previous papers \cite{kas,lat,MCS-Rec}, only a brief overview will be given in this chapter. Those parts of the analysis chain which are concerned with the reconstruction of the lateral electron distribution and its properties will be described in more detail, as they were subject to modifications applied for the present analysis. \subsection{Reconstruction of the $e/\gamma$ component} The shower direction and shower core position, as well as the shower size and the lateral form parameter (usually known as the age parameter), are reconstructed from the energy deposits and detector response times of the $e/\gamma$-detectors using an iterative procedure involving three steps.\\ In the first step a rough estimate of the shower direction, core position and shower size is obtained using fast and robust algorithms which do not rely on any fit procedure. In the second step, the shower direction is determined more accurately by evaluating the arrival times of the first particle in each detector. This yields an inclination resolution better than 0.3 degrees for showers with $\lg N_e > 4.5$ \cite{kas}. Then, corrections depending on core position and shower inclination are applied to the individual detector energy deposits. From these, particle numbers and corresponding particle densities are calculated for each detector. A 4-parameter fit to the spatial distribution of the particle densities yields the core position and shower size and, in addition, a lateral shape parameter of the charged particle density distribution. The core position resolution at this level of reconstruction is better than 0.3~m for showers with $\lg N_e > 4.5$, and the shower size resolution is at the percent level \cite{kas}.
The e/$\gamma$-detector signals also contain contributions from the muon component, which must be corrected for; this is performed in the third step of the reconstruction scheme: As the analysis of the muon-detector data proceeds in parallel to the analysis of the e/$\gamma$-detector data, the total muon number $N_{\mu}$ is known at the time of step three from the step two muon analysis. Therefore, the expected muon density can be estimated individually for each e/$\gamma$-detector. The resulting signal contribution is then accounted for in the detector probabilities of the likelihood minimisation function, and the combined lateral density distributions are fitted. Since the accuracy of the core position is in general not further improved in this step, only the shower size and lateral form parameter are varied, yielding the final values of the total electron number $N_e$ and the shape parameter of the lateral electron density distribution. \subsubsection{ Reconstruction of particle densities } Reconstruction of the lateral particle density distribution requires interpreting the measured energy deposits in terms of particle numbers. After correcting for different track lengths in the scintillator due to shower inclination, special attention must be paid to the $\gamma$-component. The electrons are accompanied by a multitude of $\gamma$-particles, which fake additional electrons, because the photon efficiency of the scintillation detectors is roughly $10$\%. The percentage of fake electrons strongly depends on core distance, because the mean $\gamma/e$-ratio is a function of core distance. In addition, the detector efficiency for electrons decreases with increasing core distance, as their kinetic energy distribution becomes softer and softer with growing core distance. Moreover, both effects depend on shower size and primary particle type.\\ A shower size dependent lateral energy correction function (LECF) has been derived using the Monte Carlo simulations described in chapter 3. This function gives the average expected energy deposit per \it shower electron \rm as a function of core distance and shower size, and thereby accounts for the additional deposit due to accompanying photons and for the dependence of the detector efficiency on the kinetic energy of the electrons. Dividing the measured energy deposit by the expected deposit per shower electron yields an estimate of the number of electrons hitting the detector. The left part of Figure \ref{fig1} shows LECFs for proton induced showers for different shower sizes. The figure also shows the parametrisation of the mean $\gamma/e$-ratio $q_{\gamma/e}$ and the mean energy deposits $E_{dep}^{e}$, $E_{dep}^{\gamma}$ per electron and photon, respectively. The energy $E_{dep}^{tot}$ deposited on average by $n_e$ electrons and $n_{\gamma}$ accompanying photons in the detector is then given by \begin{equation} E_{dep}^{tot} = n_e \cdot E_{dep}^{e} + {n}_{\gamma}\cdot E_{dep}^{\gamma}. \end{equation} From this the functional form of the LECF is derived according to \begin{equation} f_{LECF} \equiv \frac{E_{dep}^{tot}}{{n}_e}= E_{dep}^{e} + \frac{n_{\gamma}}{n_{e}} \cdot E_{dep}^{\gamma} \approx E_{dep}^e + q_{\gamma/e}\cdot E_{dep}^{\gamma}, \end{equation} where the mean electron and photon energy deposits are assumed to depend only on the mean kinetic particle energies.\\ The sudden fall of the LECF starting at the shower center reflects the decrease of the mean kinetic energy of electrons and $\gamma$'s with increasing core distance.
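To make the conversion concrete, the following minimal Python sketch shows how a measured energy deposit is turned into an electron number estimate via eq. (2). The parametrisations below are hypothetical placeholders standing in for the actual KASCADE parametrisations of $q_{\gamma/e}$ and of the mean deposits, which are not reproduced in this paper; only the structure of the calculation is meant to be illustrative. \begin{verbatim}
import numpy as np

# Hypothetical stand-ins for the KASCADE parametrisations (illustration only):
def e_dep_electron(r):      # mean deposit per electron in MeV vs. core distance r (m)
    return 8.0 * np.exp(-r / 150.0) + 6.0

def e_dep_gamma(r):         # mean deposit per photon in MeV (incl. ~10% efficiency)
    return 1.0 + 0.5 * np.exp(-r / 100.0)

def q_gamma_e(r):           # mean gamma/e ratio, growing with core distance
    return 3.0 + 0.05 * r

def f_lecf(r):
    # expected deposit per shower electron, eq. (2)
    return e_dep_electron(r) + q_gamma_e(r) * e_dep_gamma(r)

def n_electrons(e_dep_tot, r):
    # electron number estimate for a station at core distance r
    return e_dep_tot / f_lecf(r)

print(n_electrons(120.0, 50.0))  # 120 MeV deposited at 50 m from the core
\end{verbatim}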
At a core distance of about 30~m, the corresponding loss in detector efficiency becomes compensated by the rising $\gamma$/electron ratio, which yields an increasing fraction of fake deposit due to $\gamma$-particles. At large core distances, $\gamma$'s fake nearly half of the average energy deposit per electron. \begin{figure}[t] \begin{minipage}[b]{.50\linewidth} \centering\includegraphics[width=7.5cm]{pics/pic1.eps} \end{minipage} \begin{minipage}[b]{.50\linewidth} \centering\includegraphics[width=7.5cm]{pics/pic2.eps} \end{minipage} \caption{\label{fig1}Left: Proton LECF's for $\lg N_e = 5$ (dashed lines), $\lg N_e = 6$ (full lines) and $\lg N_e = 7$ (dashed-dotted lines). Also shown are the corresponding $\gamma$/e-ratios and mean energy deposits for electrons and photons as parametrised and used for the LECF. Right: Mean electron densities as reconstructed from simulated detector data for proton induced CORSIKA showers. The solid lines give the original CORSIKA electron densities. Only electrons and photons have been tracked through the detectors in this case.} \end{figure} Comparing LECFs calculated for proton and iron primaries, one finds them to differ by less than $1$\% within 100~m from the shower core. For larger distances, the differences do not exceed $5$\%. So one is free to use a common LECF for the analysis of data where the primary particle type is unknown. Moreover, the variation with shower size is small. There is a negligible dependence also on the inclination of the showers, as geometrical effects are corrected for in the reconstruction procedures.\\ The right part of Figure \ref{fig1} shows results of the reconstruction when applied to simulated detector deposits, calculated from CORSIKA showers with the detector Monte Carlo, as described above. To check the reliability of the LECF, only electrons and $\gamma$-particles have been tracked in this case. The reconstructed electron distributions compare well to the distributions of the CORSIKA electrons. Deviations from the original distribution are found only for large showers and small core distances. This, however, is due to saturation effects in the $e/\gamma$-detectors, which are included in the detector simulation. \subsubsection{ Reconstruction of shower parameters size and lateral shape} A theoretically motivated function for the description of the lateral electron density distribution $\rho(r)$ is given by the so-called Nishimura-Kamata-Greisen function (NKG) \cite{nishi,kamat,greis} \begin{equation} \rho=N_e \cdot c(s) \cdot \left( \frac{r}{r_M}\right)^{s-2} \left( 1+\frac{r}{r_M}\right )^{s-4.5}, \end{equation} with the age parameter $s$, the Moliere radius $r_M$ and the normalising factor $c(s)$ \begin{equation} c(s)=\frac{\Gamma(4.5-s)}{2 \pi r_M^2 \Gamma(s)\Gamma(4.5-2s)}. \end{equation} This function has been derived analytically for the case of purely $e/\gamma$-induced air showers but is also used to describe the lateral electron distribution of hadron induced showers. It is, however, known \cite{fail_nkg_1,fail_nkg_2,fail_nkg_3} that the NKG function has shortcomings in fitting measured EAS electron distributions, most obvious at large core distances. This deficit is usually attributed to the fact that the NKG-function was derived for electromagnetic cascades, whereas hadron induced air showers are a superposition of a large number of independent electromagnetic showers.\\ In a typical KASCADE event, the detector distances to the core may extend up to 200~m.
In the case of large showers with sizes well above $\lg N_e = 6$, which roughly correspond to a primary energy of 10~PeV, the detectors close to the shower core become saturated and must be rejected from the analysis. Thus, the lateral fit range differs significantly for small and large showers in both the upper and the lower bound. The deviation of the NKG-function from the true shape of the lateral electron distribution therefore gives rise to systematic errors in the lateral shape parameter (age $s$ in the NKG-formula), which depend on the lateral fit range and thereby on shower size.\\ It has been pointed out \cite{lat} that fixing the age parameter $s$ and instead varying the scale parameter $r_{0}$ (Moliere radius $r_M$ in the NKG-formula) considerably improves the fit behaviour of the NKG function. Unfortunately, this works well only for mean lateral distributions constructed from a large number of showers. When fitting individual events, which suffer from large statistical and physical fluctuations between the detector stations, this method has proven to be quite unstable. Especially for small showers it yields unreliable results.\\ \begin{figure}[t] \begin{minipage}[b]{.50\linewidth} \centering\includegraphics[width=7.5cm]{pics/pic3.eps} \end{minipage} \begin{minipage}[b]{.50\linewidth} \centering\includegraphics[width=7.5cm]{pics/pic4.eps} \end{minipage} \caption{\label{fig2}Left: Comparison of fit results for the standard NKG function and the modified NKG function. Shown are, as an example, fitted CORSIKA electron distributions (i.e. without detector simulation) for proton and iron showers and two different shower sizes each. Right: Residuals between the fitted functions and the simulated CORSIKA electron distributions for proton and iron induced showers and two different shower sizes (dark symbols represent the small, grey symbols the large $N_e$-bin). The iron symbols mostly overlap.} \end{figure} Another way to cure the defects of the NKG function in describing electron lateral distributions of hadron induced air showers is to replace it by a different function (e.g. as in \cite{cap-dev}) or to modify its functional form by changing the values of the exponents. Indeed, this gives a better adaptation to the shape of the lateral density distribution. For this, we replace equation (3) by \begin{equation} \rho=N_e \cdot \tilde{c}(s) \cdot \left( \frac{r}{r_0}\right)^{s-\alpha} \left( 1+\frac{r}{r_0}\right )^{s-\beta}, \end{equation} with \begin{equation} \tilde{c}(s)=\frac{\Gamma(\beta-s)} {2 \pi r_0^2 \Gamma(s-\alpha+2)\Gamma(\alpha+\beta-2s-2)}. \end{equation} Testing this function with Monte Carlo data, we have found $\alpha=1.5$ and $\beta=3.6$ to be the optimum values for the exponents, when $r_0=40$~m is used for the scale parameter. Optimum in this case means an almost negligible systematic uncertainty in the reconstructed shower size $N_e$ over the full KASCADE range, as shown in Figure \ref{fig3}. With these values of $\alpha, \beta$ and $r_0$, equation (5) limits the new parameter $s$ to the range $-0.5 < s < 1.55$. At the same time, of course, it loses its numerical relation to the longitudinal development of the electromagnetic cascade, which is often quoted for its original form. In the following, the new parameter $s$ will be called the shape or form parameter of the lateral density distribution.\\ As an example, the left part of Figure \ref{fig2} compares both variants of the fit function when applied to mean lateral electron distributions derived directly (i.e.
without detector simulation) from CORSIKA simulated proton and iron showers for two different shower sizes. The modified NKG-function adapts much better to the lateral shapes, which becomes most obvious at large core distances. This can be seen even better in the right part of Figure \ref{fig2}, which compares the relative deviations of the fit function from the distributions for the original NKG-function and the modified one. Apart from the immediate vicinity of the shower core, the modified NKG-function describes the shape of the mean lateral distributions with significantly smaller residuals over the whole fit range. \\ \begin{figure}[t] \begin{minipage}[b]{.50\linewidth} \centering\includegraphics[width=7.5cm]{pics/pic5.eps} \end{minipage} \begin{minipage}[b]{.50\linewidth} \centering\includegraphics[width=7.5cm]{pics/pic6.eps} \end{minipage} \caption{\label{fig3}Left: Deviation of the reconstructed from the true shower size for the NKG and the modified NKG function, when fitting individual showers. Right: Reconstructed shape parameter as a function of shower size using the NKG and the modified NKG function. The rise of the age parameter for showers with $\lg N_e >6.5$ is an artefact of the shortcomings of the NKG function in describing the lateral distributions at large core distances. The vertical dashed line shows the KASCADE threshold.} \end{figure} The benefits of the modified NKG-function when applied to individual showers are shown in Figure \ref{fig3}. The left part shows results for the estimate of the total shower size, derived with both functions. With the modified function, the systematic uncertainty in shower size is almost zero over the range $ 5< \lg N_e < 7 $ and still below $5$\% between $ 4.5 < \lg N_e < 7.5$. Even more convincingly, this does not depend on the primary particle type, contrary to the results from the original NKG function, which fits iron induced shower profiles with a larger systematic uncertainty than proton induced ones.\\ The increasing systematic error towards small shower sizes below $\lg N_e=5$ in the case of iron primaries is related to the strongly growing $\mu/e$-ratio. For $\lg N_e =4$, the muon density becomes comparable to the electron density even at small core distances. This makes it difficult to disentangle both components in the $e/\gamma$-detector analysis and finally leads to an overestimate of the shower size. For $\lg N_e >7$, on the other hand, more and more detectors in the vicinity of the shower core get saturated. This reduces the available lateral fit range and results in a quickly growing underestimate of the shower size. Both effects ultimately set the limits of the primary energy range for this analysis to the region between $5\cdot 10^{14}~$eV and $10^{17}~$eV.\\ The right part of Figure \ref{fig3} compares the results of the original and the modified NKG function for the lateral shape parameter as a function of the reconstructed shower size. As already mentioned, the absolute value of the shape parameter is related to the choice of the scale radius $r_0$ and the values taken for the exponents $\alpha$ and $\beta$ of the modified NKG-function and is therefore shifted to smaller values. Apart from that, the results of the conventional fit function exhibit a rise of the shape parameter for $\lg N_e > 6.6$ due to the described shortcomings of the original NKG-function and the shrinking of the lateral fit range with growing shower size, when detectors near the shower core become saturated. This artefact is absent with the new fit function.
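Both lateral distribution functions translate directly into code. The sketch below (Python with SciPy; our own transcription of eqs. (3)-(6), not KASCADE software) evaluates the standard NKG function and the modified form with the optimum electron values $\alpha=1.5$, $\beta=3.6$, $r_0=40$~m; the Moliere radius default is only illustrative, as no value is quoted in this text. For the muon fit of the next subsection, the same modified form is used with $\beta=3.7$ and $r_{0\mu}=420$~m. \begin{verbatim}
import numpy as np
from scipy.special import gamma as G

def nkg(r, n_e, s, r_m=89.0):
    # standard NKG, eqs. (3)/(4); r_m = Moliere radius in m (value illustrative)
    c = G(4.5 - s) / (2 * np.pi * r_m**2 * G(s) * G(4.5 - 2 * s))
    x = r / r_m
    return n_e * c * x**(s - 2.0) * (1.0 + x)**(s - 4.5)

def nkg_mod(r, n_e, s, r0=40.0, alpha=1.5, beta=3.6):
    # modified NKG, eqs. (5)/(6); defaults are the optimum electron values
    c = G(beta - s) / (2 * np.pi * r0**2 * G(s - alpha + 2.0)
                       * G(alpha + beta - 2.0 * s - 2.0))
    x = r / r0
    return n_e * c * x**(s - alpha) * (1.0 + x)**(s - beta)

r = np.logspace(0.0, 2.3, 50)          # core distances from 1 m to 200 m
rho = nkg_mod(r, n_e=1e6, s=1.0)       # densities for a lg Ne = 6 shower
# The gamma functions restrict s to -0.5 < s < 1.55 for alpha=1.5, beta=3.6.
\end{verbatim}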
For shower sizes below the KASCADE threshold the age distributions are affected by the selection procedures. \subsection{Reconstruction of the $\mu$-component} \begin{figure}[t] \centering \includegraphics[width=12cm]{pics/pic7.eps} \caption{\label{fig4} Muon lateral densities as reconstructed from simulated muon-detector data, compared to the true CORSIKA densities.} \end{figure} The analysis of the muon-detector signals follows the same lines as in the case of the e/$\gamma$-detectors. Again, the raw signals are subject to corrections due to shower inclination and core position using a corresponding muon LECF, which has a much simpler form than in the e/$\gamma$-case. Additionally, corrections for punch through from the e/$\gamma$- and hadron components are applied before the muon densities for every detector are calculated. Detectors closer than 40~m to the shower core must be excluded from the analysis, because in this region punch through dominates the signal.\\ The total muon number is estimated by fitting a modified NKG function with exponents $\alpha=1.5$ and $\beta=3.7$. For the muon component this gives only a moderate improvement over the original NKG-function, which is known to fit muon lateral distributions quite well already, provided the scale parameter is chosen appropriately. Here we take $r_{0\mu}=420$~m. Due to the low muon densities, a 2-parameter fit on a single air shower basis proves unreliable. Therefore the total muon number $N_{\mu}$ of the shower is estimated with a fixed lateral form parameter $s_{\mu}$ and a 1-parameter fit. The muon lateral shape $s_{\mu}$ is parametrized as a function of shower size $N_e$ from CORSIKA simulations, and the actual value is chosen event by event during the iterative reconstruction. For the considered data sample $s_{\mu}$ varies between 0.81 and 0.75, slowly decreasing with increasing shower size.\\ Reconstruction results for muon densities are illustrated in Figure \ref{fig4}, where reconstructed muon distributions for simulated showers are compared with the corresponding true CORSIKA distributions for several ranges of the reconstructed total muon number $N_{\mu}$. For individual showers, the accuracy of this observable is moderate compared with the shower size $N_e$ and is not better than 10 to $20$\%. This results from the poor muon statistics in the case of small showers and punch through at large shower sizes. The muon size $N_{\mu}$ is input to the correction of shower size and shape parameter in the third step of the e/$\gamma$-detector analysis. \section{Comparisons of KASCADE data with Monte Carlo simulations} For the results presented here all measured showers with $\log N_e > 6$ have been taken into account. This sample comprises about 170~000 events measured over a period of nearly 8 years. Additionally, these data have been enriched with a sample of the many small showers which hit the array more frequently due to the steep energy spectrum of cosmic rays. For this, two KASCADE runs with in total about 2.5 million recorded shower events were added to the data set.\\ All showers, real or simulated, included in this analysis were subject to the same reconstruction procedure and to the same cuts concerning trigger condition, core position, inclination angle and shower shape parameter. Showers are restricted to core distances of less than 90~m from the array center and zenith angles of less than 30 degrees.
An additional cut for showers with shape parameter values $s>1.4$, which is close to the upper boundary of the mathematically possible range, excludes showers which are frequently misreconstructed inside the array but actually had their cores outside, or which are just very small showers that fluctuated in such a way that the reconstruction overestimated their size by a large amount. Indeed, showers of the second kind are already very efficiently excluded by comparing the shower sizes reconstructed during steps one and two of the analysis, and cutting those events where the difference in the estimated sizes considerably exceeds the expectations from the uncertainties of both methods. Since the energy distribution of the simulated showers represents a spectral index of $\gamma=-2$, while the real data follow an index with $-3.1 < \gamma < -2.6$, fluctuations to larger values in shower size would be less pronounced in the simulations. Therefore appropriate statistical weights have been given to the simulated events, as will be explained below. \\ The analysed KASCADE data set is first compared to the predictions of the QGSJet model, based on a sample of 1.7 million simulated events. In addition, a set of showers with half the statistics, but based on the SIBYLL model, was analysed (section 5.4). \subsection{Lateral distributions of muons} Figure \ref{fig5} shows mean lateral distributions from the simulations based on the QGSJet model and compares them with the data. Showers have been sampled in ranges of the reconstructed total muon number per single event. Because of the steeper energy spectrum of the data, fluctuations in muon number would give significantly larger contributions to higher $N_\mu$-bins than they would for simulated events. To account for this effect, and also to show its result on the form of the lateral distributions, the simulated events have been analysed with statistical weights representing a spectral index $\gamma=-2.6$ as well as with weights giving an index $\gamma=-3.2$. The spectral index of the data varies with energy, but will lie somewhere between these values. The small shaded bands in Figure \ref{fig5} give the simulation results within these bounds. The lower bound of each band always corresponds to the larger absolute index value, i.e. to $\gamma=-3.2$. The width of the band for an individual primary mass is only of the order of the symbol size. It is obvious that the form of the lateral distribution function depends only weakly on the primary energy. Moreover, it can be seen that the bands of proton and iron nuclei at least partially overlap. The simulations therefore predict that the form of the muon distribution is not sensitive to the nature of the primary particle. At low energies, proton induced showers show slightly steeper lateral shapes compared to iron primaries, but these differences vanish at higher energies. Even for small showers the differences in the density distributions for showers of either type do not exceed ten percent. \begin{figure}[t] \centering \includegraphics[width=13cm]{pics/pic8.eps} \caption{\label{fig5} Lateral density distributions of muons as measured with the KASCADE array muon-detectors. The shaded bands cover the range of the Monte Carlo simulation results with respect to five different elemental masses, including an uncertainty in the spectral index within the range $-3.2 < \gamma < -2.6$. For reasons of clarity, only the results for two elemental masses are shown additionally.
For these, a spectral index $\gamma=-3$ is assumed.} \end{figure} The lateral distributions derived from the data agree quite well with the Monte Carlo predictions. The simulated distributions describe the measurements over the full KASCADE range of core distances and primary energies. The figure thereby shows clearly why the muon measurements at KASCADE are sensitive to the primary energy, but give no valuable information on the elemental composition of cosmic rays. This is also found when using the SIBYLL model. \subsection{Lateral distributions of \it charged particles \rm} Figure \ref{fig6} presents lateral distributions of charged particles as measured by the e/$\gamma$-detectors together with distributions derived from simulations. Though the bulk of the particles are electrons, muons also contribute to the e/$\gamma$-detector signals, while contributions of photons are corrected for by the LECF, and contaminations by hadrons are negligible. Therefore, the distributions presented here contain, besides electrons, also muons. All showers have been grouped according to their reconstructed total muon number. \begin{figure}[t] \centering \includegraphics[width=13cm]{pics/pic9.eps} \caption{\label{fig6} Lateral density distributions of electrons including muons as measured with the KASCADE $e/\gamma$-detectors and by simulations. The shaded bands cover the range of the simulation results with respect to five different elemental masses, including an uncertainty in the spectral index within the range $-3.2 < \gamma < -2.6$. The results for each elemental mass, which are also shown, assume a spectral index $\gamma=-3$. } \end{figure} The shaded bands of Figure \ref{fig6} again indicate the range of uncertainty which results when the spectral index is varied between $-3.2 < \gamma < -2.6$ and the primary mass from proton to iron. Again, the lower bounds correspond to the larger absolute value $\gamma=-3.2$, the upper bounds to $\gamma=-2.6$. In addition, within each band the lateral distributions for five different primary masses are shown, assuming a spectral index of $\gamma=-3$ for each. The curve close to the upper bound of each band now results from the lightest primary, the one close to the lower bound belongs to iron. This expresses the well known fact \cite{ap-16} that showers originating from light primaries are more electron rich at sea level than showers induced by heavy nuclei. The $e/\mu$-ratio is therefore a common starting point for the analysis of the chemical composition of cosmic rays. Furthermore, the figure illustrates that showers induced by light primaries are predicted to exhibit a slightly steeper electron distribution than showers stemming from heavy nuclei. \\ Comparing real data with simulations, one sees that small showers with $\lg N_{\mu} \approx 4$ fit the simulated proton and helium distributions quite well, while large showers with $\lg N_{\mu} \approx 7$ are best described by the silicon and iron distributions, i.e. by heavier primary particles. The figure visualizes the known \cite{kas,emu-kas} variation of the average $e/\mu$-ratio with shower size, which indicates, within the scope of the QGSJet model, a transition of the primary particle composition from light elements at energies below the knee to heavy elements at energies above the knee.
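The statistical reweighting underlying the shaded bands of Figures \ref{fig5} and \ref{fig6} can be sketched as follows (a minimal Python illustration, not the actual analysis code): showers generated with spectral index $\gamma_{sim}=-2$ receive weights proportional to $E^{\gamma_{target}-\gamma_{sim}}$, so that the weighted sample represents a spectrum with index $\gamma_{target}$. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Draw energies following E^-2 between 1e14 and 1e17 eV (inverse transform):
e_min, e_max, g_sim = 1e14, 1e17, -2.0
u = rng.random(100_000)
energies = (e_min**(g_sim + 1)
            + u * (e_max**(g_sim + 1) - e_min**(g_sim + 1)))**(1.0 / (g_sim + 1))

def weights(e, g_target, g_sim=-2.0):
    # per-event weights turning an E^g_sim sample into an E^g_target one
    w = (e / e.min())**(g_target - g_sim)
    return w / w.sum()

w_flat  = weights(energies, -2.6)   # upper bounds of the bands
w_steep = weights(energies, -3.2)   # lower bounds of the bands
\end{verbatim}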
A detailed analysis of the electron-muon-number frequency spectrum as measured by KASCADE is the subject of a separate paper, focused on the chemical composition of cosmic rays \cite{holger}.\\ A closer look at Figure \ref{fig6} also reveals discrepancies between the data and the QGSJet Monte Carlo predictions. At low energies, i.e. small muon numbers, the shapes of the measured distributions are slightly flatter than those of the simulated proton and helium distributions. For the smallest showers considered here, the measured particle densities even exceed the expected range of densities at core distances beyond 120~m. This kind of shortcoming cannot be cured by any assumption on the elemental composition and will be discussed in more detail in the next chapter. \\ Looking at the highest energies, where the data are best described by heavy primary showers, the simulations seem to show slightly lower densities and a flatter lateral behaviour at small core distances. In this region, however, the data suffer severely from overflows, from which additional uncertainties result.\\ It might be worth mentioning that the shortcomings described here must originate completely from processes involved in the generation and development of the $e/\gamma$-component. The muon component contributes considerably at low shower energies to the lateral distributions measured with the $e/\gamma$-detectors. But the form of the muon lateral distribution is well described by the Monte Carlo simulations and therefore cannot be invoked to explain the observed deviations. \\ \subsection{The lateral shape parameter} The most obvious differences between the lateral shapes of the individual elements shown in Figure \ref{fig6} are simply the amplitudes of the density functions, and these are related to the different $e/\mu$-ratios. A more subtle quantity, which in this kind of representation is difficult to compare, concerns the functional form, or slope, of the lateral distribution.\\ \begin{figure}[t] \centering \includegraphics[width=13cm]{pics/pic15.eps} \caption{\label{fig7} Mean shower shape parameter $s$ as a function of $\lg N_{\mu}$ and $\lg N_e$ for simulated showers based on the QGSJet model with a composition adopted from \cite{holger}. Also shown are the lines of maximum probability for proton and iron induced showers and the coordinate system used for the comparison of simulation results and data.} \end{figure} The relations of the shower sizes $N_e$ and $N_{\mu}$ with the shape parameter $s$ and with the primary mass are illustrated by Figure \ref{fig7}. It shows the mean reconstructed shape parameter value as a function of the observables $\lg N_e$ and $\lg N_{\mu}$ for QGSJet based simulated showers. The individual events are weighted in energy to represent an elemental composition according to the results for the QGSJet model as described in \cite{holger}. The lines overlaid on the distribution represent linear approximations to the lines of maximum probability for showers of a single element (here shown only for proton and iron) but variable energy. It is obvious that showers from light primaries are younger on average, i.e. have smaller shape values compared to heavy primaries, and that showers of high energy are younger than low energy ones.\\ The lines of maximum probability in Figure \ref{fig7} offer one axis of a naturally chosen rectangular coordinate system for comparing the data with the simulation results.
The new coordinates are related to the $\lg N_{\mu} - \lg N_e - $system by a simple rotation around the origin by an angle of 51.6 degrees, obtained from the simulations. This coordinate system simply adapts to the form of the event distribution in the $\lg N_{\mu} - \lg N_e - $plane and will be used in the following to compare data and Monte Carlo results only on the basis of measured (or simulated) observables. While the coordinates along the lines of maximum probability may be associated with an energy estimator $\varepsilon_{\lg E}$, the coordinates perpendicular to this direction measure a mass estimator $\varepsilon_m$. These are surely not the best possible estimators for energy and mass, but we do not want to draw quantitative conclusions from them. Therefore the numerical values of the new coordinates are simply the values obtained by the rotation from the old $\lg N_{\mu} - \lg N_e - $values. \\ The shape parameter as a function of the energy estimator $\varepsilon_{ \lg E}$ is shown in Figure \ref{fig8} for both the data and the Monte Carlo simulations of all five elements. The Monte Carlo results confirm that showers induced by heavy primaries are older compared to showers of light primaries. With increasing energy the shape parameter value decreases for all simulated elements, reflecting the fact that the height of the shower maximum decreases with increasing energy.\\ The data fit into this picture only qualitatively. Up to an energy of about 10~PeV, they follow the line of carbon. For low energies, this suggests a relatively heavy composition, which clearly disagrees with the predictions of Figure \ref{fig6}. For higher energies, the lateral shape parameter stays almost constant and crosses the line of iron at an energy of about 30~PeV. Beyond this crossing point, the absolute values of the measured shape parameter cannot be explained by any elemental composition within this Monte Carlo model. \\ \begin{figure}[t] \centering \includegraphics[width=13cm]{pics/pic10.eps} \caption{\label{fig8} Reconstructed shape parameter (of the modified NKG function) as a function of the energy estimator $\varepsilon_{\lg E }$ for KASCADE data, five primary masses and a model composition as simulated using QGSJet. The scale on top gives a rough estimate of $\lg E$ in GeV.} \end{figure} For a more detailed investigation the data distributions are compared with what would be expected from the simulations, once a reasonable elemental composition is given. For this, the simulated shower events of the five elemental masses have been weighted with individual energy spectra, which have been reconstructed from an analysis of the measured $N_e/N_{\mu}$-spectrum using a sophisticated unfolding algorithm based on the same QGSJet model. The resulting model composition favours light elements before the knee and a significant contribution from heavy elements at energies above the knee \cite{holger}. The effect on the shape parameter as a function of the energy estimator is also shown in Figure \ref{fig8}.
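For completeness, the estimator construction used in Figures \ref{fig8} and \ref{fig9} amounts to nothing more than a rotation in the $(\lg N_{\mu}, \lg N_e)$ plane; a minimal sketch follows (the sign convention below is our own assumption, since the text does not fix it). \begin{verbatim}
import numpy as np

theta = np.radians(51.6)            # rotation angle obtained from the simulations
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

def estimators(lg_nmu, lg_ne):
    # rotate (lg Nmu, lg Ne) into (energy estimator, mass estimator)
    eps_lgE, eps_m = R @ np.array([lg_nmu, lg_ne])
    return eps_lgE, eps_m

print(estimators(4.0, 5.0))         # a shower with lg Nmu = 4 and lg Ne = 5
\end{verbatim}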
It is remarkable that the line of the measured shape parameter values runs almost parallel to the line representing these adapted Monte Carlo predictions, but is displaced by a nearly constant amount of $\Delta s \sim 0.05$ over the whole energy range.\\ \begin{figure}[t] \centering \includegraphics[width=15cm]{pics/pic11.eps} \caption{\label{fig9} Reconstructed shape parameter (of the modified NKG function) as a function of the mass estimator $\varepsilon_m$ for KASCADE data and five primary masses as simulated using QGSJet, for different ranges of the energy estimator $\varepsilon_{\lg E }$. The line of maximum probability for protons corresponds to a value $\varepsilon_m=-0.22$ and for iron it is $\varepsilon_m = -0.45$.} \end{figure} The behaviour shown in Figure \ref{fig8} may therefore be interpreted in the same way as the form of the lateral shapes in Figure \ref{fig6}. The almost constant value of the shape parameter for energies beyond 10~PeV can be understood as the result of a transition from light to heavy nuclei in the elemental composition of cosmic rays. The offset between the lines of measured and simulated shape simply states that the simulations in general yield slightly steeper shapes than observed in real showers. This can be seen even more clearly when looking along lines of constant values of the mass estimator $\varepsilon_m$. This view is given in Figure \ref{fig9} for several ranges of the energy estimator $\varepsilon_{ \lg E}$, i.e. slices of Figure \ref{fig7} perpendicular to the lines of maximum probability. Here higher mass values correspond to smaller shower sizes and larger muon content, i.e. to showers which are poorer in electrons. Remarkably, all elemental masses can be seen to follow the same (energy dependent) functional dependence between the shape parameter and the mass estimator. On the other hand this may be expected, because showers of heavy primaries develop higher in the atmosphere compared to showers of light nuclei of the same energy. However, the total number of electrons present at the shower maximum is, for showers of the same energy, roughly independent of the kind of primary nucleus. Therefore, a deeply penetrating iron shower may not be distinguishable in shape from a proton shower which developed very early in the atmosphere. The shape of real showers, however, does not follow this functional dependence. Measured showers are older on average, with values deviating by an amount $\Delta s \sim 0.05$ from simulated showers, with increasing tendency at the highest energies. This might indicate that real showers develop at higher altitudes than predicted by the simulations, and/or that multiple Coulomb scattering plays a more distinct and pronounced role in the development of the real electromagnetic cascade than expected from the simulations. \subsection{Comparison with the SIBYLL model} \begin{figure}[t] \centering \includegraphics[width=13cm]{pics/pic12.eps} \caption{\label{fig10} Same as Figure \ref{fig6}, but with SIBYLL generated showers.} \end{figure} The data have also been compared with simulations based on the SIBYLL model. While the $e/\mu$-ratio for SIBYLL showers is larger in general, no notable differences to the QGSJet model were found when comparing the shapes of the lateral distributions of muons for equal total muon numbers. The lateral shapes of the electron component show only small differences compared to the QGSJet model, as can be seen by comparing Figures \ref{fig10} and \ref{fig6}.
The shapes calculated with SIBYLL predict a heavier composition, as a result of small differences in the $e/\mu$-ratios. In addition, the mean lateral electron distributions appear a bit younger. \\ Investigating the dependence of the lateral form parameter on the energy estimator as done in section 5.3, one can see in Figure \ref{fig11} that SIBYLL describes the data worse than QGSJet. The SIBYLL iron curve crosses the data already at an energy of about 10~PeV, so there is no explanation for the measured shape values within this model for larger energies. Comparing with Figure \ref{fig8} one finds that the mean shape of SIBYLL showers is in general smaller by $\Delta s \sim 0.05$ compared to QGSJet. \begin{figure}[t] \centering \includegraphics[width=13cm]{pics/pic13.eps} \caption{\label{fig11} Same as Figure \ref{fig8}, but with SIBYLL generated showers.} \end{figure} It may appear surprising, then, that individual SIBYLL showers follow the same functional dependence on the mass and energy estimators (and therefore also on $\lg N_e $ and $\lg N_{\mu} $) as QGSJet showers do, as can be seen from Figure \ref{fig12}. This shows that the longitudinal development of the electromagnetic component must be very similar in both models. The difference in the mean lateral shape parameter is therefore simply due to a different distribution of SIBYLL events in the $N_e-N_{\mu}-$plane (see also \cite{holger}). SIBYLL showers are more electron rich and produce fewer muons. The abundance maximum for a given primary energy is therefore shifted to lighter mass values, enhancing the weight of younger showers when averaging the shape parameter.\\ \begin{figure}[t] \centering \includegraphics[width=15cm]{pics/pic14.eps} \caption{\label{fig12} Reconstructed shape parameter (of the modified NKG function) as a function of the mass estimator $\varepsilon_m$ for QGSJet and SIBYLL based simulations and different ranges of the energy estimator $\varepsilon_{\lg E }$. For reasons of clarity only two elemental masses are shown.} \end{figure} \section{Summary and conclusions} Lateral electron and muon density distributions of air showers as measured with the KASCADE array have been compared to the results of Monte Carlo simulations based on the CORSIKA program using EGS4 and the two hadronic interaction models QGSJet and SIBYLL.\\ Muon lateral distributions measured with the KASCADE array muon detectors were found to be well described by the Monte Carlo simulations, and no significant differences were observed between the two hadronic interaction models QGSJet and SIBYLL. Moreover, muon lateral distributions appear very similar in shape, independent of the nature of the primary particle, so that details of the chemical composition cannot show up in the comparison of data and simulations.\\ Deviations from the Monte Carlo predictions are found for the lateral distributions of charged particles, which were reconstructed from the measurements of the KASCADE e/$\gamma$-detector array. Common to both models is that they suggest a transition from light to heavy nuclei in the chemical composition of cosmic rays in the energy range of 1~PeV to 100~PeV.
This is consistent with the results of an independent study \cite{holger} based on a detailed analysis of the $\lg N_e -\lg N_{\mu}-$ frequency spectrum of KASCADE events.\\ Investigating the shape of the lateral distributions in detail, the absolute values of the measured shape parameter were, however, found to disagree with the predictions of either of the two hadronic interaction models. While both models yield the same dependence of the average shower shape on $\lg N_e $ and $\lg N_{\mu}$, the absolute values appear smaller than the measured shape values over the whole considered range in $\lg N_e$ and $\lg N_{\mu}$.\\ Looking at the mean shape parameter as a function of primary energy, SIBYLL yields smaller values compared to QGSJet. The reason for this difference between the two models is a kind of ``lighter'' distribution of events in the $\lg N_e -\lg N_{\mu}-$ plane in the case of SIBYLL: showers of the same primary type and energy contain on average fewer muons but more electrons and therefore make up a smaller average value of the shape parameter compared to QGSJet. However, the QGSJet predictions also underestimate the results from the measurements, by an almost constant amount of $\Delta s \sim 0.05$.\\ Summarizing, both models are not able to describe the measured lateral distribution of the $e/\gamma$-component correctly. The details of the form of the lateral distribution depend on the hadronic interaction mechanism as well as on the electromagnetic cascading processes. Thus, a variant of the QGSJet model that predicts a larger $e/\mu$-ratio would give better consistency with the data. However, the discrepancies might also be buried in the electromagnetic cascading algorithm EGS4 and its treatment of the multiple Coulomb scattering process. Further improvements in the simulation models may be necessary to understand and remove the discrepancies between data and simulations. Meanwhile, these results may hopefully help to stimulate this process and provide some additional hints. \begin{ack} The authors would like to thank the members of the engineering and technical staff of the KASCADE collaboration, who contributed to the success of the experiment with enthusiasm and commitment. The KASCADE experiment is supported by the Ministry of Research of the Federal Government of Germany. The Polish group acknowledges support by KBN research grant 1 P03B 03926 for the years 2004-2006. \end{ack}
\section{\protect\\ The Issue} Lee and Lee (2004) consider a global monopole as a candidate solution to the galactic dark matter riddle and solve the Einstein Equations in the weak field and large $r$ approximations, for the case of Scalar Tensor Gravity (where $G=G_*(1+\alpha_0^2)$, with $G_* $ the bare Gravitational Constant). The potential of the triplet of the scalar field is written as $V_M(\Phi^2)=\lambda/4 (\Phi^2-\eta^2)^2$, and the line element of the spherically symmetric static spacetime reads $ds^2= -N(r)dt^2 + A(r) dr^2 + B(r) r^2 d\Omega^2$, where the functions $N(r)$, $A(r)$, $B(r)$ are given in their eq. 19. From the above, Lee and Lee (2004) write the geodesic equations, whose solution, for circular motions, reads: $$ V^2(r) \simeq 8\pi G \eta^2 \alpha_0^2 +GM_\star(r)/r \eqno(1) $$ \begin{figure*} \begin{center} \includegraphics[ width=67mm,angle =-90]{1.ps} \end{center} \vskip -0.7truecm \caption{Logarithmic gradient of the circular velocity $\nabla$ $vs.$ B absolute magnitude and $vs.$ $log \ V(R_{opt})$. Lee and Lee (2004) predictions are $\nabla(V_{opt})=0$ and $\nabla(M_B)=0$.} \end{figure*} where $M_\star (r)$ is the ordinary stellar mass distribution. In the above equation, they interpret the first (constant) term, which emerges in addition to the standard Keplerian term, as the square of the alleged constant (flat) value $V(\infty) $ that the circular velocities are thought to asymptotically reach in the external regions of galaxies, where the (baryonic) matter contribution $GM_\star /r$ has decreased from its peak value by a factor of several. Furthermore, they compare the quantity $ 8\pi G \eta^2 \alpha_0^2$ with the (squared) spiral circular velocities at outer radii and estimate: $\eta \sim 10^{17} GeV$. The crucial feature of their theory (at the current stage) is that the "DM phenomenon" always emerges at outer radii $r$ of a galaxy as a constant threshold value below which the circular velocity $V(r)$ cannot decrease, regardless of the distance between $r$ and the location of the bulk of the stellar component. The theory implies (or, at its present stage, seems to imply) the existence of an observational scenario in which the rotation curves of spirals are asymptotically flat, and the new extra-Newtonian (constant) quantity appearing in the modified equation of motion can be derived from the rotation curves themselves. As a result, the flatness of a RC becomes a main imprint of the Nature of the "dark matter constituent". The aim of this Comment is to show that the above "Paradigm of Flat Rotation Curves" of spiral galaxies (FRC) has no observational support, and to present its inconsistency by means of factual evidence. Let us notice that we could have listed a number of objects with a serious gravitating {\it vs.} luminous mass discrepancy having steep (and not flat) RC's, and that only a minority of the observed rotation curves can be considered as flat in the outer parts of spirals. However, we think that it is worth discussing in detail the phenomenology of the spirals' RC's, in that we believe that it is the benchmark of any (traditional or innovative) work on "galactic dark matter", including that of Lee and Lee (2004). The "Phenomenon of Dark Matter" was discovered in the late 70's (Bosma 1981, Rubin et al. 1980) as the lack of the Keplerian fall-off in the circular velocity of spiral galaxies, expected beyond their stellar edges $R_{opt}$ (taken as 3.2 stellar disk exponential scale-lengths $R_D$).
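Before turning to the data, we note for orientation that the quoted value of $\eta$ follows from eq. 1 by simple arithmetic (our own back-of-the-envelope version; the value of $\alpha_0$ below is an illustrative assumption, not a number quoted by Lee and Lee). Setting the constant term of eq. 1 equal to $V^2(\infty)$ and using $G=M_{Pl}^{-2}$ in natural units, $$ \eta \,\alpha_0 = {V(\infty)\over c}\,{M_{Pl}\over \sqrt{8\pi}} \simeq 6.7\times 10^{-4}\ \times \ {1.22\times 10^{19}\ GeV \over 5.0} \simeq 1.6\times 10^{15}\ GeV $$ for $V(\infty)\simeq 200\ km/s$; a coupling of order $\alpha_0\sim 10^{-2}$ then gives $\eta\sim 10^{17}\ GeV$.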
In the early years of the discovery, two facts led to the concept of Flat Rotation Curves: 1) A large part of the evidence for DM was provided by extended, low-resolution HI RC's of very luminous spirals (e.g. Bosma 1981), whose velocity profiles did show small radial variations. 2) Highlighting the few truly flat rotation curves was considered a way to rule out the claim that non-Keplerian velocity profiles originate from a faint baryonic component distributed at large radii. It was soon realized that HI RC's of high resolution and/or of galaxies of low luminosity do vary with radius, that baryonic (dark) matter is not a plausible candidate for the cosmological DM, and, finally, that the prevailing Cosmological Scenario (Cold Dark Matter) predicts galaxy halos with rising as well as with declining rotation curves (Navarro, Frenk and White, 1996). The FRC paradigm was dismissed by researchers in galaxy kinematics in the early 90's (Persic et al. 1988, Ashman 1992), and later by cosmologists (e.g. Weinberg 1997). Today, the structure of the DM halos and their rotation speeds are thought to have a central role in Cosmology and a strong link to Elementary Particles via the nature of their constituents (e.g. Olive 2005), and a careful interpretation of the spirals' RC's is considered crucial. \section{\protect\\ The Observational Scenario } Let us stress that a FRC is not a proof of the existence of dark matter in a galaxy. In fact, the circular velocity due to a Freeman stellar disk has a flattish profile between 2 and 3 disk scale-lengths. Instead, the evidence in spirals of a serious mass discrepancy, which we interpret as the effect of a dark halo enveloping the stellar disk, originates from the fact that, in their optical regions, the RC's are often steeply rising. Let us quantify the above statement by plotting the average value of the RC logarithmic slope, $\nabla \equiv (d\log V / d\log R)$, between two and three disk scale-lengths, as a function of the rotation speed $V_{opt}$ at the disk edge $R_{opt}$. We recall that, at 3 $R_D$, in the case of a no-DM self-gravitating Freeman disk, $\nabla =-0.27$ in any object, while in the Lee and Lee proposal $\nabla \sim 0$ (see eq. 1). We consider the sample of 130 individual and 1000 coadded RC's of normal spirals presented in Persic, Salucci \& Stel (1996) (PSS). We find (see Fig. 1b): $$ \nabla = 0.10-1.35 \ \log {V_{opt}\over {200~ km/s}} \eqno(2a) $$ (r.m.s. = 0.1), where $80\ km/s \leq V_{opt}\leq 300 \ km/s$. A similarly tight relation links $\nabla$ with the galaxy absolute magnitude (see Fig. 1a). For dwarfs, with $40 \ km/s \leq V_{opt}\leq 100 \ km/s $, we take the results of Swaters (1999): $$ \nabla = 0.25-1.4 \ \log {V_{opt}\over {100\ km/s}} \eqno(2b) $$ (r.m.s. = 0.2), in good agreement with the extrapolation of eq. 2a. The {\it large range} in $\nabla$ and the high values of these quantities, implied by eq. 2 and evident in Fig. 1, are confirmed by other studies of independent samples (e.g. Courteau 1997, see Fig. 14, and Vogt et al. 2004, see figures therein). Therefore, in disk systems, in the region where the stars reside, the RC slope takes values in the range: $$ -0.2 \leq \nabla \leq 1 $$ i.e. it covers most of the range that a circular velocity slope can take ($-0.5$, Keplerian, to $1$, solid body). Let us notice that the difference between the observed RC slopes and the no-DM case is almost as severe as the difference between the former and the alleged value of zero.
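As a simple numerical check of eqs. 2a-2b (our own sketch, not part of the original analysis), one can tabulate the predicted slopes across the observed velocity range:
\begin{verbatim}
import numpy as np

def slope_pss(vopt):      # eq. 2a, normal spirals (80-300 km/s)
    return 0.10 - 1.35 * np.log10(vopt / 200.0)

def slope_swaters(vopt):  # eq. 2b, dwarfs (40-100 km/s)
    return 0.25 - 1.4 * np.log10(vopt / 100.0)

for v in (80, 140, 200, 300):
    print(v, round(slope_pss(v), 2))      # 0.64, 0.31, 0.10, -0.14
for v in (40, 70, 100):
    print(v, round(slope_swaters(v), 2))  # 0.81, 0.47, 0.25
\end{verbatim}
The slopes run from $\sim 0.8$ in dwarfs down to $\sim -0.14$ in the most luminous spirals, nowhere clustering around the single value $\nabla=0$ required by a flat RC.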
It is apparent that only a very minor fraction of RC's can be considered flat. A rough estimate of this fraction can be derived in a simple way. At luminosities $L<L_*$ ($L_*=10^{10.4}\ L_{B\odot}$ is the knee of the Luminosity Function in the B-band), the spiral Luminosity Function can be assumed to be a power law: $\phi(L) dL \propto L^{-1.35} dL$; then, by means of the Tully-Fisher relationship $L/L_* \simeq (V_{opt}/(200~ km/s))^3$ (Giovanelli et al., 1997) combined with eq. 2a, one gets $ n(\nabla) d\nabla \propto 10^{0.74 \nabla} d\nabla $, finding that the objects with a solid-body RC ($0.7 \leq \nabla \leq 1$) are one order of magnitude more numerous than those with a "flat" RC ($-0.1 \leq \nabla \leq 0.1$). In short, there is plenty of evidence of galaxies whose inner regions show a very steep RC, which, in the Newtonian + Dark Matter Halos framework, implies that they are dominated by a dark component with a density profile much shallower than the "canonical" $r^{-2}$ one. \begin{figure} \vskip 1cm \begin{center} \includegraphics[width=49mm]{2.ps} \end{center} \vskip -0.3truecm \caption{ The Universal Rotation Curve } \end{figure} At outer radii (between 6 and 10 disk scale-lengths) the observational data are obviously more scanty; however, we observe a varied zoo of rising, flat, and declining RC profiles (Gentile et al. 2004; Donato et al. 2004). \section{\protect\\ Discussion } The evidence from about 2000 RC's of normal and dwarf spirals unambiguously shows the existence of a systematics in the rotation curve profiles that is inconsistent with the Flat Rotation Curve paradigm. The non-stellar term in eq. 1 must have a radial dependence in each galaxy and vary among galaxies. To show this, let us summarize the RC systematics. In general, the rotation curve of a spiral, out to 6 disk scale-lengths, is well described by the following function: $$ V(x)=V_{opt} \biggl[ \beta \, {1.97x^{1.22}\over{(x^2+0.78^2)^{1.43}}} + (1-\beta)(1+a^2)\,\frac{x^2}{x^2+a^2} \biggr]^{1/2} $$ where $x \equiv R/R_{opt}$ is the normalized radius, $V_{opt}=V(R_{opt})$, and $\beta=V_d^2/V_{opt}^2$ and $a=R_{core}/R_{opt}$ are free parameters; $V_d$ is the contribution of the stellar disk at $R_{opt}$ and $R_{core}$ is the core radius of the dark matter distribution. Using a sample of $\sim$ 1000 galaxies, PSS found that, out to the farthest radii with available data, i.e. out to $6\ R_D$, the luminosity specifies the above free parameters, i.e. the main average properties of the axisymmetric rotation field of spirals and, therefore, of the related mass distribution. In detail, the above expression becomes the {\it Universal Rotation Curve} (URC, see Fig. 2 and PSS for important details). Thus, for a galaxy of luminosity $L/L_*$ (B-band) and normalized radius $x$ we have (see also Rhee, 1996): $$ V_{URC}(x) =V_{opt} \biggl[ \biggl(0.72+0.44\,{\rm log} {L \over L_*}\biggr) {1.97x^{1.22}\over{(x^2+0.78^2)^{1.43}}} + \biggl(0.28 - 0.44\, {\rm log} {L \over L_*} \biggr) \biggl[1+2.25\,\biggl({L\over L_*}\biggr)^{0.4}\biggr] { x^2 \over x^2+2.25\,({L\over L_*})^{0.4} } \biggr]^{1/2} $$ The above can be written as $V^2(x)=G\, (k M_\star/x+ M_h(1) F(x,L))$, where $M_h(1)$ is the halo mass inside $R_{opt}$ and $k$ is of the order of unity. Then, differently from the Lee and Lee (2004) claim and the FRC paradigm, the "dark" contribution $F(x,L)$ to the RC varies with radius, namely as $x^2/(x^2+a^2)$, with $a=const$ in each object.
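The radial and luminosity dependence of the URC can be checked directly. The short sketch below (ours, using only the coefficients quoted above) evaluates $V_{URC}(x)/V_{opt}$ and makes the systematics explicit:
\begin{verbatim}
import numpy as np

def v_urc(x, lratio):
    """V_URC / V_opt at normalized radius x, for L/L* = lratio."""
    lg   = np.log10(lratio)
    disk = (0.72 + 0.44*lg) * 1.97 * x**1.22 / (x**2 + 0.78**2)**1.43
    a2   = 2.25 * lratio**0.4
    halo = (0.28 - 0.44*lg) * (1.0 + a2) * x**2 / (x**2 + a2)
    return np.sqrt(disk + halo)

x = np.array([0.5, 1.0, 2.0])
for lr in (0.1, 0.3, 1.0):
    print(lr, np.round(v_urc(x, lr), 2))
# L/L* = 0.1 : [0.77 1.   1.12]  -> rising curve
# L/L* = 0.3 : [0.84 1.   1.08]  -> still rising
# L/L* = 1.0 : [0.92 1.   0.98]  -> nearly flat, then declining
\end{verbatim}
Low-luminosity objects thus have rising RC's, while luminous ones have flattish or gently declining RC's, so the "dark" term cannot be reduced to a single universal constant.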
Finally, the extrapolated "asymptotic amplitude" $V(\infty)$ also varies, according to the galaxy luminosity, between $50\ km/s$ and $250\ km/s$ (see also PSS), in disagreement with the constant value $(8\pi G \eta^2 \alpha_0^2)^{1/2} \sim 300\ km/s$ predicted by Lee and Lee (2004). Let us conclude with an {\it important} point: this paper is not intended to discourage testing whether a theory alternative to the DM paradigm can account for an outer flat rotation curve, but to make sure that this is only the (simplest) first step of a project meant to account for the actual, complex phenomenology of the rotation curves of spirals and for the implied physical relevance of the mass discrepancy (e.g. Gentile et al. 2004).
\section{Introduction\label{sec:intro}} We have seen remarkable progress in the understanding of pure dark matter structures over the last few years. This progress was triggered by numerical simulations which have revealed general trends in the behaviour of the radial density profile of equilibrated dark matter structures formed in cosmological simulations, which roughly follow an NFW profile~\cite{nfw96,moore,Fukushige1997,Moore1998,Moore1999pro,Ghigna1998}, see also \cite{Fukushige2004,Tasitsiomi2004,Navarro2004,Reed2004,diemand04,power,Diemand:2005wv} for references. General trends in the radial dependence of the velocity anisotropy have also been suggested~\cite{cole,carlberg}. Recently, more complex relations have been identified, holding even for systems that do not follow the simplest radial power-law behaviour in density. These relations are, first, that the phase-space density, $\rho/\sigma^3$, is a power-law in radius~\cite{taylor}, and, second, that there is a linear relationship between the density slope and the anisotropy~\cite{hm}. A connection between the shape of the velocity distribution function and the density slope has also been suggested~\cite{students,HMZS}. Using the Jeans equation together with the fact that the phase-space density is a power-law in radius allows one to find the density slope in the central region numerically~\cite{taylor}, and even analytically for power-law densities~\cite{jeanspaper}. These results were generalized in \cite{austin}. Recently, a more refined study \cite{dm}, using both the phase-space density being a power-law in radius and the linear relationship between density slope and anisotropy, showed that one can solve the Jeans equations analytically and extract the radial dependence of the density, anisotropy, mass, etc. It therefore appears that we only have to understand the two numerical relationships described above in order to fully quantify pure dark matter halo structures. However, no theoretical explanation for the origin of these relations is known to us. The relationship between phase-space density and radius has been considered several times~\cite{taylor,jeanspaper,austin,dm,ascasibar,rasia,barnes} and seems to be well established; however, the other crucial ingredient in the analysis, namely the linear relationship between density slope and anisotropy, has only been investigated qualitatively~\cite{hm}. In this paper we perform a large set of simulations in order to quantify this relationship. We show that with present-day simulations there does indeed appear to be a linear relation between density slope and anisotropy. When combined with the assumption of the phase-space density being a power-law in radius, this implies zero anisotropy near the centre with a density slope of approximately $-0.8$, and an outer anisotropy that is radial and close to $+0.5$. \section{Head-on collisions} \label{sec:head} The first controlled simulation is the head-on collision between two initially isotropic NFW structures. For the construction of the initial structures we use the Eddington inversion method, as described in~\cite{stelios}. We create an initial NFW structure with zero anisotropy containing 1 million particles. The parameters are chosen to correspond to a total mass of $10^{12}$ solar masses (concentration of 10), with half of the particles within 160 kpc. The structure is in equilibrium in the sense that when evolving such a structure in isolation its global properties remain unchanged.
We now place two such structures very far apart, with 2000 kpc between the centres, which is well beyond the virial radius. Using a softening of 0.2 kpc, we let these two structures collide head-on with an initial relative velocity of 100 km/sec towards each other. After several crossings the resulting blob relaxes into a prolate structure. We run all simulations until there is no further time variation in the radial dependence of the anisotropy and density. We check that there is no (local) rotation. We run this simulation for 150 Gyr (corresponding roughly to 10 Hubble times), which means that a very large part of the resulting structure is fully equilibrated. All simulations were carried out using PKDGRAV, a multi-stepping, parallel code~\cite{stadel01}, which uses spline kernel softening and multi-stepping based on the local acceleration of particles. Force accuracy is set by an opening angle of $\Theta = 0.7$, which, combined with the use of a 4-th order multipole expansion, results in typical RMS force errors of better than 0.1\%. The simulations were performed on the zBox and zBox2 at the University of Zurich \footnote{\tt http://www-theorie.physik.unizh.ch/\~{}stadel/zBox}. We now extract all the relevant parameters in radial bins, logarithmically distributed from the softening length to beyond the region which is fully equilibrated. The resulting density profile is very similar to an NFW profile. We calculate the radial derivative of the density (the density slope) \begin{equation} \alpha \equiv \frac{d {\rm ln} \rho}{d {\rm ln} r} \, , \end{equation} and the velocity anisotropy \begin{equation} \beta \equiv 1 - \frac{\sigma^2_t}{\sigma^2_r} \, , \end{equation} in each radial bin, where $\sigma^2_{\rm t,r}$ are the one-dimensional velocity dispersions in the tangential and radial directions, respectively. This is shown in figure~\ref{fig:headon} as green filled pentagons. We also present the initial conditions as a red (straight) line. It is clear from this figure that some connection between $\alpha$ and $\beta$ exists, in the sense that a density slope of about $-1$ comes with small anisotropy, while a density slope of about $-3$ comes with large radial anisotropy. In these figures we are including points which are inside the resolved region and also points which are outside the fully equilibrated region. We will discuss this issue fully in section~\ref{sec:trust}. \begin{figure}[htb] \begin{center} \epsfxsize=13cm \epsfysize=10cm \includegraphics[width=0.8\textwidth]{nfwcoll2.ps} \caption{Head-on collision between two initially isotropic NFW structures. The red (straight) line shows the initial condition for each of the colliding structures. The green filled pentagons show the result after the first collision, where the structure contains 2 million particles. The blue open triangles show the result after a second collision (4 million particles). The green and blue symbols land very near each other in the region inside of a density slope of about $-2.2$.} \label{fig:headon} \end{center} \end{figure}
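For concreteness, the binned extraction of $\alpha$ and $\beta$ can be sketched as follows (our own illustration, not the actual analysis pipeline; \texttt{pos} and \texttt{vel} are hypothetical $(N,3)$ particle arrays centred on the structure, with all radii assumed positive):
\begin{verbatim}
import numpy as np

def alpha_beta_profiles(pos, vel, nbins=30):
    r = np.linalg.norm(pos, axis=1)
    edges = np.logspace(np.log10(r.min()), np.log10(r.max()), nbins + 1)
    rhat = pos / r[:, None]
    vr   = np.sum(vel * rhat, axis=1)        # radial velocity
    vt2  = np.sum(vel**2, axis=1) - vr**2    # tangential speed squared
    counts, _ = np.histogram(r, edges)
    rho = counts / (4.0/3.0 * np.pi * np.diff(edges**3))
    out = []
    for i in range(1, nbins - 1):
        sel = (r >= edges[i]) & (r < edges[i+1])
        if counts[i] < 100:
            continue
        sig_r2 = np.var(vr[sel])
        sig_t2 = 0.5 * np.mean(vt2[sel])     # 1D tangential dispersion^2
        beta   = 1.0 - sig_t2 / sig_r2
        alpha  = (np.log(rho[i+1]) - np.log(rho[i-1])) \
                 / (np.log(edges[i+2]) - np.log(edges[i]))
        out.append((np.sqrt(edges[i]*edges[i+1]), alpha, beta))
    return np.array(out)
\end{verbatim}
Note the assumption of no net rotation (checked in the text), so that the mean tangential streaming can be taken as zero.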
The first important question to address is whether the ${\alpha-\beta}$ relation found after this collision represents an optimal configuration of the dark matter system, or whether subsequent collisions might lead to a different configuration. We therefore take the resulting structure (now containing 2 million particles, in a prolate shape) and collide 2 copies of it together, again separated by 2000 kpc and with an initial relative velocity of 100 km/sec. We now use a softening of 1 kpc to speed up the simulation. The resulting structure is even more prolate when observed in density contours, and we evolve it for another 150 Gyr. It should be noted that already after a few crossing times the structures sit near the $\alpha-\beta$ relation; the fact that we run the simulations for much longer times has only a small effect in the outer region, where the particle density is much smaller. We again extract the quantities in radial bins, and the ${\alpha-\beta}$ relation is shown as blue open triangles in figure~\ref{fig:headon}. We see very clearly that for density slopes more shallow than about $-2.2$ there is virtually no change in the ${\alpha-\beta}$ relation. We conclude that in the central region, where a significant perturbation occurred during the collision, the ${\alpha-\beta}$ relation is unchanged, and hence the structure has indeed reached an optimal state. In the outer region there has been a small change in the connection between $\alpha$ and $\beta$, in the sense that the first collision brought the outer region away from the initial conditions and the second collision brought the system even further away. \subsection{Dependence on shape?} The resulting structures described above are prolate when observed in density contours. One may fear that the resulting ${\alpha-\beta}$ relation will depend strongly on the axis ratios of these structures. Naturally, if the relevant axis ratios are instead the ones extracted when considering contours in potential energy, then we should expect only an effect in the very central region, since the potential energy contours are close to spherical in the outer region. \begin{figure}[htb] \begin{center} \epsfxsize=13cm \epsfysize=10cm \includegraphics[width=0.8\textwidth]{orientation.ps} \caption{Repeated head-on collision between two initially isotropic NFW structures, to test the effect of shape. The different colours refer to the axis along which the second collision was made. In the inner region there is virtually no difference in the resulting ${\alpha-\beta}$ relations.} \label{fig:shape} \end{center} \end{figure} In order to test this question, we extracted the resulting prolate structure after the first collision described above. We can then collide this structure along different axes: the long, intermediate or short axis. We perform 2 test collisions, one being a second collision along the intermediate axis, and the other a second collision along the short axis (there is very little difference between the short and intermediate axes after the first collision). These structures end up being triaxial, almost oblate. The softening of these last collisions was chosen to be 2 kpc in order to speed up the simulation. The collision along the long axis was described above, resulting in a very prolate structure. The results are shown in figure~\ref{fig:shape}. It is clear from figure~\ref{fig:shape} that in the inner region (inside a density slope of about $-2.2$) there is very little difference between the 3 different structures. These 3 resulting structures have rather different shapes when seen in density contours, but are very similar when seen in potential energy contours. We conclude from this that whereas the definition of the density slope does depend slightly on the shape of the density contours, the ${\alpha-\beta}$ relation is almost independent of the shape~\cite{parisconf}.
We note that the structures should be more perturbed when collided along the long axis, and we do indeed observe that the changes in the ${\alpha-\beta}$ relation in the outer region (beyond a density slope of $-2.2$) are larger when the structures are collided along the long axis as compared to collisions along the short axis. \subsection{Dependence on initial conditions?} To address the question of how strongly the ${\alpha-\beta}$ relation from the head-on collisions described above depends on the initial conditions, we now consider 2 simulations, each with two steps. 1a) First we create a spherical isotropic NFW structure with 1 million particles as described above. We take each individual particle in this structure and put its total velocity along the radial direction, maintaining the sign with respect to motion inwards or outwards from the halo centre. We keep the energy of each particle fixed. This structure is thus strongly radially anisotropic. 1b) Now we take two such radially anisotropic structures and place their centres 2000 kpc apart (well outside the virial radius) with a relative velocity of 100 km/sec towards each other. We use a softening of 1 kpc for these tests. Before the centres of the two structures collide, the radial motion of the particles behaves almost like a radial infall simulation, and strong scattering of the particles is observed. The 2 individual structures pick random orientations in space as they become triaxial~\cite{merritt85}; their orientations are completely erased after the two structures collide head-on. We let the structure equilibrate. We take two copies of the resulting structure and collide these head-on, again with 100 km/sec. The final equilibrated region contains approximately 2.5 million particles (out of the total of 4 million particles). We follow this with a simulation identical to the one just described, except that we place each individual particle in the initial structures on a tangential orbit, without changing the energy of the individual particles. 2a) This initial condition thus corresponds to strongly tangential motion. When letting this structure equilibrate in isolation, it settles down with a tangential anisotropy of $\beta \sim -2$. 2b) Again, two copies of this equilibrated structure are collided head-on as in step 1b. The results of these tests are shown in figure~\ref{fig:ic}. \begin{figure}[htb] \begin{center} \epsfxsize=13cm \epsfysize=10cm \includegraphics[width=0.8\textwidth]{compareic.ps} \caption{Repeated head-on collision between two NFW structures, to test the effect of different initial conditions. The different colours refer to different initial conditions; green (open) circles show the results from the initially radially anisotropic case (1), cyan (filled) pentagons show the results of the initially tangentially anisotropic case (2), and blue (open) triangles show the isotropic collision described in section~\ref{sec:head}. In the region inside of a density slope of about $-2.4$, there is virtually no difference between the resulting ${\alpha-\beta}$ relations.} \label{fig:ic} \end{center} \end{figure} We see from figure~\ref{fig:ic} that the resulting ${\alpha-\beta}$ relation is virtually independent of the initial conditions, and we conclude that the collisions were sufficiently violent to erase the initial conditions enough to conform with the ${\alpha-\beta}$ relation.
We emphasize that we are not stating that the initial conditions are erased completely; rather, they are erased sufficiently to allow the resulting structure to obey the ${\alpha-\beta}$ relation. \subsection{Symmetric simulations} The simulations described above were all performed through very non-symmetric collisions. We therefore performed two experiments with symmetric collisions. First, we take 6 copies of the isotropic spherical NFW structure, each with 1 million particles, and place them symmetrically along the x, y and z axes. We collide these with initial separations and relative velocities similar to what was described above. For this simulation we use a softening of 2 kpc. The resulting structure has 6 million particles. Second, we created an NFW structure with the same parameters as described above, but containing only $10^4$ particles. We place 15 of these in a cubic symmetry (1 cell-centered, 6 face-centered and 8 at the corners) with an initial separation of about 600 kpc and initial relative velocities of 60 km/sec, and collide them together using a softening of 1 kpc. After letting the resulting structure equilibrate, we take this structure, place 15 copies of it in a cubic symmetry, and collide these again. The resulting structure thus contains 2.25 million particles. The results of these two simulations are shown in figure~\ref{fig:sym}. There is good agreement between the resulting ${\alpha-\beta}$ relations of these two structures in the central region. \begin{figure}[htb] \begin{center} \epsfxsize=13cm \epsfysize=10cm \includegraphics[width=0.8\textwidth]{symmetric.ps} \caption{Symmetric collision between 6 large isotropic NFW structures (green open triangles); repeated collisions of numerous small initially isotropic structures (red filled pentagons); tangential instability (black open circles). These significantly different simulations exhibit remarkably similar ${\alpha-\beta}$ relations.} \label{fig:sym} \end{center} \end{figure} \subsection{Tangential instability} One may fear that the resulting structures from the various simulations shown thus far end up near the same line in ${\alpha-\beta}$ space simply because they sample only relatively similar kinds of merging processes. The concern is that all the global properties of the final structures are the result of the fact that we slam structures into one another. Also, the radial infall simulations included in ref.~\cite{hm} can be argued to be nothing but small structures falling into some larger structure. In order to probe this issue further, we simulate the evolution of a rather artificial structure. We populate a small number of particles, $4 \cdot 10^4$, uniformly (but with randomly chosen positions) on an infinitely thin spherical shell of radius $0.01$ in simulation units. These particles all have zero initial velocity. Then we place 1 million particles on another infinitely thin spherical shell at radius $20$ in simulation units, again uniformly distributed, but with randomly chosen positions. These particles are all placed on exactly circular orbits, but with random directions (tangential to this sphere). All particles have exactly the same mass. Now, if this system had infinitely many particles uniformly distributed, then the system would be in equilibrium; however, the Poissonian noise of the particles in the initial condition, and the numerical noise from the integration of the equations of motion, imply that the particles will slowly start clumping together at random points on the sphere.
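The outer-shell initial conditions can be sketched in a few lines (our own illustration of the setup; the circular speed is set in practice by the enclosed mass, and is left here as an illustrative constant):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    u = rng.normal(size=(n, 3))
    return u / np.linalg.norm(u, axis=1)[:, None]

pos_in = 0.01 * random_unit_vectors(40_000)    # inner shell, at rest
vel_in = np.zeros_like(pos_in)

rhat    = random_unit_vectors(1_000_000)
pos_out = 20.0 * rhat                          # outer shell
v_circ  = 1.0                                  # illustrative circular speed
t = rng.normal(size=rhat.shape)
t -= np.sum(t * rhat, axis=1)[:, None] * rhat  # project out radial component
vel_out = v_circ * t / np.linalg.norm(t, axis=1)[:, None]
\end{verbatim}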
The central collection of particles ($4 \cdot 10^4$), which effectively acts as a central massive point mass, quickly settles into a small equilibrated structure, leaving the gravitational influence on the outer particles virtually unaffected. The particles on the outer shell, however, slowly break the initial spherical symmetry. Particles passing near each other start scattering off each other, kicking some particles outwards and some particles inwards. Eventually the entire system breaks up, and the small initial central collection of particles is swallowed by the large collection of particles from the outer shell. We run this simulation with a softening of 0.1 in these simulation units, and we let the system evolve towards a new equilibrium state. This state does, surprisingly enough, resemble an NFW structure in density, and we extract the ${\alpha-\beta}$ relation for the resulting structure. The results are plotted as black (open) circles in figure~\ref{fig:sym}, and the agreement with the other simulations is striking. We conclude that the appearance of the ${\alpha-\beta}$ relation must have a deeper foundation than merely being the result of considering systems which were constructed to smash into each other. \subsection{Comparison with previous works} In order to compare the results of the various simulations presented above, we now plot the 3 double head-on collisions with different initial conditions, the two symmetric collisions (between 6 and 225 initially isotropic NFW structures), and the tangential instability in figure~\ref{fig:together}. On the same figure we plot the line which was argued to be a reasonable fit in ref.~\cite{hm}. That line was a fit to simulations including mergers of disk galaxies, cosmological structures, and radial infall simulations, all of them very different from the ones considered in this paper. We see that our simulations are indeed in fair agreement with the results of ref.~\cite{hm}. \begin{figure}[htb] \begin{center} \epsfxsize=13cm \epsfysize=10cm \includegraphics[width=0.8\textwidth]{stat.ps} \caption{A collection of the various simulations described in the previous sections. The straight (green) line is from ref.~\cite{hm}, which was shown to provide a reasonable fit to a collection of different simulations, including mergers of disk galaxies, cosmological structures, and radial infall simulations.} \label{fig:together} \end{center} \end{figure} \section{Region of trust} \label{sec:trust} In all the figures above we have plotted a very extended region in density slope, which corresponds to plotting points very near the central region (small negative $\alpha$) as well as very distant points (large negative $\alpha$). In the central region one can generally only trust the region beyond a few softening lengths. In almost all of our simulations this corresponds to the region where the density slope is about $-1$. Further inwards, numerical noise leads to density profiles more shallow than $-1$, which cannot be trusted. However, most of our simulations have been run for very long times (much longer than a corresponding typical cosmological simulation), and the numerical noise therefore has a slightly larger reach. We are being rather conservative and ignore the regions inside 5 times the softening. Concerning the outer region, we have visually inspected all the simulations, and it is clear that regions with slopes more shallow than $-3$ are fully equilibrated.
Looking at figure~\ref{fig:together}, one also sees that the scatter in ${\alpha-\beta}$ becomes large beyond $\alpha$ of $-3$. We therefore trust the region with slopes more shallow than $\alpha=-3$. This region (excluding points inside 5 times the softening length) is shown for each simulation in figure~\ref{fig:together2}, together with the line given by \begin{equation} \beta = - 0.2 \, ( \alpha + 0.8 ) \label{eq:relation} \end{equation} which provides a reasonable fit within this region. We see that the scatter in $\beta$ is only about $0.05$. \begin{figure}[htb] \begin{center} \epsfxsize=13cm \epsfysize=10cm \includegraphics[width=0.8\textwidth]{together2.ps} \caption{Trusted region. We have excluded points inside a central region corresponding to 5 times the softening. We do not trust points further out than $\alpha = -3$. The red dashed line is a fit to the points in this region, given by $\beta \approx -0.2 \, (\alpha+0.8)$. The thin blue (solid) line shows the theoretical results of ref.~\cite{dm} for the connection between the central density profile and the central anisotropy: from theory alone these {\em central} values can be anywhere on this solid line. The crossing of the two straight lines thus shows the actual values for the central density slope and anisotropy.} \label{fig:together2} \end{center} \end{figure} \section{Comparison with theory} In a recent paper, Dehnen \& McLaughlin (2005) showed that, under the two assumptions that the phase-space density is a power-law in radius and that there is a linear ${\alpha-\beta}$ relation, the central slope of dark matter structures must be $\alpha_0 = -(7 + 10\beta_0)/9$, where $\beta_0$ is the central anisotropy. In other words, the central values of the density slope and the anisotropy must lie somewhere on the thin (blue, solid) line in figure~\ref{fig:together2}; however, theory alone cannot tell where. We can now compare this theoretical result with our numerical findings. Our numerical results are approximated by the fat (red, dashed) line. We see that the two lines cross near $\beta=0$ and $\alpha=-7/9$, showing that the central part of dark matter structures is isotropic, $\beta_0 \approx 0$, and that the central density slope is indeed $\alpha_0 \approx -7/9$. Theory~\cite{austin} also shows that the outer density slope is about $\gamma_\infty = -31/9 \approx -3.44$, which, when compared with our findings, results in $\beta_\infty \approx 0.53$. Thus, from theoretical works we know the innermost and outermost density slopes, and, combined with the results of this paper, we now also know both the innermost and outermost anisotropy of pure dark matter structures. Ref.~\cite{dm} noticed that when assuming the phase-space density to be a power-law in radius, the Jeans equations have a particularly simple form if and only if there is a {\em linear} ${\alpha-\beta}$ relation, as originally suggested in ref.~\cite{hm}. Our numerical results indeed confirm such linearity, and hence support the use of this particularly simple version of the Jeans equation for theoretical studies.
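The crossing point quoted above follows from elementary algebra; a short numerical check (ours) of both the central and the outer values:
\begin{verbatim}
import numpy as np

# Intersection of the numerical fit beta = -0.2*(alpha + 0.8) with the
# theoretical central condition alpha0 = -(7 + 10*beta0)/9:
A = np.array([[0.2, 1.0],     #  0.2*alpha + beta = -0.16
              [9.0, 10.0]])   #  9*alpha + 10*beta = -7
b = np.array([-0.16, -7.0])
alpha0, beta0 = np.linalg.solve(A, b)
print(alpha0, beta0)          # -0.771..., -0.006...: alpha0 ~ -7/9, beta0 ~ 0

# Outer values: insert gamma_inf = -31/9 into the fit
print(-0.2 * (-31.0/9.0 + 0.8))  # 0.529: beta_inf ~ 0.53
\end{verbatim}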
\subsection{Comparison with cosmological simulations} We clarified in section~\ref{sec:head} that the ${\alpha-\beta}$ relation only holds for systems which have been perturbed sufficiently and subsequently allowed to relax. One can imagine setting up two equilibrium systems, each with zero anisotropy, and colliding them with a large impact parameter, in which case the system is not perturbed sufficiently, and the resulting system should not be expected to land near the ${\alpha-\beta}$ relation. As was shown in section~\ref{sec:head}, even with zero impact parameter, only the densest part of the system (inside a density slope of about $-2.2$) will reach the ${\alpha-\beta}$ relation after only one collision. One should keep in mind that setting up a system with zero anisotropy is very artificial and not in agreement with structures formed in cosmological simulations. Similarly, only the central part of structures formed in cosmological simulations should be expected to land on the ${\alpha-\beta}$ relation. This is because most of the infalling matter has never been near the centre of the structure and hence has not been perturbed: most particles outside half of the virial radius have not been through the centre and have therefore not been mixed sufficiently. This expectation is indeed confirmed by cosmological simulations (private communications with J. Diemand and C. Power; see also the large scatter in figure 3 of ref.~\cite{dm}), which show that the central regions of dark matter structures do land on the ${\alpha-\beta}$ relation, whereas the outermost regions do not quite. The ${\alpha-\beta}$ relation has also been shown to hold for a high-$\sigma$ subset of galaxy haloes~\cite{diemandmadau}. \section{Attractor solution?} It appears from the agreement between the different simulations presented above that the $\alpha-\beta$ relation is some kind of {\em attractor}. In order to test such a claim, one should perform minor perturbations of the system and then observe in which direction the system flows. Antonov's laws of stability tell us that many systems are in equilibrium; e.g. an isotropic Hernquist structure will remain isotropic when exposed to minor perturbations. Antonov's laws of stability are valid under the assumption that the r.h.s. of the Boltzmann equation is zero, and we therefore expose our system to perturbations which act like an instantaneous non-zero term on the r.h.s. of the Boltzmann equation. \begin{figure}[htb] \begin{center} \epsfxsize=13cm \epsfysize=10cm \includegraphics[width=0.8\textwidth]{kick.ps} \caption{We perturb slightly the energies of each individual particle in an equilibrium system. Only after letting the system relax do we perturb it again. We show the resulting $\alpha-\beta$'s after 1, 4, 7 and 10 perturbations. These perturbations are isotropic and therefore do not by themselves induce any anisotropy. The system does indeed appear to move towards the universal $\alpha-\beta$ relation, indicating that it is indeed an attractor.} \label{fig:kick} \end{center} \end{figure} We take an isotropic NFW structure which is in total equilibrium. Then we take each individual particle and change its speed by a random amount, keeping its direction unchanged. This is done in such a way that the new energy differs by at most $25\%$ from the original energy, and in such a way that the overall energy is conserved. This perturbation is thus completely isotropic and should not by itself change the isotropy of the system. Subsequently we let the system relax to a new equilibrium state, and we extract the density profile and anisotropy of this new system.
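A minimal sketch of this perturbation step (our own illustration; \texttt{v} is a hypothetical $(N,3)$ velocity array, and, as a simplification, only the kinetic energies are rescaled, which is what a change of speed at fixed position amounts to):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def isotropic_kick(v, max_frac=0.25):
    """Rescale each particle's speed, keeping its direction, such that
    its kinetic energy changes by at most max_frac; then renormalize
    globally so the total kinetic energy is conserved."""
    ke0 = 0.5 * np.sum(v**2, axis=1)
    f = 1.0 + max_frac * rng.uniform(-1.0, 1.0, size=len(v))
    v_new = v * np.sqrt(f)[:, None]
    scale = np.sqrt(ke0.sum() / (0.5 * np.sum(v_new**2)))
    return v_new * scale
\end{verbatim}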
We then take the new equilibrated system and perturb it in a similar way with a new set of random numbers. After letting it relax, we repeat the process. The resulting $\alpha-\beta$'s are presented in figure~\ref{fig:kick}, where we show the relaxed systems after having perturbed 1, 4, 7 and 10 times. We see that the system is slowly, but surely, moving in the direction of the universal $\alpha-\beta$ relation. We therefore conclude that the $\alpha-\beta$ relation really is an attractor. We did not perform further perturbations of the system because, for anisotropic systems (like the one after the 10th perturbation), these isotropic perturbations will on their own tend to isotropize the system, and hence the system may potentially appear not to flow towards the attractor. Most probably one must devise more sophisticated perturbations in order to see the flow all the way to the universal $\alpha-\beta$ line. \section{Conclusions} We have quantified the relationship between the density slope and the velocity anisotropy, the ${\alpha-\beta}$ relation. We have performed a large set of simulations to investigate systematic effects related to shape and initial conditions, and we find that the ${\alpha-\beta}$ relation is almost blind to both shape and initial conditions, as long as the system has been {\em perturbed sufficiently} and subsequently allowed to relax. We find strong indications that the relation is indeed an attractor. We have performed symmetric and highly non-symmetric simulations, along with a simulation of the tangential orbit instability, to extract the zero point and slope of the ${\alpha-\beta}$ relation in the regions which are numerically trustworthy. These simulations complement the simulations performed in ref.~\cite{hm}, which included a cosmological simulation and radial infall collapse, and yet they exhibit a striking level of agreement with this work. When compared with analytical results, we find that the central region is indeed isotropic and that the outer asymptotic anisotropy is radial, with a magnitude of $\beta \approx 0.5$. \ack It is a pleasure to thank Ben Moore for discussions and encouragement. SHH thanks the Tomalla foundation for support. \label{lastpage} \section*{References}
\section{Introduction} Giant extragalactic {\ion{H}{2}} regions (GEHR) are important sites of star formation. They are small-scale examples of extreme sites of star formation such as local and distant starburst galaxies. Like starburst regions, they contain several distinct star clusters \citep[e.g.][]{meu95,hun96} that can interact with each other to potentially enhance or slow down the star formation processes \citep[see review by][and references therein]{tan05}. They produce the most massive stellar types known (O, B, and Wolf-Rayet), which have the potential to transform the morphological and chemical aspects of galaxies through their feedback \citep[e.g.][among others]{heck90,mar02,tre04,cal04}. Most GEHR are recent and quasi-instantaneous events of star formation \citep{mas91,mas99,sch99,sta99}, as seems to be the case for starbursts, in the sense that most of their massive stars seem to form within less than 2-3\,Myr \citep{pel04}. Evolutionary synthesis is a powerful tool to study stellar populations in various environments \citep[e.g.][]{wor94,lei99,bruz03,rob03}. The main goal of evolutionary synthesis is to deduce the global properties of spatially unresolved stellar populations, such as their average age, mass, and metallicity. The development of evolutionary synthesis codes in the past decade has considerably improved our knowledge of galaxies \citep[e.g.][among many others]{gonz99,lei01,chan03}. With the recent (and coming) generation of large telescopes such as Keck, Gemini, JWST, and ALMA, this technique will be very useful for our understanding of very distant galaxies and of their evolution through cosmic time. Nearby GEHR are excellent candidates to test the accuracy of the evolutionary synthesis technique. GEHR like those found in M\,33 are close enough to resolve individual stars and to compare their detailed stellar content with what is deduced from the synthesis of integrated spectra. In this work, a detailed study of the massive stellar content of several GEHR observed in the far-ultraviolet (900-1200\AA; FUV) is presented. The study is based on the spectral synthesis code {\tt LavalSB} and its recent empirical spectral library in the FUV range \citep{rob03}. The syntheses of GEHR observed in M\,33 and M\,101 will be compared, when possible, with previous works detailing their resolved stellar content. The following section presents a summary of the data processing. Section~\ref{lavalsb} describes the evolutionary synthesis code {\tt LavalSB} used in this work. The synthesis results for each GEHR are detailed in \S\ref{syn} and compared with previous works at various wavelengths. A discussion of specific results is presented in \S5, and the main results are summarized in \S6. \section{FUSE Data and Reduction} \label{data} FUV spectrograms of nine GEHR were obtained with the {\it {Far Ultraviolet Spectroscopic Explorer}} (FUSE) telescope \citep{moos00} for various projects. Most data were obtained through the largest aperture (LWRS; 30$^{\prime\prime}$$\times$30$^{\prime\prime}$), while some spectrograms were obtained using a smaller aperture (MDRS; 4$^{\prime\prime}$$\times$20$^{\prime\prime}$). Aperture locations are displayed in Figure~1. A general description of the FUSE data is reported in Table~1. Data were gathered from the MAST\footnote{Multimission Archive at Space Telescope Science Institute; http://archive.stsci.edu/\,.} public archives. The data were processed with the {\tt calfuse} pipeline v2.4.2.
This version corrects for the Doppler shift induced by the heliocentric motion of the Earth, event bursts, the walk problem, grating thermal shifts, bad pixels, background noise, distortions, and astigmatism. More information on {\tt calfuse} is available electronically\footnote{http://fuse.pha.jhu.edu/analysis/calfuse.html}. The output from {\tt calfuse} comprises eight segment spectrograms for each exposure, corresponding to the eight optical paths of the instrument. Each segment covers a different wavelength range, with some of them overlapping \citep[see fig.~2 of][]{sah00}. First, for each segment, the exposures were combined with statistical weights based on exposure time. Then the segments that cover the same wavelength regions (roughly 900-1000\AA, 1000-1100\AA, and 1100-1200\AA) were averaged with weights based on their signal-to-noise ratios. Finally, the spectrograms of each wavelength range were simply coadded to obtain one spectrogram covering the entire 905-1187\AA\ range. The spectrograms were then smoothed by a factor of 20 using the IRAF\footnote{Image Reduction and Analysis Facility, supported by NOAO and operated by AURA under cooperative agreement with the NSF; http://iraf.noao.edu/\,.} {\it {boxcar}} task, corresponding to a resolution of about 0.13\AA. This last step increases the signal-to-noise ratio without affecting the stellar line profiles. The spectrograms were corrected for redshift. The reddening correction is discussed in section~\ref{syn}.
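The combination and smoothing steps can be sketched as follows (our own schematic, not the actual pipeline; the array names are hypothetical, and the S/N weighting shown is one plausible reading of the description above):
\begin{verbatim}
import numpy as np

def weighted_mean(spectra, weights):
    """Weighted mean of aligned 1-D spectra (one per row)."""
    return np.average(np.asarray(spectra), axis=0,
                      weights=np.asarray(weights, dtype=float))

def boxcar(flux, width=20):
    """Boxcar smoothing, analogous to the IRAF task used here."""
    kernel = np.ones(width) / width
    return np.convolve(flux, kernel, mode='same')

# exposures of one detector segment, weighted by exposure time
seg = weighted_mean(exposures, exptimes)
# overlapping segments covering the same band, weighted by S/N
band = weighted_mean([seg_a, seg_b], [snr_a, snr_b])
smoothed = boxcar(band)
\end{verbatim}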
\section{Stellar population modeling in the FUV} \label{lavalsb} A first work of spectral synthesis below 1200\AA\ was made by \citet{gonz97} for the {O~{\sc{vi}}}$+$Ly$\beta$$+${\ion{C}{2}} feature. The stellar library was based on Copernicus and Hopkins Ultraviolet Telescope (HUT) data, with a spectral resolution of 0.2\AA. Their work clearly showed that the line profile is sensitive to the age of a stellar population. A new FUV spectral library, based on FUSE data, has recently been added to the spectral synthesis code {\tt LavalSB} \citep{rob03}. This code has proven to be very powerful for young stellar populations \citep{pel04} and will be used in the present work to deduce the global properties of massive stars in GEHR from their integrated FUV light. {\tt LavalSB} is a parallel version of {\tt Starburst99} \citep{lei99}. It uses the evolutionary tracks of the Geneva group (Schaller et al. 1992; Schaerer et al. 1993a, 1993b; Charbonnel et al. 1993; Meynet et al. 1994). The stellar population follows a mass distribution based on a chosen stellar initial mass function (IMF) and mode of star formation (instantaneous or continuous). Individual stellar parameters are used to assign the corresponding normalized empirical spectrogram from the FUV library, based on relations from \citet{schm82}. The normalized library spectrograms are flux calibrated using the stellar atmosphere models of \citet{kur92} for normal stars and of \citet{sch92} for stars with extended envelopes. The \citet{kur92} spectra have been fitted with a Legendre function to remove their low-resolution spectral features, in order to avoid any confusion with the empirical stellar lines from the spectral library. The FUSE stellar library covers 1003.1 to 1182.678\AA\ with a dispersion of 0.127\AA. The library metallicities correspond to the evolutionary tracks of {\tt LavalSB}, e.g. {Z$_{\odot}$} for Galactic stars \citep[{12$+$log[O/H]$=$8.7};][]{all01}, 0.4\,{Z$_{\odot}$} for LMC stars \citep[{12$+$log[O/H]$=$8.3};][]{rus92}, and 0.1\,{Z$_{\odot}$} for SMC stars \citep[{12$+$log[O/H]$=$8.0};][]{rus92}. The most useful stellar indicators in the FUV are the {\ion{C}{3}} blend multiplet centered at 1175.6\AA\ and the {\ion{P}{5}} doublet at 1118.0 and 1128.0\AA. The profiles of these lines show strong variations with the age and metallicity of the population, depending on which spectral types dominate in flux. Significant, but more subtle, changes also appear with different IMF parameters. At shorter wavelengths, the {O~{\sc{vi}}~$\lambda\lambda$1031.9, 1037.6} and {S~{\sc{iv}}~$\lambda\lambda$1062.7, 1073.0, 1073.5} line profiles show variations with age and metallicity, and possibly with the IMF. However, the empirical stellar library used in {\tt LavalSB} contains stars for which these diagnostic lines are contaminated by interstellar features from Galactic H$_2$ and other atomic transitions. Consequently, the stellar lines of {O~{\sc{vi}}} and {S~{\sc{iv}}} will not be used in the present work, since the {\ion{C}{3}} and {\ion{P}{5}} lines alone provide more accurate results. An extensive identification of the stellar and interstellar lines contained within the FUSE range can be found in \citet{rob03} and \citet{pel02}. To establish the characteristics of an integrated stellar population in the FUV, the FUSE spectrogram is first normalized, and the stellar indicators {\ion{C}{3}~$\lambda$1175.6} and {\ion{P}{5}~$\lambda\lambda$1118.0, 1128.0} are compared to the models. The best-fit model is chosen both by eye and by performing a $\chi^2$ fit. This first step provides information on the age, the metallicity, and the IMF parameters of the population. A standard IMF is defined here as having a slope $\alpha$=2.35 and a mass range from 1 to 100\,{M$_{\odot}$}. Once the age and metallicity of the stellar population are estimated from the normalized FUV spectrogram, the extinction is evaluated by comparing the observed continuum slope of the flux-calibrated data to that of the best-fit model. The theoretical law from \citet{witt00} for a clumpy dust shell distribution with an optical depth of 1.5 in the V band is used to derive the internal extinction E(B-V)$_i$. The Galactic extinction is corrected using the law of \citet{sea79}. Finally, the stellar mass involved in the system is estimated from the unreddened flux level.
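The $\chi^2$ step of this procedure can be illustrated schematically (our own sketch; {\tt LavalSB} itself is not organized this way, the variable names are hypothetical, and the fit is shown unweighted for brevity):
\begin{verbatim}
import numpy as np

def best_fit_model(wave, obs_flux, model_grid, windows):
    """Pick the model minimizing chi^2 over the line windows.
    model_grid maps (age, Z, imf_slope) -> normalized model flux
    sampled on the same wavelength grid as obs_flux."""
    mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in windows:   # e.g. [(1113., 1133.), (1168., 1183.)]
        mask |= (wave >= lo) & (wave <= hi)
    chi2 = {pars: np.sum((obs_flux[mask] - model[mask])**2)
            for pars, model in model_grid.items()}
    return min(chi2, key=chi2.get)
\end{verbatim}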
Uncertainties related to the line profile fitting are determined by comparing the different sets of models at a given metallicity. Since {\tt LavalSB} covers only specific metallicity values (0.1\,{Z$_{\odot}$}, 0.4\,{Z$_{\odot}$}, {Z$_{\odot}$}, and 2\,{Z$_{\odot}$}), it is not possible at this point to evaluate the full age range that could fit the data. The jumps in metallicity are quite large, so the synthetic spectra of the adjacent metallicity value do not always reproduce the observed line profiles and cannot give clues on the age range. Consequently, the age uncertainties given in the present work are underestimated and do not take into account the possibility that the data could be fitted with a slightly different age at a slightly different metallicity. For a given model, the primary source of error in the estimation of stellar masses and predicted fluxes is usually the FUV flux uncertainty from FUSE, which is typically around 10\%. However, in some cases, the age uncertainty gives a larger error bar than the FUSE flux uncertainty. In every case, the largest uncertainty is given. The IMF slope used to calculate the total stellar mass also affects the uncertainty on masses and predicted fluxes; however, these uncertainties are not explicitly included for the best-fit models. Where possible, the parameters of other good-fit models are given to better evaluate the full uncertainties. \section{Massive Stellar Content of GEHR: the FUV Point of View} \label{syn} \subsection{NGC\,604} \label{n604} NGC\,604 is a well-known GEHR within the Local Group galaxy M\,33. Several studies have found and confirmed the presence of very massive O, B, and Wolf-Rayet (WR) stars \citep{vil88,dri93,hun96,gonz00,bru03,maiz04}. At least four distinct star clusters have been identified in this object \citep{hun96}. The FUSE spectrogram of NGC\,604 obtained through the LWRS aperture is shown in Figure~2a. This aperture corresponds to a physical size of 123$\times$123\,pc$^2$ \citep[1$^{\prime\prime}$=4.1\,pc at 840\,kpc; see also Fig.1 of][]{leb05}. The aperture includes Cluster~A of \citet{hun96}, but not the entire {\ion{H}{2}} region. The spectrogram has a very good signal-to-noise ratio (S/N) of 20 between 1155 and 1165\AA, which allows a good synthesis with details on the IMF slope. The {\ion{C}{3}} line profile shows a large absorption feature in its blue wing, indicating the presence of evolved late-type O~stars. The {\ion{P}{5}} doublet also displays P\,Cygni line profiles typical of massive stars with strong winds. The line depths of {\ion{C}{3}} and {\ion{P}{5}} suggest a sub-solar metallicity for the stars. Continuous burst models must be excluded, since they produce stellar lines with too faint P\,Cygni profiles. To obtain a good fit, especially for the {\ion{C}{3}} line profile, a flatter IMF with a slope $\alpha$=1.5-2.2 is better, although a standard IMF with $\alpha$=2.35 could also fit. The best fits are obtained for models having $\alpha$=1.5 and an age of 3.9$\pm$0.1\,Myr at 0.1\,{Z$_{\odot}$}, or 3.3$\pm$0.1\,Myr at 0.4\,{Z$_{\odot}$}. If $\alpha$(IMF)=2.35, the best-fit ages are a little lower, with 3.5$\pm$0.3\,Myr for the 0.1\,{Z$_{\odot}$} models and 3.0\,Myr at 0.4\,{Z$_{\odot}$}. The solution is not unique, since there is a degeneracy in the line depth among the models at sub-solar metallicities when the P\,Cygni profiles are well developed. \citet{gonz00} performed a detailed study of NGC\,604 using IUE spectrograms (9.5$^{\prime\prime}\times$22$^{\prime\prime}$ aperture), optical ground-based data, and H$\alpha$ images from the HST to fully describe this GEHR. From the H Balmer and {\ion{He}{1}} absorption lines, they deduced an age between 3 and 4\,Myr for the stellar population, with a standard IMF or flatter. A continuous burst cannot fit their emission line ratios. Their IUE spectrograms revealed a population of 3-5\,Myr (best fit at 3\,Myr) with an IMF slope flatter than 3.3. \citet{vil88} studied in detail the chemical abundances in M\,33 from nebular lines. They measured an oxygen abundance 12$+$log[O/H]=8.51 for NGC\,604. All these results are fully consistent with the FUV line profile synthesis. The best-fit model parameters are reported in Table~2, together with the other good-fit models. Note that hereafter calculations using the models at 0.4\,{Z$_{\odot}$} are favored, based on the metallicity from \citet{vil88}.
Adopting an instantaneous burst model of 3.3\,Myr at 0.4\,{Z$_{\odot}$} with an IMF slope $\alpha$=1.5, the observed FUV continuum slope suggests no significant internal extinction E(B-V)$_i$: no internal extinction is needed if a Galactic correction of 0.02 is applied, and E(B-V)$_i$=0.03 is obtained if no Galactic extinction is applied. Using an IMF truncated between 1 and 100\,{M$_{\odot}$}, the FUV flux level leads to a stellar mass of (7$\pm$2)$\times$10$^3$\,{M$_{\odot}$} within the LWRS aperture. Using an IMF slope of 2.35, the calculated stellar mass is instead (1.4$\pm$0.3)$\times$10$^4$\,{M$_{\odot}$}. \citet{gonz00} obtained E(B$-$V)$_i$=0.1 based on their IUE spectrograms. They also estimated a stellar mass of 0.1-2$\times$10$^5$\,{M$_{\odot}$}. \citet{hun96} found, based on optical HST images, an extinction value of 0.08 for Cluster~A, which is contained within the LWRS aperture. Their extinction and mass values are slightly higher than those derived from the FUSE data. From the stellar population described above with $\alpha$=1.5, several physical parameters can be deduced and compared (see Table~3). First, such a population would theoretically lead to an unreddened H$\alpha$ flux of (2$\pm$1)$\times$10$^{-11}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$} and a continuum level at 5500\AA\ of about (3$\pm$1)$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}. Changing the IMF does not change these numbers significantly. H$\alpha$ fluxes of 4.0 and 3.3$\times$10$^{-11}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$} have been measured from HST and ground-based images by \citet{gonz00} and \citet{bos02}, respectively. These values are slightly above the FUV predictions. The differences in H$\alpha$ fluxes are consistent with the differences in stellar masses. According to the H$\alpha$+UV images from HST \citep[see Fig. 2 of][]{gonz00}, several massive stars from Cluster~A are co-spatial with the nebular emission. These stars are good candidates for higher extinction, and it is likely that their contribution to the FUV flux is significantly lower than at longer wavelengths (even at $\sim$1500\AA), which partly explains the differences observed in the extinction values at various wavelength ranges. Furthermore, Fig.~2 of \citet{hun96} shows that Clusters~B and C contribute significantly to the nebular emission of NGC\,604, but they are not taken into account in the total stellar mass derived from the FUV since they are not included within the FUSE aperture. Also, a detailed study by \citet{maiz04} revealed an extremely complex gas/dust geometry, in which around 27\% of the ionizing photons might be missing in NGC\,604 due to attenuation. In addition to the aperture effect, this obviously contributes to the discrepancy between the predicted and observed values of the stellar mass and other flux parameters. The FUV synthesis of a 3.3\,Myr population with $\alpha$=1.5 at 0.4\,{Z$_{\odot}$} predicts that about 9 WR stars (3 WN and 6 WC) should be present in NGC\,604. \citet{dri93} obtained ground-based and HST-WF/PC1 images and identified 12 WR or Of candidates, slightly more than the {\tt LavalSB} prediction. More recently, Drissen et al. (2005, in preparation) confirmed that there are at least 6 WN and 2 WC stars among them. This WC-to-WN number ratio is not consistent with {\tt LavalSB} (nor with {\tt Starburst99}). To obtain WC/WN$\sim$1/3, both models propose an age around 4.5-4.7\,Myr for the population. However, {\tt LavalSB} does not include the effect of rotation in the evolutionary tracks.
Including rotation in the models would extend the duration of the WR phase and considerably increase the number of WN stars, which would fit the observations better (G. A. V\'azquez 2005, private communication). Also, for the population synthesized above for NGC\,604, {\tt LavalSB} predicts 90$^{+30}_{-10}$ O-type stars (of all the spectral types still present at this age). \citet{hun96} estimated from HST/WFPC2 images that about 190 stars brighter than O9.5\,V are present in NGC\,604, which is higher than the FUV estimation. However, the number of \citet{hun96} may include some B supergiants. If we use an IMF slope of 2.35, the model predicts roughly the same number of O-type stars but no WR stars (or very few) at 3.0\,Myr, which is in disagreement with the observations of \citet{dri93}. The comparison between the predicted and observed numbers of WR stars favors an IMF slope flatter than 2.35. The FUSE spectrogram of the inner part of NGC\,604 obtained through the MDRS aperture is shown in Figure~2b (S/N$\sim$14). This smaller aperture corresponds to a physical size of 16$\times$82\,pc$^2$. The stellar line profiles are similar to those obtained with the LWRS aperture, but not exactly the same. The {\ion{C}{3}} and {\ion{P}{5}} line profiles cannot be reproduced as well as for the LWRS data, especially in their blue wings. The models closest to the observed line profiles are those of 3.9-4.1\,Myr at 0.1\,{Z$_{\odot}$} and of 3.3-3.4\,Myr at 0.4\,{Z$_{\odot}$}. Interestingly, the MDRS spectrogram of NGC\,604 corresponds better to a combination of a synthesized population and the spectrogram of an O8\,I LMC star. The blue wings of the {\ion{P}{5}} and {\ion{C}{3}} profiles are fitted by the single-star spectrogram, while the photospheric portion cannot be fitted by the star, but by a modeled population. This strongly suggests that the number of massive stars within the aperture is low enough to be subject to statistical biases in the stellar IMF, and is no longer well represented by an analytical IMF. Assuming a stellar population of 3.3\,Myr at 0.4\,{Z$_{\odot}$}, as found previously, the continuum slope of the MDRS spectrogram gives E(B-V)$_i$=0.03$\pm$0.02 if no Galactic extinction is considered. The flux level indicates a stellar mass of about 1$\times$10$^{3}$\,{M$_{\odot}$} through the MDRS aperture, clearly indicating that the MDRS aperture does not include the whole GEHR. \subsection{NGC\,595} \label{n595} Like NGC\,604, NGC\,595 contains multiple star clusters with OB stars \citep[e.g.][]{dri93,mas99,maiz01}. The FUSE spectrogram of NGC\,595 is presented in Figure~2c, with S/N$\sim$13. Particularly strong P\,Cygni profiles are observed in {\ion{C}{3}} and {\ion{P}{5}}. As for a single evolved O~star, the {\ion{C}{3}} profile of NGC\,595 does not show a blend of photospheric$+$wind features as in an integrated population, but a single well-developed P\,Cygni profile. In fact, it appears that a synthesized stellar population is unable to reproduce the FUV line profiles. The FUSE spectrogram has therefore been compared to those of single O stars from the FUV stellar library of {\tt LavalSB}, and this reveals that an O7\,I LMC star is the closest match to the spectrogram of NGC\,595 (see the superimposed thick-line spectrogram in Figure~2c). It is obvious here that there are not enough hot stars in NGC\,595 to populate an analytical IMF as used in current spectral synthesis. Only a few stars with strong winds seem to dominate the line profiles.
According to {\tt LavalSB}, O7\,I stars appear between 2.5 and 4.0\,Myr after an instantaneous burst. At 2.5\,Myr, stars slightly brighter than O7\,I will probably dominate the FUV flux. Consequently, the O7\,I stars in NGC\,595 would be consistent with an age of 3.5$\pm$0.5\,Myr with a metallicity close to the LMC (0.4\,{Z$_{\odot}$}). This age is consistent with the works of \citet{mal96} and \citet{mas99}. Assuming a standard IMF, it is still possible to roughly estimate parameters related to the FUV slope and flux level. Adopting a Galactic extinction of 0.04 (NED\footnote{The NASA Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration; http://nedwww.ipac.caltech.edu/\,.}), a very low internal extinction of E(B-V)$_i$=0.02$\pm$0.02 is found. The stellar mass of NGC\,595 is then estimated to be about 1$\times$10$^3$\,{M$_{\odot}$}, with very large uncertainties. Previous works in the visible range suggested a higher extinction value of 0.3 \citep{mal96,mas99,maiz01} for this GEHR. \citet{mas99} and \citet{mal96} also estimated a stellar mass of 5-6$\times$10$^3$\,{M$_{\odot}$}, which is also significantly higher than the FUV result, but of the same order of magnitude. Based on {\tt LavalSB}, the age and mass of NGC\,595 suggest that about 10 O~stars and 1 or 2~WR stars should be present in NGC\,595. However, HST imaging reveals larger numbers of these stars. \citet{dri93} identified 11~WR/Of candidates and \citet{mal96} estimated the number of O~stars to be $\sim$90. \citet{dri93} estimate that there are 2.5 times fewer stars between 15 and 60\,{M$_{\odot}$} in NGC\,595 than in NGC\,604, implying that NGC\,595 must be about 2.5 times less massive than NGC\,604. FUV synthesis gives a factor of 5 between the stellar masses of the two GEHR. Recently, optical spectra from Drissen et al. (2005, in preparation) confirmed the presence of several WR candidates within NGC\,595 and classified them. Based on the HST/WFPC2-F170W archival image, the WR stars produce about 30\% of the UV luminosity. Obviously, the observed number of WR stars in this object is inconsistent with the FUV synthesis results. In an attempt to reproduce the observed FUV spectrogram, simple combinations of individual hot stars are tested. The combinations are composed of individual late O-type stars (or synthetic models) and WR stars, in which $\sim$30\% of the total FUV flux comes from 1~WN6/7 star and 4~WN7/8 stars, as classified by Drissen et al. (2005, in preparation). However, the resulting fits are poor, with the stellar combinations always giving wind profiles too strong in emission and blue absorption that is too narrow. However, the FUSE atlas of WR stars from \citet{wil04} revealed that the spectral line profiles of WR stars change considerably from one type to another. A closer look at this atlas shows that HDE\,269927, a WN9 type star from the Galaxy, displays {\ion{C}{3}} and {\ion{P}{5}} line profiles similar to the stellar lines of NGC\,595. Replacing the WN7/8 spectra used in the previous combinations by the spectrum of HDE\,269927 gives surprisingly good results. In fact, the combination of spectrograms from an O7\,I star (70\% of the flux) as well as 1 WN6 and 4 WN9 stars (30\% of the flux) reproduces the FUSE data for NGC\,595 well. This implies two things.
First, it appears that the FUV spectra of WR stars show line profiles that change significantly from one spectral type to another, and that probably vary with metallicity as well. Consequently, the few WR spectrograms currently used in the {\tt LavalSB} spectral library are probably not very representative of their spectral types. Fortunately, these stars do not usually contribute significantly to an integrated stellar population and therefore do not really affect the synthetic spectra. Second, it seems obvious that the FUV spectrum of NGC\,595 is dominated by evolved late-type O and WN-late stars. However, one fundamental question remains: how did NGC\,595 come to produce a stellar population enhanced in WR stars? The FUV synthesis of NGC\,595 implies that F(H$\alpha$)=(1.3$\pm$0.2)$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}. Various values are found in the literature. \citet{bos02} obtained 1.1$\times$10$^{-11}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, and \citet{ken79} measured 8.8$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}. It is obvious that the FUV synthesis is not accurate in this case, and possibly also that it does not include the entire GEHR. The FUSE spectrogram of NGC\,595 clearly reveals that a stellar population with a stellar mass of a few 10$^3$\,{M$_{\odot}$} is too small for the spectral synthesis technique to be applied, at least below 1200\AA. Obviously, statistical fluctuations related to a small number of massive stars are not well represented by an analytical IMF. A more detailed discussion on this subject will be given in \S\ref{mass}. \subsection{NGC\,592} \label{n592} Because of its fainter H$\alpha$ luminosity, NGC\,592 is a much less studied GEHR, but no less interesting. The observed FUV spectrogram is shown in Figure~2d, with a rather low S/N of 6. The FUSE aperture contains the entire GEHR \citep{bos02,keel04}. Despite noisy stellar lines, their profiles clearly display extended blue absorption wings from evolved O stars. Comparing both the {\ion{P}{5}~$\lambda$1128.0} and {\ion{C}{3}~$\lambda$1175.6} lines to the models, it is possible to reproduce their profiles with a 4.0$\pm$0.5\,Myr stellar population at {Z$_{\odot}$} metallicity. Models at 0.4\,{Z$_{\odot}$} produce P\,Cygni features in {\ion{C}{3}} that are too weak. The spectrogram is too noisy to discriminate between various IMF slopes. From H$\alpha$ and H$\beta$ narrow-band images, \citet{bos02} estimated the age of NGC\,592 to be more than 4.5\,Myr, which is not really compatible with FUV line profiles displaying relatively strong P\,Cygni features. In terms of metallicity, \citet{keel04} interpolated a value of 0.5\,{Z$_{\odot}$} in [O/H], and Drissen et al. (2005, in preparation) estimated that 12$+$log[O/H]$\sim$8.4 (i.e. 0.5\,Z$_{\odot}$) from [\ion{O}{3}]/H$\beta$ and [\ion{N}{2}]/H$\alpha$ line ratios. These values are consistent with the FUV synthesis considering that {Z$_{\odot}$} models can cover a metallicity range from 0.4-0.5 to $\sim$1.2\,{Z$_{\odot}$} relatively well \citep{pel04}. Using a model of 4.0\,Myr at {Z$_{\odot}$} and a standard IMF, and assuming a Galactic extinction of 0.042 (NED), an E(B-V)$_i$ of 0.07$\pm$0.02 is deduced from the FUV continuum slope. Once the data are corrected for extinction, the deduced stellar mass is (1.1$\pm$0.3)$\times$10$^4$\,{M$_{\odot}$}. This mass is similar to that estimated for NGC\,604, which is consistent with the fact that the stellar line profiles can be reproduced with a synthesis technique and an analytical IMF, contrary to NGC\,595.
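The extinction estimates quoted here follow from comparing the observed and synthetic continuum slopes. A minimal numerical sketch of this step is given below; the two continuum windows, the flux ratios, and the toy power-law extinction coefficient are all illustrative placeholders, not the actual curve adopted in the synthesis.
\begin{verbatim}
import numpy as np

# Sketch: solve F_obs(l) = F_mod(l) * 10**(-0.4 * k(l) * E(B-V))
# for E(B-V) using the flux ratio between two continuum windows.
def k_fuv(lam):
    """Toy FUV extinction coefficient, rising toward the blue."""
    return 10.0 * (1150.0 / lam) ** 1.5

lam1, lam2 = 1060.0, 1180.0    # two continuum windows (A)
ratio_obs = 0.95               # observed F(lam1)/F(lam2), placeholder
ratio_mod = 1.10               # same ratio in the unreddened model

k1, k2 = k_fuv(lam1), k_fuv(lam2)
ebv = -2.5 * np.log10(ratio_obs / ratio_mod) / (k1 - k2)
print(f"E(B-V)_i ~ {ebv:.3f}")
\end{verbatim}
Once E(B-V)$_i$ is fixed this way, the dereddened flux level scales linearly into the stellar mass of the model population.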
The FUV flux level implies an unreddened H$\alpha$ flux of (2.7$\pm$0.5)$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, which is exactly the value measured by \citet{bos02}. Other predicted parameters are reported in Table~3. \subsection{NGC\,588} \label{n588} The FUSE spectrogram of NGC\,588 is presented in Figure~2e, with a good S/N of 12. The FUSE aperture includes the entire {\ion{H}{2}} region \citep{bos02,keel04}. Models at {Z$_{\odot}$} produce stellar lines that are definitely too deep compared to the observations. With models at 0.4\,{Z$_{\odot}$} metallicity, a good fit can be obtained for a 3.5$\pm$0.5\,Myr population with $\alpha$(IMF)$\leq$2.35. A flatter IMF tends to give better results, but it is hard to really distinguish between various IMF slopes because of the relatively low S/N. Good fits can also be obtained with 0.1\,{Z$_{\odot}$} models of 4.5$\pm$1.0\,Myr and still with $\alpha$(IMF)$\leq$2.35. In the literature, ages of 2.8, $>$4.5, and 4.2\,Myr are reported for NGC\,588 \citep[][respectively]{mas99,bos02,jam04}, in general agreement with the FUV line profiles. \citet{vil88} derived a precise oxygen abundance of 12+log[O/H]=8.30 (i.e. 0.4\,{Z$_{\odot}$}), favoring the models at 0.4\,{Z$_{\odot}$}. A flat IMF is also favored by \citet{mas99}, while \citet{jam04} obtained $\alpha$(IMF)=2.37$\pm$0.16 from a star counting method. Based on the best-fit model at 0.4\,{Z$_{\odot}$}, a low internal extinction of at most 0.06$\pm$0.02 is measured, which leads to a stellar mass of (1.3$\pm$0.6)$\times$10$^3$\,{M$_{\odot}$}. The mass is higher, (4$\pm$1)$\times$10$^3$\,{M$_{\odot}$}, if we consider $\alpha$=2.35. Depending on the extinction law used, E(B-V)$_i$ values between 0.11 and 0.08 are measured \citep{mas99,jam04}. These same authors obtained stellar masses of 534 and 3000-5800\,{M$_{\odot}$}, respectively. The smallest value was deduced from IUE data (aperture of 10$^{\prime\prime}\times$20$^{\prime\prime}$), and the largest mass is from full field imaging data, which explains the discrepancy. The FUV data are in relatively good agreement with the imaging data, which suggests that most OB stars of NGC\,588 are within the FUSE aperture. With such a mass, the model predicts that F(H$\alpha$)=2.8$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, which is in good agreement with the value of 2-3$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$} measured by \citet{ken79} and \citet{bos02}. The best-fit model predicts 2 WR stars in NGC\,588, which is the exact number found by \citet{jam04} in their HST images with resolved stars. \subsection{NGC\,588-NW} A FUSE spectrogram has been obtained in the vicinity of NGC\,588 (North-West). From the {\it {Digitized Sky Survey}} image (see Fig.~1), this region corresponds to a relatively compact and small cluster with a faint, extended nebular ring. It was first reported by \citet[][their object 281]{bou74} and also identified in the work of \citet{cou87}. The ring suggests that the cluster is more evolved than those synthesized above. The FUSE spectrogram for this cluster is shown in Figure~2f (S/N$\sim$7). Diagnostic stellar lines do not display P\,Cygni profiles. Synthetic models do not reproduce the line profiles well. The best fit is obtained for a stellar population around 5-6\,Myr old at 0.4\,{Z$_{\odot}$}, but the line profiles are not properly fitted. A possible alternative is a single star spectrum, as was the case for NGC\,595.
Indeed, a Galactic O9.5\,III star, also consistent with a population of 5-6\,Myr, gives a better match than the model, but significant discrepancies still exist. This age is consistent with the presence of the faint extended ring seen around NGC\,588-NW in the visible range. To push the synthesis further, a stellar population of 5.5\,Myr at 0.4\,{Z$_{\odot}$} has been considered, and an extinction value around 0 and a stellar mass of about 1$\times$10$^3$\,{M$_{\odot}$} have been roughly estimated for this cluster. This stellar mass is similar to the one obtained for NGC\,595. The relatively low mass of the cluster is a logical explanation for why the synthesis technique does not work well. Rough estimates of the predicted observable parameters are reported in Table~3. The study of NGC\,588-NW provides some other clues about the evolution of GEHR. First, the FUSE spectrogram of NGC\,588-NW reveals the presence of an important stellar population. However, because of its slightly greater age (5-6\,Myr instead of $\sim$3.5\,Myr for NGC\,595), the nebular emission is not as strong as for NGC\,595 and this region is consequently much less studied. It is likely that NGC\,588-NW is representative of what NGC\,595 may look like in $\sim$2-3\,Myr. Second, the GEHR is still young and massive enough at this age not to have dissolved yet into the galaxy background. It would be interesting to search for slightly more evolved GEHR to better study their evolution, such as the dissipation timescale of clusters. This kind of cluster (i.e. still very young but with significantly weaker nebular emission) may be at the origin of the diffuse UV light in starburst galaxies \citep{meu95}. NGC\,588-NW is consistent with clusters of less than 10$^3$\,{M$_{\odot}$} without O-type stars, as described by \citet{chan05} for the diffuse UV component in starbursts. A more extensive search for this kind of object in local galaxies could settle this issue. \subsection{NGC\,5447} \label{n5447} NGC\,5447 is a GEHR in the spiral galaxy M\,101 (7.4\,Mpc) that displays several knots of star formation \citep{bos02}. The FUSE spectrogram has a S/N of 12 and is shown in Figure~3a. As shown in Fig.~1, the FUSE aperture does not include all knots. The spectrogram does not show strong wind profiles, suggesting that most O stars have already disappeared. Models at {Z$_{\odot}$} metallicity produce stellar lines that are too deep compared to the observations. Models at 0.1\,{Z$_{\odot}$} cannot reproduce both the {\ion{P}{5}} and {\ion{C}{3}} features at the same age. The best-fit model is obtained at 4.5$\pm$0.5\,Myr with an IMF slope of 2.35 or flatter. This GEHR has not been extensively studied and no age has been proposed so far for this object. \citet{sco92} deduced an oxygen abundance of 8.3 in 12+log[O/H], compatible with the line depths of {\ion{P}{5}} and {\ion{C}{3}}. The measured FUV slope for NGC\,5447 suggests that E(B-V)$_i$=0. From photographic plates and the Balmer decrement, \citet{smi75} estimated an extinction of 0.37, much larger than the FUV value. The FUV flux indicates a stellar mass of (1.2$\pm$0.2)$\times$10$^5$\,{M$_{\odot}$}. From FUV synthesis, {\tt LavalSB} predicts that F(H$\alpha$)=(5.7$\pm$0.9)$\times$10$^{-13}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$} and EW(H$\alpha$)=1064\AA\ for NGC\,5447. Using photometric data, \citet{bos02} measured an H$\alpha$ flux of 4.7$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, and \citet{ken79} obtained a value of 1.6$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}.
Since the GEHR is much more extended than the FUSE aperture \citep[see Fig.~5 of][]{bos02}, the factor 5-10 discrepancies can easily be explained. However, the presence of a second generation of stars contributing to the nebular flux but not to the FUV flux cannot be excluded (see \S\ref{2egen}). For their knot~A alone, \citet{bos02} obtained F(H$\alpha$)=7.5$\times$10$^{-13}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, suggesting that this knot must be the principal contributor to the FUV flux measured with FUSE. \citet{tor89} measured a dereddened equivalent width of 1096\AA\ through a 3.8$^{\prime\prime}\times$12.4$^{\prime\prime}$ slit, in very good agreement with the FUV prediction for knot~A. \subsection{NGC\,5461} \label{n5461} NGC\,5461 is a very large GEHR ($>$500\,pc in diameter) with multiple components in M\,101 \citep{bos02,keel04,chen05}. The FUSE aperture contains most of the H$\alpha$ emission and should include most of the massive stellar content (see again Fig.~1). The FUSE spectrogram is shown in Figure~3b, with a S/N of about 7. The {\ion{C}{3}} feature displays a wind profile, implying the presence of giant and supergiant {O-type} stars. Models at {Z$_{\odot}$} do not reproduce the stellar line depths. The models at 0.1\,{Z$_{\odot}$} give a good fit for a 4.0$\pm$0.2\,Myr stellar population and an IMF slope flatter than 2.35. A good correspondence is also obtained with 0.4\,{Z$_{\odot}$} models at 3.3$\pm$0.2\,Myr, still with $\alpha$$<$2.35. A multiwavelength study from \citet{ros94} suggests an age between 3.0 and 4.5\,Myr, compatible with the FUV line profiles. \citet{lur01} deduced an age between 2.5 and 3.5\,Myr based on EW(H$\beta$), also in general agreement with the FUV line profiles. While the age determination method using EW(H$\beta$) is not a recommended diagnostic \citep{ter04}, it appears that it still gives good results at such young ages. More recently, \citet{chen05} identified about 12 candidate stellar clusters within NGC\,5461, of which about half are less than 5\,Myr old. The other clusters are probably older and do not seem to contribute much to the FUV flux. Abundances ranging from 8.4 to 8.6 in 12+log[O/H] are found in the literature \citep{tor89,sco92,ros94,lur01}. These observations favor the FUV synthesis models at 0.4\,{Z$_{\odot}$}. Comparing with the modeled population of 3.3\,Myr at 0.4\,{Z$_{\odot}$} and $\alpha$=1.5, the FUV continuum slope needs no extinction correction. The stellar mass is then (1.5$\pm$0.4)$\times$10$^{4}$\,{M$_{\odot}$}. Using a standard IMF slope of 2.35, the calculated stellar mass is instead (5$\pm$1)$\times$10$^4$\,{M$_{\odot}$}. According to \citet{ros94}, the extinction from the Balmer decrement is 0.23, and using an extinction law especially designed for M\,101, they find a stellar mass of 1$\times$10$^{5}$\,{M$_{\odot}$}. According to {\tt LavalSB}, the FUV stellar population should produce an H$\alpha$ flux of (5$\pm$2)$\times$10$^{-13}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}, while H$\alpha$ image data give 6.5 and 3.2$\times$10$^{-12}$\,{ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$} \citep[][respectively]{bos02,ken79}. For this population, the unreddened EW(H$\alpha$) should be about 1200\AA. \citet{tor89} obtained an unreddened value of 1175\AA, in good agreement with the {\tt LavalSB} predictions. The differences between the predicted and observed extinction, nebular flux and stellar mass will be discussed in more detail in \S\ref{2egen}.
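The sensitivity of the derived stellar mass to the adopted IMF slope, noted here and for NGC\,604 above, can be illustrated with a simple integration. This is a rough sketch only: it assumes the FUV flux is proportional to the number of stars above some threshold mass, with the 20\,{M$_{\odot}$} cut chosen purely for illustration.
\begin{verbatim}
from scipy.integrate import quad

# Rough sketch: ratio of total stellar masses implied by two IMF
# slopes when the FUV flux (dominated by massive stars) is held
# fixed.  The 20 Msun "FUV-dominant" threshold is illustrative.
m_lo, m_hi, m_fuv = 1.0, 100.0, 20.0

def mass_per_fuv_star(alpha):
    """Total mass per star above m_fuv, for IMF xi(m) ~ m**-alpha."""
    mass, _ = quad(lambda m: m * m ** -alpha, m_lo, m_hi)
    n_fuv, _ = quad(lambda m: m ** -alpha, m_fuv, m_hi)
    return mass / n_fuv

ratio = mass_per_fuv_star(2.35) / mass_per_fuv_star(1.5)
print(f"M(alpha=2.35)/M(alpha=1.5) ~ {ratio:.1f}")   # ~2.7
\end{verbatim}
The resulting factor of $\sim$2.7 is of the same order as the factors of $\sim$2-3 between the $\alpha$=1.5 and $\alpha$=2.35 masses quoted above for NGC\,604 and NGC\,5461.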
\subsection{NGC\,5471} \label{n5471} NGC\,5471, another GEHR in M\,101, is more compact than NGC\,5461 and NGC\,5447 and may contain about 19 star clusters according to \citet{chen05}. Most of the H$\alpha$ emission of this {\ion{H}{2}} region would have been included within the LWRS aperture of FUSE. Unfortunately, this {\ion{H}{2}} region has been observed using the MDRS aperture (4.0$^{\prime\prime}\times$20$^{\prime\prime}$), which implies that some OB stars are not included in the FUV spectrogram presented here (see Fig.~1). Also, in the FUSE data, no flux has been obtained in detector~2, which affects the quality of the synthesis since the LiF2A segment (which falls on the missing detector) is important for the S/N of the {\ion{P}{5}} and {\ion{C}{3}} lines \citep[see][]{sah00}. The FUSE spectrogram is shown in Figure~3c with S/N=9. The {\ion{C}{3}} line profile displays no obvious wind feature. The best-fit model is obtained for a stellar population of 4.5$\pm$0.5\,Myr at 0.4\,{Z$_{\odot}$}. At 0.1\,{Z$_{\odot}$}, a modeled stellar population of 3.5-4.0\,Myr can also reproduce the observed line profiles. Because of the noise, a standard IMF has been assumed. \citet{mas99} deduced an age of 2.9\,Myr for NGC\,5471, which is too young to explain the faint P\,Cygni profiles observed in the FUV diagnostic lines. Oxygen abundances ranging from 8.0 to 8.2 \citep[0.2-0.3\,Z$_{\odot}$;][]{tor89,ros94,mas99,bos02} are found in the literature, which is in good agreement with the FUV synthesis. Adopting the 0.4\,{Z$_{\odot}$} best-fit model, the comparison between the observed and modeled continuum slopes indicates a low extinction, smaller than the uncertainties of 0.02. The FUV flux level suggests a stellar mass of (7$\pm$1)$\times$10$^{4}$\,{M$_{\odot}$} for NGC\,5471. \citet{mas99} obtained an extinction of 0.07 in the UV range, which is slightly higher than the FUV extinction. The deduced FUV stellar mass is consistent with the mass of 1.2$\times$10$^{5}$\,{M$_{\odot}$} from \citet{mas99}, considering the smaller aperture used with FUSE. Predictions reported in Table~3 are difficult to compare with the literature because of large differences between apertures. However, the FUV flux prediction is always below the values given from larger apertures \citep[e.g.][]{ken79,bos02}. \citet{tor89} measured a dereddened EW of 575\AA\ for H$\alpha$, consistent with the predictions. \subsection{NGC\,5458} \label{n5458} NGC\,5458 is an {\ion{H}{2}} region smaller and fainter than the previous ones in M\,101, and it has not been much studied except for its X-ray source \citep{wan99,pen01,col04}. The FUSE spectrogram is presented in Figure~3d, and shows a S/N$\sim$10. The spectrogram displays photospheric profiles without evident signs of winds in both the {\ion{P}{5}} and {\ion{C}{3}} features. Sub-solar metallicity models produce stellar line depths that are too weak compared to the observations. The best-fit model is obtained for a 5.5-6.0\,Myr old stellar population at {Z$_{\odot}$}. A standard IMF has been assumed since the line profiles are less sensitive to the IMF when evolved O stars have disappeared. The continuum slope indicates a low extinction, below the uncertainties of 0.02. The flux level leads to a stellar mass of (1.1$\pm$0.4)$\times$10$^5$\,{M$_{\odot}$}. Other predicted observable parameters for NGC\,5458 are reported in Table~3. \section{Discussion} The massive stellar contents of several GEHR have been studied in detail using the FUV spectral synthesis.
The section below focuses on the global characteristics of the whole sample to better understand the physics of GEHR in general as well as the synthesis technique in the FUV. \subsection{FUV Synthesis of Small Stellar Populations} \label{mass} Spectral synthesis is a powerful technique to obtain a good estimate of the general characteristics of young integrated stellar populations. However, this technique usually assumes that the stars follow an analytical IMF, and that the stars properly fill each bin of the mass function. But how high must the mass of the population be in order for it to be accurately described by an analytical IMF? The FUV is a good wavelength range to estimate this minimal mass for young systems. The FUV is especially sensitive to IMF statistical fluctuations at high masses since only O and B\,stars produce many photons below 1200\,\AA. Also, GEHR are very young systems and the disappearance of the most massive stars does not significantly affect the total stellar mass of the system. From the FUV synthesis of GEHR in M\,33 and M\,101 (\S\ref{syn}), it appears that a stellar mass greater than 1$\times$10$^3$\,{M$_{\odot}$} is needed to properly fill the IMF bins. As shown by NGC\,592, NGC\,604 (LWRS), and the GEHR in M\,101, a stellar mass of $\sim$1$\times$10$^4$\,{M$_{\odot}$} does not seem to suffer much from statistical bias. However, the FUV synthesis of NGC\,604 (MDRS), NGC\,595, and NGC\,588-NW reveals that a stellar mass closer to $\sim$1$\times$10$^3$\,{M$_{\odot}$} becomes too low to obtain reliable values of the age and mass of the star cluster because the stellar line profiles are not those of a standard modeled population, but those of a mix of a limited number of bright stars. Note that the mass limit needs to be higher for younger systems, where the dominant stars are of earlier spectral types than those found in a slightly older population. This is because a younger population needs to better fill the IMF higher mass bins, and a more massive total stellar population is thus required. \citet{cer04} studied this problem from a theoretical point of view. The lower mass limit of a few 10$^3$\,{M$_{\odot}$} found here for a synthesized population is fully consistent with their results, which suggest that the minimal initial cluster mass needed for synthesis modeling in the U-band is about 8$\times$10$^3$\,{M$_{\odot}$} for a 5\,Myr population at 0.4\,{Z$_{\odot}$}. Following their calculation, this minimal mass can be slightly lower at shorter wavelengths like the FUV range. Using HST images in which the stars of NGC\,588 were resolved, \citet{jam04} obtained a standard IMF slope of 2.37$\pm$0.16 with a star counting technique and estimated a stellar mass of (5.8$\pm$0.5)$\times$10$^{3}$\,{M$_{\odot}$}, consistent with \citet{cer04} and with the FUV synthesis. FUSE spectral synthesis of GEHR has clearly shown that their calculation not only applies to color bands, but also to stellar line profiles. \subsection{The Flat IMF slope of GEHR} The stellar IMF has been a matter of debate since the work of \citet{sal55}. The generally accepted slope\footnote{A slope of 2.35 is traditionally called a Salpeter slope. However, this terminology is not appropriate for stellar masses covered by FUSE since the work of \citet{sal55} applies to a lower mass range.} for the massive OB star regime at all metallicities in every kind of environment (starbursts as well as star clusters) is $\alpha$=2.35 \citep[e.g.][]{mas98,sch00,gre04,pis04}.
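Before examining the FUV-derived slopes themselves, the statistical-fluctuation argument of \S\ref{mass} can be made concrete with a simple Monte Carlo draw. This is a minimal sketch: the power-law sampling, the number of realizations, and the 20\,{M$_{\odot}$} cut are illustrative choices, not the {\tt LavalSB} machinery.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
alpha, m_lo, m_hi, m_cut = 2.35, 1.0, 100.0, 20.0
a = 1.0 - alpha

def n_massive(total_mass):
    """Number of stars above m_cut in one random IMF realization."""
    tot, n = 0.0, 0
    while tot < total_mass:
        # inverse-transform draw from xi(m) ~ m**-alpha on [m_lo, m_hi]
        m = (m_lo**a + rng.random() * (m_hi**a - m_lo**a)) ** (1.0 / a)
        tot += m
        n += m > m_cut
    return n

for m_tot in (1.0e3, 1.0e4):
    counts = [n_massive(m_tot) for _ in range(300)]
    print(f"M = {m_tot:.0e} Msun: N(>20 Msun) = "
          f"{np.mean(counts):.1f} +/- {np.std(counts):.1f}")
\end{verbatim}
For a 10$^3$\,{M$_{\odot}$} cluster the relative scatter in the number of FUV-dominant stars is several times larger than for a 10$^4$\,{M$_{\odot}$} cluster, which is the qualitative behavior inferred above from the line profiles.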
In contrast with this standard value, the IMFs of GEHR derived from FUV line profiles seem to favor a relatively flat slope (see Table~2). Since the FUV stellar flux is produced only by O and B stars, a small change in their relative numbers can affect the derived IMF slope. This result cannot be attributed to a bias in the FUV synthesis, since several larger young populations have been studied with the same technique and did not show such a flat slope \citep{pel04}. Some hypotheses could physically explain a flat IMF in the FUV range. One hypothesis is that B-type stars could still be more extinguished by dust than earlier type stars. If so, it would then be more difficult to see them in the FUV, producing an artificially flatter IMF. However, the extinction values of individual stars in NGC\,604 obtained by \citet{bru03} do not show a significant correlation with the spectral type, suggesting that B stars are not systematically more extinguished than O stars. Another more plausible possibility is that the massive stars fill the IMF high mass bins relatively well, but not perfectly. If some spectral types have numbers slightly deviating from the analytical IMF, this will slightly change the integrated stellar line profiles in the same direction as NGC\,595 or NGC\,604-MDRS, i.e. by accentuating the integrated wind profiles. Since a flatter IMF also produces more pronounced P\,Cygni profiles, it would be hard to differentiate the two cases. Consequently, even if the population synthesis gives reliable and precise results on most physical parameters of the population (age, mass, metallicity, colors, fluxes) for a $>$1$\times$10$^3$\,{M$_{\odot}$} population (\S\ref{mass}), it appears that the stellar IMF slope derived from the FUV line profiles is a sign of imperfect filling of the IMF high mass bins. This last possibility is supported by the IMF obtained from the star counting technique of \citet{jam04} on NGC\,588. They derived a standard IMF slope, but their IMF histogram shows that some mass bins, especially at higher masses, are clearly deviant from the analytical slope. \subsection{A second generation of stars in NGC\,5461} \label{2egen} The spectral synthesis of FUSE data on NGC\,5461 has predicted much lower values for the H$\alpha$ flux (factor of 10), the stellar mass (factor of 2 to 10), and the extinction than those reported in the literature. These discrepancies are hard to explain since most of the H$\alpha$ emission is included within the FUSE aperture. One plausible explanation is the presence of a second generation of stars in NGC\,5461, like the one observed in the LMC cluster N11 \citep{wal92}. In the case of the star-forming region N11, the central region is composed of a 3.5\,Myr stellar population which dominates the UV flux. A surrounding nebula is excited by a younger generation of stars which is not observed at short wavelengths because it is heavily reddened \citep{wal92}. The presence of a second generation of stars in NGC\,5461, younger and consequently more extinguished than the first one, could explain the larger extinction deduced at longer wavelengths, the stellar mass discrepancy, and the excess in nebular emission. The second generation cluster must then be relatively massive to explain the large differences in flux and mass. It is not excluded that younger stars from different clusters are present rather than a single second generation.
Although there is no proof of such a population within NGC\,5461, this {\ion{H}{2}} region is a good candidate to host very massive stars, younger than those actually detected with FUSE. It is also possible that younger stars are present within the other GEHR studied here. Unfortunately, because the FUSE aperture does not always include the whole system, it is impossible to confirm here whether the difference between the predicted and observed H$\alpha$ fluxes comes from a second generation or not, as is the case for NGC\,604, for example. Considering the detailed work of \citet{maiz04} on the attenuation maps of NGC\,604, the differences in H$\alpha$ fluxes and stellar masses in GEHR, including NGC\,5461, might also be due, at least partly, to the complexity of the gas and dust spatial distribution. \section{Summary} The evolutionary spectral synthesis technique in the FUV has been used to study the massive stellar content of nine GEHR in M\,33 and M\,101. Stellar masses, internal extinctions, and ages have been obtained for most of them. The comparison of the FUV synthesis results with values obtained from previously available work in various wavelength ranges has shown that the technique is reliable in most cases. The comparison of the GEHR with each other has confirmed observationally that the synthesis technique must be applied to stellar populations of at least a few 10$^3$\,{M$_{\odot}$} in the FUV to avoid statistical fluctuations of the high mass end of the stellar IMF. It has also revealed that a flat IMF slope is apparently favored for GEHR in the FUV, which is likely the first apparent effect of statistical fluctuations of the IMF for low mass populations. The FUV data suggest that giant {\ion{H}{2}} regions reach their maximum nebular luminosity around 3.0-3.5\,Myr, coincident with the WR phase. Finally, the {\ion{H}{2}} region NGC\,5461 in M\,101 is a good candidate to host a second generation of stars more extinguished than, and formed after, the cluster actually detected with FUSE. \acknowledgments The author warmly thanks N. R. Walborn and L. Drissen for very helpful comments that considerably improved the scientific content. This work was supported by NASA Long-Term Space Astrophysics grant NAG5-9173.
\begin{acknowledgments} Support from the EU projects HPMF-CT-2002-02124, BIN2-2001-00580, and MEIF-CT-2003-501099, and from the Austrian Science Fund (FWF) project Nr.~17345, is gratefully acknowledged. The authors also wish to thank V. Z\'{o}lyomi and J. K\"{u}rti for valuable discussions. \end{acknowledgments}
\section{Introduction} Observations of rotation curves of spiral galaxies and measurements of the velocity dispersions of stars in early-type galaxies have provided important evidence for the existence of massive dark matter halos around galaxies (e.g., van Albada \& Sancisi 1986). In addition, these studies have presented evidence of tight relations between the baryonic and dark matter components (e.g., Tully \& Fisher 1977; Faber \& Jackson 1976). Results based on strong lensing by galaxies support these findings (e.g., Keeton, Kochanek \& Falco 1998). The origin of these scaling relations must be closely related to the process of galaxy formation, but the details are still not well understood, mainly because of the complex behaviour of the baryons. Furthermore, on the small scales where baryons play such an important role, the accuracy of cosmological numerical simulations is limited. This complicates a direct comparison of models of galaxy formation to observational data. For such applications, it would be more convenient to have observational constraints on quantities that are robust and easily extracted from numerical simulations. An obvious choice is the virial mass of the galaxy, but most techniques for measuring mass require visible tracers of the potential, confining the measurements to relatively small radii. Fortunately, recent developments in weak gravitational lensing have made it possible to probe the ensemble averaged mass distribution around galaxies out to large projected distances. The tidal gravitational field of the dark matter halo introduces small coherent distortions in the images of distant background galaxies, which can be easily detected in current large imaging surveys. We note that one can only study ensemble averaged properties, because the weak lensing signal induced by an individual galaxy is too small to be detected. Since the first detection of this so-called galaxy-galaxy lensing signal by Brainerd et al. (1996), the significance of the measurements has improved dramatically, thanks to new wide field CCD cameras on a number of mostly 4m class telescopes. This has allowed various groups to image large areas of the sky, yielding the large numbers of lenses and sources needed to measure the lensing signal. For instance, Hoekstra et al. (2004) used 45.5 deg$^2$ of $R_C$-band imaging data from the Red-Sequence Cluster Survey (RCS), enabling them to measure, for the first time, the extent and flattening of galaxy dark matter halos, providing strong support for the cold dark matter (CDM) paradigm. However, the analysis presented in Hoekstra et al. (2004) was based on the $R_C$-band data alone, and consequently lacked redshift information for the individual lenses. An obvious improvement is to obtain redshift information for the lenses (and if possible the sources). This allows one to study the lensing signal as a function of lens properties, most notably the luminosity. Photometric redshifts were used by Hudson et al. (1998) to scale the lensing signal of galaxies in the Hubble Deep Field, and by Wilson et al. (2001) who measured the lensing signal around early-type galaxies as a function of redshift. Smith et al. (2001) and Hoekstra et al. (2003) used spectroscopic redshifts, but the lens samples involved were rather small ($\sim 1000$). The Sloan Digital Sky Survey (SDSS) combines both survey area and redshift information. Its usefulness for galaxy-galaxy lensing was demonstrated clearly by Fischer et al. (2000). More recently, McKay et al. 
(2001) used the available SDSS redshift information to study the galaxy-galaxy lensing signal as a function of galaxy properties (also see Guzik \& Seljak 2002; Seljak 2002; Sheldon et al. 2004). In this paper we use a subset of the RCS data, for which photometric redshifts have been determined using $B,V,R_C$ and $z'$ data taken with the Canada-France-Hawaii Telescope (see Hsieh et al. 2005 for details). The area covered by these multiwavelength data is approximately 33.6 deg$^2$, resulting in a catalog of $1.2\times 10^6$ galaxies for which a redshift could be determined, making it one of the largest data sets of its kind. This unique data set allows us to measure the virial masses of galaxies as a function of their luminosity. This paper is structured as follows. In \S2 we briefly discuss the data, including the photometric redshift catalog and its accuracy. The results of some basic tests of the photometric redshifts are presented in \S3. In \S4 we discuss the dark matter profile inferred from numerical simulations. The measurement of the virial mass as a function of luminosity in various filters is presented in \S5, as well as our measurement of the baryon fraction in galaxies. Throughout the paper we adopt a flat cosmology with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$ and a Hubble parameter $H_0=100 h$ km/s/Mpc. \section{Data} The Red-Sequence Cluster Survey (RCS) is a galaxy cluster survey designed to provide a large sample of optically selected clusters of galaxies in a large volume (see Gladders \& Yee (2005) for a detailed discussion of the survey). To this end, 92 deg$^2$ of the sky were imaged in both $R_C$ and $z'$ using the CFH12k camera on CFHT and the Mosaic II camera on the CTIO Blanco telescope. This choice of filters allows for the detection of clusters up to $z\sim 1.4$ using the cluster red-sequence method developed by Gladders \& Yee (2000). After completion of the original RCS survey, part of the surveyed area was imaged in both $B$ and $V$ band using the CFHT. This additional color information allows for a better selection of clusters at lower redshifts. These follow-up observations cover $\sim 33.6$ deg$^2$, i.e., $\sim 70\%$ of the CFHT fields. The data and the photometric reduction are described in detail in Hsieh et al. (2005). The galaxy-galaxy lensing results presented in Hoekstra et al. (2004) were based on 45.5 deg$^2$ of $R_C$-band data alone. The addition of $B$ and $V$ imaging data for 33.6 deg$^2$ to the existing $R_C$ and $z'$ data allows for the determination of photometric redshifts for both lenses and sources in this subset of RCS imaging data. This enables the study of the lensing signal as a function of the photometric properties of the lens galaxies (i.e., color and luminosity). In this paper we focus on this multi-color subset of the RCS. To determine the restframe $B$, $V$ and $R$ luminosities we use template spectra for a range of spectral types and compute the corresponding passband corrections as a function of redshift and galaxy color (this procedure is similar to the one described in van Dokkum \& Franx 1996). Provided the observed filters straddle the redshifted filter of interest, which is the case here, this procedure yields very accurate corrections. The CFHT $R_C$ images are used to measure the shapes of galaxies used in the weak lensing analysis. The raw galaxy shapes are corrected for the effects of the point spread function, as described in Hoekstra et al. (2002a).
The resulting object catalogs have been used for a range of weak lensing studies (e.g., Hoekstra et al. 2002a, 2002b, 2002c, 2004) and we refer to these papers for a detailed discussion of the shape measurements. The measurements of the lensing signal caused by large scale structure presented in Hoekstra et al. (2002a, 2002b) are very sensitive to residual systematics. The various tests described in these papers suggest that the systematics are well under control. In this paper we use the shape measurements to measure the galaxy-galaxy lensing signal, which is much less sensitive to these observational distortions: in galaxy-galaxy lensing one measures the lensing signal that is perpendicular to the lines connecting many lens-source pairs. These are randomly oriented with respect to the PSF anisotropy, and therefore residual systematics are suppressed. \subsection{Photometric redshift distribution} The determination of the photometric redshifts is described in detail in Hsieh et al. (2005). The empirical quadratic polynomial fitting technique (Connolly et al. 1995) is used to estimate redshifts for the galaxies in the RCS data. The key component in this approach is the creation of a training set. Spectroscopic redshifts from the CNOC2 survey (Yee et al. 2000) are matched to the corresponding objects in the overlapping RCS fields. These data are augmented with observations of the GOODS/HDF-N field, for which the spectroscopic redshifts have been obtained using the Keck telescope (Wirth et al. 2004; Cowie et al. 2004), and the photometry is from the ground-based Hawai'i HDF-N data obtained with the Subaru telescope (Capak et al. 2004). This results in a final training set that includes 4,924 objects covering a large range in redshift. To minimize the fitting errors arising from different galaxy types, Hsieh et al. (2005) used a kd-tree method with 32 cells in a three-dimensional color-color-magnitude space. The resulting catalog contains $1.2\times 10^6$ galaxies with photometric redshifts. This catalog was matched against the catalog of galaxies for which shapes were measured. This resulted in a sample of $8\times 10^5$ galaxies with $18<R_C<24$, which are used in the analysis presented here. Comparison with the spectroscopic redshifts shows that accurate photometric redshifts, with $\sigma_z<0.06$, can be derived in the range $0.2<z<0.5$. At lower redshifts, the lack of $U$ band data limits the accuracy, whereas at higher redshifts photometric errors increase the scatter to $\sigma_z\sim 0.12$ (see Hsieh et al. 2005 for more details). \begin{figure} \begin{center} \leavevmode \hbox{% \epsfxsize=8cm \epsffile[15 160 575 700]{f1.eps}} \caption{\footnotesize {\it panel a}: The difference in spectroscopic and photometric redshifts for galaxies in the training set, with $18<R<24$ and photometric redshifts $0.2<z_{\rm phot}<0.4$. Our sample of lenses is selected to be in this redshift and magnitude range. The dotted lines indicate the intervals containing 90\% of the galaxies and the dashed lines indicate the 70\% interval. {\it panel b}: Same, but now for the brighter half of the training set, i.e., galaxies with $18<R<21$. {\it panel c}: Same, but for the galaxies with $21<R<24$, the fainter half of the lenses. \label{dzlens}} \end{center} \end{figure} To study the halos of galaxies as a function of color and luminosity, we select a sample of lenses at intermediate redshifts: galaxies with photometric redshifts $0.2<z<0.4$ and $R_C$-band magnitudes $18<R_C<24$.
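As a schematic illustration of the empirical fitting technique described above, the following sketch fits a quadratic polynomial in colors and magnitude to spectroscopic redshifts by least squares. The randomly generated `training set' is a placeholder for the CNOC2/GOODS data, and the kd-tree partitioning into 32 cells is omitted for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Placeholder training set: two colors and one magnitude per galaxy,
# plus a spectroscopic redshift with some scatter.
n = 1000
X = rng.uniform([-0.5, -0.5, 18.0], [2.0, 2.0, 24.0], size=(n, 3))
z_spec = (0.05 * X[:, 0] + 0.04 * X[:, 1] + 0.02 * (X[:, 2] - 18.0)
          + rng.normal(0.0, 0.03, n))

def design_matrix(X):
    """All terms of a quadratic polynomial in the input features."""
    d = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

coeff, *_ = np.linalg.lstsq(design_matrix(X), z_spec, rcond=None)
z_phot = design_matrix(X) @ coeff
print("scatter sigma_z =", np.std(z_phot - z_spec).round(3))
\end{verbatim}
In the actual analysis a separate polynomial is fit within each kd-tree cell, which reduces the type-dependent residuals that a single global fit would leave.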
This redshift range is well covered by the CNOC2 redshift survey at the bright end, and the redshift errors are relatively small. For the background galaxies we limit the analysis to galaxies with $z_{\rm phot}<1$. Figure~\ref{dzlens} shows the difference between spectroscopic and photometric redshifts for different subsets of galaxies with photometric redshifts $0.2<z<0.4$. Panel~a shows the full sample, whereas panels b and c show the bright and faint halves, respectively. The distribution is peaked, with 70\% of the galaxies within the range $|\Delta z|<0.06$ (0.05 and 0.07 for the bright and faint subsets, resp.) and 90\% within $|\Delta z|<0.12$ (0.085 and 0.15 for the bright and faint subsets, resp.). The solid histogram in Figure~\ref{zdist}a shows the normalized photometric redshift distribution for the galaxies brighter than $R_C=24$. It is common to parametrize the redshift distribution, and a useful form is given by \begin{equation} p(z)=\frac{\beta}{z_s\Gamma[(1+\alpha)/\beta]}\left(\frac{z}{z_s}\right)^\alpha \exp\left[-\left(\frac{z}{z_s}\right)^\beta\right]. \end{equation} We fit this model to the observed redshift distribution. However, the uncertainties in the photometric redshift determinations can be substantial, and as a result the observed distribution is broadened. We use the observed error distribution, assuming a normal distribution, to account for the redshift errors. For the best-fit parametrization we find values of $z_s=0.29$, $\alpha=2$ (fixed) and $\beta=1.295$, which yields a mean redshift of $\langle z\rangle=0.53$. This model redshift distribution (which includes the smoothing by redshift errors) is indicated by the smooth curve in Figure~\ref{zdist}a. In the weak lensing analysis, objects are weighted by the inverse square of the uncertainty in the shear measurement (e.g., see Hoekstra et al. 2000, 2002a). As more distant galaxies are fainter, they tend to have somewhat lower weights and the effective redshift distribution is changed slightly. The dashed histogram in Figure~\ref{zdist}a shows the distribution weighted by the uncertainty in the shape measurement for each redshift bin. The best-fit parametrized redshift distribution has parameters $z_s=0.265$, $\alpha=2.2$ (fixed) and $\beta=1.30$, which yields $\langle z\rangle=0.51$, only slightly lower than the unweighted case. \subsection{Implications for cosmic shear results?} Hoekstra et al. (2002b) presented constraints on the matter density $\Omega_m$ and the normalization of the power spectrum $\sigma_8$ by comparing cold dark matter predictions to the observed lensing signal caused by large scale structure. The derived value for $\sigma_8$ depends critically on the adopted redshift distribution. Hoekstra et al. (2002b) used galaxies with $22<R_C<24$ and a redshift distribution given by $z_s=0.302$, $\alpha=4.7$ and $\beta=1.7$, which yields a mean redshift of $\langle z\rangle=0.59$. These parameters were based on a comparison with redshift distributions determined from the Hubble Deep Fields. It is useful to examine how these assumptions compare to the RCS photometric redshift distribution for galaxies with $22<R_C<24$, as displayed in Figure~\ref{zdist}b. The best fit model, indicated by the smooth curve, has parameters $z_s=0.31$, $\alpha=3.50$ and $\beta=1.45$, implying a mean redshift of 0.65, about 10\% higher than the value used by Hoekstra et al. (2002b). It is important to note, however, that the training set lacks a large number of objects beyond $z=0.8$ and $R_C>22$.
Despite these shortcomings, the mean redshift of sources appears higher than what was used in Hoekstra et al. (2002b), thus suggesting that their value for $\sigma_8$ needs to be revised downwards. The suggested change in source redshift could reduce the value for $\sigma_8$ from Hoekstra et al. (2002b) by about $8\%$ to $\sigma_8\sim 0.8$. Unfortunately it is not possible to robustly quantify the size of the revision. We stress that without further work on photometric redshifts for faint, high redshift galaxies, it will be difficult to interpret current and, most importantly, future cosmic shear results. \begin{figure*}[!t] \begin{center} \leavevmode \hbox{% \epsfxsize=8.6cm \epsffile[15 160 575 700]{f2a.eps} \epsfxsize=8.6cm \epsffile[15 160 575 700]{f2b.eps}} \caption{\footnotesize {\it panel a:} The solid histogram shows the normalized photometric redshift distribution for the galaxies with photometric redshifts and magnitudes $R_C<24$ that are included in the weak lensing analysis. The solid smooth curve shows the best fit model redshift distribution (see text for details). {\it panel b:} Similar to panel~a, but for galaxies with $22<R_C<24$, corresponding to the range used by Hoekstra et al. (2002b). It is important to note that the lack of a relatively good training set for $z>0.6$ limits the interpretation. The dashed histogram shows the distributions weighted by the uncertainty in the shape measurement for each redshift bin. \label{zdist}} \end{center} \end{figure*} The galaxy-galaxy lensing signal examined in this paper is much less sensitive to the uncertainty in the redshift distribution of faint, distant galaxies, as most of the signal is caused by lenses at much lower redshifts. As mentioned above, to minimize uncertainties in our results further, we only use background galaxies with redshifts less than 1, and select a sample of lenses with redshift $0.2<z<0.4$. \section{Testing the photometric redshifts} Hsieh et al. (2005) present various tests of the accuracy of the photometric redshifts. Comparisons to the available spectroscopic data, as well as to other published distributions, provide a clear way to quantify the uncertainties. In this section we discuss some additional tests, based on the fact that the amplitude of the lensing signal is a well known function of the source redshift. Such a test provides a useful ``sanity'' check on the validity of the photometric redshift distribution. The azimuthally averaged tangential shear $\langle\gamma_t\rangle$ as a function of distance from the lens is a useful measure of the lensing signal (e.g., Miralda-Escud{\'e} 1991): \begin{equation} \langle\gamma_t\rangle(r)=\frac{\bar\Sigma(<r) - \bar\Sigma(r)}{\Sigma_{\rm crit}}=\bar\kappa(<r)-\bar\kappa(r), \end{equation} \noindent where $\bar\Sigma(<r)$ is the mean surface density within an aperture of radius $r$, and $\bar\Sigma(r)$ is the mean surface density on a circle of radius $r$. The convergence $\kappa$, or dimensionless surface density, is the ratio of the surface density and the critical surface density $\Sigma_{\rm crit}$, which is given by \begin{equation} \Sigma_{\rm crit}=\frac{c^2}{4\pi G}\frac{D_s}{D_l D_{ls}}, \end{equation} \noindent where $D_l$ is the angular diameter distance to the lens. $D_{s}$ and $D_{ls}$ are the angular diameter distances from the observer to the source and from the lens to the source, respectively.
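For concreteness, $\Sigma_{\rm crit}$ can be evaluated numerically for a representative lens-source pair in the adopted cosmology. This is a short sketch using astropy; the redshifts $z_l=0.32$ and $z_s=0.8$ are merely typical of the samples used here, not special values.
\begin{verbatim}
import numpy as np
from astropy import constants as const
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

# Adopted cosmology (h = 1, i.e. H0 = 100 km/s/Mpc)
cosmo = FlatLambdaCDM(H0=100.0, Om0=0.3)

z_l, z_s = 0.32, 0.8    # representative lens and source redshifts
D_l = cosmo.angular_diameter_distance(z_l)
D_s = cosmo.angular_diameter_distance(z_s)
D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)

sigma_crit = const.c**2 / (4 * np.pi * const.G) * D_s / (D_l * D_ls)
print(sigma_crit.to(u.Msun / u.pc**2))
\end{verbatim}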
It is convenient to define the parameter \begin{equation} \beta=\max[0,D_{ls}/D_s], \end{equation} \noindent which is a measure of how the amplitude of the lensing signal depends on the redshifts of the source galaxies. For instance, in the case of a singular isothermal sphere (SIS) model, the dimensionless surface density is \begin{equation} \kappa=\gamma_t=\frac{r_E}{2r}, \end{equation} \noindent where $r_E$ is the Einstein radius. Under the assumption of isotropic orbits and spherical symmetry, the Einstein radius (in radians) is related to the velocity dispersion and $\beta$ through \begin{equation} r_E=4\pi\left(\frac{\sigma}{c}\right)^2 \beta. \end{equation} To test the photometric redshifts from the RCS, we use galaxies with photometric redshifts $0.2<z<0.4$ to define a sample of lenses. We compute the ensemble averaged tangential shear around these galaxies (i.e., the galaxy-mass cross-correlation function) as a function of source redshift. Brighter galaxies are expected to be more massive, and should be given more weight. To derive the lensing signal, we assume that the velocity dispersion scales with luminosity as $\sigma\propto L_B^{0.3}$, a choice which is motivated by the observed slope of the $B$-band Tully-Fisher relation (e.g., Verheijen 2001). We select bins with a width of 0.1 in redshift, and measure the galaxy-mass cross-correlation function (e.g., see Hoekstra et al. 2004; Sheldon et al. 2004) out to 10 arcminutes. This signal arises from the combination of the clustering properties of the lenses and the underlying dark matter distribution. In the remainder of the paper, while studying the properties of dark matter halos around galaxies, we limit the analysis to smaller radii and to `isolated' lenses. However, by extending the range of the measurements in this section, the signal-to-noise ratio is increased. The signal is well described by a SIS model for this range of scales (as suggested by the reduced $\chi^2$ values for the fits). The resulting value for the Einstein radius as a function of redshift for the background galaxies is presented in Figure~\ref{re_zbg}. We find a negligible lensing signal for galaxies at the redshift of the lenses, whereas it increases for more distant sources. For a given cosmology and a pair of lens and source redshifts the value of $\beta$ can be readily computed. However, the errors in the photometric redshift determination complicate such a simple comparison between the expected signal and the results presented in Figure~\ref{re_zbg}. As was the case for the photometric redshift distribution, the redshift errors will change the signal. For instance, at low redshifts, higher redshift galaxies will scatter into this bin, thus increasing the lensing signal. At higher redshifts, lower redshift objects will scatter upwards, lowering the signal. When comparing the observed signal to the signal expected based on the adopted $\Lambda$CDM cosmology we need to account for these redshift errors. To this end, we create simulated catalogs. The first step is to compute a model lensing signal based on the observed photometric redshifts (which are taken to be exact). We then create a mock catalog by adding random errors to the redshifts (while leaving the lensing signal unchanged). These random errors are based on the observed distribution (see e.g., Fig.~\ref{dzlens}). We measure the lensing signal as a function of redshift in the mock catalog.
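The model scaling underlying this comparison, i.e. the unsmoothed dependence of the Einstein radius on source redshift, follows directly from the SIS expressions above. A sketch of the computation is given below; the fiducial $\sigma=130$ km/s is an arbitrary illustrative value, not a fitted one.
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=100.0, Om0=0.3)
z_l, sigma = 0.3, 130.0           # lens redshift, sigma in km/s
C_KMS = 299792.458                # speed of light (km/s)

def einstein_radius_arcsec(z_s):
    """r_E = 4 pi (sigma/c)^2 beta, with beta = max(0, D_ls/D_s)."""
    if z_s <= z_l:
        return 0.0
    beta = (cosmo.angular_diameter_distance_z1z2(z_l, z_s) /
            cosmo.angular_diameter_distance(z_s)).value
    beta = max(0.0, beta)
    return np.degrees(4 * np.pi * (sigma / C_KMS) ** 2 * beta) * 3600

for z_s in (0.3, 0.5, 0.7, 0.9):
    print(f"z_s = {z_s:.1f}: r_E = "
          f"{einstein_radius_arcsec(z_s):.3f} arcsec")
\end{verbatim}
Smoothing this curve with the observed photometric redshift error distribution yields the model that is actually compared to the data.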
The signal measured in the mock catalog, indicated by the solid line in Figure~\ref{re_zbg}, can be compared directly to our actual measurements, as it now includes the smoothing effect of redshift errors. Figure~\ref{re_zbg} shows that it traces the observed change in amplitude of the lensing signal very well. \begin{figure} \begin{center} \leavevmode \hbox{% \epsfxsize=8cm \epsffile[15 160 575 700]{f3.eps}} \caption{\footnotesize Best fit Einstein radius obtained from a fit to the tangential shear as a function of redshift of the background galaxies. Lenses were selected to have photometric redshifts in the range $0.2<z<0.4$. The solid line corresponds to the dependence of the lensing signal for a $\Lambda$CDM cosmology. The observed lensing signal scales with redshift as expected. \label{re_zbg}} \end{center} \end{figure} Another useful experiment is to measure the lensing signal when the lenses and sources are in the same redshift bin. We note that this procedure enhances the probability that we measure the signal for galaxies which are physically associated. If satellite galaxies tend to be aligned tangentially (radially) this would also lead to a positive (negative) signal. The results from Bernstein \& Norberg (2002), based on an analysis employing spectroscopic redshifts, have shown that intrinsic tangential alignments are negligible. Unfortunately, in our case, the much larger photometric redshift errors (compared to spectroscopic redshifts) effectively suppress this potentially interesting signal. In addition, for the RCS data, the interpretation of the signal requires a large set of spectroscopic redshifts to quantify the contributions of unassociated galaxies in each bin. Instead, we use these measurements as a test of the photometric redshifts. The results are presented in Figure~\ref{re_zbin}. Panel~a shows the results for the tangential shear, whereas panel~b shows the results when the background galaxies are rotated by 45$^\circ$ (which is a measure of systematics). In both cases, we do not observe a significant signal. The lack of a signal in the tangential shear in this case implies that the errors in photometric redshifts are relatively small. If this were not the case, and higher redshift galaxies contaminated the samples at lower redshifts, we would expect to observe a positive signal. \begin{figure} \begin{center} \leavevmode \hbox{% \epsfxsize=8cm \epsffile[15 160 575 700]{f4.eps}} \caption{\footnotesize {\it panel a}: Best fit Einstein radius obtained from a fit to the tangential shear when the lenses and sources are selected to be in the same redshift bin. In this case, no signal should be present, in agreement with the measurements. This result also indicates that intrinsic tangential alignments are negligible. {\it panel b}: Results when the background galaxies are rotated by 45 degrees (`B'-mode). Also in this case no signal is detected. \label{re_zbin}} \end{center} \end{figure} \section{Galaxy dark matter profile} One of the major advantages of weak gravitational lensing over dynamical methods is that the lensing signal can be measured out to large projected distances from the lens. However, at large radii, the contribution from a particular galaxy may be small compared to its surroundings: a simple interpretation of the measurements can only be made for `isolated' galaxies. In practice, galaxies are not isolated, which is particularly true for bright, early-type galaxies.
In their analysis of SDSS data, Guzik \& Seljak (2002) quantified the contribution from clustered galaxies using a halo-model approach. As discussed in \S5, we follow a different approach by selecting relatively isolated galaxies. Consequently, our results are not strictly valid for the galaxy population as a whole. Nevertheless, the selection procedure is well defined and can be readily implemented when comparing to numerical simulations. We limit the analysis to relatively small distances from the lens, thus ensuring that the signal is dominated by the lens itself. As a result, we need to adopt a model for the mass distribution to relate the lensing signal to the mass of the lens. Our choice is motivated by the results of cold dark matter (CDM) simulations. Collisionless cold dark matter provides a good description of the observed structures in the universe. Numerical simulations, which provide a powerful way to study the formation of structure in the universe, indicate that on large scales CDM gives rise to a particular density profile (e.g., Dubinski \& Carlberg 1991; Navarro, Frenk, \& White 1995, 1996, 1997; Moore et al. 1999). We note, however, that there are still uncertainties regarding the slope at small radii and the best analytical description of the profile (e.g., Moore et al. 1999; Diemand et al. 2004; Hayashi et al. 2004; Tasitsiomi et al. 2004a). Furthermore, there is considerable scatter from halo to halo in the simulations. Our observations, however, cannot distinguish between these various profiles, and instead we focus on the commonly used NFW profile, given by \begin{equation} \rho(r)=\frac{M_{\rm vir}}{4\pi f(c)}\frac{1}{r(r+r_s)^2}, \end{equation} \noindent where $M_{\rm vir}$ is the virial mass, which is the mass enclosed within the virial radius $r_{\rm vir}$. The virial radius is related to the `scale radius' $r_s$ through the concentration $c=r_{\rm vir}/r_s$. The function $f(c)=\ln(1+c)-c/(1+c)$. One can fit the NFW profile to the measurements with $M_{\rm vir}$ and concentration $c$ (or equivalently $r_s$) as free parameters. However, numerical simulations have shown that the average concentration depends on the halo mass and the redshift. Hoekstra et al. (2004) constrained the mass and scale radius of the NFW model using a maximum likelihood analysis of the galaxy-galaxy lensing signal, and found that the results agreed well with the predictions from simulations. We therefore adopt the results from Bullock et al. (2001), who found from simulations that \begin{equation} c=\frac{9}{1+z}\left(\frac{M_{\rm vir}}{8.12\times 10^{12} h M_\odot}\right)^{-0.14}. \end{equation} \noindent Note that individual halos in the simulations have a lognormal dispersion of approximately 0.14 around the median. For the virial mass estimates presented here, we will use this relation between mass and concentration, thus assuming we can describe the galaxy mass distribution by a single parameter. \noindent By definition, the virial mass and radius are related by \begin{equation} M_{\rm vir}=\frac{4\pi}{3} \Delta_{\rm vir}(z)\rho_{\rm bg}(z)r_{\rm vir}^3, \end{equation} \noindent where $\rho_{\rm bg}=3H_0^2\Omega_m(1+z)^3/(8\pi G)$ is the mean density at the lens redshift and the virial overdensity $\Delta_{\rm vir}\approx (18\pi^2+82\xi-39\xi^2)/\Omega(z)$, with $\xi=\Omega(z)-1$ (Bryan \& Norman 1998). For the $\Lambda$CDM cosmology considered here, $\Delta_{\rm vir}(0)=337$.
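Together with the adopted mass-concentration relation, these definitions fully specify the model halo for a given virial mass and redshift. The following sketch evaluates them numerically; the input mass of $10^{12}$\,M$_\odot$ is illustrative only.
\begin{verbatim}
import numpy as np
from astropy import constants as const
from astropy import units as u

Om = 0.3
H0 = 100.0 * u.km / u.s / u.Mpc       # adopted cosmology (h = 1)

def halo_parameters(m_vir_msun, z):
    """r_vir, c and r_s for a given virial mass, following Eqns. (8)-(9)."""
    omega_z = Om * (1 + z)**3 / (Om * (1 + z)**3 + 1.0 - Om)
    xi = omega_z - 1.0
    delta_vir = (18 * np.pi**2 + 82 * xi - 39 * xi**2) / omega_z
    rho_bg = 3 * H0**2 * Om * (1 + z)**3 / (8 * np.pi * const.G)
    r_vir = ((3 * m_vir_msun * u.Msun /
              (4 * np.pi * delta_vir * rho_bg)) ** (1 / 3)).to(u.kpc)
    c = 9.0 / (1 + z) * (m_vir_msun / 8.12e12) ** -0.14
    return r_vir, c, r_vir / c

r_vir, c, r_s = halo_parameters(1.0e12, 0.32)
print(f"r_vir = {r_vir.value:.0f} kpc, c = {c:.1f}, "
      f"r_s = {r_s.value:.0f} kpc")
\end{verbatim}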
We also note that for the adopted $\Lambda$CDM cosmology the virial mass is different from the widely used $M_{200}$. This mass is commonly defined as the mass contained within the radius $r_{200}$, where the mean mass density of the halo is equal to $200\rho_c$ (i.e., setting $\Delta=200$ and $\rho_{\rm bg}=\rho_c$ in Eqn.~9). Note, however, that other definitions for $M_{200}$ can be found in the literature as well. The expressions for the tangential shear and surface density for the NFW profile have been derived by Bartelmann (1996) and Wright \& Brainerd (2000) and we refer the interested reader to these papers for the relevant equations. \section{Results} As discussed above, we study a sample of lenses with photometric redshifts $0.2<z<0.4$ and $18<R_C<24$. This first selection yields $\sim 1.4\times 10^5$ lenses. We split this sample in a number of luminosity and color bins and determine the virial radii from an NFW model fit to the observed lensing signal. For bright galaxies the lensing signal on small scales is typically dominated by the dark matter halo associated with that galaxy. In the case of faint (low mass) galaxies, however, the signal can easily be dominated by contributions from a massive neighbor. Note that this neighbor need not be physically associated with the lens, as all matter along the line-of-sight contributes to the lensing signal. We can study the relevance of the local (projected) density by measuring the lensing signal around a sample of `faint' lenses ($10^9<L_B<5\times 10^9~h^{-2} L_{B\odot}$), as a function of the projected distance to the nearest `bright' lens ($L_B>5\times 10^9 h^{-2}L_{B\odot}$). This distance can be used as a crude measure of the density around the faint lens (i.e., the smaller the distance, the higher the density). To this end, we split this sample of `faint' lenses into subsets based on their distance to the nearest bright galaxy. We fit a singular isothermal sphere (SIS) model to the ensemble averaged lensing signal out to $2'$ ($\sim 400h^{-1}$kpc at the mean distance of the lenses) for each bin. The reduced $\chi^2$ values for the best fit are all close to unity, indicating that the SIS model provides a good fit to these observations. We found that limiting the fit to smaller radii did not change the results apart from increasing the measurement errors. Figure~\ref{re_dist} shows the derived value for the Einstein radius as a function of the distance to the nearest bright galaxy. \begin{figure} \begin{center} \leavevmode \hbox{% \epsfxsize=8cm \epsffile[15 160 575 700]{f5.eps}} \caption{\footnotesize Einstein radius for `faint' lenses as a function of projected distance to the nearest `bright' lens $r_{\rm sep}$. The faint galaxies have luminosities $10^{9}<L_B<5\times 10^9 h^{-2}\hbox{${\rm L}_{B\odot}$}$, whereas the bright galaxies have $L_B>5\times 10^9h^{-2} \hbox{${\rm L}_{B\odot}$}$.\label{re_dist}} \end{center} \end{figure} \begin{figure*}[!th] \begin{center} \leavevmode \hbox{% \epsfxsize=\hsize \epsffile[20 170 560 460]{f6.eps}} \caption{\footnotesize Tangential shear as a function of projected (physical) distance from the lens for each of the seven restframe $R$-band luminosity bins. To account for the fact that the lenses have a range in redshifts, the signal is scaled such that it corresponds to that of a lens at the average lens redshift ($z\sim 0.32$) and a source redshift of infinity. The mean restframe $R$-band luminosity for each bin is also shown in the figure in units of $10^9 h^{-2}$L$_{R\odot}$.
The strength of the lensing signal clearly increases with increasing luminosity of the lens. The dotted line indicates the best fit NFW model to the data. The tangential shear profiles for the $B$ and $V$-band are very similar and we only present the final results for the $R$ filter in Figure~\ref{ml_all}. \label{gtprof}} \end{center} \end{figure*} The results show a clear increase in lensing signal as the separation decreases, i.e. as the density increases. However, at separations larger than $\sim 30''$, the observed lensing signal appears to be independent of the density. Larger data sets are required to make more definitive statements, but these findings suggest that we can measure the properties of `isolated' faint galaxies by limiting the sample to galaxies which are more than 30 arcseconds away from a brighter galaxy. Although this is a rather strict selection for the faintest galaxies, bright galaxies can be surrounded by many faint galaxies and consequently are not truly isolated. In the remainder of this paper, we present results based on the sample of `isolated' galaxies, unless specified otherwise. This selection reduces the sample of lenses to 94,509 galaxies. \subsection{Mass-luminosity relation} We split the sample of `isolated' lens galaxies into seven luminosity bins and measure the mean tangential distortion as a function of radius out to 2 arcminutes. We fit an NFW profile to these measurements, with the virial mass as a free parameter, as described in \S4. Figure~\ref{gtprof} shows the measurements of the tangential shear as a function of projected distance from the lens for the seven $R$-band luminosity bins. The results for the $B$ and $V$ filters are very similar to the ones presented in Figure~\ref{gtprof}. To account for the fact that the lenses span a range in redshift, we have scaled the signal such that it corresponds to that of a lens at the mean lens redshift ($z\sim 0.32$) and a background galaxy at infinite redshift. In each panel in Figure~\ref{gtprof} the average restframe $R$-band luminosity for each bin is indicated (in units of $10^{9}h^{-2}$L$_{R\odot}$). The vertical scales in each of the panels in Figure~\ref{gtprof} are the same, and as the luminosity of the lenses increases we observe a clear increase in the strength of the lensing signal. The best fit NFW models for each bin are indicated by the dotted curves. Note that the current observations cannot distinguish between an NFW profile (used here) and other profiles such as the SIS model. There are more faint galaxies than bright galaxies, and the errors in the photometric redshift estimates therefore have the net effect of scattering faint galaxies into higher luminosity bins, hence biasing the mass at fixed luminosity to a lower value. To estimate the level of this bias we create mock catalogs. We assume a power-law mass-luminosity relation and compute the model lensing signal using the observed photometric redshifts of the lens and source galaxies. We analyse this `perfect' catalog and compute the virial masses as a function of luminosity. We then use the observed photometric redshift error distribution as a function of apparent magnitude (see Figure~\ref{dzlens}) to create a number of new catalogs where the random error is added to the redshift (note that the lensing signal is not changed). These catalogs are also analysed and yield the `observed' virial mass as a function of luminosity.
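A highly simplified version of this Monte Carlo test is sketched below, purely for illustration: rather than refitting the lensing signal, it averages the input masses of the galaxies that land in each observed luminosity bin. The toy luminosity function, the redshift error model ($\sigma_z=0.06$), and the power-law slope are placeholders, not the values used in the actual analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Toy luminosity function: steep power law, so faint galaxies dominate (placeholder)
lum = 1e9 * (1.0 + 29.0 * rng.power(0.3, n))   # 1e9 - 3e10 Lsun
mass = 1e11 * (lum / 1e10) ** 1.5              # assumed input mass-luminosity relation

# Photometric redshift errors perturb the inferred distance, hence the luminosity
z_true = rng.uniform(0.2, 0.4, n)
z_obs = z_true + rng.normal(0.0, 0.06, n)      # placeholder error model
lum_obs = lum * (z_obs / z_true) ** 2          # L ~ d^2; crude low-z approximation

bins = np.logspace(9.0, np.log10(3e10), 8)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (lum_obs >= lo) & (lum_obs < hi)
    m_relation = 1e11 * (np.sqrt(lo * hi) / 1e10) ** 1.5  # relation at bin center
    ratio = m_relation / mass[sel].mean()      # > 1: observed mass biased low
    print(f"L = {np.sqrt(lo * hi):.2e}: correction factor = {ratio:.2f}")
\end{verbatim}
Such a toy model reproduces the sense of the bias at the bright end; the corrections shown in Figure~\ref{corfac} are derived from the full lensing analysis.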
As expected, the resulting masses are smaller than the input masses and the change in mass depends on the luminosity. The results are presented in Figure~\ref{corfac} for the $B$, $V$, and $R$-band. Different choices for the mass-luminosity relation (within reasonable bounds) yield very similar curves. To infer the correct virial mass, we scale the observed virial masses by these curves. \begin{figure} \begin{center} \leavevmode \hbox{% \epsfxsize=8.5cm \epsffile{f7.eps}} \caption{\footnotesize The ratio of the input virial mass and the observed mass after adding photometric redshift errors. The dependence with luminosity is dominated by how the redshift errors depend on brightness. The resulting curves depend only very weakly on the input mass-luminosity relation. The corrections are somewhat different for the various restframe bands. The solid line with filled circles corresponds to the $B$-band results, the dashed line with solid squares is for the $V$-band and the dotted line with stars is for the $R$-band data. To infer the correct virial mass, we scale the observed virial masses by these curves. \label{corfac}} \end{center} \end{figure} \begin{table*} \begin{center} \caption{Best fit virial masses\label{tab_vir}} \begin{tabular}{cc|cc|cc} \hline \hline $L_B$ & $M_{\rm vir}$ & $L_V$ & $M_{\rm vir}$ & $L_R$ & $M_{\rm vir}$\\ $[10^9 h^{-2}{\rm L}_\odot] $ & $[10^{11}h^{-1}{\rm M}_\odot]$ & $[10^9 h^{-2}{\rm L}_\odot] $ & $[10^{11}h^{-1}{\rm M}_\odot]$ & $[10^9 h^{-2}{\rm L}_\odot] $ & $[10^{11}h^{-1}{\rm M}_\odot]$ \\ \hline 1.6 & $0.66^{+0.41}_{-0.43}$ & 1.6 & $0.48^{+0.33}_{-0.35}$ & 1.6 & $0.10^{+0.40}_{-0.30}$ \\ 3.5 & $0.86^{+0.42}_{-0.48}$ & 3.4 & $1.05^{+0.69}_{-0.45}$ & 3.4 & $1.24^{+0.65}_{-0.57}$ \\ 6.1 & $1.81^{+0.84}_{-0.75}$ & 6.1 & $3.1^{+1.2}_{-1.0}$ & 6.1 & $1.62^{+0.88}_{-0.84}$ \\ 8.6 & $6.0^{+1.6}_{-1.6}$ & 8.6 & $2.6^{+1.3}_{-1.1}$ & 8.6 & $3.1^{+1.4}_{-1.3}$ \\ 11.7 & $7.7^{+2.0}_{-1.9}$ & 11.9 & $6.6^{+1.9}_{-1.8}$ & 12.0 & $5.0^{+1.9}_{-1.5}$ \\ 16.9 & $16.9^{+5.5}_{-4.9}$ & 17.0 & $20.1^{+5.3}_{-4.9}$ & 16.9 & $11.5^{+4.0}_{-3.4}$ \\ 24.0 & $18.8^{+7.8}_{-6.3}$ & 24.5 & $17.2^{+5.6}_{-4.9}$ & 24.9 & $23.3^{+5.6}_{-5.1}$ \\ \hline \end{tabular} \end{center} \tablecomments{Best fit virial masses as a function of luminosity in the restframe $B$, $V$ and $R$ band. The corresponding values for the concentration $c$ can be computed from Eqn.~8 using a redshift of $z=0.32$ for the lenses. The listed errors indicate the 68\% confidence limits.} \end{table*} At the low luminosity end the corrections are large because of the relatively large errors in redshift. At the bright end, however, the redshift errors are smaller, but the number of bright galaxies is decreasing rapidly (because of the shape of the luminosity function), and a relatively larger fraction of intrinsically lower mass systems ends up in the high luminosity bin, resulting in an increase of the correction factor. The corrections are substantial at both ends, but the origin is well understood and the associated uncertainty is small. The corrected virial masses as a function of luminosity in the $B$, $V$, and $R$-band respectively, are presented in the upper panels of Figure~\ref{ml_all} and the best fit virial masses are listed in Table~\ref{tab_vir}. In all cases we see a clear increase of the virial mass with luminosity.
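Before turning to the fit itself, the trend in Table~\ref{tab_vir} can be quantified with a simple weighted fit in log space. The following sketch is an illustration only: it uses the $B$-band entries of the table with symmetrized errors, and the published fit described below differs in detail, so the recovered parameters are indicative rather than definitive.
\begin{verbatim}
import numpy as np

# B-band values from Table 1: L in 1e9 h^-2 Lsun, M_vir in 1e11 h^-1 Msun
L = np.array([1.6, 3.5, 6.1, 8.6, 11.7, 16.9, 24.0])
M = np.array([0.66, 0.86, 1.81, 6.0, 7.7, 16.9, 18.8])
dM = np.array([0.42, 0.45, 0.80, 1.6, 1.95, 5.2, 7.05])  # symmetrized errors

# Weighted least squares for log10 M = log10 M_fid + alpha * log10(L / 10)
x = np.log10(L / 10.0)
y = np.log10(M)
w = (M / dM) ** 2   # inverse variance of log10 M (constant factors drop out)
A = np.vstack([np.ones_like(x), x]).T
coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
print(f"M_fid ~ {10 ** coef[0]:.1f}e11 h^-1 Msun, alpha ~ {coef[1]:.2f}")
\end{verbatim}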
The results suggest a power-law relation between the luminosity and the virial mass, although this assumption might not hold at the low luminosity end (e.g., see lower panels of Fig.~\ref{ml_all}). We therefore fit \begin{equation} M=M_{\rm fid}\left(\frac{L}{10^{10} h^{-2} \hbox{${\rm L}_\odot$}}\right)^\alpha, \end{equation} \noindent to the measurements, where $M_{\rm fid}$ is the virial mass of a fiducial galaxy of luminosity $L=10^{10}h^{-2}$L$_{x\odot}$, where $x$ indicates the relevant filter. The best fit in each filter is indicated by the dashed line in Figure~\ref{ml_all}. The resulting best fit parameters for this mass-luminosity relation are listed in Table~\ref{tab_mass}. We do not observe a change in the slope $\alpha$ for the different filters, but find that $M_{\rm fid}$ is decreasing for redder passbands. Tasitsiomi et al. (2004b) studied the weak lensing mass-luminosity relation from their numerical simulations. This study shows that the interpretation of the mass-luminosity relation presented in Figure~\ref{ml_all} is complicated by the fact that the halos of galaxies of a given luminosity show a scatter in their virial masses. For the model adopted in Tasitsiomi et al. (2004b), the best fit virial mass gives a value between the median and mean mass. The amplitude of this bias depends on the assumed intrinsic scatter in the mass-luminosity relation, which requires further study. The Tasitsiomi et al. (2004b) results imply that our results underestimate the actual mean virial mass, but that the slope of the mass-luminosity relation is not changed. Guzik \& Seljak (2002) measured the mass-luminosity relation using data from the SDSS. The average luminosity of their sample of lenses is higher than that studied here. Also, the analysis by Guzik \& Seljak (2002) differs from ours, as they model the contribution from other halos. Using the halo model approach they compute the contributions of other halos to the lensing signal around a galaxy, including that of smooth group/cluster halos. In this paper, we have instead minimized such contributions to the galaxy-galaxy lensing signal by selecting `isolated' galaxies and limiting the analysis to the lensing signal within 400$h^{-1}$kpc from the lens. The results presented in Figures~\ref{re_dist} and~\ref{gtprof} suggest that this approach has worked well. \begin{table} \begin{center} \caption{Best fit parameters of the mass-luminosity relation \label{tab_mass}} \begin{tabular}{lcc} \hline \hline filter & $M_{\rm fid}$ & $\alpha$ \\ & [$10^{11}h^{-1}$M$_\odot$] & \\ \hline B & $9.9^{+1.5}_{-1.3}$ & $1.5\pm0.3$ \\ V & $9.3^{+1.4}_{-1.3}$ & $1.5\pm0.2$ \\ R & $7.5^{+1.2}_{-1.1}$ & $1.6\pm0.2$ \\ \hline \end{tabular} \end{center} \tablecomments{Column~2 lists the virial mass for a galaxy of luminosity $10^{10}h^{-2}$L$_\odot$ in the indicated filter. Column~3 lists the best fit power-law slope of the mass-luminosity relation. The listed errors indicate the 68\% confidence limits.} \end{table} \begin{figure*}[!t] \begin{center} \leavevmode \hbox{% \epsfxsize=\hsize \epsffile[50 334 570 685]{f8.eps}} \caption{\footnotesize {\it upper panels}: Virial mass as a function of the rest-frame luminosity in the indicated filter. The dashed line indicates the best fit power-law model for the mass-luminosity relation, with the relevant parameters listed in Table~\ref{tab_mass}. {\it lower panels}: Observed rest-frame virial mass-to-light ratios.
The results suggest a rise in the mass-to-light ratio with increasing luminosity, albeit with low significance. The dotted line in the panel showing the $B$-band mass-to-light ratio corresponds to model~A from van den Bosch et al. (2003). It matches the observed dependence of the mass-to-light ratio on luminosity, but with an offset towards higher values. \label{ml_all}} \end{center} \end{figure*} Guzik \& Seljak (2002) present results for two different cases of group halo contributions. Depending on the assumed relative importance of such a halo, the derived mass changes only slightly. The assumptions for the halo contribution do affect the inferred slopes somewhat, although the change is small for the redder filters. Minimizing the halo contribution yields a power-law slope of $\sim 1.5-1.7$, in excellent agreement with the findings presented here. However, when maximizing the effect the slope decreases to $\sim 1.3-1.4$ in the red filters and to $1.2\pm0.2$ in the $g'$-band. It should be noted that the latter scenario is rather extreme, given that, with the exception of the central galaxy, the relative importance of the group halo is expected to diminish with increasing luminosity (i.e., mass) of the lens. In addition, the different range in luminosities probed in the two studies would most likely affect the results in the bluest filters. Benson et al. (2000) present predictions for the $B$-band mass-luminosity relation based on semi-analytic models of galaxy formation. In the luminosity range probed here, they obtain a power-law slope of $\sim 1.6$. Van den Bosch et al. (2003) used the conditional luminosity functions computed from the 2dF galaxy redshift survey to constrain the variation of the mass-to-light ratio as a function of mass. Van den Bosch et al. consider a number of models, which provide similar mass-luminosity relations for the range of masses probed in this paper. We consider their model A, which is obtained by fitting the data, without constraining the model parameters. For this model the mass-luminosity relation is close to a power law with a slope of 1.3. Hence, both model predictions are in good agreement with our findings and the results of Guzik \& Seljak (2002). The agreement in the slope of the mass-luminosity relation strengthens the conclusion by Guzik \& Seljak (2002) that rotation curves must decline substantially from the optical to the virial radius, in order to reconcile our results with the observed scaling relations at small radii, such as the Tully-Fisher relation. A decrease in rotation velocity is also predicted by semi-analytic models of galaxy formation (e.g., Kauffmann et al. 1999; Benson et al. 2000). Guzik \& Seljak (2002) define the virial mass in terms of an overdensity of 200 times the critical density, which is different from ours. They find a mass $M_{200}=(9.3\pm1.6)\times 10^{11}h^{-1}\hbox{${\rm M}_\odot$}$ for a galaxy with a luminosity of $1.1\times 10^{10}h^{-2}\,{\rm L}_{{\rm g'}\odot}$ at a redshift of $z\sim 0.16$. We convert our mass estimate to their definition, and use the transformations between filters from Fukugita et al. (1996), to relate our results to those of Guzik \& Seljak (2002). Furthermore, we assume that the fiducial galaxy is about 10\% brighter at $z=0.32$, compared to $z=0.16$.
Under these assumptions, our results translate to a mass $M_{200}=(11.7\pm1.7)\times 10^{11}h^{-1} \hbox{${\rm M}_\odot$}$ for a galaxy with a luminosity of $1.1\times 10^{10}h^{-2}\,{\rm L}_{{\rm g'}\odot}$, in agreement with the findings of Guzik \& Seljak (2002) at the $1\sigma$ level. The lower panels in Figure~\ref{ml_all} show the inferred rest-frame mass-to-light ratios as a function of luminosity for the different filters. The results suggest a rise in mass-to-light ratio for galaxies more luminous than $\sim 10^{10}h^{-2}\hbox{${\rm L}_\odot$}$ and little variation for fainter galaxies. This suggests that a power law is not sufficient to describe the mass-luminosity relation over the range probed here. However, a larger data set is needed to make a firm statement. The dotted curve in Figure~\ref{ml_all} shows the values corresponding to model~A from van den Bosch et al. (2003), converted to the $B$-band and our definition of the virial mass. The predicted mass-to-light ratio is significantly higher than our measurements. This could point to a systematic underestimate of the virial masses from lensing due to scatter in the mass-luminosity relation, as suggested by Tasitsiomi et al. (2004b). Nevertheless, the model predictions are in qualitative agreement with the results presented in Figure~\ref{ml_all}, in the sense that they predict a rise for bright galaxies and a small increase in mass-to-light ratio towards lower luminosities. \subsection{Star formation efficiency} In the previous section we studied the dependence of the mass-to-light ratio on luminosity. The results suggest an increase for luminous galaxies. A simple interpretation of these results, however, is complicated because the mix of galaxy type is also a function of luminosity. The more luminous galaxies are likely to be early-type galaxies rather than spiral galaxies. Although we have not classified our sample of lenses, we can use the $B-V$ color as a fair indicator of galaxy type (e.g., Roberts \& Haynes 1994). Furthermore, the color can be used to estimate the mean stellar mass-to-light ratio, which is a strong function of color (e.g., Bell \& de Jong 2001). Comparison of the virial and stellar mass-to-light ratios then enables us to estimate the relative fraction of the mass that has been transformed into stars. Figure~\ref{mlcol}a shows the inferred $B$-band mass-to-light ratio as a function of restframe $B-V$ color; it reveals a clear increase in mass-to-light ratio for early-type galaxies, which have colors redder than $\sim 0.8$. It is useful to note that our selection of lenses allows for the brightest galaxies in the centres of denser regions to be included. Our simulations show that the inferred masses are not biased, but these tests do not include the smooth contributions from group halos. The resulting mass-to-light ratios are comparable to those determined for rich clusters of galaxies (e.g., Hoekstra et al. 2002d) and massive galaxy groups (Parker et al. 2005). For galaxies with $B-V<0.8$ the mass-to-light ratio does not show a clear change with color and we find an average mass-to-light ratio of $M/L_B=32\pm9\hbox{$h{\rm M}_\odot/{\rm L}_{{\rm B}\odot}$}$. Figure~\ref{mlcol}b shows the results for the $R$-band mass-to-light ratio. For galaxies with $B-V<0.8$ we obtain an average value of $M/L_R=34\pm9\hbox{$h{\rm M}_\odot/{\rm L}_{{\rm R}\odot}$}$.
The increase in mass-to-light ratio for red galaxies is smaller in the $R$-band, which is expected since the stellar mass-to-light ratios also vary less. As discussed in the previous section, the measurements presented in Figure~\ref{mlcol} are for a sample of galaxies with an average redshift of $z=0.32$. Note that, to compare these results to measurements at lower redshifts, one needs to account for evolution in both the colors and the luminosities of the lens galaxies. To this end we use population synthesis models (e.g., Fioc \& Rocca-Volmerange 1997) which indicate that the galaxies become redder as they age, and that the reddest galaxies dim somewhat faster than the blue galaxies. As mentioned above, it is interesting to estimate the fraction of mass in stars. To do so, we need to relate the luminosity to the stellar mass. Direct measurements of the stellar mass-to-light ratios are difficult, although rotation curves can provide useful limits on the maximum allowed value. Instead we rely on galaxy evolution models, which use evolutionary tracks and assumptions about the initial mass function (IMF), the star formation history and feedback, to compute stellar populations as a function of age. There are many obvious difficulties in such work, given the complicated history of galaxies and the uncertainty in the IMF. The latter is of particular importance and gives rise to a relatively large uncertainty in the estimates as we will discuss below. Nevertheless, the dependence of the stellar mass-to-light ratio on color is fairly well constrained. Bell \& de Jong (2001) used a suite of galaxy evolution models to show that one expects substantial variation in stellar mass-to-light ratio as a function of galaxy color. Although their work focussed on the properties of spiral galaxies, comparison with results for early-type galaxies suggests that we can extend their calculation to these galaxies as well. Bell \& de Jong (2001) find the models are well described by a linear relation between $\log M/L$ and $B-V$ color, and provide tables with the slope and intercept of these relations. Their results, however, are for $z=0$; we have converted them to the mean redshift of our lenses $(z=0.32)$ using predictions based on the PEGASE code (Fioc \& Rocca-Volmerange 1997), provided by D. LeBorgne. Compared to $z=0$, the galaxies are slightly bluer, with stellar mass-to-light ratios $\sim 25\%$ lower at $z=0.32$. \begin{figure} \begin{center} \leavevmode \hbox{% \epsfxsize=8cm \epsffile[15 160 575 700]{f9.eps}} \caption{\footnotesize (a) Rest-frame $B$-band virial mass-to-light ratio as a function of rest-frame $(B-V)$ color. (b) The same, but now for the rest-frame $R$-band. In this case the change for red galaxies is smaller. The filled circles are the measurements for our sample of lenses, which have a mean redshift $z=0.32$. \label{mlcol}} \end{center} \end{figure} The resulting stellar mass-to-light ratio is most sensitive to the assumed IMF and we will consider two `extreme' cases, such that our results should bracket the real properties of galaxies. The first IMF we consider is the one proposed by Salpeter (1955). The $z=0.32$ stellar mass-to-light ratios based on the PEGASE code by Fioc \& Rocca-Volmerange (1997) are presented in the bottom left panel of Figure~\ref{fstar} for the $B$ (solid line) and $R$-band (dashed line). As noted by Bell \& de Jong (2001), a standard Salpeter (1955) IMF results in mass-to-light ratios that are too high to fit rotation curves of spiral galaxies.
Hence, this model can be considered extreme in the sense that it provides a high estimate for the mass in stars. Instead, Bell \& de Jong (2001) propose a scaled Salpeter IMF, which is equivalent to reducing the number of low mass stars (which contribute to the mass, but not to the luminosity). We use the parameters from their Table~1. The results for this IMF, which fits the rotation curve data better, are shown in the bottom right panel of Figure~\ref{fstar}. We use these model stellar mass-to-light ratios to calculate the ratio $M_{\rm vir}/M_*$ as a function of color in both $B$ and $R$ band. The results are also presented in Figure~\ref{fstar}. For a given model, the results between the two filters agree very well, and the average ratios are listed in Table~\ref{tab_star}, for a Hubble parameter of $H_0=71$ km/s/Mpc. However, the two models yield significantly different ratios. \begin{figure} \begin{center} \leavevmode \hbox{% \epsfxsize=9cm \epsffile{f10.eps}} \caption{\footnotesize Lower panels show the stellar mass-to-light ratios in the $B$-band (solid lines) and $R$-band (dashed lines) for the results from PEGASE models using a Salpeter IMF and $Z=0.02$ (left) and a scaled Salpeter IMF (right) from Table~1 from Bell \& de Jong (2001). The mass-to-light ratios have been evolved to $z=0.32$, which corresponds to the mean redshift of our lens sample. The upper panels show the resulting ratios of virial mass and stellar mass for the $B$ and $R$-band data. For a given IMF, the results obtained in the different filters agree well, but it is clear that the mean values (indicated by the shaded areas) depend strongly on the adopted IMF. \label{fstar}} \end{center} \end{figure} \begin{table*} \begin{center} \caption{Stellar mass and baryon fractions\label{tab_star}} \begin{tabular}{llcclcc} \hline \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{PEGASE} & ~~ & \multicolumn{2}{c}{SCALED}\\ & & $B$ & $R$ & & $B$ & $R$ \\ \hline all & $M_{\rm vir}/M_*$ & $14\pm2$ & $15\pm2$ & & $27\pm4$ & $28\pm4$ \\ & $f_{\rm bar\rightarrow *}$ & $0.41^{+0.07}_{-0.05}$ & $0.39^{+0.06}_{-0.05}$ & & $0.22^{+0.04}_{-0.03}$ & $0.21^{+0.03}_{-0.03}$ \\ & $f^{\rm gal}_{\rm bar}$ & $0.070^{+0.012}_{-0.009}$ & $0.065^{+0.010}_{-0.008}$ & & $0.037^{+0.005}_{-0.004}$ & $0.035^{+0.005}_{-0.004}$ \\ & & & & & & \\ $B-V<0.8$ & $M_{\rm vir}/M_*$ & $9\pm2$ & $10\pm3$ & & $17\pm5$ & $18\pm5$ \\ & $f_{\rm bar\rightarrow *}$ & $0.65^{+0.20}_{-0.14}$ & $0.60^{+0.20}_{-0.12}$ & & $0.34^{+0.11}_{-0.08}$ & $0.32^{+0.12}_{-0.07}$ \\ & $f^{\rm gal}_{\rm bar}$ & $0.11^{+0.03}_{-0.02}$ & $0.10^{+0.03}_{-0.02}$ & & $0.057^{+0.018}_{-0.013}$ & $0.055^{+0.021}_{-0.011}$ \\ & & & & & & \\ $B-V>0.8$ & $M_{\rm vir}/M_*$ & $26\pm4$ & $28\pm4$ & & $42\pm6$ & $45\pm6$ \\ & $f_{\rm bar\rightarrow *}$ & $0.22^{+0.03}_{-0.03}$ & $0.21^{+0.03}_{-0.03}$ & & $0.14^{+0.02}_{-0.02}$ & $0.13^{+0.02}_{-0.02}$ \\ & $f^{\rm gal}_{\rm bar}$ & $0.038^{+0.006}_{-0.005}$ & $0.036^{+0.005}_{-0.004}$ & & $0.024^{+0.004}_{-0.003}$ & $0.022^{+0.004}_{-0.003}$ \\ & & & & & & \\ \hline \end{tabular} \end{center} \tablecomments{Note: Results for the PEGASE model using a standard Salpeter IMF and scaled Salpeter IMF from Bell \& de Jong (2001). These models have been evolved to a redshift of $z=0.32$ to allow for a direct comparison with the measurements. For different color selections, the rows list respectively, the ratio of virial mass over stellar mass, the implied fraction of baryons transformed into stars and the total visible baryon fraction in galaxies.
Note that the results for the $B$ and $R$-band are not independent. We have adopted a Hubble constant of $H_0=71$ km/s/Mpc and a universal baryon fraction of $\Omega_b/\Omega_m=0.17$ (e.g., Spergel et al. 2003).} \end{table*} Observations of the cosmic microwave background (e.g., Spergel et al. 2003) have yielded accurate measurements of the baryon fraction in the universe. Based on WMAP observations, Spergel et al. (2003) obtained $\Omega_b h^2=0.024\pm0.001$ and $\Omega_m h^2=0.14\pm 0.02$. If we assume that baryons do not escape the dark matter overdensity they are associated with, the ratio of mass in baryons to the total mass of the halo is $M_{\rm bar}/M_{\rm vir}=\Omega_b/\Omega_m= 0.17\pm0.03$. In the following, we will also assume $H_0=71$ km/s/Mpc, which is the currently favoured value. For the PEGASE Salpeter model, this implies that the fraction of the mass in stars is $0.070\pm0.011$ (average of the $B$ and $R$ value), whereas the scaled Salpeter IMF yields a lower value of $0.037\pm0.005$. Comparison with the value of $\Omega_b/\Omega_m$ from CMB measurements suggests that only $\sim 40\%$ and $\sim 22\%$ of the baryons are converted into stars for the standard and scaled Salpeter IMFs, respectively. The actual results for the two filters considered here are indicated separately in Table~\ref{tab_star} by $f_{{\rm bar}\rightarrow *}$. Table~\ref{tab_star} also lists the average results when we consider blue and red galaxies separately. The implied star formation efficiencies for early-type galaxies are low. We note that similar efficiencies have been inferred for galaxy clusters (e.g., Lin, Mohr \& Stanford 2003). Interestingly, our results imply that late-type galaxies convert a $\sim 2$ times larger fraction of baryons into stars. This result is robust, as it does not depend much on the adopted IMF. Guzik \& Seljak (2002) also found a factor of $\sim 2$ difference in star formation efficiency between early and late-type galaxies, in good agreement with our findings. Hence, these results provide very important, direct observational constraints on the relative star formation efficiency during galaxy formation for different galaxy types. These findings suggest that the mechanism for the formation of early-type galaxies is somehow more efficient in removing gas compared to late-type galaxies. Ram pressure stripping might be more prevalent, given that early-type galaxies are typically found in high density regions, or they might form while developing strong winds that blow out most of the baryons. Irrespective of the process responsible for ejecting baryons, the resulting galaxy will always have a stellar fraction which is greater than or equal to the fraction of stars in its progenitors. If we consider the situation where early-type galaxies form through mergers, it is clear that not all early-type galaxies can be the result of merging the late-type galaxies studied in this paper. Ejecting $\sim 60\%$ of the stars during the merger process might seem an option, but this is hard to envision without removing a similar fraction of the dark matter halo. Hence, the progenitors of early-type galaxies must have had a low fraction of their mass in stars. This could be achieved if early-type galaxies (or their progenitors) formed early on without forming new stars at later times (because they lost their gas) and if late-type galaxies sustained their star formation for a much longer time, thus building up a larger fraction of mass in stars.
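For completeness, the efficiencies in Table~\ref{tab_star} follow from simple ratios; the short sketch below reproduces the arithmetic for the $B$-band values of $M_{\rm vir}/M_*$ for the full sample, assuming $\Omega_b/\Omega_m=0.17$. It is a bookkeeping illustration only.
\begin{verbatim}
# Fraction of baryons converted into stars: f = (M_* / M_vir) / (Omega_b / Omega_m)
f_baryon = 0.17  # universal baryon fraction (Spergel et al. 2003)

ratios = {"Salpeter (PEGASE)": 14.0, "scaled Salpeter": 27.0}  # M_vir/M_*, B-band
for imf, mvir_over_mstar in ratios.items():
    f_star = 1.0 / mvir_over_mstar   # stellar mass fraction of the halo
    f_conv = f_star / f_baryon       # fraction of available baryons now in stars
    print(f"{imf}: f_star = {f_star:.3f}, f_bar->* = {f_conv:.2f}")
\end{verbatim}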
Recent estimates of the star formation rates of high redshift galaxies suggest a qualitatively similar picture, in which early-type galaxies formed the bulk of their stars very early on with a sharp drop in star formation rates at $z\sim 2$, and less massive (late-type) galaxies continue to form most of their stars at a later time and over a much longer period of time (e.g., McCarthy et al. 2004; Juneau et al. 2005). \subsection{Visible baryon fraction} In addition to stars, galaxies contain some gas, which needs to be included if we are to do a full accounting of the visible baryon contents of galaxies. Although the amount of molecular hydrogen is uncertain, the amount of neutral hydrogen is relatively well determined from 21cm line studies. The relative amount of HI gas is a function of galaxy type, with late-type galaxies being more gas rich. We use the results from Roberts \& Haynes (1994) for the $M_{\rm HI}/L_B$ ratio to estimate the amount of mass in gas (correcting the hydrogen mass to account for the primordial helium abundance). The inclusion of gas slightly raises the mass in detected baryons for the bluest galaxies, but this component is negligible for the red galaxies. Adding estimates for the amount of molecular hydrogen does not change the numbers either. The resulting fraction of the mass in baryons in galaxies, $f^{\rm gal}_{\rm bar}$, is listed in Table~\ref{tab_star} as well. Only for the blue galaxies, under the assumption of a standard Salpeter (1955) IMF, is the baryon fraction marginally consistent with the value determined from observations of the CMB (Spergel et al. 2003). However, as discussed earlier, the results for this model should be considered upper limits to the baryon fraction, given that the stellar mass-to-light ratios are too high to fit rotation curves (e.g., Bell \& de Jong 2001). The results for the scaled IMF of Bell \& de Jong (2001) are probably more representative of the actual baryon fractions in galaxies, thus implying that a significant fraction of the gas must have been lost. \section{Conclusions} We have measured the weak lensing signal as a function of restframe $B$, $V$, and $R$-band luminosity for a sample of `isolated' galaxies, with photometric redshifts $0.2<z<0.4$. This selection of relatively isolated galaxies minimizes the contribution of group/cluster halos and nearby bright galaxies. The photometric redshifts were derived by Hsieh et al. (2005) using $BVR_Cz'$ photometry from the Red-Sequence Cluster Survey. To add to the extensive study described in Hsieh et al. (2005), we subjected the photometric redshifts to tests that are unique to weak lensing. These results showed that the lensing signal around a sample of foreground galaxies scales with source redshift as expected. The photometric redshift distribution determined by Hsieh et al. (2005) suggests that the mean redshift of galaxies used in the measurement of the lensing signal by large scale structure (Hoekstra et al. 2002a; 2002b) is somewhat higher than previously assumed. If correct, this would imply a somewhat lower value for the normalization of the matter power spectrum, $\sigma_8$, compared to the published results. The difference is expected to be less than $\sim 10\%$, but with the current data we cannot reliably quantify the size of the change. Virial masses were determined by fitting an NFW model to the tangential shear profile.
Note that intrinsic scatter in the mass-luminosity relation will result in an underestimate of the mean virial mass for a galaxy of a given luminosity, as suggested by Tasitsiomi et al. (2004b). The magnitude of this effect depends on the assumed scatter, and we have ignored this in our analysis. We found that the virial mass as a function of luminosity is well described by a power-law with a slope of $\sim 1.5$, with similar slopes for the three filters considered here. This result agrees with other observational studies (Guzik \& Seljak, 2002) and predictions from semi-analytic models of galaxy formation (e.g., Kauffmann et al. 1999; Benson et al. 2000; van den Bosch et al. 2003). For a galaxy with a fiducial luminosity of $10^{10}h^{-2}$L$_{B\odot}$ we obtained a mass $M_{\rm vir}=9.9^{+1.5}_{-1.3}\times 10^{11}h^{-1}$M$_\odot$. Converting this result to match the filter and definition for the mass used by Guzik \& Seljak (2002) yields a mass of $M_{200}=(11.7\pm1.7)\times 10^{11}h^{-1} \hbox{${\rm M}_\odot$}$ for a galaxy with a luminosity of $1.1\times 10^{10}h^{-2}\,{\rm L}_{{\rm g'}\odot}$, in agreement with Guzik \& Seljak (2002), who found $M_{200}=(9.3\pm1.6)\times 10^{11}h^{-1}\hbox{${\rm M}_\odot$}$. We examined the efficiency with which baryons are converted into stars. To do so, we used the restframe $B-V$ color as a measure of the mean stellar mass-to-light ratio. The color also provides a crude indicator of galaxy type (e.g., Roberts \& Haynes 1994). We considered a standard and a scaled Salpeter IMF (see Bell \& de Jong 2001). The latter is more realistic, whereas the former yields stellar mass-to-light ratios that are too high to fit rotation curves of spiral galaxies. Irrespective of the adopted IMF, we found that the stellar mass fraction is about a factor of two lower for early-type galaxies, as compared to late-type galaxies. Including the fraction of baryons in gas only increases the fraction of observed baryons slightly. Hence, our results suggest that galaxy formation is very inefficient in turning baryons into stars and in retaining baryons. These results provide important, direct observational constraints for models of galaxy formation. Under the assumption that the scaled Salpeter IMF is correct, our results imply that late-type galaxies convert $\sim 33$\% of baryons into stars. Early-type galaxies do much worse, with an efficiency of $\sim 14$\%. This implies that the progenitors of early-type galaxies have a low fraction of their mass in stars. A possible explanation of this result is that early-type galaxies formed early on and stopped forming new stars, because they lost most of their baryons (e.g., through winds or ram pressure stripping). If late-type galaxies, on the other hand, continued to form stars this would lead to a higher stellar mass fraction. Such a scenario is, at least qualitatively, in agreement with recent estimates of the star formation rates of high redshift galaxies (e.g., McCarthy et al. 2004; Juneau et al. 2005).
\section{Introduction} \label{sec:intro} In recent years there has been great interest in the study of absorption effects on transport properties of classically chaotic cavities \cite{Doron1990,Lewenkopf1992,Brouwer1997,Kogan2000,Beenakker2001,Schanze2001, Schafer2003,Savin2003,Mendez-Sanchez2003,Fyodorov2003,Fyodorov2004,Savin2004, FyodorovSavin2004,Hemmady2004,Schanze2005,Kuhl2005,Savin2005,MMM2005} (for a review see Ref.~\cite{Fyorev}). This is due to the fact that for experiments in microwave cavities~\cite{Richter,Stoeckmann}, elastic resonators~\cite{Schaadt} and elastic media~\cite{Morales2001} absorption is always present. Although the external parameters are particularly easy to control, absorption, due to power loss in the volume of the device used in the experiments, is an ingredient that has to be taken into account in the verification of the random matrix theory (RMT) predictions. In a microwave experiment of a ballistic chaotic cavity connected to a waveguide supporting one propagating mode, Doron {\it et al}~\cite{Doron1990} studied the effect of absorption on the $1\times 1$ sub-unitary scattering matrix $S$, parametrized as \begin{equation} S=\sqrt{R}\, e^{i\theta}, \label{S11} \end{equation} where $R$ is the reflection coefficient and $\theta$ is twice the phase shift. The experimental results were explained by Lewenkopf {\it et al.}~\cite{Lewenkopf1992} by simulating the absorption in terms of $N_p$ equivalent ``parasitic channels", not directly accessible to experiment, each one having an imperfect coupling to the cavity described by the transmission coefficient $T_p$. A simple model to describe chaotic scattering including absorption was proposed by Kogan {\it et al.}~\cite{Kogan2000}. It describes the system through a sub-unitary scattering matrix $S$, whose statistical distribution satisfies a maximum information-entropy criterion. Unfortunately, the model turns out to be valid only in the strong-absorption limit and for $R\ll 1$. For the $1\times 1$ $S$-matrix of Eq.~(\ref{S11}), it was shown that in this limit $\theta$ is uniformly distributed between 0 and $2\pi$, while $R$ satisfies Rayleigh's distribution \begin{equation} P_{\beta}(R) = \alpha e^{-\alpha R}; \qquad R \ll 1, \hbox{ and } \alpha \gg 1, \label{Rayleigh} \end{equation} where $\beta$ denotes the universality class of $S$ introduced by Dyson~\cite{Dyson}: $\beta=1$ when time reversal invariance (TRI) is present (also called the {\it orthogonal} case), $\beta=2$ when TRI is broken ({\it unitary} case) and $\beta=4$ corresponds to the symplectic case. Here, $\alpha=\gamma\beta/2$, where $\gamma=2\pi/(\tau_a\Delta)$ is the ratio of the mean dwell time inside the cavity, $2\pi/\Delta$ ($\Delta$ being the mean level spacing), to the absorption time $\tau_a$. This ratio is a measure of the absorption strength. Eq.~(\ref{Rayleigh}) is valid for $\gamma\gg 1$ and for $R \ll 1$ as we shall see below. The weak absorption limit ($\gamma\ll 1$) of $P_{\beta}(R)$ was calculated by Beenakker and Brouwer~\cite{Beenakker2001}, by relating $R$ to the time-delay in a chaotic cavity which is distributed according to the Laguerre ensemble. The distribution of the reflection coefficient in this case is \begin{equation} P_{\beta}(R) = \frac{\alpha^{1+\beta/2}}{\Gamma(1+\beta/2)} \frac{e^{-\alpha/(1-R)}}{(1-R)^{2+\beta/2}}; \qquad \alpha\ll 1.
\label{Laguerre} \end{equation} In the whole range of $\gamma$, $P_{\beta}(R)$ was explicitly obtained for $\beta=2$~\cite{Beenakker2001}: \begin{equation} P_2(R) = \frac{e^{-\gamma/(1-R)}}{(1-R)^3} \left[ \gamma (e^{\gamma}-1) + (1+\gamma-e^{\gamma}) (1-R) \right], \label{beta2} \end{equation} and for $\beta=4$ more recently~\cite{FyodorovSavin2004}. Eq.~(\ref{beta2}) reduces to Eq.~(\ref{Laguerre}) for small absorption ($\gamma\ll 1$), while for strong absorption it becomes \begin{equation} \label{bigbeta2} P_2(R) = \frac{\gamma \, e^{-\gamma R/(1-R)}}{(1-R)^3}; \qquad \gamma\gg 1. \end{equation} Notice that $P_2(R)$ approaches zero for $R$ close to one. Then the Rayleigh distribution, Eq.~(\ref{Rayleigh}), is only reproduced in the range of a few standard deviations, i.e., for $R \stackrel{<}{\sim} \gamma^{-1}$. This can be seen in Fig.~\ref{fig:fig1}(a) where we compare the distribution $P_2(R)$ given by Eqs.~(\ref{Rayleigh}) and~(\ref{bigbeta2}) with the exact result given by Eq.~(\ref{beta2}) for $\gamma=20$. As can be seen, the result obtained from the time-delay agrees with the exact result, but the Rayleigh distribution is only valid for $R\ll 1$. Since the majority of the experiments with absorption are performed with TRI ($\beta=1$), it is very important to have the result in this case. Due to the lack of an exact expression at that time, Savin and Sommers~\cite{Savin2003} proposed an approximate distribution $P_{\beta=1}(R)$ by replacing $\gamma$ by $\gamma\beta/2$ in Eq.~(\ref{beta2}). However, this is valid for the intermediate and strong absorption limits only. Another formula was proposed in Ref.~\cite{Kuhl2005} as an interpolation between the strong and weak absorption limits, assuming an expression quite similar to that of the $\beta=2$ case (see also Ref.~\cite{FyodorovSavin2004}). More recently~\cite{Savin2005}, a formula for the integrated probability distribution of $x=(1+R)/(1-R)$, $W(x)=\int_x^\infty P_0^{(\beta=1)}(x')\,dx'$, was obtained. The distribution $P_{\beta=1}(R)=\frac 2{(1-R)^2}P_0^{(\beta=1)}(\frac{1+R}{1-R})$ then yields a quite complicated formula. Given the importance of having an ``easy to use'' formula for the time-reversal case, our purpose is to propose a better interpolation formula for $P_{\beta}(R)$ when $\beta=1$. In the next section we do this following the same procedure as in Ref.~\cite{Kuhl2005}. We verify later on that our proposal recovers both limits of strong and weak absorption. In Sec.~\ref{sec:conclusions} we compare our interpolation formula with the exact result of Ref.~\cite{Savin2005}. A brief conclusion follows. \section{An interpolation formula for $\beta=1$} From Eqs.~(\ref{Rayleigh}) and~(\ref{Laguerre}) we note that $\gamma$ enters $P_{\beta}(R)$ always in the combination $\gamma\beta/2$. We take this into account and combine it with the general form of $P_2(R)$ and the interpolation proposed in Ref.~\cite{Kuhl2005}.
For $\beta=1$ we then propose the following formula for the $R$-distribution \begin{equation} P_1(R) = C_1(\alpha) \frac{ e^{-\alpha/(1-R)} }{ (1-R)^{5/2} } \left[ \alpha^{1/2} (e^{\alpha}-1) + (1+\alpha-e^{\alpha}) {}_2F_1 \left( \frac 12,\frac 12,1;R \right)\frac{1-R}2 \right], \label{beta1} \end{equation} where $\alpha=\gamma/2$, ${}_2F_1$ is a hypergeometric function~\cite{Abramowitz}, and $C_1(\alpha)$ is a normalization constant \begin{equation} C_1(\alpha) = \frac{\alpha} { (e^{\alpha} - 1) \Gamma(3/2,\alpha) + \alpha^{1/2}( 1 + \alpha - e^{\alpha} ) f(\alpha)/2 } \end{equation} where \begin{equation} f(\alpha) = \int_{\alpha}^{\infty} \frac{e^{-x}}{x^{1/2}} \, {}_2F_1 \left( \frac 12,\frac 12,1;1-\frac{\alpha}{x}\right) \, dx \end{equation} and $\Gamma(a,x)$ is the incomplete $\Gamma$-function \begin{equation} \Gamma(a,x) = \int_x^{\infty} e^{-t} t^{a-1} dt. \label{Gammafunc} \end{equation} In the next sections, we verify that in the limits of strong and weak absorption we reproduce Eqs.~(\ref{Rayleigh}) and~(\ref{Laguerre}). \section{Strong absorption limit} \begin{figure} \begin{center} \includegraphics[width=8.0cm]{fig1.eps} \caption{Distribution of the reflection coefficient for absorption strength $\gamma=20$, for (a) $\beta=2$ (unitary case) and (b) $\beta=1$ (orthogonal case). In (a) the continuous line is the exact result Eq.~(\ref{beta2}) while in (b) it corresponds to the interpolation formula, Eq.~(\ref{beta1}). The triangles in (a) are the results given by Eq.~(\ref{bigbeta2}) for $\beta=2$ and in (b) they correspond to Eq.~(\ref{bigbeta1}). The dashed line is the Rayleigh distribution Eq.~(\ref{Rayleigh}), valid only for $R\stackrel{<}{\sim}\gamma^{-1}$ and $\gamma\gg1$. } \label{fig:fig1} \end{center} \end{figure} In the strong absorption limit, $\alpha\rightarrow\infty$, $\Gamma(3/2,\alpha)\rightarrow\alpha^{1/2}e^{-\alpha}$, and $f(\alpha)\rightarrow\alpha^{-1/2}e^{-\alpha}$. Then, \begin{equation} \lim_{\alpha\rightarrow\infty} C_1(\alpha) = \frac{\alpha e^{\alpha}}{ (e^{\alpha}-1)\alpha^{1/2} + (1+\alpha-e^{\alpha})/2} \simeq \alpha^{1/2}. \end{equation} Therefore, the $R$-distribution in this limit reduces to \begin{equation}\label{bigbeta1} P_1(R) \simeq \frac{ \alpha \, e^{-\alpha R/(1-R)} }{ (1-R)^{5/2} } \qquad \alpha \gg 1 , \end{equation} which is the equivalent of Eq.~(\ref{bigbeta2}) but now for $\beta=1$. As for the $\beta=2$ symmetry, it is consistent with the fact that $P_1(R)$ approaches zero as $R$ tends to one. It reproduces also Eq.~(\ref{Rayleigh}) in the range of a few standard deviations ($R\stackrel{<}{\sim}\gamma^{-1}\ll 1$), as can be seen in Fig.~\ref{fig:fig1}(b). \section{Weak absorption limit} For weak absorption $\alpha\rightarrow 0$, the incomplete $\Gamma$-function in $C_1(\alpha)$ reduces to a $\Gamma$-function $\Gamma(x)$ [see Eq.~(\ref{Gammafunc})]. Then, $P_1(R)$ can be written as \begin{eqnarray} P_1(R) && \simeq \frac{\alpha} { (\alpha+\alpha^2/2+\cdots)\Gamma(3/2)- (\alpha^{5/2}/2+\cdots )f(0)/2 } \nonumber \\ && \times \frac{ e^{-\alpha/(1-R)} }{(1-R)^{5/2}} \big[ \alpha^{3/2} + \alpha^{5/2}/2 +\cdots \nonumber \\ && - ( \alpha^2/2 + \alpha^3/6 +\cdots){}_2F_1(1/2,1/2,1;R)(1-R)/2 \big] . \end{eqnarray} By keeping the dominant term for small $\alpha$, Eq.~(\ref{Laguerre}) is reproduced.
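These limits, as well as the normalization of Eq.~(\ref{beta1}), are easy to check numerically. The following Python sketch (an illustration, not part of the derivation) evaluates $C_1(\alpha)$ by quadrature and verifies that $P_1(R)$ integrates to unity for the absorption strengths considered in the next section.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1, gammaincc, gamma

def f_alpha(alpha):
    # f(alpha): the integral entering the normalization constant C_1
    g = lambda x: np.exp(-x) / np.sqrt(x) * hyp2f1(0.5, 0.5, 1.0, 1.0 - alpha / x)
    return quad(g, alpha, np.inf)[0]

def C1(alpha):
    gamma_inc = gamma(1.5) * gammaincc(1.5, alpha)  # upper incomplete Gamma(3/2, alpha)
    denom = (np.exp(alpha) - 1.0) * gamma_inc \
        + np.sqrt(alpha) * (1.0 + alpha - np.exp(alpha)) * f_alpha(alpha) / 2.0
    return alpha / denom

for gamma_abs in (1.0, 2.0, 5.0, 7.0):   # absorption strengths gamma
    alpha = gamma_abs / 2.0
    c1 = C1(alpha)
    P1 = lambda R: c1 * np.exp(-alpha / (1.0 - R)) / (1.0 - R) ** 2.5 * (
        np.sqrt(alpha) * (np.exp(alpha) - 1.0)
        + (1.0 + alpha - np.exp(alpha)) * hyp2f1(0.5, 0.5, 1.0, R) * (1.0 - R) / 2.0)
    print(f"gamma = {gamma_abs}: integral of P_1 = {quad(P1, 0.0, 1.0)[0]:.6f}")
\end{verbatim}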
\section{Comparison with the exact result} \begin{figure} \begin{center} \includegraphics[width=8.0cm]{fig2.eps} \caption{Distribution of the reflection coefficient in the presence of time-reversal symmetry for absorption strengths $\gamma=1$, 2, 5, and 7. The continuous lines correspond to the distribution given by Eq.~(\ref{beta1}). For comparison we include the exact results of Ref.~\cite{Savin2005} (dashed lines).} \label{fig:fig2} \end{center} \end{figure} In Fig.~\ref{fig:fig2} we compare our interpolation formula, Eq.~(\ref{beta1}), with the exact result of Ref.~\cite{Savin2005}. For the same parameters used in that reference we observe an excellent agreement. In Fig.~\ref{fig:fig3} we plot the difference between the exact and the interpolation formulas for the same values of $\gamma$ as in Fig.~\ref{fig:fig2}. The error of the interpolation formula is less than 4\%. \begin{figure}[b] \begin{center} \includegraphics[width=8.0cm]{fig3.eps} \caption{Difference between the exact result and the interpolation formula, Eq.~(\ref{beta1}), for the $R$-distribution for $\beta=1$ for the same values of $\gamma$ as in Fig.~\ref{fig:fig2}.} \label{fig:fig3} \end{center} \end{figure} \section{Conclusions} \label{sec:conclusions} We have introduced a new interpolation formula for the reflection coefficient distribution $P_{\beta}(R)$ in the presence of time reversal symmetry for chaotic cavities with absorption. The interpolation formula reduces to the analytical expressions for the strong and weak absorption limits. Our proposal is to produce an ``easy to use'' formula that differs by a few percent from the exact, but quite complicated, result of Ref.~\cite{Savin2005}. We can summarize the results for both symmetries ($\beta=1$, 2) as follows \begin{equation} P_{\beta}(R) = C_{\beta}(\alpha) \frac{ e^{-\alpha/(1-R)} }{ (1-R)^{2+\beta/2} } \left[ \alpha^{\beta/2} (e^{\alpha}-1) + (1+\alpha-e^{\alpha}) {}_2F_1 \left(\frac{\beta}2,\frac{\beta}2,1;R\right) \frac{\beta(1-R)^{\beta}}2 \right], \end{equation} where $C_{\beta}(\alpha)$ is a normalization constant that depends on $\alpha=\gamma\beta/2$. This interpolation formula is exact for $\beta=2$ and yields the correct limits of strong and weak absorption. \ack The authors thank DGAPA-UNAM for financial support through project IN118805. We thank D. V. Savin for providing the data for the exact results used in Figs.~\ref{fig:fig2} and~\ref{fig:fig3}, and J. Flores and P. A. Mello for useful comments. \section*{References}
\section{Introduction} \label{sec:intro} Among the most dramatic structures in the interstellar medium (ISM) of disk galaxies are large shells and supershells. These objects are generally observed as voids in the neutral hydrogen (\HI) distribution surrounded by swept-up walls. In some nearby galaxies, like the Large and Small Magellanic Clouds, the \HI\ structure of the disk is dominated by shells and supershells \citep{kim99,hatzidimitriou05}. The ISM of the Milky Way is also riddled with tens, if not hundreds, of \HI\ shells \citep[e.g.][]{heiles79,heiles84,mcgriff02a,ehlerova05}. It is thought that most \HI\ shells are formed by stellar winds or supernovae, or the combined effects of both. This explanation is particularly convincing for smaller shells, with diameters of a few tens of parsecs and formation energies on the order of $10^{52}$ ergs. However, the larger shells, or supershells, are enigmatic. They seem to require unreasonably large ($>10^{53}$ ergs) formation energies in order to maintain expansion velocities of $\sim 20$ ${\rm km~s^{-1}}$\ at radii in excess of a few hundred parsecs \citep{heiles84}. The stellar wind and supernova from a given massive star are capable of injecting $\sim 10^{51}$ ergs of energy into the ISM, suggesting that many hundreds and even thousands of massive stars are required to power the most energetic shells. In this case, it is expected that multiple generations of star formation are required, and although numerous studies have searched for evidence of triggered star formation associated with \HI\ shells, there are few examples \citep{oey05}. \HI\ supershells play a significant role in the energy budget of the ISM and can also contribute to the exchange of matter between the disk and halo. \HI\ supershells can grow large enough to exceed the scale height of the \HI\ disk. In this case, the shell expands rapidly along the density gradient away from the disk until it becomes unstable and breaks out, venting its hot internal gas to the halo. This ``chimney'' process may provide a mechanism for distributing hot gas and metals away from the disk \citep*{dove00}. \HI\ shell blow-outs are predicted for, and indeed observed in, a number of dwarf galaxies where the gravitational potential of the disk is smaller than in large spirals \citep{maclow99,marlowe95}. Occasionally chimneys are observed in large spiral galaxies, with good examples being NGC 891, NGC 253 and NGC 6946 \citep{rand90,howk97,boomsma05}. In the Milky Way, very little is known about the impact of chimneys on the formation, structure and dynamics of the halo. In fact, only a handful of relatively small chimneys are known in the Milky Way, e.g. the Stockert chimney \citep{muller87}, the W4 chimney \citep{normandeau96,reynolds01}, the Scutum supershell \citep{callaway00} and GSH 277+00+36 \citep{mcgriff00}. Together, the known chimneys are not capable of providing the thermal energy required to support the halo. One of the largest \HI\ supershells in the Milky Way is GSH 242-03+37, discovered by \citet{heiles79} in his seminal work on Galactic shells. GSH 242-03+37 is located at $l=242\arcdeg$, $b=-3\arcdeg$, which is in the direction of the so-called ``Puppis window'', an area of very low visual extinction \citep{fitzgerald68}. The shell has an angular diameter of $15\arcdeg$. Its kinematic distance is 3.6 kpc, implying a physical diameter of $\sim 1$ kpc.
\citet{heiles79} suggested that GSH 242-03+37 is still expanding with an expansion velocity of $v_{\rm exp} \approx 20$ ${\rm km~s^{-1}}$\ and from that he estimated an expansion energy of $E_E \sim 1.6 \times 10^{54}$ ergs. Despite the impressive size and implied energetics of this shell, there has been very little follow-up work. \citet{stacy82} made a thorough study of the \HI\ in the region $239\arcdeg \leq l \leq 251\arcdeg$ with the aim of correlating \HI\ features with optical spiral tracers. The survey concentrated mainly on smaller scale \HI\ features, and while it mentioned GSH 242-03+37, no detailed images were presented or discussed. Here we use new data from the Galactic All-Sky Survey (GASS) to study the \HI\ supershell GSH 242-03+37. We present the highest angular resolution ($\sim 15\arcmin$) and most sensitive images of the shell published so far. We show that GSH 242-03+37 has in fact broken out of both sides of the Galactic plane through three large channels. We show that these chimney openings are capped at high Galactic latitude with very narrow, low surface brightness filaments. We discuss the chimney caps and their long-term fate in \S \ref{sec:chimneys}. In \S\ref{subsec:otherwavelengths} we examine archival X-ray (ROSAT) and H$\alpha$ (SHASSA) data to explore the various gas phases associated with the chimney. In \S \ref{sec:stars} we discuss the stellar content of the shell and in \S \ref{sec:minivoid} we explore the possible association of a small shell that appears inside GSH 242-03+37. \section{Observations and Analysis} \label{sec:obs} The \HI\ data presented here are from the first pass of the Parkes Galactic All-Sky Survey (GASS; McClure-Griffiths et al., 2005, in prep.). GASS is a project to image \HI\ at Galactic velocities ($-400~{\rm km~s^{-1}} \leq v_{\rm LSR} \leq +450~{\rm km~s^{-1}}$) for the entire sky south of declination $0\arcdeg$. GASS uses the Parkes multibeam to produce a fully sampled atlas of \HI\ with an angular resolution of $15\arcmin$, spectral resolution of $0.8~{\rm km~s^{-1}}$, and an rms sensitivity of 80--90 mK. The survey will be corrected for stray radiation effects, according to the method described in \citet{kalberla05}, to ensure high reliability of the \HI\ spectra. Observations for the survey began in January 2005 and will continue through to 2007. When complete, GASS will be the first fully sampled all-sky survey of \HI\ on sub-degree scales. The full survey details, including its scientific goals, will be described in a future paper. Here we briefly describe the observations and data reduction techniques to allow assessment of the data presented. GASS is conducted as an on-the-fly mapping survey, with each point in the sky scanned twice. We use the Parkes multibeam \citep{staveley-smith96}, which is a thirteen-beam receiver package mounted at prime focus on the Parkes Radiotelescope near Parkes NSW, Australia. The thirteen beams of the multibeam are packed in a hexagonal configuration with a beam separation on the sky of $29\farcm1$ for the inner beams. On-the-fly mapping is performed by scanning the telescope at a rate of 1~deg~min$^{-1}$, recording spectra every 5 seconds. While scanning, the receiver package is rotated by $19\fdg1$ with respect to the scan direction to ensure that the inner seven independent beams make parallel tracks equally spaced by $9\farcm5$ on the sky. Scans in both right ascension and declination will be made for the full survey, although only a few RA scans have been included in this paper.
The declination scans are made at a constant RA and are 8 deg long in declination. After a scan the receiver package is offset in RA to perform an interleaved scan, reducing the spacing between adjacent beam tracks to $4\farcm7$. Spectra are recorded in a special correlator mode that allows 2048 channels across an 8 MHz bandwidth on all thirteen beams. In-band frequency switching is used to allow for robust bandpass correction. We switch every 5 seconds between center frequencies of 1418.8345 MHz and 1421.9685 MHz. Bandpass calibration is done in near real-time using the {\em Livedata} package, which is part of the ATNF subset of the {\em aips++} distribution. The bandpass correction algorithm employed was designed expressly for the GASS frequency-switched data. It works on each beam, polarization and IF independently, performing a robust polynomial fit to the quotient spectrum (one frequency divided by the second frequency) after masking the emission by examining the spectrum both spectrally and spatially. {\em Livedata} also performs the Doppler correction to shift the spectra to the Local Standard of Rest (LSR). Absolute brightness temperature calibration was performed from daily observations of the IAU standard line calibration regions S8 and S9 \citep{williams73}. Calibrated spectra are gridded into datacubes using the {\em Gridzilla} package, also part of the ATNF subset of the {\em aips++} distribution. The gridding algorithm used in {\em Gridzilla} is described in detail in Barnes et al.\ (2001)\nocite{barnes01}. GASS spectra were imaged using a weighted median technique with a cell size of $4\arcmin$, a Gaussian smoothing kernel with a full width at half-maximum of $12\arcmin$, and a cutoff radius of $8\arcmin$. The effective resolution of the gridded data is $\sim 15\arcmin$. The per-channel rms of the resulting image cubes near the Galactic plane is $\sim 120$ mK. These data are not corrected for stray radiation and may therefore contain some low-level spurious features. For the data presented here we have compared our images with the low resolution stray radiation corrected Leiden/Argentine/Bonn survey \citep{kalberla05,bajaja05} to verify features. \section{Results} \label{sec:results} \citet{heiles79} cataloged GSH 242-03+37 with a low confidence rating, suggesting uncertainty about the shell's veracity. With the improved angular and spectral resolution of the GASS data, as well as the availability of improved data visualization tools, we are confident that this shell meets the three criteria for shell identification given in \citet{mcgriff02a}, i.e.\ that the void is well-defined over more than three consecutive velocity channels with an interior-to-exterior brightness contrast of 5 or more, that the void changes shape with velocity, and that a velocity profile through the shell shows a well-defined dip flanked by peaks. In Figure \ref{fig:hishell} we show velocity channel images of the shell as multiple panels. The shell is visible as the large void in the center of the images, between LSR\footnote{All velocities are quoted with respect to the kinematic Local Standard of Rest.} velocities $v\approx 30$ ${\rm km~s^{-1}}$\ and $v\approx50$ ${\rm km~s^{-1}}$. Every fourth velocity channel is displayed here to give an impression of the dynamic structure in the shell. The first and last panels of Fig.~\ref{fig:hishell} show the approximate front and back caps of the shell. The shell extends over approximately 18 degrees in longitude and 10 degrees in latitude.
However, there are clear breaks on the top and bottom of the shell, as seen in the velocity channel images. These are indicative of chimney openings and will be discussed thoroughly below. We find that the center of the shell is at a slightly different location than given in \citet{heiles79}. We define the center as the velocity of least emission in the spectral profile through the shell center and the geometric center of the shell at that velocity. Using these criteria, the center of the shell is at $l=243\arcdeg$, $b=-1.6\arcdeg$, $v=+42$ ${\rm km~s^{-1}}$, notably different from the coordinates implied by its name. A velocity profile through the shell center is shown in the top panel of Figure \ref{fig:profile}. The shell is the clearly defined dip in the profile at $v\approx 40$ ${\rm km~s^{-1}}$. The front and back caps of the shell are marked on Fig.\ \ref{fig:profile} and are apparent as the bumps in the velocity profile at $v\approx 27$ ${\rm km~s^{-1}}$\ and $v\approx 57$ ${\rm km~s^{-1}}$\ on both sides of the void. The shell is located at a Galactic longitude where the rotation curve is relatively simple, allowing us to translate radial velocity approximately into distance. The lower panel of Fig.\ \ref{fig:profile} plots the velocity-distance relation at $l=244\arcdeg$ from the \citet*{brand93} rotation curve, assuming the IAU recommended values for the Galactic center distance, $R_0=8.5$ kpc, and LSR velocity, $\Theta_0 = 220$ ${\rm km~s^{-1}}$. From this relationship the kinematic distance of the shell is 3.6 kpc, as was also found by \citet{heiles79}. The shell is at a Galactocentric radius of $R_{\rm g} = 10.7$ kpc. The error on the kinematic distance is on the order of 10\% because of uncertainties in determining the central velocity of the shell, random cloud-to-cloud motions in the ISM and errors in the rotation curve. The radius of the shell along the plane is $R_{\rm sh} = 565 \pm 65$ pc. At a distance of 3.6 kpc our resolution is approximately 16 pc. Because of its large size, the shell may be elongated by differential rotation in the Galactic plane \citep{tenorio88}. This will distort the shell and affect its lifetime as discussed below. The interior of the shell has extremely low brightness temperature values when compared with the rest of the Galactic plane; the mean brightness temperature in the shell interior is only $T_{\rm b} = 4$ K with a standard deviation on the mean of $\sigma_{\rm T} = 1.5$ K. Assuming the gas is optically thin, this implies a mean column density of $N_{\rm H} = 1.3 \pm 0.6 \times 10^{20}~{\rm cm^{-2}}$ and a mean internal \HI\ number density of $n_{\rm H}\sim 0.07~{\rm cm^{-3}}$ if the shell is spherical. \subsection{GSH 242-03+37 Physical Properties} The physical properties of GSH 242-03+37, such as radius, mass, expansion velocity, and expansion energy, were estimated by \citet{heiles79}. Our values are only slightly different from those. Both our values and Heiles' values, where different, are given in Table \ref{tab:params}. Expansion velocities for shells are usually estimated as half of the total measured velocity width, $\Delta v$, of the shell. The full velocity width of GSH 242-03+37, through the center of the shell, is approximately $\Delta v=25$ ${\rm km~s^{-1}}$. Because of the relationship between distance and radial velocity, there is a complicated coupling of the expansion velocity, $v_{\rm exp}$, and the velocity width due to the line-of-sight physical dimension of the shell, $v_p$.
A simplistic way of de-coupling the expansion velocity and velocity width due to physical size is to use the velocity gradient, $dv/dr$, to estimate the contribution of the physical size to the total velocity width. Again using the \citet*{brand93} rotation curve, we find that at $v=42$ ${\rm km~s^{-1}}$\ along this line of sight $dv/dr \sim 10~{\rm km~s^{-1}~kpc^{-1}}$. If the diameter of the shell along the line-of-sight is comparable to its diameter in the plane of the sky, then $v_p \sim 10~{\rm km~s^{-1}}$. We then make the simplifying assumption that the total velocity width is $\Delta v \approx 2v_{\rm exp} + v_p = 25$ ${\rm km~s^{-1}}$, implying $v_{\rm exp} \approx 7$ ${\rm km~s^{-1}}$. The expansion energy, $E_E$, of a shell is defined by \citet{heiles79} to be the equivalent energy that would have been deposited at the center of the shell to account for the observed radius and expansion. The expansion energy, based on the calculations of \citet{chevalier74} for supernova expansion, is $E_E = 5.3 \times 10^{43}\,n_0^{1.12}\,R_{\rm sh}^{3.12}\,v_{\rm exp}^{1.4}$, where $n_0$ is the ambient density measured in ${\rm cm^{-3}}$, $R_{\rm sh}$ is in parsecs, and $v_{\rm exp}$ is in ${\rm km~s^{-1}}$. This equation makes the extreme simplifying assumption that the ambient medium, $n_0$, into which the shell is expanding, is homogeneous with constant density. We know that this cannot be true on small scales, but for very large shells the density variations largely average out and the equation provides a reasonable standard energy estimate with which to compare shells. For GSH 242-03+37, assuming $n_0 \approx 1~{\rm cm^{-3}}$, the expansion energy is $E_E\sim 3.1\times 10^{53}$ ergs. Another limitation of the expansion energy equation is that it does not account for energy lost to high latitudes by shell break-out. Therefore, for GSH 242-03+37 the expansion energy is only a lower limit. We note that our expansion energy estimate is about a factor of 10 lower than the value quoted in \citet{heiles79}. The difference between our value and Heiles' can be accounted for by our assumption of an ambient density of $n_0 \approx 1~{\rm cm^{-3}}$, whereas \citet{heiles79} uses $n_0 \approx 2~{\rm cm^{-3}}$, and from the lower expansion velocity estimated here. If the average energy output of a single O or B star via its stellar winds and supernovae is $\sim 10^{51}$ ergs, then more than 300 massive stars are required to expand the shell to its current size. There are no known coeval stellar clusters of that size in the Milky Way, which suggests that GSH 242-03+37 was formed through the effects of multiple generations of massive stars. Age estimates for \HI\ shells are also fraught with large uncertainties. Unless a powering source, such as an OB association, that can be aged independently is associated with the shell, it is often impossible to accurately estimate the age of a shell. We can, however, estimate a shell's dynamic age based on models of the evolution of supernova remnants in the late radiative phase. In this case the dynamic age, $t_6$ in units of Myr for a shell of radius, $R_{\rm sh}$, given in pc and $v_{\rm exp}$ given in units of ${\rm km~s^{-1}}$, is given by $t_6 = 0.29 \,R_{\rm sh}/v_{\rm exp}$ \citep{cioffi88}. For GSH 242-03+37, the dynamic age is $t\sim 21$ Myr. Comparing with other known Galactic shells, GSH 242-03+37 is relatively old \citep{heiles79,heiles84,mcgriff02a}.
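The derived quantities above follow from a few lines of arithmetic. The script below is a minimal sketch, not part of the original analysis; the velocity interval adopted for the interior emission and the effective absorbing path are assumptions flagged in the comments.

\begin{verbatim}
# Quantities quoted in the text; the velocity interval of the void
# emission (dv_void) and the effective absorbing path (one shell
# radius) are assumptions, not values from the original analysis.
pc = 3.086e18                      # cm per parsec
T_b, dv_void = 4.0, 18.0           # K, km/s
R_sh, v_exp, n0 = 565.0, 7.0, 1.0  # pc, km/s, cm^-3

# Optically thin HI column density: N_H = 1.823e18 * T_b * dv
N_H = 1.823e18 * T_b * dv_void     # ~1.3e20 cm^-2
n_H = N_H / (R_sh * pc)            # ~0.07 cm^-3

# Expansion energy (Chevalier 1974 scaling quoted above):
E_E = 5.3e43 * n0**1.12 * R_sh**3.12 * v_exp**1.4   # ~3.1e53 erg

# Dynamic age (Cioffi et al. 1988), in Myr:
t6 = 0.29 * R_sh / v_exp           # ~23 Myr vs. the quoted ~21 Myr
\end{verbatim}

The small differences from the quoted values reflect rounding in $v_{\rm exp}$ and in the assumed velocity interval.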
Ultimately the lifetime of \HI\ shells is limited by the development of instabilities along their walls and the onset of deformation and shear due to differential rotation in the Galactic plane. Both effects become significant at around 20 Myr \citep{dove00,tenorio88}. Differential rotation will distort the shell so that it no longer appears spherical. At a Galactocentric radius of 10.7 kpc this is a moderate effect; over 20 Myr a static shell of radius 1 kpc will distort to have an axial ratio in the plane of $\sim $1.5:1. \subsection{GSH 242-03+37 Morphology} \label{sec:morphology} The three-dimensional morphology of GSH 242-03+37 is extremely interesting. One of the noteworthy aspects of the morphology is that the shell is not spherical, but has some ``scalloped'' structure along the edges as can be seen in Figures \ref{fig:hishell} and \ref{fig:annotate}. The bases of these scallops are separated by several degrees of arc, or $\sim 400-500$ pc. In three locations, two at the bottom and one at the top, these arches are weaker or absent, presenting extensions from the Galactic plane towards the halo. The three break-outs are at: $(l,b)=(245\fdg2,+3\fdg8)$, $(243\fdg0, -8\fdg3)$ and $(236\fdg5,-8\fdg2)$. These break-outs or ``chimneys'' have some vertical structures that clearly separate them from the ambient medium. At very low brightness temperatures ($\sim 1.5$ K) the chimney structures are capped, each about 1.6 kpc from the center of the shell. These caps are marked on Figure \ref{fig:annotate}, which is a channel image at $v=+45$ ${\rm km~s^{-1}}$. They are also visible in the channel images shown in Figure \ref{fig:hishell}. Like the shell, the caps are visible over $\sim 20$ ${\rm km~s^{-1}}$\ of velocity space, which suggests that they are not only associated with the shell, but also physically extended. The structure and nature of these caps will be discussed in \S \ref{sec:chimneys}. Figure \ref{fig:walls} shows a slice across the shell in the longitudinal direction. The slice is taken at $v=39.4$ ${\rm km~s^{-1}}$, $b=0\fdg20$. The shell is clearly empty and the walls of the shell are very sharp. The walls show a brightness temperature contrast of 10 to 20 from the shell interior to the shell wall over one to two resolution elements, or $\sim 16 - 32$ pc. Referring to similarly strong shell walls in GSH 277+00+36, \citet{mcgriff03b} suggested that the sharpness of the walls is indicative of compression as associated with a shock. \citet{stacy82} noted one cloud in the shell interior at $(l,b,v)$ = $(242\fdg2, -4\fdg6,36~{\rm km~s^{-1}})$, pointing out that it was unusual as one of the only clouds within a largely evacuated area. With the sensitivity of GASS it is clear that there is quite a lot of structure inside the shell, though in general it is only at the $T_b \sim 5$ K level. There are a number of interlocking rings throughout the shell interior. Most of these are relatively circular and noticeable over several velocity channels. The cloud cataloged by \citet{stacy82} appears to belong to a thin ring structure near the center of GSH 242-03+37, as shown in Figure \ref{fig:minishell}. This ring is distinguished from the rest of the internal structure because it encloses an interesting region that is even more evacuated than the rest of the shell, appearing as a shell within the shell. This ``mini-shell'' is centered at $(l,b,v)$ = $(242\fdg9,-2\fdg3,45~{\rm km~s^{-1}})$ and has an angular diameter of $\sim 3\fdg8$.
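At the 3.6 kpc kinematic distance adopted above, this angular diameter corresponds (a worked small-angle conversion, added here for reference) to $\sim 3\fdg8 \times (\pi/180\arcdeg) \times 3.6~{\rm kpc} \approx 240$ pc, i.e.\ a radius of $\sim 120$ pc; the same number is derived independently in \S \ref{sec:minivoid}.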
The interior has lower brightness temperatures than the main void; the brightness temperatures associated with the mini-shell are $\sim 2$ K inside the void, about a factor of two lower than in the main void, and $\sim 7 - 10$ K along the ``walls''. There are few regions in the Galactic plane that show such low brightness temperatures and most are associated with known \HI\ shells. Another particularly noticeable feature is the compact cloud at $(l,b,v)$ = $(240\fdg9,+4\fdg9,45~{\rm km~s^{-1}})$, which can be seen in several of the channel images presented in Figure~\ref{fig:minishell}. The unresolved cloud is surrounded by a ring of emission with a diameter of 2\arcdeg. Also centered on this position is an arc of emission about 3 degrees away. These features, though noticeable, have no obvious physical explanation. \subsection{Comparison with other wavelengths} \label{subsec:otherwavelengths} We have obtained publicly available data from the ROSAT all-sky survey maps of the diffuse X-ray background \citep{snowden97}. We compared the $1/4$ keV, $3/4$ keV and 1.5 keV emission with the \HI\ distribution. There is clear evidence for $1/4$ keV excess emission in the shell interior. It would be surprising, however, if this excess were physically associated with the shell. At $1/4$ keV, unity optical depth corresponds to \HI\ column densities of about $1\times 10^{20}~{\rm cm^{-2}}$ \citep{snowden97}, giving a mean free path for $1/4$ keV X-rays of only $\sim 65$ pc. Even though the \HI\ column density through the Puppis window is low, at the distance of GSH 242-03+37 the foreground \HI\ column density is significant at $\sim 2 \times 10^{21}~{\rm cm^{-2}}$. Examining the \HI\ data cube, we find that the morphology of the X-ray excess agrees very well with the foreground \HI\ shell, GSH 242-04-05 \citep{heiles79}. It is therefore likely that the $1/4$ keV X-ray excess emission traces hot gas in the foreground object, not in GSH 242-03+37. The mean free path for X-rays in the higher energy bands is much longer; 1.5 keV photons have a mean free path of $\sim 3$ kpc \citep{snowden97}. Despite the potential detectability of higher energy X-rays, we find no correlation between the \HI\ distribution and the $3/4$ or 1.5 keV emission. The lack of harder X-ray features is not unexpected, however, because $3/4$ and 1.5 keV emission suggests very hot gas with temperatures on the order of $10^7$ K. Old large shells, like GSH 242-03+37, are not expected to have gas temperatures much higher than about $3.5 \times 10^6$ K \citep{maclow88}. We have also compared the \HI\ distribution to the velocity-integrated H$\alpha$ emission from the SHASSA survey of the Southern sky \citep{gaustad01}. Because of the low extinction towards this shell it should be possible to detect H$\alpha$ emission to the 3.6 kpc distance of the shell. Unfortunately, because there is no velocity discrimination in the SHASSA data it is very difficult to distinguish foreground H$\alpha$ features from features at the distance of the shell. We find very few obvious correlations between the \HI\ at $v=35-50$ ${\rm km~s^{-1}}$\ and H$\alpha$ over the entire field of the shell. The only potential candidate for agreement is a faint H$\alpha$ filament that lies just inside the \HI\ mini-shell at $l=243\arcdeg$, $b=-2\fdg3$. This feature, however, is in a confused region and no definitive association can be made. Finally, we have searched the FUSE catalog of O{\sc vi} detections (Wakker et al.
2003) to determine if there is hot, high-latitude gas at the same velocity as GSH 242-03+37. Two O{\sc vi} detections below the plane of the Galaxy are near GSH 242-03+37: the first towards PKS 0558-504 at 11 ${\rm km~s^{-1}}$\ and the second towards NGC 1705 at 29 ${\rm km~s^{-1}}$. However, neither of these pointings lies within the capped areas of the chimney outflows and the O{\sc vi} is observed to be at lower velocities than that of the shell. Although it is likely that the shell is filled with hot gas, no connections between the FUSE O{\sc vi} detections and the morphology of GSH 242-03+37 can be made at this time. \section{Discussion} \label{sec:discussion} GSH 242-03+37 is a remarkable structure. Its size, age, and morphology are all at the extreme range of observed Galactic shell parameters. The only known Galactic supershell to display similar properties is GSH 277+00+36 \citep{mcgriff03b}. The similarities between these two shells are intriguing. Both shells have radii of 350 - 500 pc and expansion energies on the order of $10^{53}$ ergs. Both are located far from the Galactic center at Galactocentric distances of $\sim 10$ kpc. The morphology of the two shells, particularly their walls and extended $z$ structure, is strikingly similar \citep[cf.][Figure 1]{mcgriff03b}. Although GSH 277+00+36 is a much less confusing structure than GSH 242-03+37, it exhibits similar chimney break-outs above and below the plane, as well as the same scalloped structure along the walls. The similar morphology of these two objects is intriguing and leads us to speculate that their large-scale features may be dominated by common global Galactic phenomena, for example the \HI\ scale height of the Galactic disk (R. Sutherland 2005, private communication), rather than local phenomena, such as the distribution of powering stars. It should be possible to determine which effects dominate with high resolution MHD simulations of supershell evolution. GSH 242-03+37, because it is closer, offers some advantages over GSH 277+00+36. In GSH 242-03+37 we can observe very weak chimney caps and explore the stellar content. Here we discuss some of the characteristics that are specific to GSH 242-03+37. \subsection{Stellar content} \label{sec:stars} The stellar content of Galactic \HI\ supershells is largely unknown. Most shells lie in the Galactic plane behind several magnitudes of visual extinction, precluding most optical stellar surveys. Infrared surveys, such as 2MASS, can probe to much greater distances than in the optical but the line-of-sight confusion of stars makes it difficult to associate specific stars with supershells. As it is, most of the known OB associations in the Galactic plane are at distances of less than 3 kpc. By contrast, most \HI\ shells are at distances larger than 2 kpc because the confusion of gas at local velocities makes detecting shells difficult. Fortunately GSH 242-03+37 is located in the ``Puppis Window'' and benefits from numerous stellar studies. \citet{kaltcheva00} recently compiled and updated the $uvby\beta$ photometry of luminous OB stars in the Puppis-Vela region, providing a nearly complete table of the stellar types and distances. We have extracted from their table all stars with projected distances within 1 kpc of the center of the shell. For the region $234\arcdeg \leq l \leq 252\arcdeg$, $-7\arcdeg \leq b \leq +7\arcdeg$ there are 22 OB stars, of which the earliest is an O9 type star. These stars are plotted as crosses on Figure \ref{fig:stars}.
As one might expect for an old shell, there are very few massive stars in the center of the object, with the notable exception of a small cluster near $(l,b)=(242\arcdeg,-5\arcdeg)$. These stars belong to a cluster of O and B-type stars, which appear to lie along the rim of the internal mini-shell. The remaining stars are located near the shell walls. Along the right wall the stars appear to lie near the foci of the loops that characterize the scalloped structure of GSH 242-03+37. This coincidence seems to suggest that the stars near the edge of the shell are contributing to the continued expansion of the shell and determining the morphology of the walls. In addition, the stars trace the left-hand edge of the upper chimney wall, extending to $b=6\fdg5$ or $z\approx 400$ pc at a distance of 3.6 kpc. The mean height for Galactic OB stars is only $90$ pc \citep{miller79}, so OB stars at a height of 400 pc are unusual. One obvious explanation for their position is that they were formed out of dense material raised to high $z$ by the shell. A potential tracer of the past population of massive stars is the current population of pulsars. \citet{perna04} examined the number of pulsars within several of the largest supershells in the Milky Way, including GSH 242-03+37. They compared the numbers of known pulsars with Monte Carlo simulations of the pulsar population to predict how many pulsars should be associated with a shell, assuming a multiple supernovae formation scenario for the shell. Although there are only 2 known pulsars within GSH 242-03+37, based on the current sensitivity of pulsar searches they predicted that there should be 7 pulsars within the shell. The known pulsars therefore provide very few constraints on the progenitor stars that may have formed the supershell. \subsection{The Mini-shell} \label{sec:minivoid} Figure \ref{fig:minishell} shows the central region of GSH 242-03+37. The mini-shell (described in \S \ref{sec:morphology}) is apparent at $(l,b,v)$ = $(242\fdg9,-2\fdg3,45~{\rm km~s^{-1}})$. If it, too, is at a distance of 3.6 kpc, then the mini-shell has a radius of $R_{\rm sh} \approx 120$ pc. There is no clear indication that the structure changes size with velocity although it is persistent over at least 7 ${\rm km~s^{-1}}$\ of velocity width. The shell is likely a stationary structure or one with a very small expansion velocity. From our data we can only estimate an upper limit to the expansion velocity of $v_{\rm exp}\leq \Delta v /2 \sim 3$ ${\rm km~s^{-1}}$, which is less than the turbulent velocity of warm \HI\ clouds, typically $\sim 7$ ${\rm km~s^{-1}}$\ \citep{belfort84}. It is therefore impossible to distinguish the shell's expansion from random cloud motions in the ISM. For a stationary shell, a rough estimate of the age of the shell can be made from the sound crossing time for a 120 pc radius. Assuming $c_s \sim 10$ ${\rm km~s^{-1}}$\ for $T\sim 8000$ K \HI, the age is $t \sim R_{\rm sh}/c_s = 12$ Myr. How does the mini-shell relate to GSH 242-03+37? From the \HI\ velocity channel images it appears as if the mini-shell is located within GSH 242-03+37. It is difficult to understand how a cool neutral shell could form within a mostly evacuated supershell. We have compared our estimates of the internal density of the supershell with the swept-up mass of the mini-shell showing that there is not enough gas in the supershell to form the mini-shell.
From the measured column density along the mini-shell walls we estimate that the typical \HI\ density in the walls is only $n_H \sim 0.4~{\rm cm^{-3}}$, a factor of a few lower than typical ISM values \citep{dickey90}. This gives a swept-up mass for the shell of $\sim 7 \times 10^4~{\rm M_{\odot}}$. If the shell were formed near the center of the main void, where we estimated that the typical \HI\ density is only $n_H \sim 0.07~{\rm cm^{-3}}$, then the total amount of mass enclosed in a sphere of radius 120 pc is only $\sim 1.2 \times 10^4~{\rm M_{\odot}}$, a factor of six less than the swept-up mass of the shell. If the excess mass in the mini-shell came from swept-up ionized gas that had since recombined and cooled, it would imply an ionized density of $\sim 0.3~{\rm cm^{-3}}$. This is much higher than the typical ionized densities in shell interiors, which are usually on the order of $\sim 5 \times 10^{-3}~{\rm cm^{-3}}$ \citep{maclow88}. An alternative explanation is that the mini-shell is a very old structure that was formed through the stellar winds of the stars whose supernovae shocks eventually contributed to the large shell. In this case, the cool walls of the mini-shell might have been overtaken by the supernovae shocks, which subsequently expanded to much larger radii. The size of the mini-shell is approximately consistent with a late stage stellar wind bubble. A final suggestion is that the mini-shell formed not near the center of the shell, as it seems in the projected image, but at the edge or outside of the supershell where the gas densities should be higher. In order to be fully outside the main shell the mini-shell would require a systematic velocity that is $\sim 5 - 10$ ${\rm km~s^{-1}}$\ different from the LSR motion at its position. Given the dynamical nature of a large shell, that kind of motion is not unreasonable. Although it seems unlikely that the mini-shell formed in the interior of a swept-out supershell, it is not possible with our data to distinguish between the latter two scenarios. Along the lower wall of the mini-shell lies a small cluster of six O \& B type stars \citep*{kaltcheva00,kaltcheva01}. The cluster is at a distance of $3.2\pm0.2$ kpc and contains the stars: LS 538 (B0II), 514 (B1II), 507 (B0.5III), 534 (B3), 528 (O9III), and 511 (B2) \citep{kaltcheva00}. These stars are coincident with the brightest part of the mini-shell. Of these stars, only the later-type B2 and B3 stars are still on the main sequence. If we assume coeval star formation, the age of the cluster must be between $\sim 7$ and 15 Myr \citep{schaller92}, which is comparable to the age estimate for the mini-shell. There are two possible scenarios to explain the presence of a stellar cluster on the edge of this internal shell. The first is that the mini-shell formed from the stellar winds or supernovae of a single very massive star that was part of the stellar cluster. This seems unlikely if the stars are coeval, as the age of the stellar cluster is comparable to or less than the age of the shell. Alternately, the cluster may have formed out of gas compressed on the edge of the expanding mini-shell, causing a new generation of stars along the walls of the mini-shell. As well as we can determine the ages of the shell and the cluster, this scenario seems most likely. Many studies have searched for evidence of triggered star formation in supershells. In a recent study of the W3/W4 complex \citet{oey05} found evidence for three generations of star formation.
They point out that, statistically, a hierarchical system of three or more generations of star formation is more suggestive of a causal relationship between the generations than a two-generation system. In the GSH 242-03+37 system there is evidence of multiple epochs of star formation, contributing respectively to the $\sim 21$ Myr old supershell, the OB stars near the edges of the supershell, the mini-shell and the stellar cluster along the edge of the mini-shell. Whether these multiple epochs of star formation represent multiple generations or simply continuing star formation is not clear. Unfortunately, the ambiguous position of the mini-shell and the uncertainties in the shell ages make it extremely difficult to identify three unique generations of star formation in the system. We therefore state that the system is {\em suggestive} of triggered star formation, but that the evidence is not conclusive. \subsection{Chimneys and Halo Clumps} \label{sec:chimneys} GSH 242-03+37 has three chimney break-outs with the dominant one towards positive latitudes. All three chimneys are capped approximately 1.6 kpc above the center of the shell. The morphology of the positive latitude chimney in particular is reminiscent of models of chimney formation, such as those of \citet*{maclow89} and \citet*{tomisaka86}. \citet{maclow89} showed that for a superbubble expanding in a Galactic disk with an exponential atmosphere the shell will extend far beyond the Galactic mid-plane, developing a polar cap at $z\sim 1500$ pc. In the late stages of shell evolution gravitational acceleration dominates the dynamics of the slowly expanding shell and the polar cap should become Rayleigh-Taylor unstable and fragment. It is this fragmentation that allows hot gas filling the shell cavity to escape to the halo \citep{dove00}. As seen in Fig.~\ref{fig:annotate} the upper cap of GSH 242-03+37 shows some evidence for fragmentation. There are a number of dense concentrations along the general arc of the cap, as well as some regions where the arc appears absent. We note, however, that even with the sensitivity of GASS, the brightest features along this arc are only a few Kelvin in brightness temperature, so very faint portions of the arc may not be detectable. If, as these observations indicate, the polar caps of expanding supershells can reach heights of $z \sim 1500$ pc before fragmenting, it raises questions about the ultimate fate of the fragmented caps and the expected size distribution for forming clouds. These questions have been addressed in a general sense in a variety of numerical simulations \citep[e.g.][]{avillez00,avillez01}. \citet{avillez00}, for example, predicts that expanding shells should produce condensations of size 5 - 100 pc on timescales of tens of millions of years. The simulations explain the cloudlets in terms of expelled chimney gas that has cooled and recombined. GSH 242-03+37, on the other hand, seems to be producing cool clouds from the fragmented shell, a process that presumably takes place well before expelled chimney gas can cool. The cool shell of GSH 242-03+37 appears to be fragmenting into cloudlets with sizes of a few tens of parsecs. The linewidths of these clumps are on the order of $\sim 10~{\rm km~s^{-1}}$, indicating thermal temperatures of $T\sim 10^3$ K or lower. We measure column densities (assuming they are optically thin) for these clumps of $N_H \sim$ few $\times 10^{19}~{\rm cm^{-2}}$.
If the clumps are roughly spherical, then their average \HI\ number density is $n \sim 1~{\rm cm^{-3}}$ and their \HI\ mass is $\sim 100~{\rm M_{\odot}}$. The \HI\ mass of the clumps is well below the dynamical mass limit to be gravitationally bound, which is $\sim 2 \times 10^5~{\rm M_{\odot}}$ for a 10 pc cloud with a 10 ${\rm km~s^{-1}}$\ linewidth. Are these clumps in pressure equilibrium with their surroundings? The thermal pressure of these clumps is $nT \sim 10^3~{\rm cm^{-3}~K}$. The thermal pressure of the lower halo at a $z$-height of 1.6 kpc is uncertain. By mass and volume the dominant component of the ISM at $z\sim 1.6$ kpc is warm ionized gas \citep{ferriere01}, with a density of $\sim 4\times 10^{-3}~{\rm cm^{-3}}$ and a temperature of $\sim 8000$ K \citep{reynolds91}. We also know from observations of the soft X-ray background \citep{snowden98} and O{\sc vi} absorption \citep{savage03}, among others, that there is a significant diffuse, hot ($T\sim 10^{5-6}$ K) component to the lower halo gas. This component is difficult to observe directly but from FUSE O{\sc vi} measurements \citet{savage03} suggest that it is distributed as a patchy, plane-parallel exponential with a scale height of $\sim 2.3$ kpc. The contribution of the hot, ionized medium to the thermal pressure of the lower halo is a matter for debate, with estimates ranging from $\sim 10~{\rm K~cm^{-3}}$ \citep{boulares90} to $\sim 10^3~{\rm K~cm^{-3}}$ \citep{shull94}. It is generally agreed, however, that the presence of this medium, as well as its patchy nature, are probably due to hot gas vented from supershells like GSH 242-03+37. Therefore, the best estimate for the thermal pressure of the ambient medium most likely comes from estimates of the thermal pressure in the interior of an evolved supershell. It is not trivial to estimate the thermal pressure in the interior of GSH 242-03+37 because the shell has begun to break out, releasing its pressure, and also because the shell is sufficiently evolved that radiative cooling is important in the interior. We can, however, roughly estimate the maximum internal thermal pressure before break-out, assuming an adiabatic interior where the internal density is dominated by mass evaporated from the cold dense shell \citep{maclow88}. The internal thermal pressure for an evolved spherical shell of age, $t_7 = t /10^7$ yr, formed with an energy deposition rate of $L_{38} \approx E_E/t /(10^{38}~{\rm erg~s^{-1}})$ is $nT = (1.4 \times 10^4~{\rm K~cm^{-3}})\,L_{38}^{14/35}\,n_0^{21/35}\,t_7^{-28/35}$ \citep{maclow88}. If, once again, we assume that the ambient density is $n_0 \sim 1~{\rm cm^{-3}}$, then the internal pressure is $\sim 1.4 \times 10^4~{\rm K~cm^{-3}}$. Given that the shell has begun to break apart and that its $z$-height far exceeds its dimension along the Galactic plane, this pressure estimate is almost certainly too large but it provides a useful limit to the pressure around the clumps. We may therefore conclude that the clumps are likely in equilibrium or moderately pressure-supported by the hot gas from the shell. We would like to know whether and for how long clouds created from the fragmentation of a shell could survive against evaporation due to heat flux from the ambient medium. If the ambient medium is indeed dominated by warm ionized gas, with $T\sim 8000$ K, then the classical thermal evaporation time is extremely long when compared to the 20 - 30 Myr lifetime of the shell \citep{cowie77}.
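The mass- and pressure-balance numbers above can be reproduced with a short script. The sketch below is illustrative only and rests on the assumptions flagged in the comments (a 10 pc clump radius within the quoted ``tens of parsecs'', $N_H = 5\times10^{19}~{\rm cm^{-2}}$ for ``few'', and the shell energy and age derived earlier):

\begin{verbatim}
import math

pc, m_H, M_sun, G = 3.086e18, 1.673e-24, 1.989e33, 6.674e-8  # cgs

# Clump parameters: R_cl and N_H are assumed values consistent
# with the ranges quoted in the text.
R_cl, N_H, dv = 10.0 * pc, 5.0e19, 10.0e5   # cm, cm^-2, cm/s

n_cl  = N_H / (2.0 * R_cl)                          # ~0.8 cm^-3
M_cl  = n_cl * m_H * (4.0/3.0) * math.pi * R_cl**3  # ~1e2 Msun
M_dyn = dv**2 * R_cl / G                            # ~2e5 Msun

# Maximum interior pressure (Mac Low & McCray 1988 scaling quoted
# above), using E_E ~ 3.1e53 erg and t ~ 21 Myr derived earlier:
yr = 3.156e7
t  = 21e6 * yr
L38, t7 = 3.1e53 / t / 1e38, t / (1e7 * yr)
nT = 1.4e4 * L38**(14/35) * 1.0**(21/35) * t7**(-28/35)
print(M_cl / M_sun, M_dyn / M_sun, nT)  # ~80, ~2e5, ~1.4e4 K cm^-3
\end{verbatim}

The resulting clump pressure ($nT \sim 10^3~{\rm K~cm^{-3}}$ for $T \sim 10^3$ K) is indeed an order of magnitude below the maximum interior estimate, consistent with the conclusion drawn above.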
Even if the ambient medium is dominated by the hot, diffuse gas of the shell interior, the evaporation time is $\sim 35$ Myr. We can expand on this estimation somewhat by following \citet{mckee77b}, who consider the simple scenario of cool, spherical clouds embedded in a hot medium where the fate of the cloud is controlled by the effects of both radiative losses and incoming heat flux. Those authors provide analytic solutions to determine the critical radius, $R_{cr}$, below which clouds evaporate and above which they condense material from the surrounding medium. The critical radius is determined solely by the pressure in the ambient medium. From Figure 2 in \citet{mckee77b} we estimate that the critical radius for $T\sim 8000$ K gas with a density of $\sim 4\times10^{-3}~{\rm cm^{-3}}$ is $\sim 3$ pc. The clouds, however, are also affected by the very hot, tenuous medium interior to the shell. Even if this gas is at $\sim 10^6$ K with densities of $10^{-2}~{\rm cm^{-3}}$, the critical radius will be on the order of 15 pc, and even lower for smaller ambient densities. Given these very crude assumptions it seems that the clouds should be near or above the critical radius for all expected temperatures in their environs. These clouds should therefore be relatively long-lived and may even condense matter onto themselves. Another interesting question is what size scales we should expect to see represented in fragmenting caps. This, of course, depends on the instability process responsible for the fragmentation. If the shell cap fragments through classical Rayleigh-Taylor instabilities, all size scales are expected to be represented, but the growth timescale of the instability is proportional to the square root of the size scale, so we should expect to see the smaller scales first. In addition, in the presence of a magnetic field a lower limit is applied to the size of growing modes and also a fastest growing mode is established \citep{mcgriff03b}. In that case, the size scales observed in supershells may provide probes of the ambient medium. In GSH 242-03+37 we observe large polar caps that appear to be breaking into clumps with radii on the order of tens of parsecs. The size, density, linewidths and $z$-height of these clumps are very similar to the halo cloudlets detected by \citet{lockman02a} and those found in a recent study of the GASS pilot region (A.\ Ford et al.\ 2005a, in prep.). The origin of the \citet{lockman02a} clouds is still quite uncertain. If the polar caps of supershells like GSH 242-03+37 can break into small clumps with parsec or tens-of-parsec size scales, these clumps should be much longer lived than the shell itself. Although these ideas are still rather speculative, the similarity of properties suggests that it would be worth pursuing the fragmenting shell model further. Some important questions to answer will be: how long can the clouds survive? What is their $z$ distribution? And, given that they are massive compared to their surroundings, how long before they will drop back to the Galactic plane? We will address these questions in a future paper comparing the properties of halo cloudlets with simulations of the long-term evolution of supershells (A.\ Ford et al.\ 2005b, in prep.). \section{Conclusions} \label{sec:conclusions} We have presented new \HI\ images of the Galactic supershell GSH 242-03+37 from the Galactic All-Sky Survey (GASS). GSH 242-03+37 is one of the largest shells in the Galaxy with a radius of $R_{\rm sh} = 565\pm 65$ pc.
We show that the supershell is broken at the edge of the disk, both above and below the plane. The resultant structure has three ``chimney'' openings that are capped with very narrow filaments all situated $\sim 1.6$ kpc above the disk midplane. These ``caps'' are extremely reminiscent of the caps seen on expanding supershells in simulations, such as those by \citet{maclow89} and \citet{tomisaka98}. In supershell evolutionary theories, these shells should become Rayleigh-Taylor unstable and the polar caps break into clumps. The caps of GSH 242-03+37 appear to show clump structures with sizes on the order of 20 pc, which may indicate the onset of break-out. We estimate that clouds formed through this break-out may survive longer than the parent shell. The size, temperature and $z$-heights of these clouds are similar to the halo cloudlets detected near the disk in the inner Galaxy \citep{lockman02a}. We suggest that the Lockman cloudlets may be formed through the fragmentation of high-$z$ supershell caps. All-sky surveys like GASS will provide an extremely valuable database for testing this idea. GASS will have the sky coverage, resolution and sensitivity necessary to detect and study the relationship of small-scale structures in the halo to structures in the Galactic disk. We have searched catalogs of OB stars for massive stars in the vicinity of GSH 242-03+37. We find very few stars at the center of the shell, but there are 22 OB stars that lie near the internal edges of the shell, of which the earliest is an O9 type star. There are six OB stars with ages between 7 and 13 Myr \citep{kaltcheva01} that lie along a small ``mini-shell'' that appears to be inside GSH 242-03+37. It is difficult to understand how a neutral shell could form in the evacuated cavity of GSH 242-03+37. We therefore suggest that it lies at the edge of the shell and that the OB stars were formed in material compressed along the walls of the mini-shell. Together, the main shell structure, the 22 OB stars identified near the shell walls, and the mini-shell with its corresponding cluster of young OB stars are suggestive, but not conclusive, evidence for triggered star formation. \acknowledgements The Parkes Radio Telescope is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. This research was performed while D.J.P.\ held a National Research Council Research Associateship Award at the Naval Research Laboratory. Basic research in astronomy at the Naval Research Laboratory is funded by the Office of Naval Research. D.J.P. also acknowledges generous support from NSF MPS Distinguished International Research Fellowship grant AST0104439. B.K.G. acknowledges the financial support of the Australian Research Council through its Discovery Project program. We are extremely grateful to Warwick Wilson, March Leach, Brett Preisig, Tim Ruckley and John Reynolds for their efforts in enabling the GASS correlator mode in time for our first observations.
\section{Introduction} The Earth is the only known example of a life-hosting world, even though terrestrial exoplanets have been searched for since the discovery of the first Earth-mass exoplanets by Wolszczan \& Frail (1992). However, planets similar to the Earth, Venus or Mars, in size, density or orbital parameters are still beyond the reach of the present capabilities for planet detection around normal stars. Until now, mostly giant exoplanets have been discovered. Remarkable progress has been made recently with the discovery of planets in the mass range of 14 to 21~Earth masses (14 to 21~M$_\oplus$, see McArthur et al.\ 2004; Santos et al.\ 2004), and most recently a $\sim$7.5~M$_\oplus$ planet orbiting \object{GJ~876} (Rivera et al.\ 2005). We may speculate, then, that smaller planets with sizes down to that of the Earth might be observed in a near future. Among the 161 planets\footnote{From J.~Schneider's Extrasolar Planets Encyclop\ae dia at \texttt{vo.obspm.fr/exoplanetes/encyclo/encycl.html}. See also the web page of the IAU Working Group on Extrasolar Planets at \texttt{www.ciw.edu/boss/IAU/div3/wgesp}.} detected so far, eight have been discovered or re-discovered as they were transiting their parent star, producing a photometric occultation. The most recently identified transiting planet is a Saturn-mass planet orbiting \object{HD~149\,026}, a bright $V=8.15$ G0\,{\sc iv} star (Sato et al.\ 2005). The first discovered transiting giant exoplanet, HD~209\,458b (Henry et al.\ 2000; Charbonneau et al.\ 2000; Mazeh et al.\ 2000), is the object of intense investigations dedicated to characterizing its hot atmosphere. Probing planetary atmospheres by stellar occultations is an effective method used for many planets and satellites in the Solar System, from Venus to Charon (see, e.g., Elliot \& Olkin 1996). With this technique, we can observe the thin atmospheric ring surrounding the optically thick disk of the planet: the limb. In the case of giant exoplanets, though, the star is only partially occulted (1.6\% for the transiting planet \object{HD~209\,458b}). The spectrum of the star light transmitted and filtered by the lower and thick giant exoplanet atmosphere consequently presents extremely weak absorption features (from $10^{-3}$ to $10^{-4}$, see Seager \& Sasselov 2000; Hubbard et al.\ 2001; Brown 2001). Despite the difficulties, such dim signatures have been detected: Charbonneau et al.\ (2002) measured the lower atmosphere of \object{HD~209\,458b} as they detected a \mbox{$(2.32 \pm 0.57) \cdot 10^{-4}$} photometric diminution in the sodium doublet line of the parent star at 589.3~nm. However its upper atmosphere, which extends up to several planet radii, shows even larger signatures. Vidal-Madjar et al.\ (2003, 2004) observed a \mbox{$15 \pm 4 \%$} absorption in the Lyman~$\alpha$ (Ly$_\alpha$) emission line of \object{HD~209\,458} at 121.57~nm as well as absorptions from atomic carbon (\mbox{$7.5 \pm 3.5 \%$}) and oxygen (\mbox{$13 \pm 4.5 \%$}) in the upper atmosphere. In this work, we will discuss the possibility of detecting and characterizing the lower atmospheres of exoplanets using signatures comparable in origin to the one detected by Charbonneau et al.\ (2002). The idea is to extend the use of transmission spectroscopy to hypothetical Earth-size planets.
We estimate that these exoplanets present at least two orders of magnitude less signal than gaseous giants, as the transit of the planet itself would have a dimming of $\sim$10$^{-5}$ (the transit depth, $\Delta F / F$, where $F$ is the stellar flux, can be expressed as $(R_P / R_\star)^2$, with $R_P$ and $R_\star$ standing for the radii of the planet and the star, respectively). The atmospheres of Earth-size exoplanets should span a height of $\sim$100~km without considering potential upper atmospheres. Depending on their transparency -- which would give an equivalent optically thick layer of $\sim$10~km -- the expected occultations caused by atmospheric absorptions should be $\sim$10$^{-7}$ to $\sim$10$^{-6}$. Earth-size planets are probably the most challenging objects to detect with transmission spectroscopy. The orders of magnitude given above, in fact, raise many questions: is it realistic to search for features this dim with instrumentation that might or might not be available in a near future? What are the strongest signatures we should expect? What kind of planet could be the best candidate to look at? We have developed a one-dimensional model of transmission at the limb to give quantitative answers to these questions. Since we use the stellar light to explore the planetary atmospheres, we chose to focus on the wavelength range where the largest number of photons is available, i.e., between 200 and 2\,000~nm. The model is described in Sect.~\ref{sec:model}. The detectability of the selected atmospheres depends on the signal-to-noise ratio (S/N) achievable with a space telescope spectrograph. The constraints on idealized observations, and the method we used to calculate their S/N, are described in Sect.~\ref{sec:S/N}. Finally, the results for the specified cases are given and discussed in Sect.~\ref{sec:results}. \section{Model description} \label{sec:model} \subsection{Geometric description of the model} \label{sec:geometry} The general geometry of a transiting system is described by Brown (2001). In the present work we consider a non-transient occultation for the `in transit' phase, with a null phase configuration (configuration~2 in Brown's Fig.~1), that is, the planet is centered in the line of sight with respect to the star. This configuration both maximizes the area of the atmosphere that is filtering the stellar light and minimizes any effects linked to the stellar limb darkening (Seager \& Sasselov 2000). The stellar light is filtered through the atmospheric limb of the planet, as sketched in Fig.~\ref{fig:transmission_geometry}. In the following we detail the integration of the atmospheric opacity along a stellar light path (or chord) through the limb of the planet. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Fig1.ps}} \caption{Sketch of the transmission of the stellar light through the planetary limb. The planet itself, i.e. the `solid' disk (in grey) is optically thick at all wavelengths. The quantity $\mathrm{d}l$ is the elemental length along the line of sight. In the calculation, we prefer to use the height $h$ instead of $l$. The scale in the figure has been distorted for clarity.} \label{fig:transmission_geometry} \end{figure} \subsubsection{Opacity along the line of sight} We calculate the total opacity of the model atmosphere, $\tau_{\lambda}$, along a chord parallel to the line of sight, as the sum of the opacity of each species $i$ present in the atmosphere, $\tau_{\lambda} = \sum_{i}\tau_{\lambda,i}$.
We can calculate the opacity along the chord as a function of its impact parameter, $b$: \begin{equation} \label{eq:opacity} \tau_{\lambda,i}(b) = 2 \int_{0}^{+\infty} A_{\lambda,i} \rho_i(h) \mathrm{d} l, \end{equation} where $A_{\lambda,i}$ is the absorption coefficient for the species $i$ at the wavelength $\lambda$, expressed in cm$^2$\,g$^{-1}$, and $\rho_i(h)$ is the mass density in g\,cm$^{-3}$ of the species $i$ at an altitude $h$ in the atmosphere. Now, re-expressing Eq.~\ref{eq:opacity} as a function of the height $z = h + R_P$, with $R_P$ being the planet radius, we obtain: \begin{equation} \tau_{\lambda,i}(b) = 2 \int_{b}^{b_\mathrm{max}} A_{\lambda,i} \rho_i(z-R_P) \frac{z \mathrm{d} z}{\sqrt{z^2-b^2}}, \end{equation} where $b_\mathrm{max}$ is the height of the highest atmospheric level we are considering. The method to estimate $b_\mathrm{max}$ is presented in Sect.~\ref{sec:b_max}. \subsubsection{Spectrum ratio} \label{sec:spectrum_ratio} Consider the stellar flux received by the observer during the planetary transit to be $F_{\mathrm{in}}$, and the flux received when the planet is not occulting the star to be $F_{\mathrm{out}}$. Brown (2001) defined $\Re$ to be the ratio between those two quantities, and $\Re'$ (the so-called spectrum ratio) as $\Re'=\Re - 1$. Here, $\Re'$ is the sum of two distinct types of occultations: \begin{itemize} \item The occultation by the `solid' surface of the planet, optically thick at all wavelengths. Projected along the line of sight, this is a disk of radius $R_P$ and the occultation depth is simply $(R_P/R_\star)^2$. \item The wavelength-dependent occultation by the thin ring of gaseous components that surrounds the planetary disk, which can be expressed as $\Sigma_\lambda / (\pi R_\star^2)$. The area, $\Sigma_\lambda$, is the atmospheric equivalent surface of absorption and may be calculated as: \begin{equation} \Sigma_\lambda = \int_{R_P}^{b_\mathrm{max}} 2 \pi b \mathrm{d} b \left[1 - \mathrm{e}^{-\tau_\lambda(b)}\right]. \end{equation} \end{itemize} The resulting spectrum ratio is: \begin{equation} \label{eq:spectrum_ratio} \Re'(\lambda) = - \frac{\Sigma_\lambda + \pi R_P^2}{\pi R_\star^2}. \end{equation} Note that $\Re' < 0$. \subsection{Description of the atmospheric profiles} Along a single chord, stellar photons cross several levels of the spherically stratified atmosphere. We generate an atmospheric model using the vertical profiles from Tinetti et al.\ (2005a, 2005b) and Fishbein et al.\ (2003) for the Earth and from the Venus International Reference Atmosphere (VIRA, Kliore et al.~1985) for Venus. These atmospheric data include the profiles of pressure, $p$, temperature, $T$, and various mixing ratios, $Y$. The atmospheres are initially sampled in 50 levels, ranging from the ground level to an altitude of about 80~km for the Earth and about 50~km for Venus. Both profiles stop below the homopause, so we assume hydrostatic equilibrium for the vertical pressure gradient. A useful quantity to describe atmospheres in hydrostatic equilibrium is the scale height, $H$, i.e. the height above which the pressure decreases by a factor $e$. The scale height explicitly depends on the temperature, as $H = k \mathcal{N}_A T / (\mu g)$, where $k$ and $\mathcal{N}_A$ are the Boltzmann and Avogadro constants while $\mu$ is the mean molar mass of the atmospheric gas.
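As an illustrative check (with round Earth-like numbers that are assumptions rather than values from the profiles used here): $T \approx 288$~K, $\mu \approx 28.8$~g\,mol$^{-1}$ and $g \approx 9.8$~m\,s$^{-2}$ give $H = k \mathcal{N}_A T / (\mu g) \approx 8.5$~km.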
Since $g$ is the acceleration due to gravity, $H$ also implicitly depends on the radius and the density of the planet\footnote{To avoid confusion between the density of the atmosphere and the mean density of the planet, the latter is denoted $\rho_P$}. Consequently, less dense objects are likely to have more extensive atmospheres, hence they are easier to detect (Brown 2001). Density and size of planets are therefore key parameters for the present work. In order to estimate their influence, we test a set of different planetary types ranging from a Titan-like giant-planet satellite ($\rho_P \approx 2$~g\,cm$^{-3}$, $R_P \approx 0.5$~Earth radius -- 0.5~R$_\oplus$) to a `super-Earth' object ($\rho_P \approx 6$~g\,cm$^{-3}$, $R_P \approx 2$~R$_\oplus$). For the physical properties of plausible, theoretically predicted planets such as a `super-Earth', we use the mass-radius relation model from Dubois (2004) and from Sotin et al.\ (2005). Our atmospheric model allows the re-scaling of vertical profiles depending on the acceleration due to gravity of the planet and the atmospheric pressure at the reference level. \subsubsection{Molecular composition of the atmosphere} Our simplified atmospheric profiles contain only the species that may produce interesting spectral signatures in the chosen wavelength range (0.2 to 2~$\mu$m), viz., water vapor (H$_2$O), carbon dioxide (CO$_2$), ozone (O$_3$) and molecular oxygen (O$_2$). Molecular nitrogen (N$_2$) has also been considered, though it lacks marked electronic transitions from the UV to the near IR. Nevertheless, it is a major species in Earth's atmosphere and it has a detectable signature via Rayleigh scattering at short wavelengths. We consider three types of atmospheres: (A) N$_2$/O$_2$-rich, (B) CO$_2$-rich and (C) N$_2$/H$_2$O-rich cases. The first two types can be associated with existing planetary atmospheres, respectively Earth and Venus. The last type (C) could correspond to the atmosphere of an Earth-mass volatile-rich planet such as an `ocean-planet' described by L\'eger et al.\ (2004). The basis for building a `toy model' of an H$_2$O-rich atmosphere is found in L\'eger et al.\ (2004) and Ehrenreich et al.\ (2005b, see Sect.~\ref{sec:H2O-rich_atmo}). Vertical gradients in the chemical composition and temperature of each of these atmospheres are plotted in Fig.~\ref{fig:A_profile} (N$_2$/O$_2$-rich), Fig.~\ref{fig:B_profile} (CO$_2$-rich) and Fig.~\ref{fig:C_profile} (N$_2$/H$_2$O-rich). Table~\ref{tab:composition} summarizes the mean chemical compositions of these model atmospheres. \begin{table*} \centering \begin{tabular}{*{8}{c}} \hline \hline Type & $\mu$ (g\,mol$^{-1}$) & $Y_{\mathrm{N}_2}$ (\%) & $Y_{\mathrm{H}_2\mathrm{O}}$ (\%) & $Y_{\mathrm{CO}_2}$ (\%) & $Y_{\mathrm{O}_2}$ (\%) & $Y_{\mathrm{O}_3}$ (\%) & Used for models \\ \hline N$_2$/O$_2$-rich & 28.8 & 78 & 0.3 & 0.03 & 21 & ${<10^{-3}}^*$ & A1, A2, A3 \\ CO$_2$-rich & 43.3 & 4 & $3\cdot10^{-4}$ & 95 & 0 & 0 & B1, B2, B3 \\ N$_2$/H$_2$O-rich & 28.7 & 80 & 10 & 10 & 0 & 0 & C1, C2, C3 \\ \hline \end{tabular} \caption{Mean volume mixing ratio of atmospheric absorbers for the different types of model atmospheres considered. \newline (*) Ozone is only present in model~A1.} \label{tab:composition} \end{table*} \subsubsection{Temperature profiles} \label{sec:temperature_profile} As mentioned above, we use Earth and Venus vertical temperature profiles as prototypes for N$_2$/O$_2$-rich and CO$_2$-rich atmospheres (see Sect.~\ref{sec:choice}).
Moreover, we assume an isothermal profile in the thermosphere, instead of the real one. This is an arbitrary, but conservative choice, since the temperature should on the contrary rise in the thermosphere, enhancing the atmosphere's detectability (see Sect.~\ref{sec:temperature_effect}). \subsubsection{Upper limit of the atmosphere} \label{sec:b_max} We set the profiles to extend up to a critical height $b_\mathrm{max}$ from the centre of the planet, or $h_\mathrm{max}$ from the surface (\mbox{$b_\mathrm{max} = R_P + h_\mathrm{max}$}). This limit corresponds to the altitude above which the molecular species we considered (H$_2$O, O$_3$, CO$_2$, O$_2$) are likely to be destroyed or modified either by photo-dissociating or ionizing radiation, such as Ly$_\alpha$ or extreme-UV (EUV). Therefore, the critical height corresponds to the mesopause on Earth (at $\approx 85$~km). The column density of the terrestrial atmosphere above that altitude, $\mathcal{N}_{\geq 85\mathrm{\,km}}$, is sufficient to absorb all Ly$_\alpha$ flux. In fact, as the number density of the atmospheric gas, $n(h)$, decreases exponentially with height, we can simply consider \mbox{$\mathcal{N}_{\geq 85\mathrm{\,km}} \propto n_{85 \mathrm{\,km}} \cdot H_{85 \mathrm{\,km}}$}, where $n_{85 \mathrm{\,km}}$ and $H_{85\mathrm{~km}}$ are the density and the scale height of the terrestrial atmosphere at 85~km, respectively. Similarly, we set the upper limit of a given atmosphere, $h_\mathrm{max}$, to the altitude below which the photo-dissociating photons are absorbed. We assume that $h_\mathrm{max}$ is the altitude where the column density equals that of the terrestrial atmosphere at 85~km, that is \mbox{$n_{h_\mathrm{max}} \cdot H_{h_\mathrm{max}} = (n_{85 \mathrm{\,km}})_\oplus \cdot (H_{85 \mathrm{\,km}})_\oplus$}. We determine $h_\mathrm{max}$ by scaling this equation. Values of $h_\mathrm{max}$ for the different models are given in Table~\ref{tab:models}. Just as neutral elements absorb light below $h_\mathrm{max}$, it is likely that ionized elements absorb light above this limit, though we do not include this effect in the model. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Fig2.ps}} \caption{Atmospheric profiles, A1. The plot shows the total number density profile (thin solid line) of the atmosphere in cm$^{-3}$, and that of the five species included in our model, namely, N$_2$ (dotted line), O$_2$ (dash-dot-dot-dotted line), H$_2$O (dashed line), CO$_2$ (long-dashed line) and O$_3$ (dash-dotted line). Temperature (thick line up to 80~km) and mixing ratios of the different species are those of Earth. Temperature is assumed to be constant above that height. The thickest horizontal line shows the position of the cloud layer.} \label{fig:A_profile} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Fig3.ps}} \caption{Atmospheric profiles, B1. The legend is identical to that in Fig.~\ref{fig:A_profile}. The temperature profile and mixing ratios are those of Venus. The temperature is considered to be constant above 50~km. Carbon dioxide is barely visible because it is by far the major constituent, so its line is superimposed on that of the total density.} \label{fig:B_profile} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Fig4.ps}} \caption{Atmospheric profiles, C1. Same legend as in Fig.~\ref{fig:A_profile} and Fig.~\ref{fig:B_profile}. The temperature profile follows a dry adiabat in the first 10~km of the atmosphere, until the point where $e \geq e_\mathrm{sat}$.
Next, it follows a steeper saturated adiabat up to an altitude of 20~km. The temperature profile is arbitrarily set to be isothermal above this point. The cloud top (thickest line) is one scale height above the highest point where $e \geq e_\mathrm{sat}$. For reasons detailed in the text (see Sect.~\ref{sec:H2O-rich_atmo}), this point corresponds to the level where the temperature profile becomes isothermal.} \label{fig:C_profile} \end{figure} \subsubsection{Presence of clouds} \label{sec:clouds} In the wavelength range of interest, the surface of Venus is almost completely hidden by clouds. Therefore, it seems reasonable to model these types of clouds, to a first-order approximation, by assuming that they act as an optically thick layer at a given altitude. As a result, clouds effectively increase the apparent radius of the planet and the transit spectrum gives information only about atmospheric components existing above the cloud layer. The top of the cloud layer is a free parameter for N$_2$/O$_2$- and CO$_2$-rich atmospheres (set to 10 and 30~km, taken from the Earth and Venus, respectively). We treat the case of the N$_2$/H$_2$O-rich atmosphere separately because H$_2$O is a highly condensable species. \subsubsection{Composition, vertical structure and location of the clouds in a N$_2$/H$_2$O-rich atmosphere} \label{sec:H2O-rich_atmo} The temperature gradient of an atmosphere containing a non-negligible amount of condensable species, like H$_2$O, significantly departs from the case where no condensation occurs. A correct estimation of the temperature profile is crucial to determine the scale height, hence the detectability of that atmosphere. In an H$_2$O-rich atmosphere, the evolution of the adiabatic temperature gradient is driven by the ratio of the partial pressure of water vapor, $e$, to the saturation vapor pressure, $e_\mathrm{sat}$. This ratio should also determine the levels at which the water vapor is in excess in the air and condenses (for $e / e_\mathrm{sat} > 1$), i.e.\ the levels where clouds may form. Our initial conditions at the $z=0$ level ($z^0$) are the temperature $T^0$ and pressure $p^0$. With these quantities we can estimate $e_\mathrm{sat}$, which depends only on the temperature, using the Clausius-Clapeyron equation: \begin{equation} \label{eq:Clausius-Clapeyron} e_\mathrm{sat}(T) = p^* \exp{\left[\frac{\mu_{\mathrm{H}_2\mathrm{O}} L_v}{\mathcal{N}_A k} \left( \frac{1}{T^*} - \frac{1}{T} \right) \right]} \end{equation} where $p^*$ and $T^*$ are the reference pressure ($1.013\cdot10^{5}$~Pa) and temperature (373~K), $\mu_{\mathrm{H}_2\mathrm{O}}$ is the molar mass of water and $L_v$ is the latent heat of vaporization for water ($2.26\cdot10^{10}$~erg\,g$^{-1}$). Assuming that the planet is covered with liquid water (e.g., an ocean-planet; see L\'eger et al.\ 2004) and that $T^0$ is `tropical' (e.g. 340~K), the humidity at the surface is high, so that the value of $e^0$ must be a significant fraction of $e_\mathrm{sat}(T^0)$. We set $e^0$ to half the value of $e_\mathrm{sat}(T^0)$. The volume mixing ratio of water can be expressed as $Y_{\mathrm{H}_2\mathrm{O}} = e / p$, and we can calculate it at the surface of the planet. The atmosphere of an ocean-planet may also contain a significant quantity of CO$_2$. We arbitrarily set this quantity constant to $Y_{\mathrm{CO}_2} = 0.1$ (L\'eger et al.\ 2004; Ehrenreich et al.\ 2005b).
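For illustration, Eq.~(\ref{eq:Clausius-Clapeyron}) and the surface condition $e^0 = e_\mathrm{sat}(T^0)/2$ can be evaluated with the minimal Python sketch below; the conversion of the quoted constants to cgs units is ours. The resulting surface mixing ratio, $Y_{\mathrm{H}_2\mathrm{O}}^0 \approx 0.14$, is consistent with the $\sim$10\% mean value listed in Table~\ref{tab:composition}.

\begin{verbatim}
import math

P_STAR = 1.013e6   # p* = 1.013e5 Pa, in dyn cm^-2
T_STAR = 373.0     # T* in K
MU_H2O = 18.02     # molar mass of water in g mol^-1
L_V    = 2.26e10   # latent heat of vaporization in erg g^-1
R_GAS  = 8.314e7   # N_A * k in erg mol^-1 K^-1

def e_sat(T):
    """Saturation vapor pressure of water (Clausius-Clapeyron), dyn cm^-2."""
    return P_STAR * math.exp(MU_H2O * L_V / R_GAS * (1.0 / T_STAR - 1.0 / T))

T0, p0 = 340.0, 1.013e6          # 'tropical' surface, p0 = 1 atm
e0 = 0.5 * e_sat(T0)             # e0 set to half the saturation pressure
print(e_sat(T0) / 1.013e6)       # ~0.28 atm
print(e0 / p0)                   # Y_H2O at the surface, ~0.14
\end{verbatim}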
Molecular nitrogen is the major constituent of the atmosphere of the Earth and the second most abundant species in the atmosphere of Venus; we therefore chose to include it to complete the chemical composition of this atmosphere. The mixing ratio of N$_2$ was set to be $Y_{\mathrm{N}_2} = 1 - Y_{\mathrm{CO}_2} - Y_{\mathrm{H}_2\mathrm{O}}$ at any level. Assuming the atmosphere contains only N$_2$, H$_2$O and CO$_2$, we can obtain the mean molar mass of the atmospheric gas ($\mu^0 = \sum_i Y_i^0 \mu_i$) and that of the dry atmospheric gas ($\mu_\mathrm{d}^0 = \mu^0 - Y_{\mathrm{H}_2\mathrm{O}}^0 \mu_{\mathrm{H}_2\mathrm{O}}$), the mean specific heat of dry air ($C_p^0 = \sum {C_p}_i Y_i^0 \mu_i / \mu_\mathrm{d}^0$) and the scale height $H_0$ (all at the level $z^0$). For the $z^{j+1}$ level, we need to evaluate the temperature gradient between $z^j$ and $z^{j+1}$. There are two cases (Triplet \& Roche 1986): \begin{itemize} \item $e^j < e_\mathrm{sat}^j$; in this case the temperature follows a dry adiabatic gradient, \begin{equation} \label{eq:dry_gradient} {\Delta T}_\mathrm{dry} = \frac{-g}{C_p^j}. \end{equation} \item $e^j = e_\mathrm{sat}^j$; in this case the gradient is saturated, \begin{equation} \label{eq:sat_gradient} {\Delta T}_\mathrm{sat} = {\Delta T}_\mathrm{dry} \frac{\left( 1 + r_\mathrm{sat}^j \right) \left[ 1 + L_v r_\mathrm{sat}^j / (R_\mathrm{dry}^j T^j) \right]}{1 + \frac{r_\mathrm{sat}^j}{C_p^j} \left[{C_p}_{\mathrm{H}_2\mathrm{O}} + L_v^2 \frac{1 + r_\mathrm{sat}^j R_{\mathrm{H}_2\mathrm{O}} / R_\mathrm{dry}^j}{R_{\mathrm{H}_2\mathrm{O}} (T^j)^2} \right]} \end{equation} where $r_\mathrm{sat}^j = (\mu_{\mathrm{H}_2\mathrm{O}} e_\mathrm{sat}^j) / [\mu_\mathrm{d}^j (p^j - e_\mathrm{sat}^j) ]$ is the mixing ratio of saturated air, and $R_\mathrm{dry}^j = \mathcal{N}_A k / \mu_\mathrm{dry}^j$ and $R_{\mathrm{H}_2\mathrm{O}} = \mathcal{N}_A k / \mu_{\mathrm{H}_2\mathrm{O}}$ are the specific gas constants of dry air at the level $z^j$ and of water, respectively. \end{itemize} If $z^{j+1} < 20$~km, we select the appropriate gradient according to the value of $e / e_\mathrm{sat}$, and get the value of the temperature $T^{j+1}$. Above 20~km, we assume the temperature profile becomes isothermal ($T^{j+1} = T^j$). The assumption of an isothermal atmosphere, already discussed in Sect.~\ref{sec:temperature_profile}, is somewhat arbitrary but is motivated by an analogy with the atmosphere of the Earth, where the temperature gradient becomes positive from about 20 to 50~km. Assuming an isothermal profile conservatively mimics the presence of a stratosphere. However, it has important consequences, since it allows H$_2$O to be significantly present above the cloud top. In fact, above 20~km, the temperature stops decreasing, preventing condensation from occurring (the saturation vapor pressure depends only on temperature). Our assumption consequently fixes the height of the cloud deck to the point where the temperature profile becomes isothermal (actually, one scale height above that point). If we set this point higher, we would increase the amount of clouds, thereby reducing the detectable portion of the atmosphere. In addition, cloud formation would take the corresponding latent heat of condensation out of the atmospheric gas, thus contributing to cool the atmosphere at the level of the cloud layer.
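The level-by-level choice between Eqs.~(\ref{eq:dry_gradient}) and~(\ref{eq:sat_gradient}) can be sketched as follows (a minimal Python fragment in cgs units; the specific heat of water vapor, ${C_p}_{\mathrm{H}_2\mathrm{O}} \approx 1.85\cdot10^7$~erg\,g$^{-1}$\,K$^{-1}$, is our assumed value):

\begin{verbatim}
R_GAS  = 8.314e7   # N_A * k in erg mol^-1 K^-1
MU_H2O = 18.02     # g mol^-1
L_V    = 2.26e10   # erg g^-1
CP_H2O = 1.85e7    # specific heat of water vapor, erg g^-1 K^-1 (assumed)

def dt_dry(g, cp):
    """Dry adiabatic gradient dT/dz = -g / C_p, in K cm^-1."""
    return -g / cp

def dt_sat(g, cp, T, p, e_s, mu_dry):
    """Saturated adiabatic gradient, in K cm^-1."""
    r_dry = R_GAS / mu_dry                       # specific constant of dry air
    r_h2o = R_GAS / MU_H2O                       # specific constant of water
    r_sat = MU_H2O * e_s / (mu_dry * (p - e_s))  # saturated mixing ratio
    num = (1.0 + r_sat) * (1.0 + L_V * r_sat / (r_dry * T))
    den = 1.0 + (r_sat / cp) * (CP_H2O
          + L_V**2 * (1.0 + r_sat * r_h2o / r_dry) / (r_h2o * T**2))
    return dt_dry(g, cp) * num / den

def next_temperature(T, z_next, dz, e, e_s, g, cp, p, mu_dry):
    """One vertical step: dry or saturated adiabat below 20 km, isothermal above."""
    if z_next >= 20.0e5:                         # 20 km, in cm
        return T
    grad = dt_dry(g, cp) if e < e_s else dt_sat(g, cp, T, p, e_s, mu_dry)
    return T + grad * dz
\end{verbatim}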
We then calculate $H^{j+1}$, $p^{j+1} = p^j \cdot \exp{\left[(z^j - z^{j+1})/H^{j+1}\right]}$, $e_\mathrm{sat}^{j+1}$ (from Eq.~\ref{eq:Clausius-Clapeyron}) and either $e^{j+1} = e^j \cdot \exp{\left[(z^j - z^{j+1}) / H^{j+1} \right]}$, if the atmosphere is not saturated, or $e^{j+1} = e_\mathrm{sat}^{j+1}$, if the atmosphere is saturated. We finally find all $Y_i^{j+1}$, $\mu_\mathrm{dry}^{j+1}$ and ${C_p}_\mathrm{dry}^{j+1}$ and then iterate the process for all atmospheric levels. The higher- and lower-pressure levels where $e = e_\mathrm{sat}$ indicate the bottom and the top of the cloud-forming region, respectively. We assume the cloud layer does not extend more than one scale height above the top of the cloud-forming region. However, we can still have $e \leq e_\mathrm{sat}$ higher in the atmosphere, and thus H$_2$O can be present above the clouds. \subsection{Description of atmospheric absorptions} \subsubsection{Chemical species} We used the program \texttt{LBLABC} (Meadows \& Crisp 1996), a line-by-line model that generates monochromatic gas absorption coefficients from molecular line lists, for each of the gases present in the atmosphere except ozone. The line lists are extracted from the HITRAN~2000 databank (Rothman et al.\ 2003). We calculated the absorption coefficients for O$_2$, H$_2$O and CO$_2$ in our wavelength range (i.e., from 200 to 2\,000~nm). The absorption coefficients of these species depend on pressure and temperature. We verified that those variations do not significantly impact the results obtained (see Sect.~\ref{sec:results}), and we decided to use the absorption coefficients calculated at the pressure and temperature of the cloud layer, i.e., 10~km in models~A1, A2 \&~A3, 30~km in models~B1, B2 \&~B3 and from 25 to 70~km in models~C1 to~C3. We then assumed these absorption coefficients to be constant along the $z$-axis. This is a fairly good approximation, since molecules at that atmospheric level contribute more substantially to the transmitted spectrum than molecules at the bottom of the atmosphere. Absorption coefficients for H$_2$O, CO$_2$, O$_3$ and O$_2$ are compared in Fig.~\ref{fig:abc}. The spectrum of O$_3$ is unavailable in HITRAN at wavelengths lower than 2.4~$\mu$m. However, O$_3$ has strong absorption in the Hartley (200--350~nm) and Chappuis (400--750~nm) bands. Thus we took the photo-absorption cross-sections, $\sigma$ (in cm$^{2}$), from the GEISA/cross-sectional databank (Jacquinet-Husson et al.\ 1999) and converted them into absorption coefficients, $A$ (in cm$^{2}$\,g$^{-1}$), such that $A = \sigma \mathcal{N}_A / \mu$, where $\mu$ is the molar mass of the component. As shown in Fig.~\ref{fig:abc_variation}, the pressure and temperature variations do not have a significant influence on the cross sections/absorption coefficients of O$_3$. We therefore used the values given for $p = 1$~atm\footnote{1~atm = 1\,013~hPa.} and $T = 300$~K, and set them constant along the $z$-axis. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Fig5.ps}} \caption{Absorption coefficients of atmospheric absorbers (in cm$^2$\,g$^{-1}$), as a function of the wavelength.
The photo-absorption coefficients corresponding to H$_2$O, O$_2$, O$_3$ and CO$_2$ (solid lines) are plotted against their respective Rayleigh scattering coefficient (dotted line), except O$_3$, plotted against the Rayleigh scattering coefficient of N$_2$.} \label{fig:abc} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Fig6.ps}} \caption{Dependence of the absorption coefficient of O$_3$ on pressure and temperature. For clarity, each line has been shifted down by $5 \cdot 10^4$~cm$^2$\,g$^{-1}$ with respect to the previous one.} \label{fig:abc_variation} \end{figure} \subsubsection{Rayleigh scattering} Light at short wavelengths is efficiently scattered by atmospheric molecules, whose dimensions are small compared to $\lambda$. Rayleigh scattering could be an important indicator of the most abundant atmospheric species. Molecular nitrogen, for instance, does not present any noticeable spectroscopic lines between 0.2 and 2~$\mu$m. With a transit observation, the presence of a gas without spectroscopic lines, like nitrogen in the Earth's atmosphere, can be indirectly inferred from the wavelength dependence of the spectrum ratio continuum. Since the Rayleigh scattering cross section of CO$_2$ is high, Venus-like atmospheric signatures should also present an important Rayleigh scattering contribution. We have therefore estimated these different contributions. The Rayleigh scattering cross section, $\sigma_R$, can be expressed in cgs units as (Bates 1984; Naus \& Ubachs 1999; Sneep \& Ubachs 2004): \begin{equation} \label{eq:rayleigh_xsc} \sigma_R(\bar{\nu}) = \frac{24 \pi^3 \bar{\nu}^4}{n^2} \left( \frac{r(\bar{\nu})^2 - 1}{r(\bar{\nu})^2 + 2} \right)^2 \end{equation} where $\bar{\nu} = 1 / \lambda$, $n$ is the number density (cm$^{-3}$) and $r$ is the refractive index of the gas. The total Rayleigh scattering includes weighted contributions from N$_2$, O$_2$, CO$_2$ and H$_2$O (i.e., $\sigma_R = \sum_{i} Y_i {\sigma_R}_i$), and so we need all the corresponding refractive indexes. These are found in Bates (1984) and Sneep \& Ubachs (2004) for N$_2$, O$_2$ and CO$_2$.\footnote{We noted a typographical error in the CO$_2$ refractive index formula (Eq.~13) in Sneep \& Ubachs (2004): in order to yield the correct values, results from this expression should be divided by $10^3$ (M.~Sneep, personal communication).} The refractive index for H$_2$O comes from Schiebener et al.\ (1990). Tests showed that the refractive indexes do not change significantly with temperature and pressure. We have therefore calculated the indexes for standard conditions ($15\degr$C and 1013~hPa). \subsubsection{Refraction} Depending on the wavelength, refraction may bring into the line of sight rays coming from different parts of the star. To quantify the importance of that effect, we calculate the maximum deviation, $\Delta\theta$, due to the wavelength dependence of the refractive index, using the formula given by Seager \& Sasselov (2000) and the refractive index at the surface ($h = 0$) between 0.2 and 2~$\mu$m. We obtain \mbox{$\Delta\theta \approx 0.3\arcmin$}. This represents about 1.5\%, 1\% and 0.5\% of the angular diameter of the star (F-, G- and K-type star, respectively) as seen from the planet. We can therefore consider this effect negligible as long as there are no important variations of the stellar flux on scales lower than the surface corresponding to these numbers.
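Before moving on to the test models, Eq.~(\ref{eq:rayleigh_xsc}) can be made concrete with a short numerical sketch. In the Python fragment below, the Peck \& Khanna (1966) dispersion relation is used as a stand-in for the N$_2$ refractive index values of Bates (1984), and both the index and the number density are taken at $0\degr$C and 1013~hPa for internal consistency (our choices):

\begin{verbatim}
import math

N_LOSCHMIDT = 2.687e19  # number density at 0 C, 1013 hPa, in cm^-3

def n_minus_1_N2(wavelength_nm):
    """Peck & Khanna (1966) dispersion relation for N2 (stand-in)."""
    s2 = (1.0e3 / wavelength_nm) ** 2        # (1/lambda)^2, lambda in um
    return (6855.200 + 3.243157e6 / (144.0 - s2)) * 1e-8

def sigma_rayleigh(wavelength_nm, n_minus_1, density=N_LOSCHMIDT):
    """Rayleigh cross section of Eq. (rayleigh_xsc), in cm^2."""
    nu_bar = 1.0 / (wavelength_nm * 1.0e-7)  # wavenumber in cm^-1
    r2 = (1.0 + n_minus_1) ** 2
    lorentz = (r2 - 1.0) / (r2 + 2.0)
    return 24.0 * math.pi**3 * nu_bar**4 / density**2 * lorentz**2

print(sigma_rayleigh(600.0, n_minus_1_N2(600.0)))  # ~3e-27 cm^2 for N2
\end{verbatim}

The steep $\bar{\nu}^4$ dependence of the result illustrates why the Rayleigh `continuum' rises so markedly toward short wavelengths.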
\subsection{Choice of test models} \label{sec:choice} We chose 9 cases, divided into 3 categories: 1~R$_\oplus$-planets (models~A1, B1 and~C1), 0.5~R$_\oplus$-planets (A2, B2 and~C2) and 2~R$_\oplus$-planets (A3, B3 and~C3). The parameters for each model are summarized in Table~\ref{tab:models}. For these ranges of planetary radii, the depth of the occultation by the tested planets will differ by a factor of $\sim$16 at most during their transit. Notice that a better detection of the transit itself does not always imply a better detection for the atmosphere of the transiting planet. On the contrary, in some cases, the fainter the transit is, the more detectable the atmosphere will be! In any case, we naturally need to secure the detection of the planet itself before looking for an atmosphere. The choice of studying planets with a variety of sizes allows us to explore a large range of planet characteristics, in mass, radius and density. The Earth's density is 5.5~g\,cm$^{-3}$. A planet having the internal composition of the Earth and twice its radius would be $\sim$10 times more massive, while a planet half as large would be $\sim$10 times less massive (Sotin et al.~2005). That gives densities of 6.1 and 4~g\,cm$^{-3}$, respectively. We thus have 3 cases, each of which can be coupled with a plausible atmosphere. We chose a N$_2$/O$_2$-rich atmosphere (similar to that of the Earth) for models~A1, A2 and~A3, and a Cytherean (i.e., Venus-like\footnote{Cythera (\emph{K}$\acute{\upsilon}\theta\eta\rho\alpha$) is an Ionian island where, according to the Greek mythology, the goddess Aphrodite/Venus first set foot. See \texttt{http://en.wikipedia.org/wiki/Cytherean}.}) CO$_2$-rich atmosphere for models~B1, B2 and~B3. Note that the atmospheric pressure profiles are scaled from the 1~R$_\oplus$ cases (A1 and B1) to the 0.5 and 2~R$_\oplus$ models. In doing so, we did not include any species that showed a peak of concentration in altitude, such as the O$_3$ layer in model~A1. In fact, the O$_3$ peak does not depend only on the hydrostatic equilibrium, but also on the photochemical equilibrium at the tropopause of the Earth. For that reason O$_3$ is absent in models~A2 and~A3. L\'eger et al.\ (2004) suggested the existence of `ocean-planets', whose internal content in volatiles (H$_2$O) might be as high as 50\% in mass. Such planets would be much less dense than telluric ones. We are particularly interested in those ocean-planets since the lower the density of the planet is, the higher the atmosphere extends above the surface. These objects could have densities of 1.8, 2.8 and 4.1~g\,cm$^{-3}$ for radii of 0.5, 1 and 2~R$_\oplus$ (Sotin et al.~2005), which are relatively small, but reasonable if compared with Titan's density (1.88~g\,cm$^{-3}$). The huge quantity of water on the surface of an ocean-planet could produce a substantial amount of water vapor in its atmosphere, if the temperature is high enough. A non-negligible concentration of CO$_2$ might be present as well in those atmospheres (Ehrenreich et al.~2005b). Using this information on ocean-planets, we can simulate three extra cases, namely~C1, C2 and~C3 (Table~\ref{tab:models}). \subsection{Choice of different stellar types} \label{sec:distance} In this work, we consider planets orbiting in the habitable zone (HZ) of their parent star. Our atmospheric models are in fact not a good description for planets orbiting too close to their parent star.
For instance, the heating of the atmosphere by an extremely close star could trigger effects like evaporation, invalidating the hydrostatic equilibrium we assumed (see, for instance, Lecavelier des Etangs et al.\ 2004; Tian et al.\ 2005). The reduced semi-major axis $a_r$ of the orbit of all planets we have considered is defined as: \begin{equation} \label{eq:a_r} a_r = a \cdot (L_\star / L_\odot)^{-0.5}. \end{equation} We set $a_r = 1$~astronomical unit (AU), so that the planet is in the HZ of its star. Here we focus on Earth-size planets orbiting different main-sequence stars, such as K-, G- and F-type stars, since the distribution of stellar photons in the spectrum differs from one spectral type to another. Planets in the HZ of K, G and F stars, with $a_r = 1$~AU, should have a real semi-major axis of 0.5, 1 and 2~AU, respectively. \section{Signal-to-noise ratio for ideal observations} \label{sec:S/N} Before studying the atmospheres, we need to detect the planets themselves with a dedicated survey, such as the one proposed by Catala et al.\ (2005). The transmission spectroscopy we theoretically study here requires the use of a large space telescope. Hence, we need to quantify the S/N of such observations to determine the detectability of the atmospheric signatures of a transiting Earth-size exoplanet. The S/N will depend on both instrumental and astrophysical parameters. \subsection{Instrumental requirements} \label{sec:S/N_instru} The first relevant parameter relative to the instrumentation is the effective area of the telescope collecting mirror, $S$, which can be expressed as $S=(\epsilon D)^2 \pi / 4$. The coefficient $\epsilon^2$ accounts for the instrumental efficiency and $\epsilon D$ is thus the `effective diameter' of the mirror. Up to the present, all exoplanetary atmospheric signatures have been detected by the Space Telescope Imaging Spectrograph (STIS) on board the \emph{Hubble Space Telescope (HST)}. This instrument, now no longer operative, was very versatile\footnote{STIS was used for imagery, spectro-imagery, coronography and low and high resolution spectroscopy.} and consequently not designed for high efficiency. It had a throughput $\epsilon^2 \approx 2\%$ from 200 to 300~nm, and $\epsilon^2 \approx 10\%$ from 350 to 1\,000~nm. Since most of the photons we are interested in lie in the range from 350 to 1\,000~nm, we reasonably assume that a modern spectrograph has a mean $\epsilon^2$ significantly greater than 10\% from 200 to 2\,000~nm. The most efficient present-day spectrographs have $\epsilon^2 \approx 25\%$ in the visible, so it seems reasonable to assume that next-generation spectrographs, specifically designed for high-sensitivity observations, could have a throughput of $\epsilon^2 \approx 25\%$, i.e., $\epsilon = 50\%$. Another parameter linked to the instrument is the spectral resolution, $\mathcal{R}$. In the following, $\mathcal{R}$ will be assumed to be about 200, i.e., 10-nm-wide spectral features can be resolved. Finally, it is legitimate to question the ability of the instrument detectors to discriminate the tenuous ($\sim$ 10$^{-6}$) absorption features in the transmitted spectra of Earth-size planets. In the recent past, sodium was detected at a precision of 50~parts-per-million (ppm) in a line as narrow as about 1~nm by Charbonneau et al.\ (2002) using STIS.
According to our results (see Sect.~\ref{sec:results}), some absorption features from Earth-size planet atmospheres show a $\sim$1~ppm dimming over $\sim$100~nm: the technological improvement required to fill the gap should not be unachievable. Besides, since we deal with \emph{relative} measurements -- the in-transit signal being compared to the out-of-transit one -- there is no need to have detectors with a perfect absolute calibration. Only a highly stable response over periods of several hours is required. Nevertheless, instrumental precision remains a challenging issue whose proper assessment will require further, detailed studies. \subsection{Physical constraints on the observation} \label{sec:S/N_physics} The number of photons detected as a function of wavelength depends on the spectral type of the star, while the total number of photons received in an exposure of duration $t$ depends on the apparent magnitude of the star, $V$. The stellar spectra $F_{\star}^{V=0}(\lambda)$ are from \object{$\rho$~Capricorni} (F2\,{\sc iv}), \object{HD~154\,760} (G2\,{\sc v}) and \object{HD~199\,580} (K2\,{\sc iv}) and are taken from the Bruzual-Persson-Gunn-Stryker (BPGS) spectrophotometry atlas\footnote{Available on \texttt{ftp.stsci.edu/cdbs/cdbs2/grid/bpgs/}.}. The fluxes (erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$) are given at a null apparent magnitude, so we re-scaled them for any apparent magnitude $V$, \mbox{$F_\star = F_{\star}^{V=0} \cdot 10^{-0.4 V}$}. The three corresponding spectra are plotted for a default magnitude $V=8$ in Fig.~\ref{fig:stars}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Fig7.ps}} \caption{Spectra of K2 (dashed line), G2 (solid line) and F2-type (dotted line) stars between 0.2 and 2~$\mu$m. The fluxes are scaled to an apparent magnitude $V=8$.} \label{fig:stars} \end{figure} The stellar type determines the radius and the mass of the star, so the transit duration (and thus the maximum time of exposure during the transit) is different depending on the star we consider. The transit duration is also a function of the semi-major axis of the planet orbit. Since we chose a constant reduced distance ($a_r = 1$~AU) for all planetary models (see Sect.~\ref{sec:distance}), the duration of transit depends on the stellar luminosity as well. From Zombeck (1990), we obtain the radii of F and K stars relative to that of the Sun, respectively $R_\mathrm{F}/R_\odot \approx 1.25$ and $R_\mathrm{K}/R_\odot \approx 0.75$, the mass ratios, $M_\mathrm{F}/M_\odot \approx 1.75$ and $M_\mathrm{K}/M_\odot \approx 0.5$, and the luminosity ratios, respectively $L_\mathrm{F}/L_\odot \approx 4$ and $L_\mathrm{K}/L_\odot \approx 0.25$. Using Eq.~\ref{eq:a_r}, the duration of the transit is: \begin{equation} \label{eq:time_transit} \tau \approx \frac{13 \pi}{4}~\rm{h} \cdot \frac{R_\star}{R_\odot} \left(\frac{M_\star}{M_\odot}\right)^{-0.5} \left(\frac{L_\star}{L_\odot} \right)^{0.25}, \end{equation} where $13\pi/4$~h is the mean transit duration of a planet at 1~AU across a G star, averaged over all possible impact parameters of the transit. From Eq.~\ref{eq:time_transit} we obtain mean transit durations of 7.6, 10.2 and 13.6~h for K-, G- and F-type stars, respectively. In the following, we set $t = \tau$. Ideally, our observations are limited only by the stellar photon noise -- the detection of sodium at a precision of $\sim$50~ppm in the atmosphere of \object{HD~209\,458b} by Charbonneau et al. (2002) was in fact limited by the stellar photon noise.
However, at the low signal levels we are searching for, the intrinsic stellar noise might need to be considered as well. Stellar activity, as well as convective motions, will cause variations in both intensity and color in the target stars, on a large variety of timescales. The impact of stellar micro-variability on the detectability of photometric transits has been addressed by a number of studies (see, e.g., Moutou et al., 2005; Aigrain et al., 2004 -- especially their Fig.~8; Lanza et al., 2004), all pointing towards photometric variability levels in the range of $\sim$100--1\,000~ppm for durations of a few days. This is to be compared to the strength and duration of the atmospheric signatures we want to look at: they are $\sim$1~ppm variations lasting a few hours. While the different temporal frequency and spectral content of these signatures versus the stellar noise will hopefully allow us to discriminate between the two, the impact of stellar micro-variability on such faint signals is likely to be significant, and may limit the ability to detect an atmosphere in a transiting planet. For instance, Aigrain et al. (2004) suggested that K stars are better suited than G or F stars for the detection of terrestrial planets in the presence of stellar micro-variability. However, note that the observation of several transits for each planet considered will confirm the signal detected in the first transit. For instance, at $a_r=1$~AU around a K star, a planet has a period of $\approx 0.3$~yr, allowing several transit observations to be scheduled within a short period of time. Finally, the usual technique to detect a spectral signature from a transit is to compare in-transit and out-of-transit observations (Vidal-Madjar et al.\ 2003, 2004). For all these reasons, we will assume in the following that the transit signal can be discriminated from the stellar activity, and consequently that the photon noise is the limiting factor. Nevertheless, further and detailed analysis is certainly needed to quantify the effect of stellar micro-variability, as a function of the stellar type, but this is outside the scope of this paper. \subsection{Calculation of the signal-to-noise ratio} Now let $\varphi_\star$ be the maximum number of photons per element of resolution that can be received during $\tau$: \mbox{$\varphi_\star = F_\star(\lambda) \cdot \lambda / (h_P c) \cdot \mathcal{R} \cdot S \cdot \tau$}, where $h_P$ is Planck's constant and $c$ the speed of light. Some photons are blocked or absorbed by the planet, therefore the actual number of photons received during the transit is \mbox{$\varphi = \varphi_\star (1 + \Re')$} per element of resolution. From the observations, it is possible to obtain $\tilde{R}_P$, an estimate of the radius of the transiting planet $R_P$ (e.g., by using the integrated light curve or a fit to the observed spectrum ratio). This value corresponds to the flat spectrum ratio (i.e., a planet without atmosphere) that best fits the data. The corresponding number of photons received during an observation per element of resolution is therefore expressed as: \mbox{$\tilde{\varphi} = \varphi_\star \left[1 - (\tilde{R}_P / R_\star)^2\right]$}. The weighted difference between $\varphi$ and $\tilde{\varphi}$ can reveal the presence or the absence of a planetary atmosphere. We express the $\chi^2$ of this difference over all the elements of resolution $k$ as \mbox{$\sum_{k} \left[ \left(\varphi_k - \tilde{\varphi}_k \right) / \sigma_{\varphi_k} \right]^2$}.
Here, the uncertainty in the number of photons received is considered to be dominated by the stellar photon noise (see Sect.~\ref{sec:S/N_physics}), that is $\sigma_\varphi = \sqrt{\varphi}$. We thus have: \begin{equation} \label{eq:chi2} \chi^2 = \sum_{k} \left( \frac{{\varphi_\star}_k}{1 + \Re'_k } \left[ \Re'_k + \left(\tilde{R}_P / R_\star \right)^2 \right]^2 \right). \end{equation} Given the $\chi^2$, the S/N can be directly calculated by taking its square root. The best estimation can be obtained by minimizing the $\chi^2$ with respect to the radius $\tilde{R}_P$, i.e., \mbox{$\partial \chi^2 / \partial \tilde{R}_P = 0$}. From this formula we can calculate the estimated radius: \begin{equation} \label{eq:R_estimate} \tilde{R}_P = R_\star \sqrt{- \frac{\sum_{k}\left[{\varphi_\star}_k \Re'_k / \left(1+\Re'_k \right) \right]}{\sum_{k} \left[{\varphi_\star}_k / \left( 1 + \Re'_k \right) \right]}}. \end{equation} Once we determine whether an atmosphere is observable or not (depending on the S/N ratio), we can use a similar approach to quantify the detectability of each single atmospheric absorber contributing to the total signal $\varphi$. Let \mbox{$\hat{\varphi}_i = {\varphi_\star} (1 + \hat{\Re'}_i)$} be the signal that would be obtained if the $i^\mathrm{th}$ absorber were absent from the atmosphere, and let $\tilde{\left(\hat{\varphi}_i\right)}$ be its estimation. Here, $\hat{\Re}'_i$ is the spectrum ratio calculated when the species $i$ is not present in the atmosphere. Further, since $\tilde{\left(\hat{\varphi}_i\right)} \approx \alpha_i \hat{\varphi}_i$, we can deduce the presence of absorber $i$ in the atmosphere by simply comparing the fit we made assuming its absence ($\alpha_i \hat{\varphi}_i$) with the measured signal ($\varphi$): \begin{equation} \chi^2_i = \sum_{k} \left( \frac{{\varphi_\star}_k}{1 + \Re'_k} \left[ \left(1 + \Re'_k \right) - \alpha_i \left(1 + \hat{\Re}'_{ik} \right) \right]^2 \right), \end{equation} where \begin{equation} \alpha_i = \frac{\sum_k \left[ {\varphi_\star}_k \left( 1 + \hat{\Re}'_{ik} \right) \right]}{\sum_{k} \left[ {\varphi_\star}_k \left( 1 + \hat{\Re}'_{ik} \right)^2 / \left( 1 + \Re'_k \right) \right]}. \end{equation} \section{Results and discussion} \label{sec:results} The results of our computations are displayed in Tables~\ref{tab:results1} \&~\ref{tab:results2} and plotted as spectrum ratios in Figs.~\ref{fig:ABC_ratios}, \ref{fig:DEF_ratios} \&~\ref{fig:GHI_ratios}. \subsection{Spectral features of interest} Here we summarize the contributions of each atmospheric absorber to the spectrum ratio for the various models. The spectral resolution of the plots presented here is 10~nm. The most prominent spectral signatures, when present, are those of O$_3$ and H$_2$O. Carbon dioxide is hard to distinguish from H$_2$O bands and/or its own Rayleigh scattering. Molecular oxygen transitions are too narrow to significantly contribute to the spectrum ratio. \subsubsection{Ozone} In the spectral domain studied here, the Hartley (200--350~nm) and Chappuis (420--830~nm) bands of O$_3$ appear to be the best indicators of an Earth-like atmosphere. These bands are broad (150 and 600~nm wide, respectively) and lie at the blue edge of the spectrum, where spectral features from other species are missing. Notably, there is no contamination by H$_2$O, and the strong O$_2$ transitions are narrow and could easily be separated.
Ozone bands significantly emerge from Rayleigh scattering and they correspond to very strong transitions, despite the small amount of O$_3$ present in the model~A1 atmosphere (\mbox{$Y_{\mathrm{O}_3} < 10^{-5}$}). When present, ozone is more detectable in an atmosphere similar to model~A2. \subsubsection{Water} The signature of H$_2$O is visible in a transit spectrum only if H$_2$O is substantially abundant above the clouds. This is not the case for the models of Earth-like atmospheres (A1, A2 and~A3). On the contrary, the models of the ocean-planets (C1, C2 and~C3) show a major contribution from this molecule, in the form of four large bands that dominate the red part of the spectrum (at $\lambda \ga 950$~nm). For these three cases, H$_2$O can be significantly abundant above the clouds. \subsubsection{Carbon dioxide} The CO$_2$ lines emerge from the `continuum' about as strongly as the H$_2$O ones, but often overlap with them. The transitions around 1\,600~nm and the ones around 1\,950~nm are the easiest to identify; the other bands are not observable if water is present. Rayleigh scattering and photo-absorption cross sections of CO$_2$ are comparable at most wavelengths below 1.8~$\mu$m (see Fig.~\ref{fig:abc}), except for a few $\sim$10-nm wide bands. In fact, the more CO$_2$ is present in the atmosphere, the more opaque the atmosphere becomes; this is why it would be impossible for an observer on the surface of Venus to see the Sun. Carbon dioxide may be more detectable farther in the infrared, making further investigations up to 2.5~$\mu$m desirable. \subsubsection{Molecular oxygen} Molecular oxygen does not appear in the plots: its bands at 620, 700, 760 and 1\,260~nm are too narrow to appear at only 10~nm resolution. Besides, its Rayleigh scattering cross section almost completely masks its absorption features (see Fig.~\ref{fig:abc}), so that no broad O$_2$ bands can be used as an indicator of its presence. However, note that the presence of O$_3$ indirectly indicates the presence of O$_2$, as pointed out by L\'eger et al.\ (1993) and others. \subsubsection{Rayleigh scattering} When the Hartley and Chappuis bands of O$_3$ are absent (all cases but A1), the Rayleigh scattering signature is clearly visible in the blue part of the spectrum ratio. On one side it masks the presence of some transitions, like those of O$_2$ and some of CO$_2$, but on the other side it can provide two important pieces of information: (i) even if the spectral features cannot be distinguished because they are too thin or faint, the characteristic `continuum' rising as $\lambda^{-4}$ toward short wavelengths is a clear indication that the planet has an atmosphere, and (ii) it indirectly indicates the presence of the most abundant species of the atmosphere, such as CO$_2$ and N$_2$, even if N$_2$ shows no spectral signature in the observed domain. As a consequence, Rayleigh scattering can be considered a way to detect N$_2$, provided clouds and/or aerosols do not in turn mask the Rayleigh scattering signature. To summarize, it is possible to detect the presence of the atmosphere of a transiting exoplanet thanks to Rayleigh scattering, whatever the composition of the atmosphere. Moreover, it is theoretically possible to discriminate between an O$_2$-rich atmosphere, where O$_3$ is expected to be present (L\'eger et al.~1993; Sagan et al.~1993), and a H$_2$O-rich atmosphere, as the O$_3$ lifetime is supposed to be extremely brief in a water-rich environment.
In other words, we should be able to distinguish telluric Earth-like planets with low volatile content from volatile-rich planets. On the other hand, high spectral resolution is needed to discriminate between H$_2$O-rich planets and Cytherean worlds (B1, B2, B3). \subsection{Parameters influencing the signal-to-noise ratio} \subsubsection{Influence of the star} \label{sec:influence_star} From Table~\ref{tab:results1} it is clear that the best targets are K-type stars, rather than G- or F-type stars, the former allowing much better S/N than the latter. Two factors determine the role of the star in the capability of detecting an exoplanet atmosphere: (i) the size $R_\star$ of the star, which directly influences the S/N (see Eq.~\ref{eq:chi2}) and the duration of transit (Eq.~\ref{eq:time_transit}), and (ii) the semi-major axis of the planet's orbit, which influences both the duration of transit and the probability to observe the transit from Earth (see below). These factors can explain the discrepancies between the S/N values obtained for different kinds of stars in Table~\ref{tab:results1}. The probability, $\alpha$, that a planet transiting its parent star can be seen from the Earth is defined as $\alpha \equiv P\{{\rm transit}\} = R_\star / a $, with $R_\star$ being the radius of the star and $a$ the semi-major axis of the planet's orbit. This probability is about 10\% for `hot Jupiters', while it is 0.3\%, 0.5\% and 0.7\% for planets orbiting in the HZ of a F, G or K star, respectively. In addition, K stars are more numerous than other types of stars. From the CDS database, we find that there are approximately $10\,000 \cdot 10^{0.6(V-8)}$ main-sequence stars brighter than a given magnitude $V$ on the whole sky.\footnote{We consider mostly bright stars, for which the distribution is essentially isotropic.} About $3/5$ of these are K-type stars, against only $1/10$ for G stars. Let us now define $\beta$ to be the number of planet(s) per star, and $\gamma$ to be the fraction of the sky that is considered for a transit detection survey (in other words, the efficiency of surveys in finding the targets). We list in Table~\ref{tab:results2} the number of potential targets for each model. This number, $N$, corresponds to the number of targets detected with a telescope mirror of 10-m effective size and with a S/N greater than or equal to 5. It is given by: \begin{equation} \label{eq:N_computed} N_{\mathrm{S/N} \geq 5,~\epsilon D = 10\mathrm{\,m}} = N_0 \cdot \alpha \cdot \beta \cdot \gamma \cdot \left(\frac{\mathrm{S/N}_{V=8,\,\epsilon D=10\mathrm{\,m}}}{5}\right)^3, \end{equation} where $N_0$ is about 6\,000, 1\,000 and 3\,000 for K, G and F stars respectively, i.e., the number of such stars brighter than magnitude 8, and S/N$_{V=8,\,\epsilon D=10\,m}$ is the expected S/N ratio computed for a given atmosphere of a planet orbiting a $V=8$ star with a telescope having a mirror effective size of 10~m (this value is given in the last column of Table~\ref{tab:results1}). Since no Earth-size planet has been discovered so far, we have no real estimate of $\beta$. In the following, when it is not a free parameter we consider $\beta=1$\footnote{Actually, $\beta = 2$ in the Solar System because there are two Earth-size planets with atmospheres, namely Venus and the Earth.}. Catala et al.\ (2005) propose a $30\degr \times 30\degr$ survey dedicated to finding planets around stars brighter than 11th magnitude, i.e., $\gamma \approx$ 2--3\% for such a project.
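As a worked example of Eq.~(\ref{eq:N_computed}), together with the mirror-size scaling introduced just below, the following Python sketch reproduces the order of magnitude of the Table~\ref{tab:results2} entries for the medium ocean-planet (C1) around K stars; the small differences come from the rounding of the S/N, $N_0$ and $\alpha$ inputs:

\begin{verbatim}
def n_targets(sn_v8_eD10, n0, alpha, beta_gamma=1.0, eD_m=10.0, sn_min=5.0):
    """Expected number of targets above sn_min (Eqs. N_computed / N_scaled)."""
    return (n0 * alpha * beta_gamma
            * (sn_v8_eD10 / sn_min) ** 3 * (eD_m / 10.0) ** 3)

# Model C1 around K stars: S/N = 39 (w/ clouds), N0 ~ 6000, alpha = 0.7%
print(n_targets(39.0, 6000, 0.007))              # ~2.0e4, cf. 19602
# With beta*gamma = 3% and eD = 15 m (D = 30 m at eps = 50%):
print(n_targets(39.0, 6000, 0.007, 0.03, 15.0))  # ~2.0e3, cf. 1984
\end{verbatim}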
Let $N_{\mathrm{S/N},~\epsilon D}$ be the number of potential targets reaching a minimum S/N ratio for a given mirror effective size $\epsilon D$; it scales from the value calculated using Eq.~\ref{eq:N_computed}, $N_{\mathrm{S/N} \geq 5, \epsilon D = 10\mathrm{\,m}}$, in the following way: \begin{equation} \label{eq:N_scaled} N_{\mathrm{S/N},~\epsilon D} = N_{\mathrm{S/N}\geq 5,~\epsilon D=10\mathrm{\,m}} \cdot \left(\frac{\mathrm{S/N}}{5}\right)^{-3} \cdot \left(\frac{\epsilon D}{10\mathrm{~m}}\right)^3. \end{equation} The values obtained for atmospheric detection strongly favor small, late-type stars. Note that this is true for the detection of the planetary transit as well. \subsubsection{Effect of the atmospheric temperature gradient} \label{sec:temperature_effect} The thick CO$_2$ Venus-like atmospheres (B1, B2 and~B3, see Tables~\ref{tab:results1} \&~\ref{tab:results2}) are more difficult to detect than the other cases. Even if we set the top of the clouds at 10~km height, the detection remains more challenging than for model~A1. That is somewhat surprising, partly because CO$_2$ has strong transitions, particularly in the near infrared, and partly because of the larger scale height at the surface of the planet (14.3~km for model~B1, 8.8~km for model~A1). As a consequence, the atmosphere in model~B1 should have a larger vertical extent than in model~A1. In reality, the difficulty in characterizing the atmospheres of models~B1, B2 and~B3 is related to the temperature profiles we chose (see Figs.~\ref{fig:A_profile} and~\ref{fig:B_profile}): at 50~km of altitude, the temperature of model~B1 is roughly 60~K colder than that of model~A1. The latter in fact benefits from the positive stratospheric temperature gradient of the Earth. Moreover, the atmosphere for model~B1 ($\mu_\mathrm{B1}=43$~g\,mol$^{-1}$) is heavier than the one for model~A1 ($\mu_\mathrm{A1}=29$~g\,mol$^{-1}$). Therefore, at high altitude, the scale height is larger in model~A1 than in model~B1 (respectively 7.6~km and 3.9~km at an altitude of 50~km). \subsubsection{Effect of atmospheric pressure} \label{sec:pressure_effect} Note that the thickness of the atmosphere in model~B1 is almost half that in A1, despite the intense surface pressure (100~atm), which should help raise the upper level of the atmosphere, set by UV photo-dissociation ($h_\mathrm{max}$). In fact, the exponential decrease of pressure prevents $p_0$ from playing a key role: to counterbalance the effect of the negative temperature gradient, the surface pressure would have to be $>10^6$~atm to obtain absorptions similar to the case of the Earth (model~A1). \subsubsection{Effect of the planet gravity and density} The atmospheric absorption is, to first order, proportional to $H \cdot R_P$. At a given temperature and for a given atmospheric composition, the scale height $H$ is proportional to the inverse of the gravity acceleration, $g^{-1}$, or equivalently to $R_P^2/M_P$, where $M_P$ is the mass of the planet. As a result, the absorption is expected to be roughly inversely proportional to the bulk density of the planet, $\rho_P$, independently of the planet size. This effect is illustrated by the following examples: models~C1, C2 and~C3 all benefit from very extended atmospheres, given the low value of $g$ in the three cases. For a planet as dense as the Earth (such that $g_\mathrm{C1} = g_\mathrm{A1}$), the results for the N$_2$/H$_2$O-rich atmosphere in models~C are close to the ones obtained for models~A.
Both models~C and~A present typical spectral features. In model~A1, ozone, for which the concentration peaks at the tropopause, gives a prominent signature at the blue edge of the spectral domain (the Hartley and Chappuis bands, as seen in Fig.~\ref{fig:ABC_ratios}, top panel). On the contrary, the saturated atmosphere of model~C1, which sustains H$_2$O up to high altitudes, yields strong bands around 1.4 and 1.9~$\mu$m (Fig.~\ref{fig:ABC_ratios}, bottom panel). The role played by $g$ can be better understood by comparing model~A3 or~B3 ($g=24.5$~m\,s$^{-2}$) to model~A2 or~B2 ($g=3.9$~m\,s$^{-2}$), and model~C3 ($g=14.7$~m\,s$^{-2}$) to model~C2 ($g=2$~m\,s$^{-2}$). Using absorption spectroscopy, it is clear that the atmospheres of small and light planets (i.e., with low surface gravity) are easier to detect than those of large and dense planets (i.e., with high surface gravity). Small and light exoplanets, however, may not be able to retain a thick atmosphere. In fact, high thermal agitation of atmospheric atoms causes particles in the high-velocity tail of the Maxwellian distribution to escape into space (i.e., Jeans escape). It is therefore questionable whether planets the size of Titan can have a dense atmosphere at 1~AU from their star. Models~A2, B2 and~C2 enter that category. This problem concerns both small planets and giant exoplanet satellites. According to Williams et al.\ (1997), a planet having the density of Mars could retain N and O over more than 4.5~Gyr if it has a mass greater than 0.07~M$_\oplus$. Model planets~A2 and B2 have masses of 0.1~M$_\oplus$ and a density equivalent to that of Mars ($\approx$4~g\,cm$^{-3}$), so they would be able to retain an atmosphere (though they may not be able to sustain a 1~atm atmosphere, as is the case for Mars). The ocean-planet model~C2 has a mass of 0.05~M$_\oplus$ for a density of 1.8~g\,cm$^{-3}$, and according to Williams et al.\ (1997), its atmosphere should consequently escape. However, even at 1~AU from its star, such a planet also has a huge reservoir of volatile elements. This reservoir should help to `refill' the escaping atmosphere. Note that a hydrodynamically escaping atmosphere should be easier to detect than a stable one, since it can bring heavier elements into the hot upper atmosphere. This effect is illustrated by the absorptions seen by Vidal-Madjar et al.\ (2003, 2004) in the spectrum of \object{HD~209\,458}, which originate in the hydrodynamically escaping atmosphere of its transiting giant planet. A model of an `escaping ocean' is studied by Jura (2004). This process would give interesting absorption signatures in the H$_2$O bands from the lower atmosphere and in the signatures of the photo-dissociation products of H$_2$O from the upper atmosphere (such as the absorption of Lyman~$\alpha$ photons by atomic hydrogen); see the detailed discussion in Jura (2004). \begin{table*} \centering \begin{tabular}{*{10}{c}} \hline \hline Model & Description & Atm.
type & $R_P$ & $M_P$ & $\rho_P$ & $g$ & $p_0$ & $H_0$ & $h_\mathrm{max}$ \\ & & & (R$_\oplus$) & (M$_\oplus$) & (g\,cm$^{-3}$) & (m\,s$^{-2}$) & (atm) & (km) & (km)\\ \hline A1 & ($\approx$)Earth & N$_2$/O$_2$-rich & 1 & 1 & 5.5 & 9.8 & 1 & 8.8 & 85 \\ B1 & ($\approx$)Venus & CO$_2$-rich & 1 & 1 & 5.5 & 9.8 & 100 & 14.3 & 50 \\ C1 & medium ocean-planet & N$_2$/H$_2$O-rich & 1 & 0.5 & 2.8 & 4.9 & 1 & 20.0 & 260 \\ A2 & small Earth & N$_2$/O$_2$-rich & 0.5 & 0.1 & 4.0 & 3.9 & 1 & 24.7 & 260 \\ B2 & small Venus & CO$_2$-rich & 0.5 & 0.1 & 4.0 & 3.9 & 1 & 40.0 & 99 \\ C2 & small ocean-planet & N$_2$/H$_2$O-rich & 0.5 & 0.05 & 1.8 & 2.0 & 1 & 61.4 & 499 \\ A3 & `super-Earth' & N$_2$/O$_2$-rich & 2 & 9 & 6.1 & 24.5 & 1 & 3.9 & 30 \\ B3 & `super-Venus' & CO$_2$-rich & 2 & 6 & 6.1 & 24.5 & 100 & 6.4 & 30 \\ C3 & big ocean-planet & N$_2$/H$_2$O-rich & 2 & 9 & 4.1 & 14.7 & 1 & 6.7 & 60 \\ \hline \end{tabular} \caption{Summary of test models.} \label{tab:models} \end{table*} \begin{table*} \centering \begin{tabular}{*{9}{c}} \hline \hline Model & Description & Star & \multicolumn{6}{c}{Signal-to-noise ratio} \\ & & & \multicolumn{6}{c}{(S/N)$_{V=8,\,\epsilon D=10\rm{~m}}$} \\ & & & w/o cloud & w/ clouds & H$_2$O & CO$_2$ & O$_3$ & O$_2$ \\ \hline & & K & 5.2 & 3.5 & 1.7 & 1.1 & 1.9 & 0.2 \\ A1 & ($\approx$)Earth & G & 3.2 & 2.3 & 0.8 & 0.5 & 1.2 & 0.2 \\ & & F & 2.3 & 1.7 & 0.5 & 0.3 & 0.9 & 0.1 \\ \hline & & K & 4.0 & 2.3 & 0.0 & 2.3 & - & - \\ B1 & ($\approx$)Venus & G & 2.1 & 1.2 & 0.0 & 1.2 & - & - \\ & & F & 1.3 & 0.7 & 0.0 & 0.7 & - & - \\ \hline & medium & K & 41 & 39 & 39 & 11 & - & - \\ C1 & ocean- & G & 22 & 20 & 20 & 5.4 & - & - \\ & planet & F & 14 & 13 & 13 & 3.3 & - & - \\ \hline & & K & 6.9 & 6.3 & 3.8 & 2.8 & - & 0.7 \\ A2 & small Earth & G & 4.3 & 4.0 & 1.8 & 1.4 & - & 0.5 \\ & & F & 3.2 & 3.0 & 1.1 & 0.8 & - & 0.3 \\ \hline & & K & 5.8 & 3.3 & 0.0 & 3.3 & - & - \\ B2 & small Venus & G & 3.0 & 1.6 & 0.0 & 1.7 & - & - \\ & & F & 1.9 & 1.0 & 0.0 & 1.0 & - & - \\ \hline & small & K & 47 & 46 & 46 & 17 & - & - \\ C2 & ocean- & G & 26 & 25 & 25 & 8.6 & - & - \\ & planet & F & 17 & 16 & 16 & 5.2 & - & - \\ \hline & & K & 4.6 & 1.1 & 0.9 & 0.5 & - & 0.1 \\ A3 & super-Earth & G & 2.5 & 0.6 & 0.4 & 0.2 & - & 0.1 \\ & & F & 1.7 & 0.4 & 0.3 & 0.1 & - & 0.0 \\ \hline & & K & 5.6 & 0 & 0 & 0 & - & - \\ B3 & super-Venus & G & 2.9 & 0 & 0 & 0 & - & - \\ & & F & 1.9 & 0 & 0 & 0 & - & - \\ \hline & big & K & 20 & 13 & 12 & 3.2 & - & - \\ C3 & ocean- & G & 10 & 6.5 & 6.3 & 1.5 & - & - \\ & planet & F & 6.7 & 4.1 & 4.0 & 0.9 & - & - \\ \hline \end{tabular} \caption{Summary of results: signal-to-noise ratios obtainable with a telescope mirror effective size of $\epsilon D = 10$~m pointing at a $V=8$ star. To get the S/N ratios for a different effective size $\epsilon D$, exposure time during transit, $t$, and/or apparent magnitude of the star, $V$, the result scales with \mbox{$(\epsilon D / 10\mathrm{~m}) \cdot (t/\tau)^{0.5} \cdot 10^{-0.2 (V - 8)}$} where $\tau$ is defined by Eq.~\ref{eq:time_transit}. The S/N by species are calculated for the models with clouds.} \label{tab:results1} \end{table*} \begin{table*} \centering \begin{tabular}{*{9}{c}} \hline \hline Model & Description & Star & Mirror & Limiting & Number & \multicolumn{3}{c}{Number~of~targets} \\ & & & eff. 
size (m) & magnitude & of stars & \multicolumn{3}{c}{for models w/ clouds} \\ & & & $(\epsilon D)_{\mathrm{S/N}\geq5,\,V=8}$ & $(V_\mathrm{Lim})_{\mathrm{S/N}\geq5,\,\epsilon D=10\mathrm{\,m}}$ & & \multicolumn{3}{c}{$(N)_{\mathrm{S/N}\geq5}$, $\epsilon=50\%$} \\ & & & w/ clouds & w/ clouds & & $\beta \cdot \gamma = 1$ & $\beta \cdot \gamma = 3\%$ & $\beta \cdot \gamma = 10\%$ \\ & & & & & & $D = 20$~m & $D = 30$~m & $D = 30$~m \\ \hline & & & (a) & (b) & (c) & \multicolumn{3}{c}{(d)} \\ \hline & & K & 14 & 7.22 & 2\,042 & 14 & 1 & 4 \\ A1 & ($\approx$)Earth & G & 22 & 6.31 & 96 & $<1$ (0.4) & $\ll 1$ & $<1$ (0.1) \\ & & F & 29 & 5.66 & 118 & $<1$ (0.3) & $\ll 1$ & $<1$ (0.1) \\ \hline & & K & 21 & 6.31 & 580 & 4 & $<1$ (0.4) & 1 \\ B1 & ($\approx$)Venus & G & 43 & 4.90 & 13 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\ & & F & 68 & 3.73 & 8 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\ \hline & medium & K & 1.3 & 12.5 & $>3\cdot10^6$ & 19\,602 & 1\,984 & 6\,615 \\ C1 & ocean- & G & 2.5 & 11.0 & 63\,095 & 321 & 32 & 108 \\ & planet & F & 3.9 & 10.1 & 54\,591 & 157 & 15 & 52 \\ \hline & & K & 8 & 8.50 & 11\,971 & 84 & 8 & 28 \\ A2 & small Earth & G & 13 & 7.51 & 508 & 2 & $<1$ (0.2) & $<1$ (0.6) \\ & & F & 17 & 6.90 & 656 & 1 & $<1$ (0.1) & $<1$ (0.3) \\ \hline & & K & 15 & 7.10 & 1\,730 & 12 & 1 & 4 \\ B2 & small Venus & G & 31 & 5.52 & 32 & $<1$ (0.1) & $\ll 1$ & $\ll 1$ \\ & & F & 50 & 4.50 & 23 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\ \hline & small & K & 1.1 & 12.8 & $>4\cdot10^6$ & 33\,569 & 3\,398 & 11\,329 \\ C2 & ocean- & G & 2.0 & 11.5 & 125\,892 & 600 & 60 & 202 \\ & planet & F & 3.1 & 10.5 & 94\,868 & 307 & 31 & 103 \\ \hline & & K & 45 & 4.71 & 63 & $<1$ (0.4) & $\ll 1$ & $<1$ (0.1) \\ A3 & super-Earth & G & 86 & 3.39 & 1 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\ & & F & 121 & 2.51 & 1 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\ \hline & & K & $> 10^3$ & - & 0 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\ B3 & super-Venus & G & $> 10^3$ & - & 0 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\ & & F & $> 10^3$ & - & 0 & $\ll 1$ & $\ll 1$ & $\ll 1$ \\ \hline & big & K & 4.0 & 10.1 & 109\,182 & 682 & 69 & 230 \\ C3 & ocean- & G & 7.7 & 8.57 & 2\,197 & 10 & 1 & 3 \\ & planet & F & 13 & 7.57 & 1\,656 & 4 & $<1$ (0.4) & 1 \\ \hline \end{tabular} \caption{Summary of results: mirror effective size and number of targets. \newline (a) Effective size $(\epsilon D)_{\mathrm{S/N}\geq5,\,V=8}$ of the telescope mirror required to obtain $\mathrm{S/N}=5$ for a $V=8$ star, based on the numbers displayed for the models with clouds (see Table~\ref{tab:results1}).\newline (b) The limiting magnitude at which the number of targets in the last column is given. This can be expressed as \mbox{$(V_\mathrm{Lim})_{\mathrm{S/N}\geq5,\,\epsilon D=10\mathrm{\,m}} = 5 \cdot \log_{10} \left[ \left(\mathrm{S/N}_{V=8,\,\epsilon D = 10}\right) / 5 \cdot (\epsilon D) / 10\mathrm{~m} \right]+8$}. \newline (c) Total number of stars of the given spectral type brighter than the limiting magnitude.\newline (d) Number of potential targets calculated with Eq.~\ref{eq:N_computed}, using the S/N value of the models with clouds and assuming various $\beta \cdot \gamma$ values. The coefficients $\beta$ and $\gamma$ are defined in the text. When the number of potential targets is slightly less than 1, the value is given in parentheses. Use Eq.~\ref{eq:N_scaled} to scale the value displayed in the column to any mirror effective size $\epsilon D$ and minimum S/N.} \label{tab:results2} \end{table*} \begin{figure}[htbp!]
\resizebox{\hsize}{!}{\includegraphics{Fig8a.ps}} \resizebox{\hsize}{!}{\includegraphics{Fig8b.ps}} \resizebox{\hsize}{!}{\includegraphics{Fig8c.ps}} \caption{Spectrum ratios for models~A1 (a), B1 (b) and~C1 (c). The spectrum ratios have been respectively shifted by the values in parentheses so that the absorption by the `solid disk' of the planet is 0~ppm. In the case of models with clouds, the `solid disk' is artificially increased by the cloud layer. The dashed line indicates the best-fit estimation of the radius of the planet, $\tilde{R}_P$ (see Sect.~\ref{sec:S/N}), if we suppose there is no atmosphere.} \label{fig:ABC_ratios} \end{figure} \begin{figure}[htbp!] \resizebox{\hsize}{!}{\includegraphics{Fig9a.ps}} \resizebox{\hsize}{!}{\includegraphics{Fig9b.ps}} \resizebox{\hsize}{!}{\includegraphics{Fig9c.ps}} \caption{Spectrum ratios for models~A2 (a), B2 (b) and~C2 (c). The `saturation effect' in the H$_2$O lines, for model~C2, is a consequence of the atmosphere being optically thick at the upper atmospheric level, $h_\mathrm{max}$. In fact, if one considers that there is no more water above this level owing to photo-dissociation (see Sect.~\ref{sec:b_max}), such transmitted-spectrum plots allow one to determine the level where H$_2$O photo-dissociation occurs in an exoplanet atmosphere.} \label{fig:DEF_ratios} \end{figure} \begin{figure}[htbp!] \resizebox{\hsize}{!}{\includegraphics{Fig10a.ps}} \resizebox{\hsize}{!}{\includegraphics{Fig10b.ps}} \resizebox{\hsize}{!}{\includegraphics{Fig10c.ps}} \caption{Spectrum ratios for models~A3 (a), B3 (b) and~C3 (c).} \label{fig:GHI_ratios} \end{figure} \section{Conclusion} The vertical extent of an atmosphere is of extreme importance for its detectability by absorption spectroscopy. This tends to favor less dense objects, like giant exoplanet satellites (such as an `exo-Titan') or volatile-rich planets (such as ocean-planets, theoretically possible but not yet observed). Cytherean atmospheres are the most challenging to detect. Surface parameters, such as surface pressure and temperature, are not crucial. A temperature gradient that becomes positive at a few tens of kilometers in height (for instance owing to photochemistry) might help the detection. Our results show that late-type stars are better for detecting and characterizing the atmospheres of planets in transit, since they are smaller, more numerous and present a higher probability of being transited by a planet. The strongest signatures of the atmosphere of a transiting Earth-size planet could be those of H$_2$O (6~ppm in the case of hypothetical ocean-planets), O$_3$ ($\sim$1--2~ppm) and CO$_2$ (1~ppm), considering our spectral study from the UV to the NIR (i.e., from 0.2 to 2~$\mu$m). The presence of an atmosphere around hundreds of hypothetical `ocean-planets' (models C) could be detected with a 10--20~m telescope. The atmospheres of tens of giant exoplanet satellites (model A2) could be within the range of a 20--30~m instrument. A 30--40~m telescope would be required to probe Earth-like atmospheres around Earth-like planets (model A1). These numbers assume that Earth-size planets are frequent and are efficiently detected by surveys. Finally, planets with an extended upper atmosphere, like the ones described by Jura (2004), hosting an `evaporating ocean', or the planets in a `hydrodynamical blow-off state', are the natural link between the planets we have modelled here and the observed `hot Jupiters'.
\begin{acknowledgement} We warmly thank Chris Parkinson for careful reading and comments that noticeably improved the manuscript, David Crisp for the code \texttt{LBLABC} and the anonymous referee for thorough reading and useful comments on the manuscript. G. Tinetti is supported by NASA Astrobiology Institute -- National Research Council. \end{acknowledgement}
\section{Introduction} Establishing how galaxies formed and evolved to become today's galaxies remains one of the fundamental goals of theorists and observers. The fact that we see a snapshot of the universe as if it was frozen in time, prevents us from directly following the process of galaxy assembly, growth, ageing, and morphological metamorphosis with time. The alternative commonly pursued is to look for evolutionary signatures in surveys of large areas of the sky . Recently, Heavens et al. (2004) analyzed the `fossil record' of the current stellar populations of $\sim$100,000 galaxies ($0.005<z<0.34$) from the Sloan Digital Sky Survey (SDSS) and noted a mass dependence on the peak redshift of star--formation. They claim that galaxies with masses comparable to a present-day L* galaxy appears to have experienced a peak in activity at $z\sim0.8$. Objects of lower (present-day stellar) masses ($< 3 \times 10^{11}$M$_{\odot}$) peaked at $z\le$0.5. Bell et al. (2004) using the COMBO-17 survey (Classifying Objects by Medium-Band Observations in 17 filters) found an increase in stellar mass of the red galaxies (i.e. early--types) by a factor of two since $z\sim 1$. Papovich et al. (2005) using the HDF-N/NICMOS data suggest an increase in the diversification of stellar populations by $z\sim$1 which implies that merger--induced starbursts occur less frequently than at higher redshifts, and more quiescent modes of star-formation become the dominant mechanism. Simultaneously, around $z\sim$1.4, the emergence of the Hubble--sequence galaxies seems to occur. Connecting the star formation in the distant universe ($ z > 2$) to that estimated from lower redshift surveys, however, is still a challenge in modern astronomy. Using the Lyman break technique (e.g. Steidel et al. 1995), large samples of star--forming galaxies at $2<z<4.5$ have been identified and studied. Finding unobscured star forming galaxies in the intermediate redshift range ($0.5<z<1.5$) is more difficult since the UV light ($\lambda$ $\sim$ 1000--2000 \AA) that comes from young and massive OB stars is redshifted into the near-UV. The near-UV detectors are less sensitive than optical ones which makes UV imaging expensive in telescope time. For instance, $\sim$30\% of HST time in the Hubble Deep Field campaign was dedicated to the U-band (F300W - $\lambda_{\rm max}$ = 2920 \AA), whereas the other 70\% was shared between B, V, and I-bands. Inspite of this, the limiting depth reached in the U band is about a magnitude shallower than in the other bands. Recently, Heckman et al. (2005) attempted to identify and study the local equivalents of Lyman break galaxies using images from the UV-satellite GALEX and spectroscopy from the SDSS. Amongst the UV luminous population, they found two kinds of objects: 1) massive galaxies that have been forming stars over a Hubble time which typically show morphologies of late-type spirals; 2) compact galaxies with identical properties to the Lyman break galaxy population at $z\sim 3$. These latter are genuine starburst systems that have formed the bulk of their stars within the last 1--2 Gyr. Establishing the population of objects that contributes to the rise in the SFR with lookback time has strong implications to theories of galaxy evolution and can only be confirmed by a proper census of the galaxy population at the intermediate$-z$ epoch ($0.4<z<1.5$). In the present paper we identify a sample of intermediate redshift UV luminous galaxies and seek to understand their role in galaxy evolution. 
We have used data from the Great Observatories Origins Deep Survey (GOODS) in combination with an ultra deep UV image taken with HST/WFPC2 (F300W) to search for star-forming galaxies. The space-UV is the ideal wavelength to detect unobscured star-forming galaxies, whereas the multiwavelength ACS images (B, V, i, z) are ideal for morphological analysis of the star-forming objects. This paper is organized as follows: \S 2 describes the data processing, \S 3 presents the sample, \S 4 discusses redshifts, \S 5 presents various issues concerning their colors and age, \S 6 describes the morphological classification, and \S 7 discusses the sizes and presents a comparison with Lyman Break Galaxies. Finally, \S 8 summarizes the main conclusions. Throughout this paper, we use a cosmology with $\Omega_{\rm M}=0.3$, $\Omega_{\Lambda}=0.7$~and $h=0.7$. Magnitudes are given in the AB-system. \section{The Data} The Ultra Deep Field (UDF) provided the deepest look at the universe with HST, taking advantage of the large improvement in sensitivity in the red filters that ACS provides. In parallel to the ACS UDF, other instruments aboard HST also obtained deep images (Fig. \ref{hudfpar2w}). In this paper we analyze the portion of the data taken with the WFPC2 (F300W) which falls within the GOODS-S area (Orient 310/314); another WFPC2 image overlaps with the Galaxy Evolution From Morphology and SEDs (GEMS) survey area. Each field includes several hundred exposures, with total exposure times of 323.1 ks and 278.9 ks, respectively. The 10$\sigma$ limiting magnitude measured over 0.2 arcsec$^{2}$ is 27.5 magnitudes over most of the field, which is about 0.5 magnitudes deeper than the F300W image in the HDF-N and 0.7 magnitudes deeper than that in the HDF-S. \subsection{Data Processing} A total of 409 WFPC2/F300W parallel images, with exposure times ranging from 700 seconds to 900 seconds, overlap partially with the GOODS-S survey area. Each of the datasets was obtained at one of two orientations of the telescope: (i) 304 images were obtained at Orient 314 and (ii) 105 images were obtained at Orient 310. We downloaded all 409 datasets from the MAST data archive along with the corresponding data quality files and flat fields. By adapting the drizzle-based techniques developed for data processing by the WFPC2 Archival Parallels Project (Wadadekar et al. 2005), we constructed a cosmic-ray rejected, drizzled image with a pixel scale of 0.06 arcsec/pixel. Small errors in the nominal WCS of each individual image in the drizzle stack were corrected for by matching up to 4 star positions in that image with respect to a reference image. Our drizzled image was then accurately registered with respect to the GOODS images by matching sources in our image with the corresponding sources in the GOODS data, which were binned from their original scale of 0.03 arcsec/pixel to 0.06 arcsec/pixel. Once the offsets between the WFPC2 image and the GOODS image had been measured, all 409 images were drizzled through again taking the offsets into account, so that the final image was accurately aligned with the GOODS images. The WFPC2 CCDs have a small but significant charge transfer efficiency (CTE) problem, which causes some signal to be lost when charge is transferred down the chip during readout. The extent of the CTE problem is a function of target counts, background light and epoch. Low background images (such as those in the F300W filter) at recent epochs are more severely affected.
Not only sources, but also cosmic rays leave a significant CTE trail. We attempted to flag the CTE trails left by cosmic rays in the following manner: if a pixel was flagged as a cosmic ray, adjacent pixels in the direction of readout (along the Y-axis of the chip) were also flagged as cosmic-ray affected. The number of pixels flagged depended on the position of the cosmic ray on the CCD (higher row numbers had more pixels flagged). With this approach, we were able to eliminate most of the artifacts caused by cosmic rays in the final drizzled image. \section{Catalogs} We detected sources on the U-band image using SExtractor (SE) version 2.3.2 (Bertin \& Arnouts 1996). Our detection criterion was that a source must exceed a $1.5\sigma$ sky threshold in 12 contiguous pixels. We provided the weight image (an inverse variance map) output by the final drizzle process as a {\it MAP$_{-}$WEIGHT} image to SExtractor, with {\it WEIGHT$_{-}$TYPE} set to {\it MAP$_{-}$WEIGHT}. This computation of the weight was made according to the prescription of Casertano et al. (2000). It takes into account contributions to the noise from the sky background, dark current, read noise and the flatfield, and thus correctly accounts for the varying S/N over the image due to the different number of overlapping datasets at each position. During source detection, the sky background was computed locally. A total of 415 objects were identified by SE. Fig.~\ref{numcounts} shows the cumulative galaxy counts using {\it MAG$_{-}$AUTO} magnitudes (F300W) from SE. Only sources within the region of the image where we have full depth data were included in this computation. \section{Redshifts} Spectroscopic redshifts are available for 12 of the objects in the F300W catalog (taken from the ESO/GOODS-CDFS spectroscopy master catalog\footnote{ http://www.eso.org/science/goods/spectroscopy/CDFS$_{-}$Mastercat/}). For the remaining objects, we calculate photometric redshifts using a version of the template fitting method described in detail in Dahlen et al. (2005). The template SEDs used cover spectral types E, Sbc, Scd and Im (Coleman et al. 1980, with extension into the UV and NIR-bands by Bolzonella et al. 2000), and two starburst templates (Kinney et al. 1996). In addition to data from the F300W band, we use multi-band photometry for the GOODS-S field, from $U$~to $K_s$~bands, obtained with both $HST$~and ground-based facilities (Giavalisco et al. 2004). As our primary photometric catalog, we use an ESO/VLT ISAAC $K_s$-selected catalog including $HST$~WFPC2 $F300W$~and ACS $BViz$~data, combined with ISAAC $JHK_s$ data. We choose this combination as our primary catalog due to the depth of the data and the importance of covering both optical and NIR-bands when calculating photometric redshifts. This catalog provides redshifts for 72 of the objects detected in the $F300W$~band. The two main reasons for this relatively low number are that part of the WFPC2 {\it $F300W$}~image lies outside the area covered by ACS+ISAAC, and that UV-selected objects are typically blue and may therefore be too faint to be included in a NIR-selected catalog. For these objects, we use a ground-based photometric catalog selected in the $R$-band which includes ESO (2.2m WFI, VLT-FORS1, NTT-SOFI) and CTIO (4m telescope) observations covering $UBVRIJHK_s$. This adds 146 photometric redshifts.
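As an aside, the source-detection criterion described at the start of this section (a $1.5\sigma$ sky threshold in 12 contiguous pixels, with an inverse-variance weight map) maps almost directly onto the Python port of the SExtractor core routines, the \texttt{sep} library. The following minimal sketch, in which toy arrays stand in for the drizzled F300W frame and its weight image, is an illustration of the detection step only, not the pipeline actually used here:
\begin{verbatim}
import numpy as np
import sep  # Python port of the SExtractor core routines

# Toy stand-ins for the drizzled F300W frame and its
# inverse-variance weight map (normally read from FITS).
science = np.random.normal(0.0, 1.0, (512, 512)).astype(np.float32)
inv_var = np.ones_like(science)

# Per-pixel RMS, as SExtractor derives internally when
# WEIGHT_TYPE = MAP_WEIGHT.
err = 1.0 / np.sqrt(inv_var)

# Local sky background, subtracted before detection.
bkg = sep.Background(science)
data = science - bkg.back()

# Detection criterion from the text: >= 12 contiguous pixels
# above a 1.5-sigma sky threshold.
objects = sep.extract(data, thresh=1.5, err=err, minarea=12)
print(len(objects), "sources detected")
\end{verbatim}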
Finally, for objects that are too faint for inclusion in either of the two catalogs described above, we derive photometric redshifts using the ACS $BViz$ and WFPC2 $F300W$ photometry alone. This adds 76 photometric redshifts to our catalog. In summary, we have spectroscopic redshifts for 12 objects and photometric redshifts for 294. Subsequent analysis in this paper only includes the 306 sources with photometric or spectroscopic redshifts. The remaining 109 objects in the $F300W$~catalog belong to one or more of the following four categories: (i) they lie outside the GOODS coverage area, (ii) they are too faint for photometric redshifts to be determined, (iii) they are identified as stars, or (iv) they are `single' objects in the optical (and/or NIR) bands but are fragmented into multiple detections in the $F300W$-band. In such cases, photometric redshifts are only calculated for the `main' object. The redshift distribution of our sample is shown in Figure \ref{histphtzall}. To investigate the redshift accuracy of the GOODS method, we compare the photometric redshifts with a sample of 510 spectroscopic redshifts taken from the ESO/GOODS-CDFS spectroscopy master catalog. We find an overall accuracy $\Delta_z\equiv\langle|z_{\rm phot}-z_{\rm spec}|/(1+z_{\rm spec})\rangle\sim 0.08$ after removing a small fraction ($\sim$3\%) of outliers with $\Delta_z>0.3$. Since starburst galaxies, which constitute a large fraction of our sample, have more featureless spectra than earlier type galaxies with a pronounced 4000\AA-break, we expect the photometric redshift accuracy to depend on galaxy type. Dividing our sample into starburst and non-starburst populations, we find $\Delta_z\sim$0.11 and $\Delta_z\sim$0.07, respectively. This shows that the photometric redshifts for starbursts have a higher scatter; the increase is, however, not dramatic. Also, the distribution of the residuals (spectroscopic redshift -- photometric redshift) has a mean value close to zero for both the starburst and the total populations. Therefore, derived properties such as mean absolute magnitudes and mean rest-frame colors should not be biased by the photometric redshift uncertainty. \section{Colors} Using information from the photometric redshifts, rest-frame absolute magnitudes and colors are calculated using the recipe in Dahlen et al. (2005). The rest-frame U--B and B--V color distributions (Fig.~\ref{histub}) show a peak on the blue side of the distribution (U--B$\sim$0.4 and B--V$\sim$0.1). The majority of the objects with these colors are actually in the high redshift bin and have $z_{\rm phot} > 0.7$, as shown in Fig.~\ref{histubz}. The bimodality in colors seen in the HDF-S (Wiegert, de Mello \& Horellou 2004) is not seen in this sample, which is UV-selected and deficient in red objects. In Fig.~\ref{plotuvvphotz1p2}, we show the rest-frame U--V color and V-band absolute magnitude of all galaxies with $0.2<z_{\rm phot}<1.2$. The trend is similar to the one found recently by Bell et al. (2005) for $\sim$1,500 optically-selected $0.65 \le z_{\rm phot}<0.75$ galaxies using the 24 $\mu$m data from the Spitzer Space Telescope in combination with COMBO-17, GEMS and GOODS. However, the 25 galaxies in our UV-selected sample which are in the same redshift range are on average redder (U--V=0.79 $\pm$ 0.13; median=0.83) and fainter (M$_{\rm V}$=--19.1 $\pm$ 0.32; median=--19.3) than the average values of all visually-classified types in Bell et al.
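Returning briefly to the photometric redshifts: the accuracy statistic $\Delta_z$ used in \S 4 (the mean of $|z_{\rm phot}-z_{\rm spec}|/(1+z_{\rm spec})$ after removing outliers with $\Delta_z>0.3$) can be computed along the lines of the short sketch below; the input arrays are placeholders standing in for the 510 matched spectroscopic objects:
\begin{verbatim}
import numpy as np

def photoz_accuracy(z_phot, z_spec, clip=0.3):
    """Mean |z_phot - z_spec|/(1 + z_spec) after outlier removal."""
    dz = np.abs(z_phot - z_spec) / (1.0 + z_spec)
    outliers = dz > clip
    return dz[~outliers].mean(), outliers.mean()

# Placeholder data standing in for the matched sample.
rng = np.random.default_rng(0)
z_spec = rng.uniform(0.2, 1.2, 510)
z_phot = z_spec + 0.08 * (1.0 + z_spec) * rng.standard_normal(510)

acc, f_out = photoz_accuracy(z_phot, z_spec)
print("Delta_z = %.3f, outlier fraction = %.1f%%" % (acc, 100 * f_out))
\end{verbatim}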
The offset with respect to Bell et al. (2005) is due to the shallower depth of the GEMS survey coverage (one HST orbit per ACS pointing), which was used to provide the rest-frame V-band data of their sample. The UV-selected galaxies we are analyzing have deeper GOODS multiwavelength data (3, 2.5, 2.5 and 5 orbits per pointing in B, V, i, z, respectively), which GEMS lacks outside the GOODS field. Fig.~\ref{plotuvagesb99} shows the U--V color evolution produced using the new version of the evolutionary synthesis code Starburst99 (Vazquez \& Leitherer 2005), with no extinction correction. The new code (version 5.0) is optimized to reproduce all stellar phases that contribute to the integrated light of a stellar population, from young to old ages. As seen from Fig.~\ref{plotuvagesb99}, the UV-selected sample has U--V colors typical of ages $>$100 Myr (U--V $>$ 0.3; average U--V=0.79$\pm$0.06). The 25 objects with $0.65 \le z_{\rm phot}<0.75$, for example, have U--V typical of ages 10$^{8.4}$ to 10$^{10}$ yr. Although we cannot rule out that these objects might have had a different star formation history, and did not necessarily produce stars continuously as adopted in the model shown, they do not have the U--V colors of young instantaneous bursts (10$^{6}$ yr), which typically have U--V $<$ --1.0 (Leitherer et al. 1999). Vazquez \& Leitherer (2005) have tested the predicted colors by comparing the models to sets of observational data. In Fig.~\ref{plotvibvdatasb99} we reproduce their Fig.~19, a color-color plot of the super star clusters and globular clusters of NGC 4038/39 (The Antennae) by Whitmore et al. (1999), together with model predictions and our data of UV-selected galaxies. No reddening correction was applied to the clusters; the reddening can be as high as E(B-V)=0.3 due to significant internal reddening in NGC 4038/39. The clusters are divided into three distinct age groups: (i) young, (ii) intermediate ages (0.25 -- 1 Gyr), and (iii) old (10 Gyr). Vazquez \& Leitherer analyzed the effects of age and metallicity on the color predictions and concluded that the age-metallicity degeneracy in the intermediate-age range ($\sim$ 200 Myr) is not a strong effect. This is the age when the first Asymptotic Giant Branch (AGB) stars influence the colors in their models. The vertical loop at (B--V)$\sim$ 0.0-0.3 is stronger at solar metallicity and is caused by Red Super Giants, which are much less important at lower abundances. We interpret the large spread in the color--color plot of our sample as a combination of age, metallicity and extinction effects. The latter can bring some of the outliers closer to the model predictions; e.g. an E(B-V)=0.12 running parallel to the direction of metallicity and age evolution would bring more objects closer to the younger clusters with ages $<$ 0.25 Gyr. \section{Morphology} Classifying the morphology of faint galaxies has proved to be a very difficult task (e.g. Abraham et al. 1996; van den Bergh et al. 1996; Corbin et al. 2001; Menanteau et al. 2001), and automated methods are still being tested (e.g. Conselice 2003, Lotz et al. 2005). In such a situation, the spectral types obtained from the template fitting in the photometric redshift technique are a good morphology indicator (e.g. Wiegert, de Mello \& Horellou 2004) and, in combination with other indicators, help constrain galaxy properties. In Fig.~\ref{histst} we show the distribution of the spectral types of our sample. As expected for a UV-selected sample, the majority of the objects have SEDs typical of late-type and starburst galaxies.
This dominance of late-type and starburst SEDs does not hold uniformly if we separate the sample into redshift bins (Fig.~\ref{histstz}). The lower redshift bin ($z_{\rm phot}<0.7$) has a mix of all types, whereas the higher redshift bin contains mostly ($\sim$60\%) starbursts. The average absolute magnitudes for the different spectral types in the UV-selected sample are M$_{\rm B}$= --20.59 $\pm$ 0.24 (E/Sa), M$_{\rm B}$= --18.61 $\pm$ 0.17 (Sb-Sd-Im), and M$_{\rm B}$= --17.80 $\pm$ 0.16 (Starbursts). The median absolute magnitudes for these types of galaxies are M$_{\rm B}$= --20.52 (E/Sa), --18.71 (Sb-Sd-Im) and --17.62 (Starbursts), which are, except for the early-types, fainter than those of the GOODS-S sample, M$_{\rm B}$ = --20.6 (E/Sa), --19.9 (Sb-Sd), and --19.6 (starburst) (Mobasher et al. 2004). This difference is due to the magnitude limit (R$_{\rm AB}$ $<$ 24) imposed in that sample selection, which was not applied to our UV-selected sample; i.e. our UV-selected sample probes fainter objects over the same redshift range ($0.2 < z_{\rm phot} < 1.3$). Despite the fact that our sample is UV-selected, there are 13 objects with SEDs typical of early-type galaxies (E/Sa) in this redshift range. Two of them are clearly spheroids with blue cores ($z_{\rm phot}$$\sim$0.6--0.7, B--V$\sim$0.7--0.8 and B$\sim$--22) and are similar to the objects analyzed recently in Menanteau et al. (2005). These objects are particularly important since they can harbor a possible connection between AGN and star-formation. Studies of the HDF-N have shown how difficult it is to interpret galaxy morphology at optical wavelengths when these sample the rest-frame UV for objects at high redshifts. In the rest-frame near-UV, galaxies show fragmented morphology, i.e. the star-formation that dominates the near-UV flux is not constant over the galaxy, but occurs in clumps and patchy regions (Teplitz et al. 2005). Therefore, rest-frame optical wavelengths give a better picture of the structure and morphology of the galaxies. We used the ACS (BVi) images to visually classify our sample and adopted the following classification: (1) elliptical/spheroid, (2) disk, (3) peculiar, (4) compact, (5) low surface brightness, (6) no ACS counterpart. Objects classified as compact have a clear nuclear region, with many showing a tadpole morphology; objects classified as peculiar are either interacting systems or have irregular morphologies; objects classified as low-surface-brightness (lsb) do not show any bright nuclear region; and objects classified as (6) are outside the GOODS/ACS image. The distribution of types as a function of redshift is shown in Fig.~\ref{histmorph} and reveals two interesting trends: (i) the decrease in the number of disks at $z>0.8$ and (ii) the increase in the number of compact and lsb galaxies at $z_{\rm phot}>0.8$. Moreover, as seen in Fig.~\ref{histmorphst}, there is a clear difference in the morphology of starbursts (dashed line in the figure) and non-starbursts. Starbursts tend to be compact, peculiar or lsb, while the non-starbursts have all morphologies. Since our sample is UV-selected, star-forming disks are either less common at higher$-z$ or there is a selection effect responsible for the trend. For instance, we could have missed faint disks which host nuclear starbursts and classified such objects as compact. Deeper optical images are needed in order to test this possibility. In Fig.~\ref{plotbbvall} we compare the colors and luminosities of our sample with typical objects from Bershady et al.
(2000), which include typical Hubble types, dwarf ellipticals and luminous blue compact galaxies at intermediate redshifts. Clearly, the UV-selected sample has examples of all types of galaxies. However, a populated region of the color-luminosity diagram with M$_{\rm B}$ $>$ --18 and B--V$<$ 0.5 does not have counterparts either among the local Hubble types or among the luminous blue compact galaxies. The average morphological type of these objects is $4.21 \pm 0.58$ (type 4 is compact and type 5 is lsb); 38\% are compact and 45\% are lsb, while the remaining 17\% are either spheroids or disks. 87\% of them have spectral types $>$ 4.33 (spectral types 4 and 5 are typical of Im and starbursts). \section{Sizes} We have used the half-light radii and the Petrosian radii to estimate the sizes of the galaxies, following the steps described in Ferguson et al. (2004). The half-light radius was measured with SExtractor and the Petrosian radius was measured following the prescription adopted by the Sloan Digital Sky Survey (Stoughton et al. 2002). In order to estimate the overall size of the galaxies, and not only the size of the star-forming region, we measured sizes as close to the rest-frame B band as possible, i.e. objects with 0.2$<$$z_{\rm phot}$$<$0.6 had their sizes measured in the F606W image, objects with 0.6$<$$z_{\rm phot}$$<$0.8 in the F775W image, and objects with 0.8$<$$z_{\rm phot}$$<$1.2 in the F850LP image. The correspondence between the two size measures was verified except for a few outliers: (i) three objects with r$_{\rm h}$ $>$20 pixels (1 pixel = 0.06 arcsec) and Petrosian radius $>$50 pixels, which are large spirals, and (ii) an object with r$_{\rm h}$ $\sim$21 pixels and Petrosian radius $\sim$44 pixels, which is a compact blue object very close to a low surface brightness object. The half-light radius of the latter object is over-estimated due to the proximity of the low surface brightness object. In Fig.~\ref{histlightarcsec} we show the distribution of observed half-light radii (arcsec) per redshift interval. The increase in the number of small objects at 0.8$<$$z_{\rm phot}$$<$1.0 is related to what is seen in Fig.~\ref{histmorph}, where the number of compact galaxies peaks in the same redshift interval, i.e. compacts have smaller sizes. The majority of the objects at 0.8$<$$z_{\rm phot}$$<$1.2 have r$_{\rm h}$ $<$ 0.5 arcsec in the rest-frame B band. For comparison with high-$z$ samples, which measure the sizes of galaxies at 1500 \AA, we measured the half-light radius in the F300W images of all galaxies with 0.66$<$$z_{\rm phot}$$<$1.5, corresponding to rest-frame wavelengths in the range 1200--1800 \AA. The average r$_{\rm h}$ is $0.26 \pm 0.01$ arcsec ($2.07 \pm 0.08$ kpc). Fig.~\ref{plotblightkpchdf} shows the distribution of the derived half-light radii (kpc) as a function of the rest-frame B magnitudes. Five objects have r$_{\rm h}$ $>$ 10 kpc and are not included in the figure. The broad range in size, from relatively compact systems with radii of 1.5--2 kpc to very large galaxies with radii of over 10 kpc, agrees with the range in sizes of the luminous UV-galaxies at the present epoch (Heckman et al. 2005). We included in Fig.~\ref{plotblightkpchdf} the low--$z$ sample (0.7$<z<1.4$) from Papovich et al. (2005), which is selected from a near-infrared, flux-limited catalog of NICMOS data of the HDF-N. We have compared r$_{\rm h}$ and M$_{\rm B}$ for the two samples, ours and Papovich et al.
(2005), using Kolmogorov-Smirnov (KS) statistics, and found that the UV-selected and the NIR-selected samples are not drawn from the same distribution at the 98\% confidence level (D=0.24 and D=0.26 for r$_{\rm h}$ and M$_{\rm B}$, respectively; D is the KS maximum vertical deviation between the two samples). The median values of the UV-selected objects are r$_{\rm h}$=3.02 $\pm$ 0.11 kpc and M$_{\rm B}$=--18.6 $\pm$ 0.1, which are larger and fainter than the NIR-selected sample values of r$_{\rm h}$= 2.38 $\pm$ 0.06 kpc and M$_{\rm B}$=--19.11 $\pm$ 0.07. This is due to a number of low surface brightness objects (36\%, or 16 out of 44) in our sample which are faint (M$_{\rm B}$ $> -20$) and large (r$_{\rm h}$ $\ge$ 3 kpc). These objects are not easily detected in the NIR but are common in our UV-selected sample due to the depth of the U-band image, which can pick up star-forming LSBs. It is interesting to see how the properties of galaxies in our sample compare with Lyman Break Galaxies at $2<z<4.5$. Despite the fact that both are UV-selected, LBGs belong to a class of more luminous objects. Typical M$_{\rm B}$ of LBGs at $z\sim$3 are --23.0$\pm$1 (Pettini et al. 2001), whereas our sample has an average M$_{\rm B}$=--18.43$\pm$0.13. Three-color composite images of the most luminous objects in our sample (M$_{\rm B}$ $<$ --20.5) are shown in Fig.~\ref{luminous}. There is clearly a wide diversity in the morphology of these objects. Four of them are clearly early-type galaxies, three are disks showing either strong star formation or strong interaction, and two of them are what we called low surface brightness and compact. LBGs show a wide variety in morphology, ranging from relatively regular objects to highly fragmented, diffuse and irregular ones. However, even the most regular LBGs show no evidence of lying on the Hubble sequence. LBGs are all relatively unobscured, vigorously star-forming galaxies that have formed the bulk of their stars in the last 1-2 Gyr. Our sample is clearly more varied: it includes early-type galaxies that are presumably massive and forming stars only in their cores, as well as starburst-type systems that are more similar to the LBGs, although much less luminous. This implies that even the starbursts in our sample are either much less massive than LBGs, or are forming stars at a much lower rate, or both. The low surface brightness galaxies have no overlap with the LBGs and form an interesting new class of their own. \section{Summary} We have identified 415 objects in the deepest near-UV image ever taken with HST, reaching magnitudes as faint as m$_{\rm AB}$=27.5 in the F300W filter with WFPC2. We have used the GOODS multiwavelength images (B, V, i, z) to analyze the properties of 306 objects for which we have photometric redshifts (12 have spectroscopic redshifts). The main results of our analysis are as follows: \begin{enumerate} \item UV-selected galaxies span all the major morphological types at 0.2 $<$$z_{\rm phot}$$<$ 1.2. However, disks are more common at lower redshifts, 0.2 $<$$z_{\rm phot}$$<$ 0.8. \item Higher redshift objects (0.7 $<$$z_{\rm phot}$$<$ 1.2) are on average bluer than lower$-z$ objects and have spectral types typical of starbursts. Their morphologies are compact, peculiar or low surface brightness. \item Despite the UV-selection, 13 objects have spectral types of early-type galaxies; two of them are spheroids with blue cores.
\item The majority of the UV-selected objects have rest-frame colors typical of stellar populations with intermediate ages $>$ 100 Myr. \item UV-selected galaxies are on average larger and fainter than NIR-selected galaxies at 0.7 $<$$z_{\rm phot}$$<$ 1.4; the majority of these objects are low-surface-brightness. \item The UV-selected galaxies are on average fainter than Lyman Break Galaxies. The ten most luminous ones span all morphologies from early-types to low surface brightness. \end{enumerate} \acknowledgments We are grateful to G. Vazquez for providing us with the models and data used in Fig.~\ref{plotvibvdatasb99} and to the GOODS team. Support for this work was provided by NASA through grants GO09583.01-96A and GO09481.01-A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
\section{Introduction} \label{sec:intro} The cold dark matter (CDM) paradigm has become the standard framework for the formation of large-scale structure and galaxies. Small fluctuations in the initial density field grow by means of gravitational instability until they collapse to form virialized dark matter haloes. This growth process is hierarchical in the sense that small clumps virialize first and aggregate successively into larger and larger objects. Galaxies form from the gas that is shock heated by the gravitational collapse and then subsequently cools (White \& Rees 1978; but see also Birnboim \& Dekel 2003 and Keres {et al.~} 2004). Therefore, a proper understanding of galaxy formation relies on an accurate description of the structure and assembly of these dark matter haloes. This problem is tackled by a combination of N-body simulations and analytical models. Although N-body simulations have the advantage that they follow the formation of dark matter haloes into the non-linear regime, they are expensive, both in terms of labor (analyzing the simulations) and CPU time. Therefore, accurate analytical models are always useful. The most developed of these is the Press-Schechter (PS) formalism, which allows one to compute the (unconditional) halo mass function (Press \& Schechter 1974). Bond {et al.~} (1991), Bower (1991) and Lacey \& Cole (1993) extended the PS formalism, using the excursion set approach, to compute conditional mass functions. These allow the construction of merger histories, the computation of halo formation times, and detailed studies of spatial clustering and large scale bias (e.g., Kauffmann \& White 1993; Mo \& White 1996, 2002; Mo, Jing \& White 1996, 1997; Catelan {et al.~} 1998; Sheth 1998; Nusser \& Sheth 1999; Somerville \& Kolatt 1999; Cohn, Bagla \& White 2001). Numerous studies in the past have tested the predictions of extended Press-Schechter (EPS) theory against numerical simulations. Although the unconditional mass function was found to be in reasonable agreement, it systematically over-predicts (under-predicts) the number of low (high) mass haloes (e.g., Jain \& Bertschinger 1994; Tormen 1998; Gross {et al.~} 1998; Governato {et al.~} 1999; Jenkins {et al.~} 2001). Similar discrepancies have been found regarding the conditional mass function (Sheth \& Lemson 1999; Somerville {et al.~} 2000), which result in systematic offsets of the halo formation times predicted by EPS (e.g., van den Bosch 2002a). Finally, Bond {et al.~} (1991) have shown that the PS approach achieves very poor agreement on an object-by-object basis when compared with simulations (for a review, see Monaco 1998). It is generally understood that these discrepancies stem from the assumption of spherical collapse. Numerous studies have investigated schemes to improve the EPS formalism by using ellipsoidal rather than spherical collapse conditions, thereby taking proper account of the aspherical nature of collapse in a CDM density field (e.g., Sheth, Mo \& Tormen 2001, hereafter SMT01; Sheth \& Tormen 2002; Chiueh \& Lee 2001; Lin, Chuieh \& Lee 2002). Although this results in unconditional mass functions that are in much better agreement with numerical simulations (e.g., SMT01; Jenkins {et al.~} 2001), these schemes have thus far been unable to yield conditional mass functions accurate enough for the construction of reliable merger trees. Despite its systematic errors and uncertainties, the PS formalism has remained the standard analytical approach in galaxy formation modeling.
In particular, the extended Press-Schechter theory is used extensively to compute merger histories and mass assembly histories (hereafter MAHs), which serve as the backbone for models of galaxy formation (e.g., Kauffmann, White \& Guiderdoni 1993; Somerville \& Primack 1999; Cole {et al.~} 2000; van den Bosch 2001; Firmani \& Avila-Reese 2000). This may have profound implications for the accuracy of these models. For instance, the mass assembly histories of dark matter haloes are expected to impact the star formation histories of the galaxies that form inside these haloes. In addition, the merger and mass assembly history of individual haloes may also be tightly related to their internal structure. As shown by Wechsler {et al.~} (2002; hereafter W02) and Zhao {et al.~} (2003a,b), the MAH is directly related to the concentration of the resulting dark matter halo (see also Navarro, Frenk \& White 1997; Bullock {et al.~} 2001; Eke, Navarro \& Steinmetz 2001). Errors in the mass assembly histories of dark matter haloes may therefore result in erroneous predictions regarding the star formation history and the rotation curve shapes and/or the zero-point of the Tully-Fisher relation (e.g., Alam, Bullock \& Weinberg 2002; Zentner \& Bullock 2002; Mo \& Mao 2000; van den Bosch, Mo \& Yang 2003). Clearly, a detailed understanding of galaxy formation requires a description of the growth history of dark matter haloes that is more accurate than EPS. Although $N$-body simulations are probably the most reliable means of obtaining accurate assembly histories of dark matter haloes, they are computationally too expensive for some purposes. As an alternative to the EPS formalism and N-body simulations, perturbative techniques have been developed that describe the growth of dark matter haloes in a given numerical realization of a linear density field. These include, amongst others, the truncated Zel'dovich (1970) approximation (Borgani, Coles \& Moscardini 1994), the peak-patch algorithm (Bond \& Myers 1996a,b) and the merging cell model (Rodriguez \& Thomas 1996; Lanzoni, Mamon \& Guiderdoni 2000). Recently, Monaco, Theuns \& Taffoni (2002b) developed a numerical code that uses local ellipsoidal collapse approximations (Bond \& Myers 1996a; Monaco 1995) within Lagrangian Perturbation Theory (LPT, Buchert \& Ehlers 1993; Catelan 1995). This code, called PINOCCHIO (PINpointing Orbit-Crossing Collapsed HIerarchical Objects), has been shown to yield accurate mass functions, both conditional and unconditional (Monaco {et al.~} 2002a,b; Taffoni, Monaco \& Theuns 2002), and is therefore ideally suited to study halo assembly histories without having to rely on computationally expensive N-body simulations. This paper is organized as follows. In Section~\ref{sec:theory} we give a detailed overview of (extended) Press-Schechter theory, including a discussion of its shortcomings and its modifications under ellipsoidal collapse conditions, and describe the Lagrangian perturbation code PINOCCHIO. In Section~\ref{sec:sim} we compare the MAHs obtained from PINOCCHIO, the EPS formalism, and N-body simulations. We show that PINOCCHIO yields MAHs that are in excellent agreement with numerical simulations and do not suffer from the shortcomings of the EPS formalism. In the second part of this paper we then analyze a large, statistical sample of MAHs obtained with PINOCCHIO for haloes spanning a wide range in masses.
In Section~\ref{sec:ftime} we use these MAHs to study, in a statistical sense, various characteristic epochs and events in the mass assembly history of a typical CDM halo. We analyze the statistics of major merger events in Section~\ref{sec:majmerprop}. Finally, Section~\ref{sec:concl} summarizes our results. \section{Theoretical background} \label{sec:theory} \subsection{Extended Press-Schechter theory} \label{sec:EPS} In the standard model for structure formation the initial density contrast $\delta({\bf x}) = \rho({\bf x})/\bar{\rho} - 1$ is considered to be a Gaussian random field, which is therefore completely specified by the power spectrum $P(k)$. As long as $\delta \ll 1$ the growth of the perturbations is linear and $\delta({\bf x},t_2) = \delta({\bf x},t_1) D(t_2)/D(t_1)$, where $D(t)$ is the linear growth factor. Once $\delta({\bf x})$ exceeds a critical threshold $\delta^{0}_{\rm crit}$ the perturbation starts to collapse to form a virialized object (halo). In the case of spherical collapse $\delta^{0}_{\rm crit} \simeq 1.68$. In what follows we define $\delta_0$ as the initial density contrast field linearly extrapolated to the present time. In terms of $\delta_0$, regions that have collapsed to form virialized objects at redshift $z$ are then associated with those regions for which $\delta_0 > \delta_c(z) \equiv \delta^{0}_{\rm crit}/D(z)$. In order to assign masses to these collapsed regions, the PS formalism considers the density contrast $\delta_0$ smoothed with a spatial window function (filter) $W(r;R_f)$. Here $R_f$ is a characteristic size of the filter, which is used to compute a halo mass $M = \gamma_f \bar{\rho} R_f^3/3$, with $\bar{\rho}$ the mean mass density of the Universe and $\gamma_f$ a geometrical factor that depends on the particular choice of filter. The {\it ansatz} of the PS formalism is that the fraction of mass that at redshift $z$ is contained in haloes with masses greater than $M$ is equal to two times the probability that the density contrast smoothed with $W(r;R_f)$ exceeds $\delta_c(z)$. This results in the well-known PS mass function for the comoving number density of haloes: \begin{eqnarray} \label{PS} \lefteqn{{{{\rm d}}n \over {{\rm d}} \, {\rm ln} \, M}(M,z) \, {{\rm d}}M =} \nonumber \\ & & \sqrt{2 \over \pi} \, \bar{\rho} \, {\delta_c(z) \over \sigma^2(M)} \, \left| {{{\rm d}} \sigma \over {{\rm d}} M}\right| \, {\rm exp}\left[-{\delta_c^2(z) \over 2 \sigma^2(M)}\right] \, {{\rm d}}M \end{eqnarray} (Press \& Schechter 1974). Here $\sigma^2(M)$ is the mass variance of the smoothed density field, given by \begin{equation} \label{variance} \sigma^2(M) = {1 \over 2 \pi^2} \int_{0}^{\infty} P(k) \; \widehat{W}^2(k;R_f) \; k^2 \; {{\rm d}}k, \end{equation} with $\widehat{W}(k;R_f)$ the Fourier transform of $W(r;R_f)$. The {\it extended} Press-Schechter (EPS) model developed by Bond {et al.~} (1991) is based on the excursion set formalism. For each point one constructs `trajectories' $\delta(M)$ of the linear density contrast at that position as a function of the smoothing mass $M$. In what follows we adopt the notation of Lacey \& Cole (1993) and use the variables $S = \sigma^2(M)$ and $\omega = \delta_c(z)$ to label mass and redshift, respectively. In the limit $R_f \rightarrow \infty$ one has $S = \delta(S) = 0$, which can be considered the starting point of the trajectories.
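As an aside, the mass variance of equation~(\ref{variance}) is straightforward to evaluate numerically. The sketch below assumes a real-space top-hat filter (so that $\gamma_f = 4\pi$ and $M = \frac{4}{3}\pi\bar{\rho}R_f^3$) and a toy power-law spectrum with an arbitrary normalization; a real application would insert a CDM $P(k)$ instead:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

RHO_BAR = 0.3 * 2.78e11  # mean matter density, h^2 Msun/Mpc^3

def tophat_window(x):
    # Fourier transform of a real-space top-hat filter
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def power_spectrum(k):
    # toy power law; substitute a CDM P(k) in practice
    return 1.0e4 * k**-2.0

def sigma2(M):
    # sigma^2(M) for a top-hat filter enclosing mass M
    R = (3.0 * M / (4.0 * np.pi * RHO_BAR))**(1.0 / 3.0)
    f = lambda k: power_spectrum(k) * tophat_window(k * R)**2 * k**2
    val, _ = quad(f, 1e-8, np.inf, limit=200)
    return val / (2.0 * np.pi**2)

print(sigma2(1.0e12))  # variance on a ~10^12 Msun/h scale
\end{verbatim}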
Increasing $S$ corresponds to decreasing the filter mass $M$, and $\delta(S)$ starts to wander away from zero, executing a random walk (if the filter is a sharp $k$-space filter). The fraction of matter in collapsed objects in the mass interval $M$, $M+{\rm d}M$ at redshift $z$ is now associated with the fraction of trajectories that have their {\it first upcrossing} through the barrier $\omega = \delta_c(z)$ in the interval $S$, $S+{\rm d}S$, which is given by \begin{equation} \label{probS} P(S,\omega) \; {{\rm d}}S = {1 \over \sqrt{2 \pi}} \; {\omega \over S^{3/2}} \; {\rm exp}\left[-{\omega^2 \over 2 S}\right] \; {{\rm d}}S \end{equation} (Bond {et al.~} 1991; Bower 1991; Lacey \& Cole 1993). After conversion to number counting, this probability function yields the PS mass function of equation~(\ref{PS}). Note that this approach does not suffer from the arbitrary factor of two in the original Press \& Schechter approach. Since for random walks the upcrossing probabilities are independent of the path taken (i.e., the upcrossing is a Markov process), the probability for a change $\Delta S$ in a time step $\Delta \omega$ is simply given by equation~(\ref{probS}) with $S$ and $\omega$ replaced by $\Delta S$ and $\Delta \omega$, respectively. This allows one to immediately write down the {\it conditional} probability that a particle in a halo of mass $M_2$ at $z_2$ was embedded in a halo of mass $M_1$ at $z_1$ (with $z_1 > z_2$) as \begin{eqnarray} \label{probSS} \lefteqn{P(S_1,\omega_1 \vert S_2,\omega_2) \; {{\rm d}}S_1 =} \nonumber \\ & & {1 \over \sqrt{2 \pi}} \; {(\omega_1 - \omega_2) \over (S_1 - S_2)^{3/2}} \; {\rm exp}\left[-{(\omega_1 - \omega_2)^2 \over 2 (S_1 - S_2)}\right] \; {{\rm d}}S_1 \end{eqnarray} Converting from mass weighting to number weighting, one obtains the average number of progenitors at $z_1$ in the mass interval $M_1$, $M_1 + {\rm d}M_1$ which by redshift $z_2$ have merged to form a halo of mass $M_2$: \begin{eqnarray} \label{condprobM} \lefteqn{{{{\rm d}}N \over {{\rm d}}M_1}(M_1,z_1 \vert M_2,z_2) \; {{\rm d}}M_1 =} \nonumber \\ & & {M_2 \over M_1} \; P(S_1,\omega_1 \vert S_2,\omega_2) \; \left\vert {{{\rm d}}S \over {{\rm d} M}} \right\vert \; {{\rm d}}M_1. \end{eqnarray} This conditional mass function can be combined with Monte-Carlo techniques to construct merger histories (also called merger trees) of dark matter haloes. \subsection{Ellipsoidal collapse} \label{sec:ellips} In an attempt to resolve the inconsistencies between EPS and numerical simulations (see Section~\ref{sec:intro}), various authors have modified the EPS formalism by considering ellipsoidal rather than spherical collapse. For ellipsoidal density perturbations, the conditions for collapse not only depend on the self-gravity of the perturbation, but also on the tidal coupling with the external mass distribution; external shear can actually rip overdensities apart and thus prevent them from collapsing. Since smaller mass perturbations typically experience a stronger shear field, they tend to be more ellipsoidal. Therefore, it is to be expected that the assumption of spherical collapse in the standard EPS formalism is more accurate for more massive haloes, whereas the modifications associated with ellipsoidal collapse will be more dramatic for smaller mass haloes. The way in which ellipsoidal collapse modifies the halo formation times with respect to the EPS predictions depends on the definition of collapse.
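Before moving on, equations~(\ref{probSS}) and (\ref{condprobM}) can be made concrete with a short numerical sketch. Here we adopt a toy power-law variance $\sigma^2(M) = (M/M_*)^{-2/3}$ (as for an $n=-1$ power-law spectrum) with an arbitrary normalization mass $M_*$; the routine simply performs the EPS bookkeeping and is not tuned to any simulation:
\begin{verbatim}
import numpy as np

M_STAR = 1.0e13  # arbitrary normalization mass, Msun/h

def S_of_M(M):                       # toy sigma^2(M)
    return (M / M_STAR)**(-2.0 / 3.0)

def dSdM(M):                         # its derivative
    return -(2.0 / 3.0) * S_of_M(M) / M

def P_cond(S1, w1, S2, w2):
    # first-upcrossing probability (equation probSS)
    dS, dw = S1 - S2, w1 - w2
    return dw / np.sqrt(2.0 * np.pi * dS**3) * np.exp(-dw**2 / (2.0 * dS))

def dNdM1(M1, w1, M2, w2):
    # mean number of progenitors per unit mass (equation condprobM)
    return (M2 / M1) * P_cond(S_of_M(M1), w1, S_of_M(M2), w2) \
           * abs(dSdM(M1))

# progenitors of a 10^13 Msun/h parent (w2 = 1.686, i.e. z ~ 0)
# seen at a slightly earlier epoch (w1 = 2.0)
for M1 in [1e11, 1e12, 5e12]:
    print("%.0e  %.3e" % (M1, dNdM1(M1, 2.0, 1e13, 1.686)))
\end{verbatim}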
Ellipsoidal perturbations collapse independently along the three different directions defined by the eigenvectors of the deformation tensor (the second derivative of the linear gravitational potential). It is customary to associate the first-axis collapse with the formation of a 2-dimensional pancake-like structure, the second-axis collapse with the formation of a 1-dimensional filament, and the third-axis collapse with the formation of a dark matter halo. Most authors have indeed associated halo formation with the collapse of the third axis (e.g., Bond \& Myers 1996a; Audit, Teyssier \& Alimi 1997; Lee \& Shandarin 1998; SMT01), though some have considered the first-axis collapse instead (e.g., Bertschinger \& Jain 1994; Monaco 1995). For first-axis collapse one predicts that haloes form earlier than in the spherical case, whereas the opposite applies when considering third-axis collapse. Clearly, the implications of considering ellipsoidal rather than spherical collapse depend sensitively on the collapse definition. In order to incorporate ellipsoidal collapse in a PS-like formalism, one needs to obtain an estimate of the critical overdensity for collapse $\delta_{ec}$. Various studies have attempted such schemes. For instance, SMT01 used the ellipsoidal collapse model to obtain \begin{equation} \label{ellips} \delta_{ec}(M,z) = \delta_{c}(z) \left( 1 + 0.47 \left[{\sigma^2(M) \over \delta^2_{c}(z)} \right]^{0.615}\right). \end{equation} Here $\delta_c(z)$ is the standard value for the spherical collapse model. Solving for the upcrossing statistics with this particular barrier shape results in halo mass functions that are in excellent agreement with those found in simulations (Sheth \& Tormen 1999; Jenkins {et al.~} 2001). Unfortunately, no analytical expression for the conditional mass function is known for a barrier of the form of equation~(\ref{ellips}), and one has to resort either to approximate fitting functions (Sheth \& Tormen 2002), or to time-consuming Monte-Carlo simulations to determine the upcrossing statistics (Chiueh \& Lee 2001; Lin {et al.~} 2002). Although the resulting conditional mass functions ${{{\rm d}}N \over {{\rm d}}M_1}(M_1,z_1 \vert M_2,z_2) \; {{\rm d}}M_1$ have been found to be in good agreement with numerical simulations if a relatively large look-back time is considered (i.e., if $\Delta z = z_1-z_2 \ga 0.5$), there is still a large disagreement for small $\Delta z$. This is probably due to the neglect of correlations between scales in the excursion set approach (Peacock \& Heavens 1990; Sheth \& Tormen 2002). This is unfortunate, as it does not allow these methods to be used for the construction of merger histories or MAHs. Lin {et al.~} (2002) tried to circumvent this problem by introducing a small mass gap between parent halo and progenitor halo, i.e., at each time step they require that $S_1 - S_2 \geq f \, \delta_c^2(z_2)$. Upon testing their conditional mass function with this mass gap against numerical simulations, they find good agreement for $f = 0.06$, and claim that with this modification the excursion set approach {\it can} be used to construct merger histories under ellipsoidal collapse conditions. However, they only tested their conditional mass functions for $\Delta z \geq 0.2$, whereas accurate merger histories require significantly smaller time steps.
For instance, van den Bosch (2002a) has argued for timesteps not larger than $\Delta \omega = \omega_1 - \omega_2 \simeq 0.1$, which, for an Einstein-de Sitter (EdS) cosmology, corresponds to $\Delta z \simeq 0.06$ (see also the discussion in Somerville \& Kolatt 1999). Furthermore, with the mass gap suggested by Lin {et al.~} (2002), at each time step there is a minimum amount of mass accreted by the halo, which follows from $S_1 - S_2 = f \, \delta_c^2(z_2)$. This introduces a distinct maximum to the halo half-mass formation time, the value of which depends sensitively on the actual time-steps taken. To test this, we constructed MAHs of CDM haloes using the method of van den Bosch (2002a) but adopting the conditional probability function of Lin {et al.~} (2002). This resulted in MAHs that are in very poor agreement with numerical simulations. In particular, the results were found to depend strongly on the value of $\Delta \omega$ adopted. In summary, although introducing ellipsoidal collapse conditions in the excursion set formalism has allowed the construction of accurate unconditional mass functions, there is still no reliable method based on the EPS formalism that allows the construction of accurate merger histories and/or MAHs. \begin{figure*} \centerline{\psfig{file=mf.ps,width=1.0\hsize,angle=270}} \caption{Panels in the upper row show the (unconditional) halo mass functions at 4 different redshifts, as indicated. Different symbols (each with Poissonian error bars) correspond to 5 different PINOCCHIO simulations randomly selected from P0, each with a different mass resolution. Dashed and solid lines correspond to the PS and SMT01 mass functions, respectively, and are shown for comparison. Panels in the lower row show the percentage difference between the PS and SMT01 mass functions (dashed lines) and that between the PINOCCHIO and the SMT01 mass functions (symbols with errorbars). Clearly, the PS mass function overestimates (underestimates) the number of small (high) mass haloes, while PINOCCHIO yields mass functions that are in excellent agreement with SMT01 (and thus with N-body simulations). Note that the SMT01 halo mass function best fits the mass function of simulated haloes identified with an FOF linking length of 0.2 times the mean particle separation. The mean density of a halo so selected is similar to that within a virialized halo based on the spherical collapse model. PINOCCHIO haloes and PS haloes are all defined so that the mean density within a halo is similar to that based on the spherical collapse model.} \label{fig1} \end{figure*} \subsection{PINOCCHIO} \label{sec:pino} Although the problem of obtaining accurate merging histories under ellipsoidal collapse conditions can be circumvented by using N-body simulations, the time-expense of these simulations is a major hurdle. An attractive alternative is provided by the LPT code PINOCCHIO, developed recently by Monaco {et al.~} (2002b). Below we give a short overview of PINOCCHIO, and we refer the interested reader to Monaco {et al.~} (2002a,b) and Taffoni {et al.~} (2002) for a more elaborate description. PINOCCHIO uses Lagrangian perturbation theory to describe the dynamics of gravitational collapse. In LPT the comoving (Eulerian) coordinate $\textbf{x}$ and the initial Lagrangian coordinate $\textbf{q}$ of each particle are connected via \begin{equation} \label{displace} \textbf{x}(\textbf{q},t)= \textbf{q}+\textbf{S}(\textbf{q},t), \end{equation} with $\textbf{S}$ the displacement field.
The first-order term of $\textbf{S}(\textbf{q},t)$ is the well-known Zel'dovich approximation (Zel'dovich 1970): \begin{equation} \label{zeld} \textbf{S}(\textbf{q},t)= -D(t) {\partial \psi \over \partial \textbf{q}}, \end{equation} with $\psi(\textbf{q})$ the rescaled linear gravitational potential, which is related to the density contrast $\delta_0(\textbf{q})$ extrapolated to the present time by the Poisson equation \begin{equation} \label{poisson} \nabla^2\psi(\textbf{q})= \delta_0(\textbf{q}). \end{equation} Since the Lagrangian density field is simply $\rho_{\rm L}(\textbf{q}) = \bar{\rho}$, the (Eulerian) density contrast is given by \begin{equation} \label{euldens} 1 + \delta(\textbf{x},t) = {1 \over {\rm det}(J)}, \end{equation} with $J = \partial \textbf{x} / \partial \textbf{q}$ the Jacobian of the transformation given in~(\ref{displace}). Note that the density formally goes to infinity when the Jacobian determinant vanishes, which corresponds to the point in time when the mapping $\textbf{q} \rightarrow \textbf{x}$ becomes multi-valued, i.e. when orbits first cross, leading to the formation of a caustic. Since the (gravitationally induced) flow is irrotational, the matrix $J$ is symmetric and can thus be diagonalized: \begin{equation} \label{euldensdiag} 1 + \delta(\textbf{x},t) = {1 \over \prod_{i=1}^{3}[1 - D(t) \lambda_i(\textbf{q})]}, \end{equation} with $-\lambda_i$ the eigenvalues of the deformation tensor ${\partial^2 \psi/\partial q_i \partial q_j}$. PINOCCHIO starts by constructing a random realization of a Gaussian density field $\rho({\textbf{q}})$ (linearly extrapolated to $z=0$) and the corresponding peculiar potential $\phi(\textbf{q})$ on a cubic grid. The density fluctuation field is specified completely by the power spectrum $P(k)$, which is normalized by specifying the value of $\sigma_8$, defined as the rms linear overdensity at $z=0$ in spheres of radius $8 h^{-1} \>{\rm Mpc}$. The density and peculiar potential fields are subsequently convolved with a series of Gaussians with different values of their FWHM $R$. For the $256^3$ simulations used in this paper, 26 different linearly sampled values of $R$ are used. For a given value of $R$, the density of a mass element (i.e., `particle') will become infinite as soon as at least one of the ellipsoid's axes reaches zero size (i.e., when $D(t) = 1/\lambda_i$). At this point orbit crossing (OC) occurs and the mass element enters a high-density multi-stream region. This is the moment of first-axis collapse. Since the mapping becomes multi-valued at this stage, one cannot make any further predictions of the mass element's fate beyond this point in time. Consequently, it is not possible in PINOCCHIO to associate halo collapse with that of the third axis. For each Lagrangian point $\textbf{q}$ (hereafter `particle') and for each smoothing radius $R$, this OC (i.e., collapse) time is computed, and the highest collapse redshift $z_c$, the corresponding smoothing scale $R_c$, and the Zel'dovich estimate of the peculiar velocity ${\bf v}_c$ are recorded. PINOCCHIO differs from the standard PS-like method when it comes to assigning masses to collapsed objects. Rather than associating a halo mass with the collapsed mass element based directly on the smoothing scale $R_c$ at collapse, PINOCCHIO uses a fragmentation algorithm to link neighboring mass elements into a common dark matter halo. In fact, the collapsed mass element may be assigned to a filament or sheet rather than a halo.
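Before turning to the fragmentation rules, the orbit-crossing condition $D(t) = 1/\lambda_i$ just described can be illustrated with a short sketch that computes the (first-axis) collapse redshift of a mass element from the eigenvalues of its deformation tensor. For simplicity the growth factor is taken to be the EdS scaling $D(z) = 1/(1+z)$ with $D(0)=1$; a $\Lambda$CDM application would use the proper growth factor instead, and the numbers are purely hypothetical:
\begin{verbatim}
import numpy as np

def collapse_redshift(deformation_tensor):
    # Orbit crossing occurs when D(z) = 1/lambda_max, with lambda_max
    # the largest eigenvalue of the deformation tensor. For the EdS
    # scaling D(z) = 1/(1+z) this inverts to z_c = lambda_max - 1.
    lam_max = np.linalg.eigvalsh(deformation_tensor)[-1]
    if lam_max <= 1.0:
        return None  # no collapse by z = 0 in this toy model
    return lam_max - 1.0

# Example: a mildly triaxial perturbation (hypothetical numbers).
T = np.diag([2.0, 1.2, 0.4])
print(collapse_redshift(T))  # -> 1.0
\end{verbatim}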
After sorting particles according to decreasing collapse redshift $z_c$, the following rules for accretion and merging are adopted: Whenever a particle collapses and none of its Lagrangian neighbors (the six nearest particles) have yet collapsed, the particle is considered a seed for a new halo. Otherwise, the particle is accreted by the nearest Lagrangian neighbor that has already collapsed if the Eulerian distance $d$, computed using the Zel'dovich velocities ${\bf v}$ at the time of collapse, obeys $d \leq f_a R_M$, where $R_M = M^{1/3}$ is the radius of a halo of $M$ particles. If more than one Lagrangian neighbor has already collapsed, it is simultaneously checked whether these haloes merge. This occurs whenever, again at the time of collapse, the mutual Eulerian distance between these haloes is $d \leq f_M R_M$, where $R_M$ refers to the larger halo. Note that with this description, up to six haloes may merge at a given time. The collapsing particles that, according to these criteria, do not accrete onto a halo at their collapse time are assigned to a filament. In order to mimic the accretion of filaments onto haloes, filament particles can be accreted by a dark matter halo at a later stage when they neighbor (in Lagrangian space) an accreting particle. Finally, in high density regions it can happen that pairs of haloes that are able to merge are not touched by newly collapsing particles for a long time. Therefore, at certain time intervals, pairs of touching haloes are merged if they obey the above merging condition. The accretion and merging algorithm described above has five free parameters. In addition to the parameters $f_a$ and $f_M$, three further free parameters have been introduced by Monaco {et al.~} (2002b). We refer the reader to that paper for details. This relatively large amount of freedom may seem a weakness of PINOCCHIO. However, it is important to realize that even N-body codes require some free parameters, such as the linking length in the Friends-Of-Friends (FOF) algorithm used to identify dark matter haloes. Furthermore, we do not consider these parameters as free in what follows. Rather, we adopt the values advocated by Monaco {et al.~} (2002a,b), which they obtained by tuning PINOCCHIO to reproduce the conditional and unconditional mass functions of N-body simulations. \begin{figure} \centerline{\psfig{file=mg.ps,width=0.5\textwidth,angle=270}} \caption{The mass assembly histories of dark matter haloes with present-day masses in the four mass bins indicated in the panels. The upper two panels are based on the $100 h^{-1}{\rm Mpc}$-box simulations, P1 and S1, while the lower two panels use data from the $300 h^{-1}{\rm Mpc}$-box simulations, P2 and S2. The thin lines are 40 MAHs randomly selected from the PINOCCHIO simulations. The thick solid line in each panel shows the average of all the MAHs obtained in the PINOCCHIO simulations in the corresponding mass bin. The thick dotted line shows the average MAH extracted from the simulations. The thick dashed line shows the average MAH obtained from 3000 EPS realizations (properly sampled from the halo mass function).} \label{MAH} \end{figure} \begin{figure} \centerline{\psfig{file=mg_2.ps,width=0.5\textwidth,angle=270}} \caption{The dashed curve in each panel shows the difference between the average MAHs predicted by the EPS model and by the N-body simulation, while the solid curve shows the difference between the PINOCCHIO prediction and the N-body simulation.
The upper two panels use data from P1 and S1, while the lower two panels use data from P2 and S2. Data are not shown for $z \lower.5ex\hbox{\gtsima} 3$ because the MAHs are not well represented at such high redshifts in the simulations.} \label{MAH2} \end{figure} \begin{figure} \centerline{\psfig{file=scatter.ps,width=0.5\textwidth,angle=270}} \caption{The {\it standard deviation} of the MAHs, $S_{\rm M}(z)$, normalized by the average MAH, $M(z)$, in four mass bins. Solid lines are results from PINOCCHIO, while dotted lines are results from N-body simulations. As in Fig.~\ref{MAH} and Fig.~\ref{MAH2}, the upper two panels use data from P1 and S1, while the lower two panels use data from P2 and S2.} \label{scatter} \end{figure} \begin{table} \begin{center} \caption{Ensemble of PINOCCHIO simulations (P0)} \begin{tabular}{cccc} \hline Box size ($h^{-1}$ Mpc)& $N_{\rm run}$ & $M_{p}$ ($h^{-1}\>{\rm M_{\odot}}$) & $N_{\rm MAH}$ \\ \hline 20 & 12 & $4.0 \times 10^{7}$ & 2,690 \\ 40 & 8 & $3.2 \times 10^{8}$ & 1,863 \\ 60 & 8 & $1.1 \times 10^{9}$ & 796 \\ 80 & 6 & $2.5 \times 10^{9}$ & 1,438 \\ 100 & 6 & $5.0 \times 10^{9}$ & 2,799 \\ 140 & 4 & $1.4 \times 10^{10}$ & 410 \\ 160 & 2 & $2.0 \times 10^{10}$ & 299\\ 200 & 9 & $4.0 \times 10^{10}$ & 2,629 \\ \hline \end{tabular} \end{center} \medskip A listing of the PINOCCHIO simulations used in this paper. All simulations use $256^3$ particles and adopt the standard $\Lambda$CDM concordance cosmology. In order to get good statistics, we choose a combination of box sizes such that we can select thousands of well-resolved (more than 2000 particles) haloes in each mass bin adopted in this paper. This ensemble of PINOCCHIO simulations is referred to as `P0' in the text. The first column of Table 1 lists the box size of the simulation in $h^{-1} \>{\rm Mpc}$. The second column lists the number of independent realizations run. The particle mass $M_p$ (in $h^{-1} \>{\rm M_{\odot}}$) is listed in the third column, while the fourth column lists the total number of haloes (summed over all $N_{\rm run}$ realizations) with more than 2000 particles and for which a MAH has been obtained. \end{table} \section{Simulations} \label{sec:sim} In this paper we use PINOCCHIO simulations to study the mass assembly histories (MAHs) of dark matter haloes. We follow previous studies (Lacey \& Cole 1993; Eisenstein \& Loeb 1996; Nusser \& Sheth 1999; van den Bosch 2002a) and define the MAH, $M(z)$, of a halo as the main trunk of its merger tree: at each redshift, the mass $M(z)$ is associated with the mass of the most massive progenitor at this redshift, and we follow this progenitor, and this progenitor only, further back in time. In this way, the `main progenitor halo' never accretes other haloes that are more massive than itself. Note that although at each branching point we follow the most massive branch, this does not necessarily imply that the main progenitor is also the most massive of {\it all} progenitors at any given redshift. Below we describe the PINOCCHIO simulations, the N-body simulations, and the EPS method used to construct MAHs. \subsection{PINOCCHIO simulations} \label{sec:pinsim} Because the progenitors of a present-day halo become smaller at higher redshift, we can only follow the MAHs to a sufficiently high redshift if the halo at $z=0$ contains a large enough number of particles.
When constructing MAHs with PINOCCHIO, we only use haloes that contain more than 2000 particles at the present time, and we trace each MAH back to the redshift at which its main progenitor contains fewer than 10 particles. In order to cover a large range of halo masses, we have carried out 55 PINOCCHIO simulations with $256^3$ particles each, spanning a wide range of box sizes and particle masses (see Table~1; we refer to this suite of PINOCCHIO simulations as P0 hereafter). The choice of box sizes ensures that there are several thousand well-resolved haloes in each of the mass bins considered. Each of these simulations takes only about 6 hours of CPU time on a common PC (including the actual analysis), clearly demonstrating its advantage over regular N-body simulations. This suite of PINOCCHIO simulations adopts the $\Lambda$CDM concordance cosmology with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, $h=0.7$ and $\sigma_8=0.9$. With simulation box sizes ranging from $20 \>h^{-1}{\rm {Mpc}}$ to $200\>h^{-1}{\rm {Mpc}}$, and particle masses ranging from $4 \times 10^{7} h^{-1} \>{\rm M_{\odot}}$ to $4 \times 10^{10} h^{-1} \>{\rm M_{\odot}}$, we are able to study the MAHs of present-day haloes with masses $> 8 \times 10^{10} h^{-1} \>{\rm M_{\odot}}$. The construction of the MAHs is straightforward: PINOCCHIO outputs a halo mass every time a merger occurs, i.e., when a halo with more than 10 particles merges into the main branch. If we require an estimate of the halo mass at any intermediate redshift $z$, we use linear interpolation in $\log(1+z)$ between the two adjacent output redshifts. \subsection{N-body simulations} \label{sec:nbody} For comparison, we also used MAHs extracted from two sets of N-body simulations (referred to as S1 and S2). These N-body simulations follow the evolution of $512^3$ particles in a periodic box of $100 \>h^{-1}{\rm {Mpc}}$ (S1) and $300 \>h^{-1}{\rm {Mpc}}$ (S2) on a side, assuming slightly different cosmologies (see Table 2 for details). The snapshot outputs of each simulation are evenly placed at 60 redshifts between $z=0$ and $z=15$ in $\ln(1+z)$ space. In each simulation and at each output, haloes are identified using the standard FOF algorithm with a linking length of $b=0.2$. Haloes obtained with this linking length have a mean overdensity of $\sim 180$. A halo at redshift $z_1$ is identified as a progenitor of a halo at $z_2 < z_1$ if more than half of its mass is included in the halo at $z_2$. The resulting lists of progenitor haloes are used to construct the MAHs. In our analysis, we only use haloes more massive than $10^{11}h^{-1}\>{\rm M_{\odot}}$ at the present time in S1 and haloes more massive than $10^{13}h^{-1}\>{\rm M_{\odot}}$ in S2. Thus, in each simulation only haloes with more than $\sim 600$ particles at $z=0$ are used, which allows us to trace the MAHs to sufficiently high redshift with sufficiently high resolution. For comparison, we also generate two sets of PINOCCHIO simulations, P1 and P2, using exactly the same numbers of particles and cosmologies as in S1 and S2, respectively (see Table 2). \subsection{Monte-Carlo simulations} \label{sec:moncar} We also generate MAHs using Monte-Carlo simulations based on the standard EPS formalism. We adopt the N-branch tree method with accretion suggested by Somerville \& Kolatt (1999, hereafter SK99). This method yields more reliable MAHs than, for example, the binary-tree method of Lacey \& Cole (1993).
In particular, it ensures exact mass conservation, and yields conditional mass functions that are in good agreement with direct predictions from EPS theory (i.e., the method is self-consistent). To construct a merger tree for a parent halo of mass $M$ the SK99 method works as follows. First a value for $\Delta S$ is drawn from the mass-weighted probability function \begin{equation} \label{probdS} P(\Delta S ,\Delta \omega) \; {{\rm d}}\Delta S = {1 \over \sqrt{2 \pi}} \; {\Delta \omega \over \Delta S^{3/2}} \; {\rm exp}\left[-{(\Delta \omega)^2 \over 2 \Delta S}\right] \; {{\rm d}}\Delta S \end{equation} (cf. equation~[\ref{probSS}]). Here $\Delta \omega$ is a measure for the time step used in the merger tree, and is a free parameter (see below). The progenitor mass, $M_p$, corresponding to $\Delta S$ follows from $\sigma^2(M_p) = \sigma^2(M) + \Delta S$. With each new progenitor it is checked whether the sum of the progenitor masses drawn thus far exceeds the mass of the parent, $M$. If this is the case the progenitor is rejected and a new progenitor mass is drawn. Any progenitor with $M_p < M_{\rm min}$ is added to the mass component $M_{\rm acc}$ that is considered to be accreted onto the parent in a smooth fashion (i.e., the formation history of these small mass progenitors is not followed further back in time). Here $M_{\rm min}$ is a free parameter that has to be chosen sufficiently small. This procedure is repeated until the total mass left, $M_{\rm left} = M - M_{\rm acc} - \sum M_p$, is less than $M_{\rm min}$. This remaining mass is assigned to $M_{\rm acc}$ and one moves on to the next time step. For the construction of MAHs, however, it is not necessary to construct an entire set of progenitors. Rather, at each time step, one can stop once the most massive progenitor drawn thus far is more massive than $M_{\rm left}$. This has the additional advantage that one does not have to define a minimum progenitor mass $M_{\rm min}$ (see van den Bosch 2002a for details). In principle, since the upcrossing of trajectories through a boundary is a Markov process, the statistics of progenitor masses should be independent of the time steps taken. However, the SK99 algorithm is based on the {\it single} halo probability (equation~[\ref{probdS}]), which does not contain any information about the {\it set} of progenitors that make up the mass of $M$. In fact, mass conservation is enforced `by hand', by rejecting progenitor masses that overflow the mass budget. As shown in van den Bosch (2002a), this results in a time step dependency, but only for relatively large time steps. For sufficiently small values of $\Delta \omega$ the algorithm outlined above yields accurate and robust results (see also SK99). Throughout this paper we adopt a time step of $\Delta z=0.05$. Our tests with different values of $\Delta z$ from $0.01$ to $0.05$ have shown that this time step is small enough to achieve stable results; that is, when we decrease the time step to $\Delta z=0.01$, the change in the average MAH is less than 1\%.
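For concreteness, the main-branch construction described above can be summarized in a few lines of code. The following Python sketch is ours, not part of PINOCCHIO or the SK99 code; the power-law $\sigma^2(M)$ and the Einstein-de Sitter-like mapping $\omega \simeq 1.686\,(1+z)$ are illustrative assumptions only, and a real application would tabulate $\sigma^2(M)$ and $\delta_c(z)/D(z)$ for the adopted cosmology. Note that drawing $u = \Delta\omega/\sqrt{\Delta S}$ as a half-normal variate samples equation~[\ref{probdS}] exactly.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

# Toy mass-variance relation (a hypothetical power law).
def sigma2(m):
    return (m / 1.0e13) ** -0.4

def mass_from_sigma2(s2):
    return 1.0e13 * s2 ** (-1.0 / 0.4)

def draw_dS(dw):
    # With u = dw/sqrt(dS) half-normal, dS = dw^2/u^2 samples the
    # mass-weighted kernel P(dS, dw) exactly.
    u = abs(rng.standard_normal())
    return dw * dw / (u * u)

def main_branch_step(M, dw):
    # One SK99-style step, following only the MAIN branch: draw
    # progenitors, rejecting any that overflows the remaining mass
    # budget, and stop once the most massive progenitor drawn so
    # far exceeds the unassigned mass (van den Bosch 2002a).
    s2p, mmax, left = sigma2(M), 0.0, M
    while mmax <= left:
        Mp = mass_from_sigma2(s2p + draw_dS(dw))
        if Mp > left:
            continue          # mass conservation enforced 'by hand'
        left -= Mp
        mmax = max(mmax, Mp)
    return mmax

def mah(M0, z_max=4.0, dz=0.05):
    # Trace M(z) back in time until z_max or 1% of the final mass;
    # omega ~ 1.686*(1+z) is an EdS-like approximation.
    z, M, hist = 0.0, M0, [(0.0, M0)]
    while z < z_max and M > 0.01 * M0:
        M = main_branch_step(M, 1.686 * dz)
        z += dz
        hist.append((z, M))
    return hist

for z, M in mah(1.0e13)[::20]:
    print("z = %4.2f   M/M0 = %6.3f" % (z, M / 1.0e13))
\end{verbatim}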
\subsection{Comparison} \label{sec:comp} \begin{table*} \begin{center} \caption{Reference PINOCCHIO and N-body simulations} \begin{tabular}{lccccccc} \hline\hline Simulation Name & $N_{\rm p}$ &Box size ($h^{-1}$ Mpc) & $M_{p} (h^{-1}\>{\rm M_{\odot}})$ & $\Omega_{\rm m}$ & $\Omega_\Lambda$ & $h$ & $\sigma_8$ \\ \hline\hline S1 (N-body) & $512^3$ &100 & $5.5 \times 10^{8}$ & 0.268 & 0.732 & 0.71 & 0.85\\ P1 (PINOCCHIO)& $512^3$ &100 & $5.5 \times 10^{8}$ & 0.268 & 0.732 & 0.71 & 0.85\\ \hline S2 (N-body)& $512^3$ &300 & $1.3 \times 10^{11}$ & 0.236 & 0.764 & 0.73 & 0.74 \\ P2 (PINOCCHIO)& $512^3$ &300 & $1.3 \times 10^{11}$ & 0.236 & 0.764 & 0.73 & 0.74\\ \hline\hline \end{tabular} \end{center} \medskip \end{table*} We now compare the MAHs obtained with all three methods discussed above. The upper panels of Fig.~\ref{fig1} plot the (unconditional) halo mass functions at four different redshifts, as indicated, obtained from 5 arbitrary PINOCCHIO runs with different box sizes in P0. Dashed lines correspond to the analytical halo mass functions obtained using the standard PS formalism (equation~[\ref{PS}]), while the solid lines indicate the mass functions of SMT01 based on ellipsoidal collapse. The latter have been shown to accurately match the mass functions obtained from N-body simulations (e.g., Sheth \& Tormen, 1999; SMT01). The symbols in the lower panels of Fig.~\ref{fig1} plot the differences between the PINOCCHIO and the SMT01 mass functions, while the dashed lines indicate the differences between the PS and the SMT01 mass functions. Clearly, the PINOCCHIO mass functions are in excellent agreement with those of SMT01, and thus also with those obtained from N-body simulations. In addition, Taffoni {et al.~} (2002) have shown that PINOCCHIO also accurately matches the {\it conditional} mass functions obtained from numerical simulations. We now investigate whether the actual MAHs obtained from PINOCCHIO are also in good agreement with the numerical simulations. Fig.~\ref{MAH} plots the average MAHs obtained from the PINOCCHIO, N-body and EPS simulations, for haloes with present-day masses in the following four mass ranges: $\log(M_0/h^{-1}\>{\rm M_{\odot}})=$11-12, 12-13, 13-14 and 14-15. For comparison, in each panel we also show 40 randomly selected MAHs from the PINOCCHIO simulations (P1 and P2). To ensure mass resolution, results for the low-mass bins (the two upper panels) are based on simulations with the small box size, i.e., S1 and P1. Results for the high-mass bins (the two lower panels) are based only on simulations with the large box size (S2 and P2) in order to obtain a large number of massive haloes. The thick solid curve in each panel corresponds to the average MAH obtained by averaging over all the haloes, in the mass range indicated, found in one of the PINOCCHIO simulations (P1 and P2). The thick dashed lines correspond to the average MAHs obtained from 3000 EPS Monte-Carlo simulations (properly weighted by the halo mass function). The thick dotted lines show the average MAHs obtained from the two N-body simulations (S1 and S2). In Fig.~\ref{MAH2}, a detailed comparison between these results is presented. As can be seen in Fig.~\ref{MAH2}, the average MAHs obtained with PINOCCHIO are in good agreement with those obtained from the N-body simulations (with differences smaller than 10\%). Note that there are uncertainties in the identification of dark haloes in N-body simulations using the FOF algorithm.
Sometimes two physically separated haloes can be linked together and identified as one halo if they are bridged by dark matter particles, which can change the halo mass by 5\% on average. The agreement between PINOCCHIO and simulation shown in Fig.~\ref{MAH2} is probably as good as one can hope for. The EPS model, however, yields MAHs that are systematically offset with respect to those obtained from the N-body simulations: the EPS formalism predicts that haloes assemble too late (see also van den Bosch 2002a; Lin, Jing \& Lin 2003; W02). Fig.~\ref{scatter} shows the ratio between the standard deviation of the MAHs, $S_{\rm M}(z)$, and the average MAH $M(z)$, as a function of redshift $z$. As one can see, the agreement between the PINOCCHIO and N-body simulations is also reasonably good. In summary, the Lagrangian Perturbation code PINOCCHIO yields halo mass functions (both conditional and unconditional) and mass assembly histories that are all in good agreement with N-body simulations. In particular, it works much better than the standard PS formalism, and yet is much faster to run than numerical simulations. PINOCCHIO therefore provides a unique and fast platform for accurate investigations of the assembly histories of a large, statistical sample of CDM haloes. \section{Halo formation times} \label{sec:ftime} Having demonstrated that the PINOCCHIO MAHs are in good agreement with those obtained from N-body simulations, we now use the suite of 55 PINOCCHIO simulations, P0, listed in Table~1 to investigate the assembly histories of a large sample of haloes spanning a wide range in halo masses. The assembly history of a halo can be parameterized by a formation time (or equivalently formation redshift), which characterizes when the halo assembles. However, since the assembly of a halo is a continuous process, different `formation times' can be defined, each focusing on a different aspect of the MAH. Here we define and compare the following four formation redshifts: \begin{enumerate} \item $z_{\rm half}$: This is the redshift at which the halo has assembled half of its final mass. This formation time has been widely used in the literature. \item $z_{\rm lmm}$: This is the redshift at which the halo experiences its last major merger. Unless stated otherwise we define a major merger as one in which the mass ratio between the two progenitors is larger than $1/3$. This definition is similar to $z_{\rm jump}$ defined in Cohn \& White (2005). Major mergers may have played an important role in transforming galaxies and in regulating star formation in galaxies. Their frequency is therefore important to quantify. \item $z_{\rm vvir}$: This is the redshift at which the virial velocity of a halo, $V_{\rm vir}$, defined as the circular velocity at the virial radius, reaches its current value, $V_0$, for the first time. Since $V_{\rm vir}$ is a measure for the depth of the potential well, $z_{\rm vvir}$ characterizes the formation time of the halo's gravitational potential. \item $z_{\rm vmax}$: This is the redshift at which the halo's virial velocity reaches its maximum value over the entire MAH. As we show below, the value of $V_{\rm vir}$ is expected to increase (decrease) with time, if the time scale for mass accretion is shorter (longer) than the time scale of the Hubble expansion. Therefore, $z_{\rm vmax}$ indicates the time when the MAH transits from a fast accretion phase to a slow accretion phase.
\end{enumerate} In an N-body simulation one can infer the virial velocity of a halo from its internal structure. In the case of PINOCCHIO simulations, however, no information regarding the density distribution of haloes is available. However, we may use the fact that CDM haloes always have a particular (redshift and cosmology dependent) overdensity. This allows us to define the virial velocity at redshift $z$ as \begin{equation} \label{eq:vcz} V_{\rm vir}(z) = \sqrt{G M_{\rm vir} \over R_{\rm vir}} = \left[ \frac{\Delta_{\rm vir}(z)}{2}\right]^{1/6} \left[G \, M_{\rm vir}(z) \, H(z)\right]^{1/3} \end{equation} Here $M_{\rm vir}$ and $R_{\rm vir}$ are the virial mass and virial radius of the halo, respectively, and $H(z)$ is the Hubble parameter. The quantity $\Delta_{\rm vir}(z)$ is the density contrast between the mean density of the halo and the critical density for closure, for which we use the fitting formula of Bryan \& Norman (1998), \begin{equation} \label{delc} \Delta_{\rm vir}(z) = 18 \pi^2 + 82 [\Omega_{\rm m}(z)-1] - 39 [\Omega_{\rm m}(z)-1]^2 \end{equation} \begin{figure} \vbox{ \psfig{file=typMAH.eps,angle=270,width=1.0\hsize} \psfig{file=typvc.eps,angle=270,width=1.0\hsize} \caption{{\it Upper panel:} the MAH of a randomly chosen halo with a mass of $1.02 \times 10^{13} h^{-1}\>{\rm M_{\odot}}$. Various characteristic events during the assembly of this halo are indicated: $z_{\rm vmax}$ (open triangle), $z_{\rm half}$ (open circle), and $z_{\rm vvir}$ (cross). The solid dots with an arrow indicate major mergers (those with a mass ratio larger than $1/3$). {\it Lower panel:} same as in upper panel, except that here the evolution of the halo virial velocity is shown.} \label{fig:zformex} } \end{figure} As an illustration, Fig.~\ref{fig:zformex} plots the MAH, $M(z)/M_0$ (upper panel), and the history of the virial velocity, $V_{\rm vir}(z)/V_0$ (lower panel) for a randomly selected halo (with $M_0 = 1.02 \times 10^{13} h^{-1} \>{\rm M_{\odot}}$). All major merger events are marked by a solid dot plus arrow. The last major merger occurs at $z_{\rm lmm}= 1.60$. The other formation redshifts, $z_{\rm half}=1.59$, $z_{\rm vvir}=3.77$, and $z_{\rm vmax}=1.23$ are marked by an open circle, a cross, and an open triangle, respectively. \begin{figure} \centerline{\psfig{file=zformcorr.ps,angle=270,width=1.0\hsize}} \caption{The correlations between various halo formation redshifts for haloes with present day masses in the range $10^{11} h^{-1} \>{\rm M_{\odot}} \leq M \leq 10^{12} h^{-1} \>{\rm M_{\odot}}$. The value of $r_s$ in each panel shows the corresponding Spearman rank-order correlation coefficient. Due to the finite time resolution in the PINOCCHIO simulations, in some cases the values of two formation times can be the same.} \label{fig:zformcorr} \end{figure} Fig.~\ref{fig:zformcorr} plots the correlations between the various formation redshifts, for haloes with masses in the range $10^{11} - 10^{12} h^{-1}\>{\rm M_{\odot}}$. The value of $r_s$ in each panel shows the corresponding Spearman rank-order correlation coefficients. Clearly, there is significant correlation among all the formation redshifts, but the scatter is quite large. This demonstrates that these different formation times characterize different aspects of a given MAH. Unlike N-body simulations, which output snapshots at arbitrary times, PINOCCHIO only produces output when a merger occurs, and the merger is treated as instantaneous. Consequently, some formation times can have exactly the same value in PINOCCHIO simulations.
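As a concrete illustration of equations~[\ref{eq:vcz}] and [\ref{delc}] above, the following minimal Python sketch (ours; it assumes the flat $\Lambda$CDM cosmology of the P0 ensemble and works with masses in ${\rm M_{\odot}}$ without the $h$ factor) evaluates $\Delta_{\rm vir}$ and $V_{\rm vir}$ for the example halo of Fig.~\ref{fig:zformex}:

\begin{verbatim}
import numpy as np

Om0, OL0, h = 0.3, 0.7, 0.7      # P0 cosmology (Table 1)
G = 4.301e-9                     # Mpc (km/s)^2 / Msun

def Hz(z):
    # Hubble parameter in km/s/Mpc for flat LambdaCDM
    return 100.0 * h * np.sqrt(Om0 * (1 + z) ** 3 + OL0)

def Omega_m(z):
    return Om0 * (1 + z) ** 3 / (Om0 * (1 + z) ** 3 + OL0)

def Delta_vir(z):
    # Bryan & Norman (1998) fitting formula, equation [delc]
    x = Omega_m(z) - 1.0
    return 18 * np.pi ** 2 + 82 * x - 39 * x ** 2

def V_vir(M, z):
    # Equation [eq:vcz]; M in Msun (no h factor), result in km/s
    return (Delta_vir(z) / 2.0) ** (1.0 / 6.0) \
           * (G * M * Hz(z)) ** (1.0 / 3.0)

M0 = 1.02e13 / h   # the example halo of Fig. [fig:zformex], in Msun
print("Delta_vir(0)   = %.1f" % Delta_vir(0.0))
print("V_vir(M0, z=0) = %.0f km/s" % V_vir(M0, 0.0))
\end{verbatim}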
Note that the correlation shown in the lower left panel is quite similar to that obtained by Cohn \& White (2005) for simulated clusters of galaxies. Note also that typically $z_{\rm vvir} > z_{\rm half}$ and $z_{\rm vvir} > z_{\rm lmm}$. This shows that haloes {\it in this mass range} established their potential wells before they accreted a major fraction of their mass. The last major merger typically occurred well before $z_{\rm half}$, which indicates that most of that mass has been accreted in a fairly smooth fashion (see also W02 and Zhao {et al.~} 2003a). \begin{figure} \centerline{\psfig{file=zform.ps,angle=270,width=1.0\hsize}} \caption{The probability distributions of $z_{\rm half}$ (dotted lines), $z_{\rm vvir}$ (dashed lines), $z_{\rm vmax}$ (dot-dashed lines) and $z_{\rm lmm}$ (thick solid lines). Results are shown for four different mass bins, as indicated in each panel. Note that the scale of the four panels is different! See text for a detailed discussion.} \label{fig:zform1} \end{figure} \begin{figure} \centerline{\psfig{file=fracm.ps,angle=270,width=1.0\hsize}} \caption{The distributions of the halo mass fraction at various formation times. Different line-styles correspond to different definitions of the formation time, as indicated in the upper left-hand panel. As in Fig.~\ref{fig:zform1}, different panels correspond to different halo mass bins, as indicated.} \label{fig:zform2} \end{figure} Fig.~\ref{fig:zform1} shows the distributions of the four formation redshifts defined above. Results are shown for four different mass bins, as indicated. For all four formation redshifts, the median is higher for haloes of lower masses. This reflects the hierarchical nature of the assembly of dark matter haloes: less massive systems assemble (`form') earlier. Note that the distribution of formation times is also broader for lower mass haloes. For haloes with $M_0 \ga M^{*} \simeq 10^{13} h^{-1} \>{\rm M_{\odot}}$\footnote{Here $M^{*}$ is the characteristic non-linear mass defined by $\sigma(M^{*}) = \delta_{\rm crit}^{0}$, with $\delta_{\rm crit}^{0}$ the critical overdensity for collapse at $z=0$.}, all the distribution functions except that of $z_{\rm half}$ are peaked at, or very near to, $z = 0$. This shows that the majority of these haloes are still in their fast accretion phase, so that their potential wells are still deepening with time. On the other hand, haloes with $M_0 \ll M^{*}$ typically have $z_{\rm vvir} > z_{\rm half}$ and $z_{\rm vvir} > z_{\rm lmm}$ (cf. Fig.~\ref{fig:zformcorr}), indicating that their potential wells have already been established, despite the fact that they continue to accrete appreciable amounts of mass. Fig.~\ref{fig:zform2} shows the distributions of the ratio $M(z_{\rm form}) / M_0$, with $z_{\rm form}$ one of our four formation redshifts. By definition, the distribution of $M(z_{\rm half}) / M_0$ is a $\delta$-function at $M(z_{\rm form})/M_0 = 0.5$, and is therefore not shown. For haloes with $M_0 < 10^{13} h^{-1} \>{\rm M_{\odot}}$, the virial velocity has already reached the present day value when the halo has only assembled 10\%-20\% of its final mass. Thus, these systems assemble most of their mass without significant changes to the depth of their potential well. Only for massive haloes with $M_0 \ga 10^{14} h^{-1} \>{\rm M_{\odot}}$ is the median of $M(z_{\rm vvir}) / M_0$ larger than 0.5, implying that they have assembled the majority of their present day mass through major (violent) mergers.
If we define major mergers as those with a progenitor mass ratio that is at least $1/3$, the distribution of $M(z_{\rm lmm})/M_0$ is remarkably flat. This implies that some haloes accrete a large amount of mass after their last major merger, while for others the last major merger signals the last significant mass accretion event. Remarkably, the distribution of $M(z_{\rm lmm})/M_0$ is virtually independent of $M_0$. For low mass haloes, the flatness of the distribution of $M(z_{\rm lmm})/M_0$ simply reflects the broad distribution of $z_{\rm lmm}$. However, for massive haloes with $M \ga M^{*}$, the distribution of $z_{\rm lmm}$ is fairly narrow. Therefore, for these haloes the flatness of the $M(z_{\rm lmm})/M_0$ distribution implies that, since their last major merger, they have accreted a significant amount of mass due to minor mergers. Since the last major merger occurred fairly recently, this is another indication that massive haloes are still in their fast accretion phase. \section{The properties of major mergers} \label{sec:majmerprop} During the assembly of dark matter haloes, major mergers play an important role. Not only does a major merger add a significant amount of mass, it also deepens the halo's potential well. Furthermore, in current models of galaxy formation, a major merger of two galaxy-sized haloes is also expected to result in a merger of their central galaxies, probably triggering a starburst and leading to the formation of an elliptical galaxy. Therefore, it is important to quantify the frequency of major mergers during the formation of CDM haloes. \begin{figure} \centerline{\psfig{file=checkNjump.ps,angle=270,width=1.0\hsize}} \caption{The median, $\langle N_{\rm jump}\rangle$, and dispersion, $\sigma_{N_{\rm jump}}$, of the distribution of the number of mass jumps, $N_{\rm jump}$, in the MAHs, versus $n$ (see text for definitions). Left panels show the comparison between P1 and S1, while right panels show the comparison between P2 and S2. Note that the agreement between the PINOCCHIO simulations and $N$-body simulations is remarkable and the mass dependence is rather weak.} \label{njump} \end{figure} As mentioned above, in a PINOCCHIO simulation mergers of dark matter haloes are treated as instantaneous events, and the masses of the merger progenitors are recorded whenever a merger happens. This makes it very convenient to identify mergers in PINOCCHIO. On the other hand, in an $N$-body simulation haloes are identified only in a number of snapshots, and so the accuracy of identifying mergers is limited by the time intervals of the snapshots. For example, if we define major mergers by looking for haloes for which the mass ratio between their second-largest and largest progenitors exceeds 1/3 in the last snapshot, we may miss major mergers in which the two progenitors were themselves assembled between the two snapshots. On the other hand, if we identify major mergers in a simulation by looking for haloes whose masses increase by a factor between 1/4 and 1 in the next snapshot, we will overestimate the number of major merger events, because some of the haloes may have increased their masses by accretion of small haloes rather than through major mergers. In the simulations used here (S1 and S2), the time intervals between successive snapshots are about 0.3-0.6 Gyr, comparable to the time scales of major mergers, and the two definitions of major mergers described above lead to a factor of 2 difference in the number of major mergers.
Because of this, it is difficult to make a direct comparison between PINOCCHIO and N-body simulations in their predictions for the number of major mergers. In order to check the reliability of PINOCCHIO in predicting the number of major mergers, we use quantities that are related to the number of major mergers but yet can be obtained from both our N-body and PINOCCHIO simulations. We first construct PINOCCHIO haloes at each of the snapshots of our N-body simulations. We then follow the MAH of each present-day halo using the snapshots and identify the number of events in which the mass of a halo increases by a factor exceeding $1/n$ between two successive snapshots, where $n$ is an integer used to specify the heights of the jumps. In practice, we trace the MAH backward in time until the mass of the halo is 1\% of the final halo mass. Since exactly the same analysis can also be carried out for the N-body simulations, we can compare, for a given $n$ and for haloes of given mass at the present time, the statistics of the number of jumps, $N_{\rm jump}$, predicted by PINOCCHIO simulations with that given by the N-body simulations. We found that the distribution of $N_{\rm jump}$ for a given $n$ can be well fit by a Gaussian distribution, and in Fig.~\ref{njump} we plot the median $\langle N_{\rm jump}\rangle $ and standard deviation $\sigma_{N_{\rm jump}}$ versus $n$, in several mass bins. The agreement between PINOCCHIO and N-body simulations is remarkably good. Although $N_{\rm jump}$ is not exactly the number of major mergers, the good agreement between PINOCCHIO and N-body simulations makes us believe that it is reliable to use PINOCCHIO to make predictions for the statistics of major mergers. \begin{figure} \centerline{\psfig{file=Nmm.Dist.ps,angle=270,width=1.0\hsize}} \caption{The distribution of the number of major mergers (those with a mass ratio larger than $1/3$) in our PINOCCHIO simulations. Lines in different styles represent different mass bins. Note that the distributions are virtually independent of halo mass.} \label{mm} \end{figure} \begin{figure} \centerline{\psfig{file=MMstat.ps,angle=270,width=1.0\hsize}} \caption{Distribution of the number of mergers (in PINOCCHIO simulations) with a mass ratio larger than $1/3$ (upper left-hand panel), $1/4$ (upper right-hand panel), and $1/6$ (lower left-hand panel). In all three cases all haloes with masses in the range from $10^{11} h^{-1} \>{\rm M_{\odot}}$ to $10^{15} h^{-1}\>{\rm M_{\odot}}$ are used. The dotted curves show the best-fit Gaussians, the median and standard deviation of which are indicated in the lower right-hand panel.} \label{fig:mmstat} \end{figure} In order to investigate the statistics of major mergers in detail, we count the number of major mergers for each of the haloes in the ensemble of simulations P0. Here again we only trace a halo back to a time when the mass of its main progenitor is 1\% of the halo's final mass. This choice of lower mass limit is quite arbitrary. However, some limit is necessary, because otherwise there will be a large number of major mergers involving progenitors with excessively small masses at very early times. Furthermore, this mass limit is also the one we use in defining $N_{\rm jump}$. The large number of haloes in the ensemble ensures that each mass bin contains about 2000 haloes. Fig.~\ref{mm} plots the distributions of the number of major mergers (with a progenitor mass ratio $\ge 1/3$) for haloes of different masses at the present time.
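Before discussing Fig.~\ref{mm} in detail, we note for concreteness that the jump-counting statistic introduced above amounts to a few lines of code. The sketch below is our own illustration; we read `increases by a factor exceeding $1/n$' as $\Delta M/M > 1/n$ between successive snapshots, a convention that may differ in detail from the one actually used.

\begin{verbatim}
import numpy as np

def count_jumps(m_snap, n):
    # m_snap: main-branch masses at successive snapshots, earliest
    # first.  Only the part of the MAH with M > 1% of the final
    # mass is used, as in the text.
    m = np.asarray(m_snap, dtype=float)
    m = m[m > 0.01 * m[-1]]
    growth = m[1:] / m[:-1]
    return int(np.sum(growth > 1.0 + 1.0 / n))

# Toy MAH sampled at snapshots (earliest -> latest):
mah = [2e11, 5e11, 6e11, 1.1e12, 1.2e12, 2.0e12, 2.1e12]
print(count_jumps(mah, n=3))   # -> 3 jumps with dM/M > 1/3
\end{verbatim}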
A halo experiences about 1 to 5 major mergers during its mass assembly history, with an average of about 3. Note that the $N_{\rm mm}$-distributions are virtually independent of halo mass. As we have shown in Section~\ref{sec:ftime}, however, the redshifts at which these mergers occur do depend strongly on halo mass: while most major mergers occur before $z \simeq 2$ for galaxy-sized haloes, they occur much more recently in the more massive, cluster-sized haloes. \begin{figure} \centerline{\psfig{file=mmfit1.ps,angle=270,width=1.0\hsize}} \caption{The median (upper panel) and dispersion (lower panel) of the number distributions of mergers with a mass ratio $M_1/M_2 \geq 1/n$, as a function of $n$. Steeper lines in each panel are the data from all progenitors (summing over all branches of the merger trees) while flatter lines are the results from the main branch. In both cases, we have divided haloes into two mass bins as indicated in each panel. Open triangles connected with dashed lines show the results for haloes with masses $<10^{13}h^{-1}{\rm M}_\odot$, while open circles connected with dotted lines show the results for haloes with masses $\ge 10^{13}h^{-1}{\rm M}_\odot$. The solid lines are the linear regressions of the data drawn from the whole halo catalogue, with the slopes and zero points indicated.} \label{fig:mmfit} \end{figure} As pointed out above, the progenitor mass ratio used to define a major merger is quite arbitrary. We therefore also investigate the frequency of mergers with a mass ratio larger than $1/n$ with $n=2,4,5,6,7,8$ (in addition to the $n=3$ discussed thus far). We find that even with these values of $n$ the distributions of $N_{\rm mm}$ are still virtually independent of halo mass. This allows us to consider a single $N_{\rm mm}$-distribution for haloes of all masses. Fig.~\ref{fig:mmstat} plots these distributions for three different values of $n$ as indicated. Each of these distributions is reasonably well described by a Gaussian function (dotted curves). Note that the use of a Gaussian function is not entirely appropriate, because $N_{\rm mm}$ cannot be negative. However, since the median value of $N_{\rm mm}$ is, in all cases, significantly larger than the width of the distribution, a Gaussian fit still provides an adequate description. To show how the $N_{\rm mm}$-distribution depends on $n$, we plot, in Fig.~\ref{fig:mmfit}, the median and the dispersion of this distribution as functions of $n$. As one can see, both the median and the dispersion increase roughly linearly with $n$, but the slope for the median ($\sim 1$) is much larger than that for the dispersion ($\sim 0.1$). Note that the results for haloes with masses $<10^{13}h^{-1}\>{\rm M_{\odot}}$ and $>10^{13}h^{-1}\>{\rm M_{\odot}}$ are similar, suggesting that the distribution of the number of major mergers is quite independent of halo mass. Thus far we have only focused on the (major) merger events that merge into the main branch of the merger tree. For comparison, we also consider the merger rates of {\it all} progenitors, independent of whether they are part of the main branch or not. As before we only consider progenitors with masses in excess of one percent of the final halo mass. The steeper lines in Fig.~\ref{fig:mmfit} show the median and dispersion of the number of such mergers as functions of $n$. Here again, both the median and dispersion have roughly linear relations with $n$.
The median number of such major mergers is roughly three times as high as that of major mergers associated with the main branch, and the dispersion increases with $n$ much faster. \begin{figure} \centerline{\psfig{file=pkNmmBA.ps,angle=270,width=1.0\hsize}} \caption{The probability distributions of the number of major mergers (those with a mass ratio larger than $1/3$) before (solid lines) and after (dashed lines) $z_{\rm vmax}$. Note that the vast majority of major mergers occur at $z > z_{\rm vmax}$, demonstrating that the growth of the halo's virial velocity is mainly driven by major mergers.} \label{VpkNmmBA} \end{figure} As mentioned above, major mergers are expected to be accompanied by rapid changes of the halo's potential well, due to a resulting phase of violent relaxation. To show this relation in more detail, Fig.~\ref{VpkNmmBA} shows the distributions of the number of major mergers (defined with $n=3$) before and after the formation redshift $z_{\rm vmax}$. For haloes in all mass ranges, only a very small fraction (less than 5\%) experiences a major merger at $z<z_{\rm vmax}$. This demonstrates once again that the growth of the virial velocity is mainly caused by major mergers. This result may have important implications for understanding the structure of dark matter haloes. As shown in Lu et al. (2006), if the buildup of the potential well associated with a dark matter halo is through major mergers, then the velocities of dark matter particles may be effectively randomized, a condition that may lead to a density profile close to the universal density profile observed in $N$-body simulations. Also, if galaxy disks are formed during a period when no major mergers occur, our result suggests that the potential wells of the haloes of spiral galaxies should change little during disk formation. \section{Conclusions} \label{sec:concl} In the current paradigm, galaxies are thought to form in extended cold dark matter haloes. A detailed understanding of galaxy formation, therefore, requires a detailed understanding of how these dark matter haloes assemble. Halo formation histories are typically studied either using numerical simulations, which are time consuming, or using the extended Press-Schechter formalism, which has been shown to be of insufficient accuracy. In this paper, we have investigated the growth history of dark matter haloes using the Lagrangian perturbation code PINOCCHIO, developed by Monaco {et al.~} (2002a). We have demonstrated that the mass assembly histories (MAHs) obtained by PINOCCHIO are in good agreement with those obtained using N-body simulations. Since PINOCCHIO is very fast to run, does not require any special hardware such as supercomputers or Beowulf clusters, and does not require any labor intensive analysis, it provides a unique and powerful tool to study the statistics and assembly histories of large samples of dark matter haloes for different cosmologies. Confirming earlier results based on N-body simulations (e.g. W02; Zhao {et al.~} 2003a,b), we find that typical MAHs can be separated into two phases: an early, fast accretion phase dominated by major mergers, and a late, slow accretion phase during which the mass is mainly accreted from minor mergers. However, the MAHs of individual haloes are complicated, and therefore difficult to parameterize uniquely by a single parameter.
We therefore defined four different formation times: the time when a halo acquires half of its final mass, the time when the halo's potential well is established, the time when a halo transits from the fast accretion phase to the slow accretion phase, and the time when a halo experiences its last major merger. Using a large number of MAHs of haloes spanning a wide range in masses, we studied the correlations between these four formation redshifts, as well as their halo mass dependence. Although all four formation times are correlated, each correlation reveals a large amount of scatter. For all four formation redshifts, it is found that more massive haloes assemble later, expressing the hierarchical nature of structure formation. Haloes with masses below the characteristic non-linear mass scale, $M^{*}$, establish their potential wells well before they have acquired half of their present day mass. The potential wells associated with more massive haloes, however, continue to deepen even at the present time. The time when a halo reaches its maximum virial velocity roughly coincides with the time where the MAH transits from the fast to the slow accretion phase. If we define major mergers as those with a progenitor mass ratio larger than $1/3$, then on average each halo experiences about 3 major mergers after its main progenitor has acquired one percent of its present day mass. In addition, we found that the number of mergers with a progenitor mass ratio above $1/n$ experienced by the main branch of the merger tree increases roughly linearly with $n$. For the whole merger tree, the number of major mergers is about 3 times that of the major mergers in the main branch. The distribution of the number of major mergers a halo has experienced is virtually independent of its mass, and the ratio between the halo mass immediately after the last major merger and the final halo mass has a very broad distribution, implying that the role played by major mergers in building up the final halo can differ significantly from system to system. \section*{Acknowledgments} We are grateful to Pierluigi Monaco, Tom Theuns and Giuliano Taffoni for making their wonderful code PINOCCHIO publicly available with an easy to understand manual, and to Xi Kang for letting us share his EPS merging tree code. We also thank the Shanghai Supercomputer Center, the grants from NSFC (No. 10533030) and Shanghai Key Projects in Basic Research (No. 05XD14019) for the N-body simulations used in this paper. HJM would like to acknowledge the support of NSF AST-0607535, NASA AISR-126270 and NSF IIS-0611948. FvdB acknowledges useful and lively discussions with Risa Wechsler during an early phase of this project. \bigskip
\section{Introduction} Alon and Cederbaum recently presented a pathway from condensation to fermionization, in which the evolution of the density profile of a cold bosonic system with increasing scattering length was followed [1]. With increasing scattering length the density profile acquires more and more oscillations, until their number eventually equals $N$, the number of bosons. Once the number of oscillations equals the number of bosons, the system becomes fermionized. They showed that the ground state and density profile of a bosonic system strongly depend on the shape of the trap potential. This allows one to design systems with an intriguing ground state, e.g., one for which one part is condensed and the other one is fermionized. \newline However, the present author, based on experience in quantum kinetic theory (especially the Uehling-Uhlenbeck approach) [2], would like to make some remarks about their claim that the pathway from condensation to fermionization depends on the shape of the potential. When more particles are trapped in the double-well trap employed there, does the physical picture obtained above remain essentially the same? Alon and Cederbaum followed the pathway from weakly- to strongly-interacting bosons in real space and obtained a wavefunction picture of the ground state of cold atoms in the trap. This allowed them to monitor directly the fine changes in the spatial density of interacting bosons as the inter-particle interaction ($\lambda_0$) increases (say, at $\lambda=\lambda_0 (N-1)/(2\pi)=1$ the energy functional is minimal when all atoms reside in one orbital, thus recovering Gross-Pitaevskii theory [1]; $N$ is the number of bosons). In fact, Alon and Cederbaum chose as an illustrative 1D example the potential profile $V(x)$ (cf. Fig. 1 in [1]), an asymmetric double-well potential with the left well being deeper and narrower than the right one. According to their numerical results, from $\lambda \approx 155$ (or $\gamma\equiv \lambda_0/2n \sim 9$) on, all bosons reside in different orthogonal orbitals: the bosonic system has become fully fermionized! \newline Considering the spin characteristics and the Pauli blocking effect in the quantum-mechanical sense [2-4], the above-mentioned results should rather be attributed to the intrinsic particle statistics. To remind the reader of the definition of Pauli blocking, we cite below the relevant work introducing the Uehling-Uhlenbeck equation (UUE) [2-4], which extends the applicability of the Boltzmann equation to particles that obey non-conventional statistics. Consider particles (mass $m$) which obey Bose-Einstein or Fermi-Dirac statistics. These particles interact elastically by means of binary encounters (cross section $\sigma$). In the space-homogeneous case the UUE reads as follows ($\Lambda$=1 for bosons, $\Lambda$ = $-1$ for fermions; $\Lambda$ is the parameter that characterizes the Pauli blocking effect): \begin{displaymath} \frac{\partial f}{\partial t} = \frac{1}{h^3} \int d{\bf p}_1 d {\bf \Sigma}' [f' f'_1 (1 + \Lambda f)(1 + \Lambda f_1)-f f_1(1 + \Lambda f')(1 + \Lambda f'_1 )] \times g \sigma(g;\xi) ; \end{displaymath} where $f = f({\bf p}; t)$ is the quantum distribution function (dimensionless) and $f' = f({\bf p}'; t)$; $f_1=f({\bf p}_1; t)$; $f'_1 =f({\bf p}'_1 ; t)$.
We denote by ${\bf p}$ and ${\bf p}_1$ the momenta before collision, while ${\bf p}'$ and ${\bf p}'_1$ are the momenta after collision: $ {\bf p}' = ({\bf p}+{\bf p}_1+m g{\bf \Omega}')/2$; $ {\bf p}'_1 = ({\bf p}+{\bf p}_1-m g {\bf \Omega}')/2$; where $g=|{\bf p}-{\bf p}_1|/m$ and ${\bf \Omega}'$ is a unit vector. The cross section $\sigma$, in general, depends on both $g$ and $\xi$, where $\xi ={\bf \Omega}\cdot {\bf \Omega}'$ and ${\bf \Omega} = ({\bf p}- {\bf p}_1)/|{\bf p}- {\bf p}_1|$. \newline The tuning of $\lambda$ in [1] is nothing but the tuning of the rarefaction parameter in [4] together with the Pauli-blocking parameter (say, $B$ in [4]; cf. Chu in [5]). There might, however, be resonances (or localizations) once $\lambda$ increases to the order of magnitude of 1000 or more [5]. For bosons, the parity of particle and antiparticle is the same, in contrast to fermions, where the parity reverses between antiparticles. The other issue is the effective mass for different systems characterized by, say, $\lambda$. To conclude in brief, the Pauli blocking effect and the possible resonance (or localization) are crucial for those results in [1], which need to be further checked.
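To make the role of the $\Lambda$-dependent statistical factors in the UUE concrete, the following minimal Python sketch (our own illustration, with purely hypothetical parameter values) evaluates the gain-term factor $(1 + \Lambda f')(1 + \Lambda f'_1)$ for equilibrium Bose and Fermi distributions; values above unity signal Bose enhancement, values below unity signal Pauli blocking.

\begin{verbatim}
import numpy as np

def f_eq(eps, mu, T, Lam):
    # Equilibrium occupation: Bose-Einstein (Lam=+1) or
    # Fermi-Dirac (Lam=-1); eps, mu, T in the same arbitrary units.
    return 1.0 / (np.exp((eps - mu) / T) - Lam)

def gain_factor(f1, f2, Lam):
    # Statistical factor (1 + Lam f')(1 + Lam f'_1) multiplying
    # the gain term of the UUE collision integral.
    return (1.0 + Lam * f1) * (1.0 + Lam * f2)

T, mu = 1.0, -0.1        # hypothetical, purely illustrative
for Lam, name in ((+1, "bosons  "), (-1, "fermions")):
    fa, fb = f_eq(0.3, mu, T, Lam), f_eq(0.5, mu, T, Lam)
    print(name, "gain factor = %.3f" % gain_factor(fa, fb, Lam))
\end{verbatim}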
\section{Introduction} The Milky Way is a galaxy of stars radiating most of their energy at optical wavelengths. But from stellar birth to stellar death, from the vast reaches of interstellar space to the tiniest of stellar corpses, radio and X-ray observations provide crucial diagnostics in our quest to understand the structure and evolution of our Galaxy and its denizens. These two spectral regimes are particularly crucial for studying massive stars: throughout their lives, stellar Lyman continuum photons produce \ion{H}{2} regions with their associated free-free radio emission, while stellar wind shocks produce X-rays; in death, the remnants of supernovae are the brightest radio and X-ray sources in the Galaxy. Furthermore, the Galaxy is largely transparent in the radio and hard X-ray bands, giving us an unobstructed view through the plane, even at $b=0\arcdeg$. We are in the process of conducting a large-scale survey of the Galactic plane at X-ray wavelengths with XMM, the first results of which have been reported elsewhere (Hands et al.\ 2004). Here we describe a complementary effort to provide a new, high-resolution, high-sensitivity view of centimetric radio emission in the Milky Way. While significant progress has been made recently in surveying the extragalactic radio sky ({\it e.g.}, NVSS, SUMSS, and {\sl FIRST}), the Galactic plane still remains inadequately explored. Even though the NVSS (Condon et al.\ 1998) covered the plane, it did so in snapshot mode with $uv$ coverage insufficient to achieve high dynamic range (typical values achieved are $\sim$30:1). The Canadian Galactic Plane Survey project (English et al.\ 1998) is covering a large region of the plane in the second quadrant with better dynamic range, but with a resolution of only $65\arcsec$ and limited sensitivity in the continuum. The third and fourth Galactic quadrants are being surveyed using Parkes and the Australia Telescope Compact Array in the Southern Galactic Plane Survey (McClure-Griffiths et al.\ 2001), although the angular resolution is only $\sim 2\arcmin$ and the $5\sigma$ detection threshold is $\sim 35$~mJy in the continuum. Fifteen years ago, we used observations originally taken by Dicke et al.\ (unpublished) in the B-configuration of the Very Large Array\footnote {The VLA is a facility of the National Radio Astronomy Observatory which is operated by Associated Universities, Inc. under cooperative agreement with the National Science Foundation.} (supplemented by additional 20~cm and 6~cm time awarded to us) to produce a catalog of over 4000 compact sources within a degree of the plane in the longitude range $-20\arcdeg<l<120\arcdeg$ (Becker et al.\ 1990; Zoonematkermani et al.\ 1990; White, Becker, and Helfand 1991; Helfand et al. 1992). Although the original analysis provided maps that were complete only to $\sim 20$ mJy at 20 cm, this remains the highest resolution and most sensitive census of compact sources over a large segment of the Galaxy. Comparison with the IRAS survey led us to identify more than 450 ultracompact \ion{H}{2} regions, over 100 new planetary nebulae (which fill in the gap near $b=0$ caused by extinction in optical searches -- Kistiakowsky and Helfand 1995), and, along with 90 cm maps we obtained covering a small portion of the longitude range, more than a dozen new supernova remnant candidates.
Motivated by the torrent of new, high-resolution mid-infrared data from the GLIMPSE Legacy survey with Spitzer (Benjamin et al.\ 2003) and taking advantage of modern data analysis algorithms developed for our FIRST survey (Becker, White \& Helfand 1995; White et al.\ 1997), we have recently completed a reanalysis of all of the existing snapshot data (over 3000 individual pointings including some new data designed to fill holes and improve quality in poorly covered regions). This work yielded 6 and 20~cm catalogs with over 6000 entries and flux density thresholds nearly a factor of two below those of the original analysis (White, Becker \& Helfand 2005). Nonetheless, the single-configuration, snapshot nature of these observations renders the data problematic for all but the most compact radio sources in the plane. A high-sensitivity, high-resolution, high-dynamic-range map of the radio continuum emission from the Galactic plane is now possible with a relatively modest investment of telescope time owing to advances in the VLA receivers over the last decade, the implementation of the highly efficient ``survey mode'' slewing algorithm, and improvements to the AIPS software package. We have begun to make this possibility a reality by producing a $5\arcsec$-resolution image of 27 degrees of Galactic longitude in the first quadrant. Our plan over the coming several years is to extend this survey over the entire Spitzer GLIMPSE longitude range in the north, covering $5\arcdeg<l<65\arcdeg$. This Multi-Array Galactic Plane Imaging Survey or MAGPIS (a moniker appropriate for the authors whose careers have been based on collecting random shiny objects gathered from overflights of much of the celestial sphere in several regions of the electromagnetic spectrum) is designed to provide a definitive archive of the Galactic sky at 20~cm. In Section 2 we describe the survey parameters and the data acquired to date; in addition, we discuss complementary datasets we have used in our analysis and introduce the MAGPIS website, which offers comprehensive access to all of our data products. Section 3 outlines our analysis strategy, presents the imaging results, and provides a statistical characterization of the survey sensitivity threshold and dynamic range. We then discuss our detection algorithms for both discrete and diffuse sources, and present the source catalogs as well as an atlas for all extended emission regions (\S4). Section 5 includes a discussion of a preliminary comparison between MAGPIS and the MSX mid-IR data, and previews the prospects for a more complete census of \ion{H}{2} regions in the first quadrant. This is followed by a discussion (\S6) of the nonthermal emission regions detected in our survey, including the discovery of several dozen new supernova remnant candidates. We summarize our results in Section 7. \section{The MAGPIS Survey: Design and Data Acquisition} As noted above, radio emission is a prominent signature of massive stars; \ion{H}{2} regions, pulsars, supernova remnants, and black hole binaries are all the products of O and early B stars that have a small scale height. This fact, coupled with constraints on the total observing time available, has led us to restrict our Galactic latitude coverage to $|b|<0\fdg8$. This is greater than the OB star scale height (Reed 2000) for all distances beyond 3~kpc and covers a region up to $z=\pm 230$~pc at the solar circle on the far side of the Galaxy (we adopt $R_{\odot} = 8.5$~kpc throughout).
\subsection{The 20~cm data} Our first tranche of Galactic longitude, $32\arcdeg>l>19\arcdeg$, was chosen to complement our first X-ray data set and to explore the tangent to the Scutum spiral arm. The second segment we have completed covers the region $19\arcdeg>l>5\arcdeg$; we stopped at $5\arcdeg$ mainly because the central regions have been reasonably well-mapped previously. We intend to continue the survey as time becomes available, first to the GLIMPSE upper longitude limit of $l=65\arcdeg$ and later to both higher and lower longitudes. Data are collected in the B-, C-, and D-configurations of the VLA operating in pseudo-continuum mode at 20~cm; two 25-MHz bandwidths centered at 1365~MHz and 1435~MHz are broken into seven 3~MHz channels to minimize bandwidth smearing as well as to reduce significantly our sensitivity to interference. The loss of a factor of two in bandwidth over the standard continuum mode is not important, since virtually all maps are dynamic-range, rather than sensitivity, limited. \begin{figure*} \epsscale{0.7} \plotone{f1.eps} \caption{ The hexagonal grid of 252 VLA pointing centers used for the MAGPIS 20~cm survey. } \label{fig-grid} \end{figure*} \begin{figure*} \epsscale{0.7} \plotone{f2.eps} \caption{ The variation in the rms noise as a function of position after the overlapping images have been co-added. The rms is normalized to unity at field center for a single pointing. The top panel shows a cut in latitude at the edge of the survey ($l=32\arcdeg$), and the bottom panel shows the rms along a line passing near the field centers at $b=0\fdg6$. The rms noise is uniform, with a peak-to-peak variation of only $\sim \pm10$\% except at the edges of the surveyed area. } \label{fig-coverage} \end{figure*} The pointing pattern is displayed in Figure~\ref{fig-grid}. The close-packed hexagonal array provides uniform coverage with a peak-to-minimum variation in sensitivity of $<20\%$ (after co-adding of adjacent images --- see Fig.~\ref{fig-coverage}). We observe each location four times in each of the three configurations spaced roughly equally in hour angle over a range $\pm4$ hrs to maximize $uv$ coverage; the result is an average of $\sim12$ minutes per field per configuration, providing a theoretical noise level ($\sim 0.08$ mJy) far below the dynamic range limit of the maps. In the second round of observations we have saved observing time by using the full 12 minutes per field in the B configuration, but reducing the integration time by a factor of two in the two lower-resolution configurations (while maintaining the observational cadence at multiple hour angles). This reduces our sensitivity by $\sim 20\%$ in the least-populated map regions, although, again, most of the images are dynamic-range limited and the effect on the final source catalog is minimal. A total of 165 hours of time has been accumulated to date in the MAGPIS project. Table~1 lists the observing epochs and configurations used to construct the 252 individual images, which cover an area of over 42~deg$^2$. 
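The near-uniformity of the co-added sensitivity can be illustrated with a simple calculation. The sketch below is our own; it assumes a Gaussian primary beam, optimally weighted co-addition, and a one-dimensional cut through a row of pointing centres (so it only approximates the true hexagonal geometry), and the grid spacing is our estimate from the 252 pointings over $\sim42$~deg$^2$. It reproduces a peak-to-minimum rms variation of roughly 20\%, consistent with Fig.~\ref{fig-coverage}.

\begin{verbatim}
import numpy as np

FWHM = 30.0      # VLA primary beam FWHM at 20 cm, arcmin
SPACING = 26.0   # hexagonal grid spacing (our estimate), arcmin

def pbeam(r):
    # Gaussian primary-beam response at radius r (arcmin)
    return np.exp(-0.5 * (r / (FWHM / 2.3548)) ** 2)

def mosaic_rms(x, centers, rms0=1.0):
    # Relative rms at position x after optimally weighted
    # co-addition: rms_eff = rms0 / sqrt(sum_i A_i^2)
    A = pbeam(np.abs(x - np.asarray(centers)))
    return rms0 / np.sqrt(np.sum(A ** 2))

centers = np.arange(-5, 6) * SPACING      # a row of 11 pointings
for x in np.linspace(0.0, SPACING / 2, 6):
    print("offset %5.1f arcmin: relative rms = %.3f"
          % (x, mosaic_rms(x, centers)))
\end{verbatim}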
\tabletypesize{\scriptsize} \begin{deluxetable}{ccccc} \tablecolumns{5} \tablecaption{Observing Log} \tablehead{ & \multicolumn{4}{c}{VLA Configuration} \\ \colhead{Description} & \colhead{B} & \colhead{C} & \colhead{D} & \colhead{BnC} } \startdata Phase 1, 20~cm\tablenotemark{a} & Mar--Apr 2001 & Aug--Sept 2001 & Aug--Sept 2000 & May 2001\\ & 31 hrs & 28 hrs & 28 hrs & 1.5 hrs\\ Phase 2, 20~cm\tablenotemark{b} & Jan 2004 & Feb--Mar 2004 & Apr 2003 & \\ & 36 hrs & 19 hrs & 18 hrs & \\ Phase 1, 90~cm\tablenotemark{c} & & Sept 2001 & & \\ & & 3.5 hrs & & \\ \enddata \tablenotetext{a}{20 cm Phase 1: $19\arcdeg<l<32\arcdeg \, , \, |b|<0\fdg8$} \tablenotetext{b}{20 cm Phase 2: $5\arcdeg<l<19\arcdeg \, , \, |b|<0\fdg8$} \tablenotetext{c}{90 cm Phase 1: $20\arcdeg<l<33\arcdeg \, , \, |b|<2\arcdeg$} \end{deluxetable} \tabletypesize{\footnotesize} \subsection{The 90~cm data} Even with high-quality images, a single frequency is insufficient to identify source classes unambiguously and to disentangle thermal and nonthermal emission in crowded regions. As part of our initial observation program for MAGPIS, we obtained 3.5 hours of 90~cm pseudo-continuum observations in the C configuration of the VLA during September of 2001. Eight pointings were used to cover the region $20\arcdeg<l<33\arcdeg, |b|<2\arcdeg$. The data were reduced using a $15\arcsec$ pixel size and have a resolution of $\sim 70\arcsec$. In addition, we retrieved from the VLA archive 90~cm data originally taken by Brogan et al.\ (2005) that cover the remainder of our current survey area ($3\fdg6<l<20\arcdeg, |b|<2\arcdeg$). These data were reduced using a $6\arcsec$ pixel size and have a resolution of $\sim 25\arcsec$. \subsection{The mid-IR data} We have retrieved the mid-IR images and catalogs of the Midcourse Space Experiment (MSX -- Price et al.\ 2001) from the IPAC database for the regions our survey covers to date. For ease of comparison, we have regridded the E-band ($20\,\mu$m) data onto the same $l,b$ grid used to present the primary MAGPIS images. We have also constructed ratio maps for the 20~cm and $20\,\mu$m data for use in separating thermal and nonthermal emission. An example of such an image is displayed in Figure~\ref{fig-msxcolor}. High values of the radio-to-IR ratio generally indicate nonthermal radio emission such as is produced by supernova remnants, while low values tend to highlight dusty \ion{H}{2} regions, although pulsar wind nebulae, dusty old supernova remnant shells, and dust-free \ion{H}{2} regions can in principle exhibit intermediate ratios. We defer a quantitative discussion of the comparison of the radio and mid-IR emission to a future paper. \begin{figure*} \epsscale{0.9} \plotone{f3.eps} \caption{ Combined radio-IR image demonstrating the separation of thermal and non-thermal emission. The radio image is used to set the intensity of the displayed image, while the radio-IR flux ratio is used to set the color hue and saturation. Objects with strong IR emission (typical of thermal radio sources) appear in black and white, while objects that have absent or weak IR emission (typical of nonthermal radio sources) appear in colors ranging from green to red depending on the upper limit on the radio-to-IR ratio. Both the known SNR G28.6-0.1 and a previously undiscovered remnant at G28.56-0.01 are apparent, as are a half-dozen thermal sources with varying morphologies. 
} \label{fig-msxcolor} \end{figure} \subsection{The MAGPIS website} Consistent with our past practice, the raw VLA data on which MAGPIS is based have been available in the VLA archive from the day they were taken. To facilitate use of these data by the broadest possible community, we have constructed the MAGPIS website (\url{http://third.ucllnl.org/gps}), which presents our data products in easily accessible forms. In addition to the full-resolution 20~cm images, the site provides the complementary 90~cm images, the regridded MSX $20\,\mu$m images, and an image atlas of diffuse emission regions (see below). The single-configuration 6 and 20~cm images from our earlier snapshot surveys (White, Becker \& Helfand 2005) are also available. Images can be displayed with user-specified coordinates, box sizes, and intensity scales or can be downloaded as FITS files. The full discrete-source and diffuse catalogs are available for retrieval or through a search query function, as are our catalogs and publications from our earlier snapshot survey work. We expect to add our XMM X-ray survey data and the Spitzer GLIMPSE survey images and catalogs as they become available. \section{The MAGPIS Images} In contrast to the extragalactic radio sky, which is rather sparsely populated by mostly compact sources, radio emission in the Galactic plane is dominated by bright, diffuse \ion{H}{2} regions and supernova remnants. Thus, the single-snapshot observations and two-dimensional mapping approximations that worked well in the FIRST and NVSS surveys are inadequate for producing high-dynamic-range images for MAGPIS. In this case, the VLA data must be treated as a three-dimensional data set. In practice, 3-d distortions scale with offset from the image center; thus, one way to minimize 3-d effects is to tile the VLA's $30\arcmin$ primary beam with many small images. We have used a grid of 21 by 21 images, each of which is 128 by 128 pixels in size. Our initial data were reduced on a Sun Ultra 60, with each image requiring $\sim12$ hours to CLEAN. We subsequently migrated the analysis to a dual-processor Pentium 4 computer that is approximately seven times faster. Since the images are greatly improved by self-calibration, each field has to be reprocessed several times. Even with data from the D configuration, the resulting maps suffer missing flux from large-scale structure ($\gg 1\arcmin$) to which the VLA is insensitive. To correct for this deficiency, we combined the VLA images with images from a 1400 MHz survey made with the Effelsberg 100-m telescope ($\sim 7\arcmin$ angular resolution). The AIPS task IMERG makes FFTs of both the VLA and Effelsberg images, combines the derived FFT amplitudes, and then converts back to the image plane to produce the final individual images. We use a $6\farcs2 \times 5\farcs4$ restoring beam on maps with a pixel size of $2\arcsec$. The individual maps are ultimately summed and rebinned to produce mosaic images in Galactic coordinates. The dynamic range varies somewhat with location but, measured as a ratio of the peak flux in the brightest source to the full image rms, is typically in excess of 1000:1 in a 1 deg$^2$ image. Over most of the images, 1~mJy point sources are easily detected. \section{The MAGPIS Catalogs} The large, diffuse emission features and variable background, coupled with source size scales ranging from arcseconds to degrees, render impractical the type of automated source detection algorithms applied to extragalactic radio surveys.
Thus, we have employed the human eye-brain detection system to search the 16.7 million MAGPIS beam areas for radio sources. We divided the problem into two parts: the detection and cataloging of discrete objects less than a few beam areas in size and unconfused by extensive diffuse emission, and regions of sky in which significant diffuse emission is present. \subsection{Discrete source detection} A square field was defined around each candidate discrete source. In cases where it was impossible to isolate a single emission peak (e.g., for overlapping or closely clustered sources), multiple sources were included in one field and the field was flagged as "multiple''. The default field size was $34\arcsec\times34\arcsec$, but this size was adjusted for larger sources (increased), for high density areas (decreased), or for other reasons (increased or decreased on a case by case basis) such as nearby bad pixels, proximity to the edge of the maps, etc. For the entire survey area this process yielded 2628 single-source fields, and 467 multiple-source fields. The AIPS task HAPPY (see White et al.\ 1997) was then run on each of the fields. In HAPPY, a local rms level was calculated for each field using an area of three times the input field area; a minimum detection level of five times the local rms was set for each field. Using the HAPPY output, we rejected any source with a fitted peak flux, $F_p < 1.0$~mJy or $\le 5.0$ times the local rms, whichever is higher. We also rejected any source with a fitted minor axis less than $3\farcs5$ (the beam minor axis is $5\farcs2$, and experience from the use of HAPPY in the FIRST survey shows that the vast majority of such ``skinny'' sources are spurious sidelobes); this only eliminated one source that passed the $F_p$ and rms criteria. This process yielded a catalog of 3229 sources. Although restricting HAPPY to predetermined fields around candidate sources should reduce the number of spurious detections, this method is still susceptible to poor fits resulting from complex, extended emission as well as areas of patterned noise near bright sources. To assess these potential causes of contamination, we flagged for further examination HAPPY solutions \begin{enumerate} \item when HAPPY fit more than one source in a single-source input field (67 fields, 144 sources); \item when HAPPY fit more than two sources in a multiple-source input field (69 fields, 219 sources); or \item when any fit not meeting (1) or (2) had a major axis $>15\arcsec$, a major to minor axis ratio greater than 2.0, or $F_{int}/F_p > 5.0$ (136 fields, 143 sources). \end{enumerate} In total, 506 fields were flagged and examined. Of these 338 were determined to be good fits, 88 to be ``acceptable'' fits, 56 to be artifacts or noise, and 24 to be extended rather than discrete sources; the latter were moved to the extended source atlas (see below) and the artifacts were deleted. The 88 ``acceptable'' fits are all (in our best judgement) real radio sources, but they are distinguished from good fits in that, upon inspection, it is clear that the two-dimensional Gaussian employed by HAPPY is a poor representation of the source surface brightness distribution. We report their HAPPY-derived parameters in Table~2 for consistency, but flag them accordingly. The final catalog, presented in Table~2, includes 3149 discrete sources. 
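The selection criteria above amount to a simple filter on the HAPPY fit parameters. A minimal sketch (our own, with hypothetical example fits) is:

\begin{verbatim}
def keep_source(fpeak, local_rms, minor_axis):
    # Reject fits with peak flux below 1.0 mJy or at most 5x the
    # local rms (whichever is higher), and reject 'skinny' fits
    # with minor axis < 3.5 arcsec (likely sidelobes).
    if fpeak < 1.0 or fpeak <= 5.0 * local_rms:
        return False
    return minor_axis >= 3.5

# Illustrative fits: (fpeak mJy, local rms mJy, minor axis arcsec)
fits = [(4.2, 0.3, 5.4), (1.8, 0.5, 5.0), (6.0, 0.4, 2.9)]
print([keep_source(*f) for f in fits])   # -> [True, False, False]
\end{verbatim}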
A Galactic longitude $\pm$ latitude-based name (col.~1) is followed by the peak and integrated flux densities from the Gaussian fit (cols.~2 and~3), with the $S_i$ value flagged for the ``acceptable'' fits described above, and the estimated rms noise level (col.~4). The major and minor axes (full-width at half-maximum) and position angle for the elliptical Gaussian complete the morphological description of these compact sources. The last two columns give the infrared 8~$\mu$m and 21~$\mu$m flux densities for sources with MSX matches (described further below in \S\ref{section-msxmatch}). Owing to the variable background and numerous regions of bright diffuse emission, the threshold for discrete source detection varies significantly over the survey region. We can obtain a mean value for the threshold by comparing the source surface density with that of the FIRST survey. That survey covers 9033 deg$^2$ of the extragalactic sky and includes 781,450 sources not flagged as sidelobes, for a mean surface density of 86.54 deg$^{-2}$ at a flux density threshold of 1.0~mJy\footnote{The snapshot images of the FIRST survey require the addition of a `CLEAN bias' of 0.25~mJy to the measured flux densities. The greater $uv$ coverage achieved in the multi-array, multi-snapshot MAGPIS survey should significantly reduce CLEAN bias, although it is improbable that the bias is zero. Since, however, absolute calibration is unlikely to be accurate to better than 10\% in light of our addition of single-dish data, and none of our scientific projects require flux densities this accurate, we ignore CLEAN bias in this work.}. The mean surface density of discrete sources in MAGPIS is 74.8 deg$^{-2}$ over the entire survey area of 42.1 deg$^2$ and 73.0 deg$^{-2}$ in the 35.6 deg$^2$ lying outside regions of diffuse emission; the former value is higher owing to source clustering. Matching the latter surface density, after allowing for several hundred true Galactic sources outside regions of diffuse emission (see \S5.1), implies an effective discrete-source detection threshold of $\sim 1.5$~mJy, the flux density at which the {\sl FIRST} counts yield 58.6 extragalactic sources deg$^{-2}$. Thus, our survey is significantly incomplete between the minimum reported flux density of 1~mJy and $\sim 2$~mJy, but, over the 85\% of the area outside regions of diffuse emission, it is largely complete above this range. Note that a large majority of the discrete radio sources detected even within $1\arcdeg$ of the Galactic plane are extragalactic objects; this is evident from the lack of a strong Galactic latitude dependence in our source counts, seen in Figure~\ref{fig-lathist}. Observations at other wavelengths are required to identify the Galactic components of the discrete source population. \begin{figure*} \epsscale{0.7} \plotone{f4.eps} \caption{ Galactic latitude distribution for the discrete source catalog. Even in the Galactic plane the 20~cm radio sky is dominated by extragalactic sources, so no strong latitude dependence is seen. The counts fall off in the $|b|>45$~arcmin bins due to the drop in sensitivity at the edge of the survey. } \label{fig-lathist} \end{figure*} \subsection{The diffuse source atlas} The elliptical Gaussians used in fitting the discrete sources are a poor approximation to the surface brightness distributions for nearly all of the more extended radio sources detected in our survey.
Furthermore, for sources extended by more than $\sim 60\arcsec$, our VLA $uv$ coverage is inadequate to derive accurate flux densities, and the addition of the single-dish data, while an asset in making images, has unquantifiable effects on derived flux densities. Thus, we again turn to the eye-brain system for identifying diffuse sources and source complexes, and do not attempt to derive accurate flux density measurements for these sources. The entire survey region was examined by eye, and regions of extended emission were identified and enclosed in square boxes ranging in size from 1 arcmin$^2$ to $48\arcmin \times 48\arcmin$. In some instances regions are defined by a single coherent source, while in others a complex of diffuse emission regions is included. A total of 398 such regions covering 7.6 deg$^2$ were so identified. For each region, the peak flux density, minimum flux density (a proxy for the noise level in the region), total area, and net flux density were recorded; we emphasize that these flux densities are not necessarily accurate reflections of integrated source intensity and, in some regions, include the flux density of several related -- or possibly unrelated -- sources; they provide only a rough guide to source intensities. We have subtracted from the integrated flux density in each region the sum of the flux densities of the discrete sources from Table~2 that fall within the region; a table listing the cataloged discrete sources within each region is available at the MAGPIS website. In order to estimate the accuracy of our diffuse flux density estimates, we have compared our flux densities for the 25 known supernova remnants in our survey region with those tabulated in Green (2004). We exclude remnants for which the tabulated value is uncertain (listed with a ``?'' in Green's catalog), as well as those that do not fall completely within our survey coverage. We scale the 1.0~GHz flux densities listed in Green's catalog to our observing frequency of 1.4~GHz using the tabulated spectral indices. We find a good correlation between the flux densities, albeit with an offset that depends on the size of the remnant (Fig.~\ref{fig-snrflux}). We conclude that the integrated flux densities listed for the diffuse sources typically overestimate the true fluxes of very large sources by factors of two or more, owing to backgrounds and confusing sources, and we recommend caution when using them. \begin{figure*} \epsscale{0.7} \plotone{f5.eps} \caption{ Flux densities from our diffuse source catalog for supernova remnants from the Green (2004) catalog. The x symbols show that our catalog flux densities for these extended sources are typically higher than the Green values by factors of two or more. Subtracting a corrective offset of 0.07~Jy/arcmin$^2$ from our flux densities improves the agreement (dots). } \label{fig-snrflux} \end{figure*} The diffuse regions are cataloged in Table~3. A Galactic longitude $\pm$ latitude-based name is found in column 1. The box size in column 2 and an intensity scaling factor for display purposes (col.~3) precede the brightest pixel value (col.~4) and its location (cols.~5 \& 6), the minimum flux density recorded in the box (col.~7), and the integrated flux density inside the box (col.~8). Column 9 provides names for known supernova remnants. Cleaving to the maxim that quantifies the relative information content of words and pictures, we have constructed a diffuse source atlas to accompany the full survey images on the MAGPIS website.
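As a parenthetical note on the Green (2004) comparison above, the frequency scaling applied there is a one-line power law. A minimal sketch, assuming the catalog convention $S_\nu \propto \nu^{-\alpha}$ with $\alpha$ taken from the catalog; the numbers in the usage example are purely illustrative.
\begin{verbatim}
def scale_flux(s_1ghz_jy, alpha, nu_ghz=1.4):
    """Scale a 1.0 GHz catalogue flux density to nu_ghz, assuming
    S_nu ~ nu**(-alpha) (the spectral-index sign convention of
    Green's catalogue)."""
    return s_1ghz_jy * (nu_ghz / 1.0) ** (-alpha)

# e.g. a 10 Jy remnant with alpha = 0.5 gives ~8.5 Jy at 1.4 GHz
print(scale_flux(10.0, 0.5))
\end{verbatim}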
On the website, Table~3 is reproduced with active links that allow the user to overlay circles representing sources from the discrete source catalog and contours of the $20\,\mu$m images from the MSX catalog (Egan et al.\ 2003). Each image can also be downloaded as a FITS file. The website also includes large area ($4\fdg5\times1\fdg6$) JPEG versions of the MAGPIS images with the diffuse region boxes overplotted. It is difficult to display these high-dynamic-range images with a single contrast stretch, and indeed the discrete sources are almost invisible in these images, but they are nonetheless useful for viewing the environment of the diffuse sources. The MAGPIS discrete source catalog and diffuse source atlas provide an improvement of more than an order of magnitude in both angular resolution and sensitivity over existing Galactic plane survey data. When combined with existing catalogs at other wavelengths along with data from X-ray and infrared surveys currently underway, MAGPIS will provide a resource for studying both thermal and nonthermal processes that mark the evolution of massive stars in the Milky Way. We have a number of followup projects underway; below we briefly comment on the impact the survey is likely to have on our knowledge of the \ion{H}{2} region and supernova remnant populations of the Galaxy. \section{Galactic Thermal Emission Regions: MAGPIS and Mid-IR Images} The critical dependence of an \ion{H}{2} region's radio luminosity on the ionizing flux of its exciting star(s) allows the construction of a particularly pure census of massive star formation: the 20~cm radio flux density falls by a factor of 300 between exciting star types O9.5 and B1 such that, at 20 kpc, O-star \ion{H}{2} regions fall a factor of $>30$ above our survey threshold, while less-massive star-forming complexes (which produce at most B stars) fall a factor of $>10$ below it\footnote{These numbers are for optically thin nebulae. Optically thick \ion{H}{2} regions are self-absorbed at 20~cm such that only stars earlier than O7 would fall above our threshold at the far side of the Galaxy. Our old 6~cm snapshot survey allows us to find these sources down to spectral type O9.5; see Giveon et al.\ (2005) for details.}. To separate the \ion{H}{2} regions from the more numerous extragalactic source populations and the extended regions of Galactic nonthermal emission requires observations at another wavelength. Our 6~cm snapshot survey is useful for the most compact sources ($D<15\arcsec$) but resolves out flux on larger scales. Since most \ion{H}{2} regions also contain dust that is heated by the stellar radiation, the mid-IR can also serve as a useful discriminant. \begin{figure*} \epsscale{1.0} \plotone{f6.eps} \caption{ Examples of \ion{H}{2} complexes from our radio survey with the MSX $20\,\mu$m image contours overlaid, showing the excellent radio-IR correspondence for these thermal sources. } \label{fig-radioir} \end{figure*} Figure~\ref{fig-radioir} shows several examples of \ion{H}{2} complexes from our radio survey with contours from the MSX $20\,\mu$m images overlaid. The degree of correspondence is remarkably good and provides a straightforward method for separating thermal and nonthermal emission in star formation regions. On the MAGPIS website we also provide large-scale radio images with boxes marking the previously published \ion{H}{2} regions collected in the Paladini et al.\ (2003) meta-catalog.
It is clear that the MAGPIS data (along with other radio and IR surveys) will enable the construction of a vastly improved \ion{H}{2} region catalog. We defer a detailed analysis of the \ion{H}{2} region population to a future paper; here we provide some simple statistics for compact and ultracompact \ion{H}{2} regions by matching our discrete source data to the MSX catalogs as an indication of the wealth of information such a comparison contains. The higher resolution and greater sensitivity of the Spitzer GLIMPSE data soon to become available will fill in the 2--8$\,\mu$m band and provide crucial information in the most crowded regions. \subsection{Match to the MSX $20\,\mu$m catalog} \label{section-msxmatch} The 20~cm survey region is completely covered by the MSXPSCv2.3 ``MSX6C'' (Egan et al.\ 2003) data set. We searched for MSX6C sources using a search radius of $12\arcsec$ around each of the discrete 20~cm sources. To be accepted as a match, the MSX6C source was required to have a quality flag of 2 in at least one of the four bands (see Lumsden et al.\ 2002). If more than one MSX6C source fell within the search radius for a single 20~cm source, the MSX6C source closest to the 20~cm source was kept (this only occurred once). A total of 376 MSX6C sources corresponding to 418 20~cm sources were matched in this manner. To estimate the number of false matches we repeated the matching process using fake catalogs produced by shifting the MSX6C catalog $\pm10\arcmin$ and $\pm20\arcmin$ in longitude. Since, for example, the vast majority (78\%) of the MSX sources are stars detected only in the $8\,\mu$m band (very few of which have radio counterparts), we can greatly reduce the false match rate by assessing the false rates separately for sources detected in different band combinations. We have followed the methodology described in Giveon et al.\ (2004; see also White et al.\ 1991) to arrive at a false-match reliability criterion for each of the band combinations in which a 20~cm--MSX6C match existed. Using a reliability of $R>90\%$\footnote{This eliminates MSX sources detected only in the $8\,\mu$m band, as well as those detected in $8\,\mu$m and $12\,\mu$m only, and in $8\,\mu$m, $12\,\mu$m, and $14\,\mu$m only. This removes 131 sources (67\% of which are false matches), leading to a catalog of matches that is $>95\%$ reliable and $\sim 90\%$ complete.} we find 245 MSX6C sources (of which $\sim 8$ should be false) matched to 278 20~cm sources. Of these, 217 are single 20~cm--MSX6C matches, 23 are cases in which one MSX6C source matches two 20~cm sources, and 5 represent one MSX6C source matching three 20~cm sources. \begin{figure*} \epsscale{1.0} \plotone{f7.eps} \caption{ Sky distribution of MAGPIS 20~cm radio sources from the discrete source catalog. Red dots show sources with confident (reliability $>0.90$) infrared counterparts in the MSX catalog. The radio-MSX matches are clearly concentrated toward the plane. } \label{fig-skydist} \end{figure*} \begin{figure*} \epsscale{0.7} \plotone{f8.eps} \caption{ Galactic latitude distribution for the 245 MSX sources having reliable radio matches. These sources are highly concentrated toward the plane of the Galactic disk. } \label{fig-msxlathist} \end{figure*} The distribution of the matched and unmatched sources on the sky is displayed in Figure~\ref{fig-skydist}. While sources with infrared matches are found throughout the latitude coverage, it is clear that they concentrate toward the plane.
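The matching and false-rate procedure lends itself to a compact sketch. The version below is schematic rather than the pipeline actually used: it assumes coordinate arrays in degrees, uses a simple nearest-neighbour search with a small-angle approximation, and crudely shifts in right ascension (the analysis above shifted in Galactic longitude); the reliability bookkeeping by band combination is omitted.
\begin{verbatim}
import numpy as np

def match(radio_ra, radio_dec, ir_ra, ir_dec, radius_arcsec=12.0):
    """Nearest-neighbour positional match within a fixed search radius.

    Returns, for each radio source, the index of the closest IR source
    within the radius, or -1 if none. Separations use a cos(dec)
    correction, adequate for arcsecond-scale offsets.
    """
    matches = np.full(len(radio_ra), -1)
    for i, (ra, dec) in enumerate(zip(radio_ra, radio_dec)):
        dra = (ir_ra - ra) * np.cos(np.radians(dec))
        sep = np.hypot(dra, ir_dec - dec) * 3600.0    # arcsec
        j = np.argmin(sep)
        if sep[j] <= radius_arcsec:
            matches[i] = j
    return matches

def false_rate(radio_ra, radio_dec, ir_ra, ir_dec, shift_deg=10/60.):
    """Estimate the chance-coincidence rate by re-matching against a
    shifted copy of the IR catalogue, so any 'matches' must be false."""
    fake = match(radio_ra, radio_dec, ir_ra + shift_deg, ir_dec)
    return np.mean(fake >= 0)
\end{verbatim}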
The latitude distribution of the matched sources is displayed in Figure~\ref{fig-msxlathist}. The distribution peaks at $b=0\arcdeg$ with a surface density of 22 sources deg$^{-2}$ (when regions obscured by bright diffuse sources are excluded), and has a full width at half maximum of less than $15\arcmin$. Examination of the atlas of extended emission shows that there are more thermal sources inside the 6.5 deg$^2$ subsumed by the atlas images than outside, and most of these sources are not included in the discrete source catalog. We estimate that there are a total of more than 600 distinct \ion{H}{2} regions in our 42 deg$^2$ survey area, although we defer to a future publication the development of a detailed catalog and its analysis. \section{Galactic Nonthermal Emission in MAGPIS} Supernova remnants (SNRs) are among the brightest radio sources -- and the brightest X-ray sources -- in the Galaxy. They are a dominant source of mechanical energy input to the ISM, drive the Galaxy's chemical evolution, and mark the birthsites of neutron stars and black holes. Yet our knowledge of the Galactic population is woefully incomplete, owing to the low angular resolution of previous radio and hard X-ray surveys of the plane, and the soft spectral response of previous X-ray imaging observations. A total of 231 remnants appear in the latest catalog (Green 2004); Brogan et al.\ (2004) have recently added three new remnants in one of our fields. The current rate of discovery is a few remnants per year. However, based on (1) extragalactic SN rates ($\sim$ 1--2 per century) combined with SNR lifetimes (2.5--5$\times10^4$ yr), and (2) a detailed analysis of the current SNR distribution (Helfand et al.\ 1989), we expect the total population to be between 500 and 1000. The youngest remnant we know of is 340 years old; four to seven younger ones should exist somewhere in the Galaxy. MAGPIS can detect pulsar-driven remnants to a luminosity $10^{-4}$ that of the Crab Nebula at the edge of the Galaxy (or $\sim$ 10\% that of 3C58, the least luminous young Crab-like remnant known). For shell-like SNRs, our survey will be sensitive to all young remnants. For example, we will detect remnants throughout the surveyed volume down to luminosities 0.01\% that of Cas A, and can even see a clone of the underluminous historical remnant SN 1006 at 20 kpc: it would appear as a $1\arcmin$-diameter source with a flux density of $\sim 25$~mJy. Our survey could detect a remnant equivalent to SN87A from the time it was 3 years old anywhere in the survey region, and would resolve such a remnant only 15 years after the explosion. \begin{figure*} \epsscale{0.9} \plotone{f9.eps} \caption{ MAGPIS 20~cm images of the eleven known supernova remnants in the survey area with diameters less than $10\arcmin$. } \label{fig-knownsnr} \end{figure*} Twenty-five known remnants fall within the current survey area, and all are easily detected. The known remnants are indicated in Table~3; in many instances, the maps presented here are the best available. Images for the eleven remnants smaller than $10\arcmin$ in diameter --- most of which lack high-resolution maps in the literature --- are displayed in Figure~\ref{fig-knownsnr}. As can be seen by browsing the diffuse-source atlas, there are a large number of shell-like sources detected in our survey. Without observations at other wavelengths, however, it is impossible to separate the thermal and non-thermal sources to derive a list of new SNR candidates.
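The detectability claims above follow from the inverse-square scaling of flux density with distance; a short worked sketch, with purely illustrative numbers:
\begin{verbatim}
def flux_at_distance(s_ref_jy, d_ref_kpc, d_kpc):
    """Scale a flux density with distance, S ~ d**-2."""
    return s_ref_jy * (d_ref_kpc / d_kpc) ** 2

# e.g. a remnant seen as 1 Jy at 2 kpc would appear as ~10 mJy at
# 20 kpc, still well above the ~1-2 mJy survey threshold.
print(flux_at_distance(1.0, 2.0, 20.0))
\end{verbatim}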
Fortunately, as noted above, we do have VLA data covering the entire region at 90~cm, as well as the MSX mid-IR images. A simple qualitative comparison of these three datasets (available to the reader at the MAGPIS website) allows us to identify quickly high-probability SNR candidates. In Table~4, we present 49 new SNR candidates in our $27\arcdeg$ slice of Galactic longitude. To derive this list, we have required that: \begin{itemize} \item the object has a very high ratio of 20~cm to $20\,\mu$m flux (i.e., it is typically undetectable in the 20$\,\mu$m MSX image); \item the object has a counterpart in our 90~cm images with a similar morphology and a higher peak flux density; and \item the object has a distinctive SNR morphology. For shell-type remnants we require at least half of a complete shell, while for the two pulsar wind nebula candidates we require a centrally peaked brightness distribution. \end{itemize} For most of these candidates the data in columns 1, 3, 4 and 5 are repeated from Table 3. Column 2 gives the source diameter (as opposed to the display box size in column 3, which is always larger). Five of the entries in this table are components of larger sources listed in Table 3, with three associated with the large diffuse complex at G19.60${-}$0.20 and two associated with G6.50${-}$0.48. \begin{figure*} \epsscale{0.9} \plotone{f10.eps} \caption{ MAGPIS 20~cm images of 12 new supernova remnant candidates. } \label{fig-snrcand} \end{figure*} Images for a dozen candidates ranging in size from $40\arcsec$ to $9\arcmin$ are displayed in Figure~\ref{fig-snrcand}. Not unexpectedly, the diameter distribution for our remnant candidates differs markedly from that of the known remnant population. Assuming that followup spectral and polarimetric observations confirm the large majority of these sources as SNRs, we will have tripled the number of known remnants in this region of the Galaxy. However, while the number of remnants with $D \ge 10\arcmin$ will only rise from 13 to 16, the number with $10\arcmin > D \ge 5\arcmin$ will quadruple from 5 to 19, while the number with $D<5\arcmin$ will rise more than sevenfold from 5 to 37. Particularly interesting among these new SNR candidates are those that may harbor young, high-$\dot E$ pulsars. In addition to the two PWN candidates, there are two shell-like remnants with central diffuse emission peaks highly reminiscent of composite SNRs. Given the core-collapse SN rate in the Galaxy, we should expect to find $\sim 10$ neutron stars younger than the Crab and 3C58 pulsars. While these new sources are significantly dimmer than even the underluminous PWN 3C58, if they are at distances of $\sim 15$~kpc, their luminosities are comparable. Also noteworthy are the three shell-type remnants with diameters less than $1\arcmin$. At 15~kpc, their diameters are $\sim 3$~pc, corresponding to an age of $\sim 130$ years for a radio expansion rate comparable to that of SN87A. The SNR candidates listed in Table~4 far from exhaust the nonthermal emission features in our survey area; a roughly comparable number of filaments and arcs with apparently nonthermal radio spectra and no IR counterparts are seen. Furthermore, there are several regions in which thermal and nonthermal features are cospatial; these will require scaled-array observations at several frequencies to disentangle. Nonetheless, it is clear that high-dynamic-range, high-sensitivity observations of the type reported here are essential for characterizing fully the Galactic SNR population.
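The age estimate for the smallest shells is simple geometry plus an assumed expansion speed. In the sketch below, the default speed is a hypothetical effective mean chosen to be broadly SN87A-like; it is illustrative, not a measurement.
\begin{verbatim}
import numpy as np

PC_KM = 3.086e13          # km per parsec
YR_S  = 3.156e7           # seconds per year

def diameter_pc(theta_arcmin, d_kpc):
    """Physical diameter of a source of angular size theta at distance d."""
    theta_rad = np.radians(theta_arcmin / 60.0)
    return theta_rad * d_kpc * 1000.0

def expansion_age_yr(diam_pc, v_exp_kms=11000.0):
    """Age if the shell has expanded at a constant speed v_exp.
    The default speed is an assumed effective mean, not a measurement."""
    radius_km = 0.5 * diam_pc * PC_KM
    return radius_km / v_exp_kms / YR_S

d = diameter_pc(0.7, 15.0)         # a sub-arcminute shell at 15 kpc
print(d, expansion_age_yr(d))      # ~3 pc, ~130 yr
\end{verbatim}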
\section{Summary and Future Prospects} We have presented a centimetric image of the plane of the Milky Way in the first quadrant that represents an improvement over existing surveys by more than an order of magnitude in resolution, sensitivity, and dynamic range. The survey detection threshold is 1 to 2~mJy over most of the survey area. We identify over 3000 discrete radio sources and $\sim 400$ regions of diffuse emission, presenting catalogs and atlases that quantify the source properties. We include complementary 90~cm images over the entire survey region and provide a comparison with mid-IR data; taken together, these latter two datasets help to separate thermal from nonthermal emission regions. We find several hundred \ion{H}{2} regions in the survey area, many reported here for the first time. We also identify 49 high-probability supernova remnant candidates, including a sevenfold increase in the number of remnants with diameters smaller than $5\arcmin$ in the survey region. All of the survey's results are available at the MAGPIS website. Considerable work remains to exploit fully the survey results. A complementary hard X-ray survey over portions of this region is being conducted with XMM-Newton, and several followup observations of interesting sources are scheduled with XMM and Chandra. Scaled-array polarimetric and photometric observations with the VLA are required to confirm the SNR candidates. As the Spitzer GLIMPSE program images become available, further progress will be possible in identifying compact and ultra-compact \ion{H}{2} regions and in using these to provide a census of the OB star population; higher frequency observations with the VLA will be required to identify optically thick \ion{H}{2} regions. Future observations to extend the MAGPIS coverage area will provide the basis for a comprehensive view of massive star birth and death in the Milky Way. \acknowledgments DJH and RHB acknowledge the support of the National Science Foundation under grants AST-05-07598 and AST-02-6-55; DJH was also supported in this work by NASA grant NAG5-13062. RHB's work was supported in part under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under contract W-7405-ENG-48. RLW acknowledges the support of the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA contract NAS5-26555.
\section{Introduction} The nature of DM within the Universe is one of the fundamental problems facing modern physics. N-body cosmological simulations predict a ``universal'' profile for DM halos over a wide range of mass-scales \citep[hereafter NFW]{navarro97}. In a hierarchical formation scenario the early epoch of formation of low-mass halos should ``freeze in'' more tightly concentrated halos at the galaxy scale than are observed in clusters \citep{bullock01a}. What is less clear, however, is the way in which the DM halo responds to the condensation of baryons into stars. If the galaxy is assembled by a series of mergers, the baryonic and dark matter may be mixed in such a way that the total gravitating mass follows the NFW profile \citep{loeb03a}. Alternatively, present-day ellipticals may retain the ``memory'' of the original contraction \citep{gnedin04a}. We present X-ray determined mass profiles for 7 early-type galaxies, spanning the mass range $\sim 10^{12}$--$\sim 10^{13}$${\rm M_\odot}$, chosen from the \chandra\ archive to be sufficiently bright and relaxed enough to yield interesting mass constraints. Two companion posters, Gastaldello et al.\ and Zappacosta et al.\ (both this volume), address DM halos at group and cluster scales. \begin{figure}[!h] \centering \includegraphics[height=7cm,width=4cm,angle=270]{fig1a.eps} \includegraphics[height=7cm,width=4cm,angle=270]{fig1b.eps} \caption{\small{Temperature {\em (upper panel)} and mass {\em (lower panel)} profiles of NGC\thinspace 720. The mass data are shown with models comprising simple NFW (dashed line), NFW plus stellar (dotted line) and compressed NFW plus stellar (solid line) potentials.}} \label{figure1} \end{figure} \begin{table*} \centering \caption{\small{Best-fit values of ${\rm M_{vir}}$, in units of $10^{12}$${\rm M_\odot}$, and c for the different mass models fitted. Where no error is quoted the parameter value was fixed. Error bars are at 1-$\sigma$. Figures in square parentheses are systematic error estimates (see text).
Other figures in parentheses represent the change in the best-fit value if ${\rm M_*}$/\lb\ is varied by $\pm$20\%.}} \begin{small} \begin{tabular}{lrllrllr} \hline Galaxy & \multicolumn{3}{c}{NFW} & \multicolumn{4}{c}{Compressed NFW + stars} \\ & $\chi^2$/dof & ${\rm M_{vir}}$ & c & $\chi^2$/dof & ${\rm M_{vir}}$ & c & ${\rm M_*}$/\lb \\ \hline NGC\thinspace 720 & 2/9 & $3.4^{+1.5}_{-0.9}$ & 47$\pm$15 & 1/9 & $6.1^{+5.2}_{-2.4}$ & 19$^{+8}_{-6}$ & 3.3 \\ & & [$^{+1.7}_{-0.7}$] & [$\pm$8] & & [$^{+15}_{-1.2}$]($^{+1.4}_{-0.9}$)& [$^{+2}_{-7}$]($\pm$4) & \\ NGC\thinspace 1407 & 23/11 & $9.0^{+6.3}_{-3.5}$ & 36$^{+13}_{-9}$ & 18/11 & 300($>$30) & 4.6$\pm$4.2 & 4.7 \\ & & [$^{+4.3}_{-3.7}$] & [$^{+14}_{-8}$] & & [-250](-230) & [$^{+7.2}_{-1.9}$]($\pm$4.1) \\ NGC\thinspace 4125 & 23/11 & $1.0^{+0.2}_{-0.1}$ & 88$\pm$14 & 19/11 & 1.8$^{+0.8}_{-0.4}$ & 25$^{+8}_{-6}$ & 2.4 \\ & & [$^{+0.3}_{-0.4}$] & [$^{+44}_{-7}$] & & [$^{+0.2}_{-1.1}$]($^{+0.7}_{-0.3}$)& [+38]($\pm$10) &\\ NGC\thinspace 4261 & 23/12 & ${1.5^{+0.3}_{-0.2}}$ & 160$\pm$20 & 21/12 & $2.6^{+1.8}_{-1.0}$ & 38$^{+23}_{-14}$ & 4.6 \\ & & [$^{+0.4}_{-0.2}$] & [$^{+10}_{-30}$] & & [$\pm$1.2]($^{+2.0}_{-0.7}$) & [$^{+23}_{-18}$]($\pm$26)\\ NGC\thinspace 4472 & 53/21 & $10^{+4}_{-3}$ & 30$^{+7}_{-5}$ & 30/20 & $55^{+160}_{-28}$ & 11$\pm$4 & 0.87$\pm$0.14 \\ & & [$^{+0.4}_{-0.6}$] & [$^{+20}_{-2}$] & & [$^{+2}_{-30}$] & [$^{+1.7}_{-0.8}$]\\ NGC\thinspace 4649 & 30/7 & $2.5^{+0.4}_{-0.3}$ & 140$\pm$10 & 21/7 & $17^{+36}_{-9}$ & 24$\pm8$ & 4.7\\ & & [$^{+0.1}_{-1.0}$] & [$^{+30}_{-4}$] & & [$^{+2}_{-11}$]($^{+130}_{-10}$)& [$^{+13}_{-1}$]($\pm$18)\\ NGC\thinspace 6482 & 0.6/5& $2.3^{+0.4}_{-0.3}$ & 99$\pm$16 & 0.4/5 & $3.5^{+1.3}_{-0.9}$ & $36^{+9}_{-7}$ & 1.2\\ & & [$^{+0.2}_{-0.1}$] & [$\pm$4] & & [$\pm$0.3]($^{+0.6}_{-0.4}$) & [$\pm$2]($\pm$9)\\\hline \end{tabular} \end{small} \label{table1} \end{table*} \section{Data analysis} The \chandra\ data were processed with \ciao\ 3.2.2, following standard procedures. Due to the low surface brightness of the data, special care was taken in treating the background, for which we adopted a modelling procedure (see Humphrey et al.\ 2005). We fitted the spectra from concentric annuli with an APEC model (plus an unresolved point-source component within ${\rm D_{25}}$) to determine temperature and density. The best-fitting abundances were similar to those of other early-type galaxies \citep{humphrey05a}. \section{Mass profiles} The gravitating mass profiles were inferred from the temperature and density profiles in two ways. First, we fitted parameterised models to the temperature and density profiles (although no single, universal functional form fitted every case) and derived mass profiles under the assumption of hydrostatic equilibrium (we discuss the possible impact of low-significance asymmetries in some systems, e.g.\ \citealt{randall04a}, in Humphrey et al.\ 2005, in prep). The mass profiles were clearly more extended than the optical light, indicating significant DM. Within ${\rm R_{e}}$, we found M/\lb\ for the gravitating matter varied from 2.3--9.3 ${\rm M_\odot}$/${\rm L_\odot}$. In Fig.~\ref{figure1}, we show the best-fit temperature and mass profiles for NGC\thinspace 720. Alternatively, we also used the temperature profile, and an assumed mass profile (see below), to derive a density model, which we fitted to the data. This procedure gave more robust mass constraints. These techniques are outlined in Humphrey et al.\ (2005).
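For reference, the hydrostatic estimate and the NFW comparison profile can both be written compactly. The sketch below assumes spherical symmetry, a fully ionized plasma with mean molecular weight $\mu \approx 0.6$, and profiles already sampled on a radial grid; the function names and inputs are placeholders, not our actual fitting code.
\begin{verbatim}
import numpy as np

G    = 6.674e-8        # cm^3 g^-1 s^-2
K_B  = 1.381e-16       # erg K^-1
M_P  = 1.673e-24       # g
MU   = 0.6             # assumed mean molecular weight
MSUN = 1.989e33        # g
KPC  = 3.086e21        # cm

def hydrostatic_mass(r_kpc, ne, temp_kev):
    """M(<r) from hydrostatic equilibrium, given the electron density
    ne(r) and temperature T(r) sampled on the radial grid r_kpc:
    M(<r) = -(kT r / G mu m_p) (dln n/dln r + dln T/dln r)."""
    r = r_kpc * KPC
    t = temp_kev * 1.16e7                     # keV -> K
    dlnn = np.gradient(np.log(ne), np.log(r))
    dlnt = np.gradient(np.log(t), np.log(r))
    m = -(K_B * t * r) / (G * MU * M_P) * (dlnn + dlnt)
    return m / MSUN

def nfw_mass(r_kpc, m_vir_msun, c, r_vir_kpc):
    """Cumulative NFW mass profile, for comparison with the data."""
    x = r_kpc * c / r_vir_kpc
    fc = np.log(1 + c) - c / (1 + c)
    return m_vir_msun * (np.log(1 + x) - x / (1 + x)) / fc
\end{verbatim}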
Simple NFW fits to the data gave very large ($\gg 20$) values for c, the halo concentration, in contrast to the typical values predicted by simulations \citep[$\sim$15; e.g.][]{bullock01a}. To investigate whether baryonic matter affects the mass profile, we included a \citet{hern90} mass component to trace the stars and allowed the DM halo to be compressed due to baryonic condensation \citep{gnedin04a}. Assuming that all mass within ${\rm R_{e}}$\ is stellar did not give meaningful fits. Fixing the stellar mass (${\rm M_*}$) within ${\rm R_{e}}$\ to be half of the total reduced c, bringing ${\rm M_{vir}}$\ and c into better agreement with simulations. This model fitted all the galaxies well. We note that if adiabatic compression of the DM halo was turned off, for a fixed ${\rm M_*}$/\lb, c was significantly higher. Our results were very sensitive to ${\rm M_*}$/\lb, which could only be constrained in NGC\thinspace 4472 (in which it was $\sim$1). In general, though, we found ${\rm M_{vir}}$\ and c were consistent with simulations \citep{bullock01a}, albeit very uncertain. In Table~\ref{table1} we show a summary of our results and, in addition, the sensitivity to ${\rm M_*}$/\lb\ and the spectral analysis choices (e.g.\ $N_{\rm H}$\ or background modelling). \section*{Acknowledgments} We thank Oleg Gnedin for making available his adiabatic contraction code. Support for this work was provided by NASA under grant NNG04GE76G issued through the Office of Space Sciences Long-Term Space Astrophysics Program. \bibliographystyle{apj_hyper}
\section{Introduction} \label{intro} Relatively little is known about the submillimetre properties of `normal' galaxies in the local Universe. The advent of \textit{IRAS}\/ in the 1980s brought the first investigations of dust in relatively large samples of galaxies (e.g. Devereux \& Young 1990), yet the limitations of investigating dust at far-IR wavelengths are marked; the strong temperature dependence of thermal emission means that even a small amount of warm dust can dominate the emission from a substantially larger proportion of cold dust, and \textit{IRAS} is only sensitive to dust with \mbox{$T>30$\,K}. \textit{IRAS}\/ studies of `normal' galaxies (e.g. Devereux \& Young 1990) found a high value of the gas-to-dust ratio ($\sim$1000), an order of magnitude higher than found for the Milky Way ($\sim$160; Dunne et al. 2000), indicating that \textit{IRAS} may have `missed' $\sim$90\% of the dust in late-type galaxies. \textit{IRAS}\/ also revealed relatively little about the dust in early-type galaxies, since only $\sim$15\% of ellipticals were detected by \textit{IRAS}\/ (Bregman et al. 1998). The next major step in the study of dust in galaxies is to make observations in the submillimetre waveband \mbox{($100\,\micron\le \lambda \le 1$\,mm)} since the 90\% of dust that is too cold to radiate in the far-IR will be producing most of its emission in this waveband. The advent of the SCUBA camera on the James Clerk Maxwell Telescope (JCMT)\footnote{The JCMT is operated by the Joint Astronomy Centre on behalf of the UK Particle Physics and Astronomy Research Council, the Netherlands Organisation for Scientific Research and the Canadian National Research Council.} (Holland et al. 1999) opened up the submillimetre waveband for astronomy and made it possible, for the first time, to investigate the submillimetre emission of a large sample of galaxies; prior to SCUBA only a handful of submillimetre measurements had been made of nearby galaxies, using single-element bolometers. In particular, in contrast to the extensive survey work going on at other wavelengths, prior to SCUBA it was not possible to carry out a large survey in the submillimetre waveband. SCUBA has 2 bolometer arrays (850\hbox{\,$\umu$m } and 450\,\micron) which operate simultaneously with a field of view of $\sim$2 arcminutes. At 850\hbox{\,$\umu$m } SCUBA is sensitive to thermal emission from dust with fairly cool temperatures \mbox{($T\geq10$\,K)} so, crucially, whereas \textit{IRAS} was only sensitive to warmer dust \mbox{($T>30$\,K)}, SCUBA should trace most of the dust mass. \subsection{A local submillimetre galaxy survey} \label{local-survey} A survey of the dust in nearby galaxies is also important because of the need to interpret the results from surveys of the distant Universe. Many deep SCUBA surveys have been carried out (Smail, Ivison \& Blain 1997; Hughes et al. 1998; Barger et al. 1998, 1999; Blain et al. 1999a; Eales et al. 1999; Lilly et al. 1999; Mortier et al. 2005), but studies of the high redshift Universe, and in particular studies of cosmological evolution (Eales et al. 1999; Blain et al. 1999b), have until now depended critically on assumptions about, rather than measurements of, the submillimetre properties of the \textit{local} Universe.
Prior to the existence of a direct local measurement of the submillimetre luminosity function (LF), most deep submillimetre investigations started from a local \textit{IRAS} 60\hbox{\,$\umu$m } LF, extrapolating out to submillimetre wavelengths by making assumptions about the average FIR--submm SED. However, as shown by Dunne et al. (2000), this underestimates the local submillimetre LF, and thus a direct measurement of the local submillimetre LF is vital for overcoming this significant limitation in the interpretation of the results of high-redshift surveys. The ideal method of carrying out a submillimetre survey of the local Universe would be to survey a large area of the sky and then measure the redshifts of all the submillimetre sources found by the survey. However, with current submillimetre instruments such a survey is effectively impossible since, for example, the field of view of SCUBA is only $\sim$2 arcminutes. The alternative method, and the only one that is currently practical, is to carry out targeted submillimetre observations of galaxies selected from statistically complete samples selected in other wavebands. With an important proviso, explained below, it is then possible to produce an unbiased estimate of the submillimetre LF using `accessible volume' techniques (Avni \& Bahcall 1980) (see Section~\ref{lumfun}). To this end, several years ago, we began the SCUBA Local Universe Galaxy Survey (SLUGS). In Papers I and II, Dunne et al. (2000, hereafter D00) and Dunne \& Eales (2001, hereafter DE01) presented the results of SCUBA observations of a sample selected at 60\hbox{\,$\umu$m } (the \textit{IRAS}-Selected sample, hereafter the IRS sample). This paper presents the results of SCUBA observations of an optically-selected sample (hereafter the OS sample). The accessible volume method will produce unbiased estimates of the LF provided that no class of galaxy is unrepresented in the samples used to construct the LF. D00 produced the first direct measurements of the submillimetre LF and dust mass function (the space-density of galaxies as a function of dust mass) using the IRS sample, but this LF would be biased if there exists a `missed' population of submillimetre-emitting galaxies, i.e. a population \emph{that is not represented at all in the IRS sample}. In this earlier work we found that the slope of the submillimetre LF at lower luminosities was steeper than $-$2 (a submillimetre `Olbers' Paradox'), which indicated that the IRS sample may not be fully representative of all submillimetre-emitting sources in the local Universe. This `missed' population could consist of cold-dust-dominated galaxies, i.e. galaxies containing large amounts of `cold' dust (at \mbox{$T<25$\,K}), which would be strong emitters at 850\hbox{\,$\umu$m } but weak emitters at 60\,\micron. The OS sample is selected on the basis of the optical emission from the galaxies and, unlike the IRS sample, which was biased towards warmer dust, the OS sample should be free from dust temperature selection effects. The results from the OS sample will therefore test the idea that our earlier IRS sample LF was an underestimate.
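The `accessible volume' estimator is easy to state explicitly. The sketch below is a deliberately simplified, Euclidean version (no k-corrections or cosmological volume corrections), assuming a single flux-density limit in the selection band; it is meant to illustrate the method of Avni \& Bahcall (1980), not to reproduce the analysis of Section~\ref{lumfun}.
\begin{verbatim}
import numpy as np

def vmax_lf(lum, dist_mpc, flux, flux_lim, omega_sr, lum_bins):
    """1/Vmax luminosity function, Euclidean version.

    lum      : luminosities of the detected galaxies
    dist_mpc : their distances (Mpc)
    flux     : their flux densities in the selection band
    flux_lim : the survey flux-density limit in that band
    omega_sr : surveyed solid angle (steradians)
    lum_bins : bin edges in log10(luminosity)
    """
    # Maximum distance at which each galaxy would still make the cut
    d_max = dist_mpc * np.sqrt(flux / flux_lim)
    v_max = (omega_sr / 3.0) * d_max ** 3          # Mpc^3
    phi, _ = np.histogram(np.log10(lum), bins=lum_bins,
                          weights=1.0 / v_max)
    return phi / np.diff(lum_bins)   # number per dex per Mpc^3
\end{verbatim}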
\subsection{Previous investigations of cold dust in galaxies} \label{cold-dust} The paradigm for dust in galaxies is that there are two main components: (i) a warm component \mbox{(T\,$>$\,30\,K)} arising from dust grains near to star-forming regions and heated by young (OB) stars, and (ii) a cool `cirrus' component \mbox{(T\,=\,15--25\,K)} arising from diffuse dust associated with the HI and heated by the general interstellar radiation field (ISRF) (Cox, Kr\"ugel \& Mezger 1986; Lonsdale Persson \& Helou 1987; Rowan-Robinson \& Crawford 1989). \textit{IRAS} would only have detected the warm component, hence using \textit{IRAS} fluxes alone to estimate dust temperature would result in an overestimate of the dust temperature and an underestimate of the dust mass. Conversely, using the submillimetre to estimate dust masses has clear advantages. The flux is more sensitive to the mass of the emitting material and less sensitive to temperature in the Rayleigh-Jeans part of the Planck function, which is sampled when looking at longer submillimetre wavelengths. Studies at the longer wavelengths (170--850\,\micron; e.g. ISO, SCUBA) have confirmed the existence of cold dust components \mbox{($15<T_{d}<25$\,K)}, in line with the theoretical prediction of grain heating by the general ISRF (Cox et al. 1986), both in nearby spiral galaxies and in more IR-luminous/interacting systems (Gu\'elin et al. 1993, 1995; Sievers et al. 1994; Sodroski et al. 1994; Neininger et al. 1996; Braine et al. 1997; Dumke et al. 1997; Alton et al. 1998a,b, 2001; Haas et al. 1998; Davies et al. 1999; Frayer et al. 1999; Papadopoulos \& Seaquist 1999; Xilouris et al. 1999; Haas et al. 2000; DE01; Popescu et al. 2002; Spinoglio et al. 2002; Hippelein et al. 2003; Stevens, Amure \& Gear 2005). Many of these authors find an order of magnitude more dust than \textit{IRAS}\/ observations alone would indicate. Alton et al. (1998a), for example, find, by comparing their 200\hbox{\,$\umu$m } images of nearby galaxies to B-band images, that the cold dust has a greater radial extent than the stars, and conclude that \textit{IRAS} `missed' the majority of dust grains lying in the spiral disks. Other studies find evidence of cold dust components in a large proportion of galaxies. Contursi et al. (2001) find evidence of a cold dust component \mbox{($T\sim22$\,K)} for most of their sample of late-type galaxies; Stickel et al. (2000) find that a large fraction of sources in their 170\hbox{\,$\umu$m } survey have high $S_{170}/S_{100}$ flux ratios and suggest this indicates a cold dust component \mbox{($T\leq20$\,K)} exists in many galaxies; Popescu et al. (2002) find, for their sample of late-type (later than S0) galaxies in the Virgo Cluster, that 30 out of 38 galaxies detected in all three observed wavebands (60, 100 and 170\,\micron) exhibit a cold dust component. An additional property of dust that can be investigated with submillimetre measurements is the dust emissivity index $\beta$. Dust radiates as a modified Planck function (a `grey-body'), modified by the emissivity term such that $Q_{em}\propto \nu^{\beta}$. Until recently the value of $\beta$ was quite uncertain, with suggested values lying between 1 and 2 (Hildebrand 1983). Recent multi-wavelength studies of galaxies including submillimetre observations, however, have consistently found $1.5\le\beta\le2$ with $\beta$=2 tending to be favoured (Chini et al. 1989; Chini \& Kr\"ugel 1993; Braine et al. 1997; Alton et al. 1998b; Bianchi et al. 1998; Frayer et al. 1999; DE01).
This agrees with the values found in \textit{COBE}/FIRAS studies of the diffuse ISM in the Galaxy (Masi et al. 1995; Reach et al. 1995; Sodroski et al. 1997). \subsection{The scope of this paper} This paper presents the results from the SCUBA Local Universe Galaxy Survey (SLUGS) optically-selected sample. This OS sample is taken from the Center for Astrophysics (CfA) optical redshift survey (Huchra et al. 1983), and includes galaxies drawn from right along the Hubble sequence. In Section~\ref{data-red} we discuss our observation and data reduction techniques. Section~\ref{results} presents the sample and the results. Section~\ref{properties} presents an analysis of the submillimetre properties of the sample. In Section~\ref{lumfun} we present the local submillimetre luminosity and dust mass functions. We assume a Hubble constant \mbox{$H_{0}$=75 km\,s$^{-1}$ Mpc$^{-1}$} throughout. \begin{table*} \centering \begin{minipage}{14.5cm} \caption{\label{fluxtab}\small{850\hbox{\,$\umu$m } flux densities and isothermal SED parameters. (Notes on individual objects are listed in Section~\ref{maps}).}} \begin{tabular}{lllrrrrrllr} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ Name & R.A. & Decl. & cz & $S_{60}$ & $S_{100}$ & $S_{850}$ & $\sigma_{850}$ & $T_{dust}$ & $\beta$ & Type\\ & (J2000) & (J2000) & (km\,s$^{-1}$) & (Jy) & (Jy) & (Jy) & (Jy) & (K) & & \\ \hline UGC 148 & 00 15 51.2 & +16 05 23 & 4213 & 2.21 & 5.04 & 0.055 & 0.012 & 31.6 & 1.4 & 4\\ NGC 99 & 00 23 59.4 & +15 46 14 & 5322 & 0.81 & 1.49 & 0.063 & 0.015 & 41.8 & 0.4 & 6\\ PGC 3563 & 00 59 40.1 & +15 19 51 & 5517 & 0.35 & $^{\displaystyle s}$1.05 & 0.027 & 0.008 & 31.0 & 1.0 & 2\\ NGC 786 & 02 01 24.7 & +15 38 48 & 4520 & 1.09 & 2.46 & 0.066 & 0.019 & 35.2 & 0.8 & 4M\\ NGC 803 & 02 03 44.7 & +16 01 52 & 2101 & 0.69 & 2.84 & 0.093 & 0.019 & 27.4 & 1.1 & 5\\ UGC 5129 & 09 37 57.9 & +25 29 41 & 4059 & 0.27 & 0.92 & $<$0.034 & ... & ... & ... & 1\\ NGC 2954 & 09 40 24.0 & +14 55 22 & 3821 & $<$0.18 & $<$0.59 & $<$0.027 & ... & ... & ... &-5\\ UGC 5342 & 09 56 42.6 & +15 38 15 & 4560 & 0.85 & 1.66 & 0.032 & 0.008 & 36.4 & 0.9 & 4\\ PGC 29536 & 10 09 12.4 & +15 00 19 & 9226 & $<$0.18 & $<$0.52 & $<$0.041 & ... & ... & ... & -5\\ NGC 3209 & 10 20 38.4 & +25 30 18 & 6161 & $<$0.16 & $<$0.65 & $<$0.022 & ... & ... & ... & -5\\ NGC 3270 & 10 31 29.9 & +24 52 10 & 6264 & 0.59 & 2.39 & 0.059 & 0.014 & 26.8 & 1.3 & 3\\ NGC 3323 & 10 39 39.0 & +25 19 22 & 5164 & 1.48 & 3.30 & 0.070 & 0.014 & 34.0 & 1.0 & 5\\ NGC 3689 & 11 28 11.0 & +25 39 40 & 2739 & $^{\displaystyle s}$2.86 & $^{\displaystyle s}$9.70 & 0.101 & 0.017 & 26.8 & 1.7 & 5\\ UGC 6496 & 11 29 51.4 & +24 56 16 & 6277 & ... & ... & $<$0.018 & ... & ... & ... & -2\\ PGC 35952 & 11 37 01.8 & +15 34 14 & 3963 & 0.47 & 1.32 & 0.051 & 0.013 & 32.2 & 0.8 & 4\\ NGC 3799$^{\scriptstyle p}$ & 11 40 09.4 & +15 19 38 & 3312 & U & U & $<$0.268 & ... & ... & ... & 3\\ NGC 3800$^{\scriptstyle p}$ & 11 40 13.5 & +15 20 33 & 3312 & U & U & 0.117 & 0.025 & ... & ... & 3\\ NGC 3812 & 11 41 07.7 & +24 49 18 & 3632 & $<$0.23 & $<$0.56 & $<$0.038 & ... & ... & ... & -5\\ NGC 3815 & 11 41 39.3 & +24 48 02 & 3711 & 0.70 & 1.88 & 0.041 & 0.011 & 31.0 & 1.1 & 2\\ NGC 3920 & 11 50 05.9 & +24 55 12 & 3635 & 0.75 & 1.68 & 0.034 & 0.009 & 34.0 & 1.0 & -2\\ NGC 3987 & 11 57 20.9 & +25 11 43 & 4502 & 4.78 & 15.06 & 0.186 & 0.030 & 27.4 & 1.6 & 3\\ NGC 3997 & 11 57 48.2 & +25 16 14 & 4771 & 1.16 & $^{\displaystyle s}$1.95 & $<$0.023 & ... & ... & ... 
& 3M\\ NGC 4005 & 11 58 10.1 & +25 07 20 & 4469 & U & U & $<$0.015 & ... & ... & ... & 3\\ NGC 4015 & 11 58 42.9 & +25 02 25 & 4341 & 0.25 & $^{\displaystyle s}$0.80 & $<$0.050 & ... & ... & ...& 10M\\ UGC 7115 & 12 08 05.5 & +25 14 14 & 6789 & $<$0.20 & $<$0.68 & 0.051 & 0.011 & ... & ... & -5\\ UGC 7157 & 12 10 14.6 & +25 18 32 & 6019 & $<$0.24 & $<$0.63 & $<$0.032 & ... & ... & ... & -2\\ IC 797 & 12 31 54.7 & +15 07 26 & 2097 & 0.74 & 2.18 & 0.085 & 0.021 & 31.6 & 0.8 & 6\\ IC 800 & 12 33 56.7 & +15 21 16 & 2330 & 0.38 & 1.10 & 0.076 & 0.019 & 34.6 & 0.4 & 5\\ NGC 4712 & 12 49 34.2 & +25 28 12 & 4384 & 0.48 & 2.02 & 0.102 & 0.023 & 28.0 & 0.9 & 4\\ PGC 47122 & 13 27 09.9 & +15 05 42 & 7060 & $<$0.11 & 0.55 & $<$0.035 & ... & ... & ... &-2\\ MRK 1365 & 13 54 31.1 & +15 02 39 & 5534 & 4.20 & 6.11 & 0.032 & 0.009 & 35.2 & 1.6 & -2\\ UGC 8872 & 13 57 18.9 & +15 27 30 & 5529 & $<$0.22 & $<$0.45 & $<$0.021 & ... & ... & ... & -2\\ UGC 8883 & 13 58 04.6 & +15 18 53 & 5587 & 0.45 & 1.19 & $<$0.040 & ... & ... & ... & 4\\ UGC 8902 & 13 59 02.7 & +15 33 56 & 7667 & 1.23 & 3.32 & 0.067 & 0.018 & 30.4 & 1.2 & 3\\ IC 979 & 14 09 32.3 & +14 49 54 & 7719 & $^{\displaystyle s}$$^{\ast}$0.19 & $^{\displaystyle s}$$^{\ast}$0.60 & 0.057 & 0.017 & 34.0$^{\ast}$ & 0.3$^{\ast}$ & 2\\ UGC 9110 & 14 14 13.4 & +15 37 21 & 4644 & U & U & $<$0.046 & ... & ... & ... & 3\\ NGC 5522 & 14 14 50.3 & +15 08 48 & 4573 & 2.06 & 4.05 & 0.072 & 0.014 & 35.8 & 1.0 & 3\\ NGC 5953$\dag$$^{\scriptstyle p}$ & 15 34 32.4 & +15 11 38 & 1965 & U & U & 0.184 & 0.024 & ... & ... & 1 \\ NGC 5954$\dag$$^{\scriptstyle p}$ & 15 34 35.2 & +15 11 54 & 1959 & U & U & 0.112 & 0.019 & ... & ... & 6\\ NGC 5980 & 15 41 30.4 & +15 47 16 & 4092 & 3.45 & 8.37 & 0.253 & 0.043 & 34.0 & 0.8 & 5\\ IC 1174 & 16 05 26.8 & +15 01 31 & 4706 & $<$0.18 & $<$0.32 & 0.025 & 0.009 & ... & ... & 0\\ UGC 10200 & 16 05 45.8 & +41 20 41 & 1972 & 1.41 & 1.67 & $<$0.020 & ... & ... & ...& 2M\\ UGC 10205 & 16 06 40.2 & +30 05 55 & 6556 & 0.39 & 1.54 & 0.058 & 0.015 & 28.0 & 1.0 & 1\\ NGC 6090 & 16 11 40.7 & +52 27 24 & 8785 & 6.66 & 8.94 & 0.091 & 0.015 & 40.6 & 1.1 & 10M\\ NGC 6103 & 16 15 44.6 & +31 57 51 & 9420 & 0.64 & 1.67 & 0.052 & 0.012 & 33.4 & 0.8 & 5\\ NGC 6104 & 16 16 30.6 & +35 42 29 & 8428 & 0.50 & 1.76 & $<$0.033 & ... & ... & ... & 1\\ IC 1211 & 16 16 51.9 & +53 00 22 & 5618 & $<$0.12 & $<$0.53 & 0.028 & 0.009 & ... & ... & -5\\ UGC 10325$\S$ & 16 17 30.6 & +46 05 30 & 5691 & 1.57 & 3.72 & 0.041 & 0.009 & 31.0 & 1.4 & 10M\\ NGC 6127 & 16 19 11.5 & +57 59 03 & 4831 & $<$0.10 & $<$0.30 & 0.086 & 0.020 & ... & ... & -5\\ NGC 6120 & 16 19 48.1 & +37 46 28 & 9170 & 3.99 & 8.03 & 0.065 & 0.011 & 32.2 & 1.5 & 8\\ NGC 6126 & 16 21 27.9 & +36 22 36 & 9759 & $<$0.15 & $<$0.43 & 0.023 & 0.008 & ... & ... & -2\\ NGC 6131 & 16 21 52.2 & +38 55 57 & 5117 & 0.72 & 2.42 & 0.054 & 0.013 & 28.6 & 1.2 & 6\\ NGC 6137 & 16 23 03.1 & +37 55 21 & 9303 & $<$0.18 & $<$0.53 & 0.029 & 0.010 & ... & ... & -5\\ NGC 6146 & 16 25 10.3 & +40 53 34 & 8820 & $<$0.12 & $<$0.48 & 0.028 & 0.007 & ... & ... & -5\\ NGC 6154 & 16 25 30.4 & +49 50 25 & 6015 & $<$0.15 & $<$0.36 & $<$0.040 & ... & ... & ... 
& 1\\ NGC 6155 & 16 26 08.3 & +48 22 01 & 2418 & 1.90 & 5.45 & 0.116 & 0.022 & 29.8 & 1.2 & 6\\ UGC 10407 & 16 28 28.1 & +41 13 05 & 8446 & 1.62 & 3.12 & 0.026 & 0.009 & 32.8 & 1.5 & 10M\\ NGC 6166 & 16 28 38.4 & +39 33 06 & 9100 & $^{\displaystyle s}$0.10 & $^{\displaystyle s}$0.63 & 0.073 & 0.017 & 26.2 & 0.6 & -5\\ NGC 6173 & 16 29 44.8 & +40 48 42 & 8784 & $<$0.17 & $<$0.23 & $<$0.024 & ... & ... & ... & -5\\ NGC 6189 & 16 31 40.9 & +59 37 34 & 5638 & 0.75 & 2.57 & 0.072 & 0.019 & 28.6 & 1.1 & 6\\ NGC 6190 & 16 32 06.7 & +58 26 20 & 3351 & 0.58 & 2.37 & 0.099 & 0.024 & 28.0 & 1.0 & 6\\ \end{tabular} \end{minipage} \end{table*} \begin{table*} \centering \begin{minipage}{14.5cm} \contcaption{} \begin{tabular}{lllrrrrrllr} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ Name & R.A. & Decl. & cz & $S_{60}$ & $S_{100}$ & $S_{850}$ & $\sigma_{850}$ & $T_{dust}$ & $\beta$ & Type\\ & (J2000) & (J2000) & (km\,s$^{-1}$) & (Jy) & (Jy) & (Jy) & (Jy) & (K) & &\\ \hline NGC 6185 & 16 33 17.8 & +35 20 32 & 10301 & 0.17 & 0.56 & $<$0.030 & ... & ... & ... & 1\\ UGC 10486 & 16 37 34.3 & +50 20 44 & 6085 & $<$0.19 & $<$0.60 & $<$0.029 & ... & ... & ... & -3\\ NGC 6196 & 16 37 53.9 & +36 04 23 & 9424 & $<$0.12 & $<$0.44 & $<$0.023 & ... & ... & ... & -3\\ UGC 10500 & 16 38 59.3 & +57 43 27 & 5218 & $^{\displaystyle s}$$^{\ast}$0.16 & $^{\displaystyle s}$$^{\ast}$0.71 & $<$0.028 & ... & ... & ... & 0\\ IC 5090 & 21 11 30.4 & $-$02 01 57 & 9340 & 3.04 & 7.39 & 0.118 & 0.017 & 31.6 & 1.2 & 1\\ IC 1368 & 21 14 12.5 & +02 10 41 & 3912 & 4.03 & 5.80 & 0.047 & 0.011 & 37.6 & 1.3 & 1\\ NGC 7047 & 21 16 27.6 & $-$00 49 35 & 5626 & 0.43 & 1.65 & 0.055 & 0.013 & 28.0 & 1.1 & 3\\ NGC 7081 & 21 31 24.1 & +02 29 29 & 3273 & 1.79 & 3.87 & 0.044 & 0.010 & 32.8 & 1.3 & 3\\ NGC 7280 & 22 26 27.5 & $+$16 08 54 & 1844 & $<$0.12 & $<$0.48 & $<$0.040 & ... & ... & ... & -1\\ NGC 7442 & 22 59 26.5 & $+$15 32 54 & 7268 & 0.78 & 2.22 & 0.046 & 0.009 & 31.0 & 1.1 & 5\\ NGC 7448$\dag$ & 23 00 03.6 & $+$15 58 49 & 2194 & 7.23 & 17.43 & 0.193 & 0.032 & 31.0 & 1.4 & 5\\ NGC 7461 & 23 01 48.3 & $+$15 34 57 & 4272 & $<$0.176 & $<$0.64 & $<$0.022 & ... & ... & ... & -2\\ NGC 7463 & 23 01 51.9 & +15 58 55 & 2341 & U & U & 0.045 & 0.010 & ... & ... & 3M \\ III ZW 093 & 23 07 21.0 & +15 51 11 & 14962 & 0.48 & $<$3.16 & $<$0.026 & ... & ... & ... & 10Z\\ III ZW 095 & 23 12 43.3 & +15 54 12 & 7506 & $<$0.09 & $<$0.80 & $<$0.019 & ... & ... & ... & 10Z\\ UGC 12519 & 23 20 02.7 & +15 57 10 & 4378 & 0.76 & 2.59 & 0.074 & 0.016 & 29.2 & 1.1 & 5 \\ NGC 7653 & 23 24 49.3 & +15 16 32 & 4265 & 1.31 & 4.46 & 0.112 & 0.020 & 28.6 & 1.2 & 3\\ NGC 7691 & 23 32 24.4 & +15 50 52 & 4041 & 0.53 & 1.67 & $<$0.025 & ... & ... & ... & 4\\ NGC 7711 & 23 35 39.3 & +15 18 07 & 4057 & $<$0.15 & $<$0.50 & $<$0.027 & ... & ... & ... & -2\\ NGC 7722 & 23 38 41.2 & +15 57 17 & 4026 & 0.78 & 3.03 & 0.061 & 0.015 & 26.8 & 1.4 & 0\\ \hline \end{tabular}\\ (1) Most commonly used name. \\ (2) Right ascension, J2000 epoch. \\ (3) Declination, J2000 epoch. \\ (4) Recessional velocity taken from NED. [The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.] \\ (5) 60\hbox{\,$\umu$m } flux from the \textit{IRAS}\/ Faint Source Catalogue (Moshir et al. 1990); upper limits listed are measured using SCANPI as described in Section~\ref{iras-fluxes}. 
\\ (6) 100\hbox{\,$\umu$m } flux from the \textit{IRAS}\/ Faint Source Catalogue (Moshir et al. 1990); upper limits listed are measured using SCANPI as described in Section~\ref{iras-fluxes}.\\ (7) 850\hbox{\,$\umu$m } flux (this work). \\ (8) Error on 850\hbox{\,$\umu$m } flux, calculated as described in Section~\ref{errors}. \\ (9) Dust temperature derived from a single-component fit to the 60, 100 and 850\hbox{\,$\umu$m } data points, as described in Section~\ref{sed-fits}. \\ (10) Emissivity index derived from the single-component fit, as described in Section~\ref{sed-fits}. \\ (11) Hubble type (t-type) taken from the LEDA database; we have assigned t=10 to any multiple systems unresolved by \textit{IRAS} or SCUBA (indicated by `10M') and any systems with no type listed in LEDA (indicated by `10Z'; these 2 objects are listed as `compact' sources in NED); all other types marked `M' are listed as multiple systems in LEDA.\\ \smallskip\\ $^{\scriptstyle p}$ Part of a close or interacting pair which was resolved by SCUBA. Fluxes here are the individual galaxy fluxes; fluxes measured for the combined pair are given in Table~\ref{pairstab}.\\ U Unresolved by \textit{IRAS}.\\ $^{\displaystyle s}$ The $\textit{IRAS}$ flux is our own SCANPI measurement (see Section~\ref{iras-fluxes}); any individual comments are listed in Section~\ref{maps}.\\ $^{\ast}$ SCANPI measurements and fitted values should be used with caution (see Section~\ref{iras-fluxes}).\\ $\S$ The coordinates of this object refer to one galaxy (NED01) of the \textit{pair} UGC 10325.\\ $\dag$ Objects are also in the Paper I \textit{IRAS}-selected sample (D00). \end{minipage} \end{table*} \section{Observations and Data Reduction} \label{data-red} \subsection{The sample} \label{sample} This OS sample is taken from the Center for Astrophysics (CfA) optical redshift survey (Huchra et al. 1983), which is a magnitude-limited sample of optically-selected galaxies, complete to \mbox{$m_{B} \leq 14.5$ mag}. It has complete information on magnitude, redshift and morphological type, and also avoids the Galactic plane. The OS sample consists of all galaxies in the CfA sample lying within three arbitrary strips of sky: (i) all declinations for (B1950.0) \mbox{$16.1<\textrm{RA}<21.5$}, (ii) all RAs for \mbox{$15<\textrm{Dec}<16$} and (iii) RAs from \mbox{$9.6<\textrm{RA}<12.8$} with declinations from \mbox{$25<\textrm{Dec}<26$}. We also imposed a lower velocity limit of \mbox{1900\,km\,s$^{-1}$} to try to ensure that the galaxies did not have an angular diameter larger than the field of view of SCUBA. There are 97 galaxies in the CfA survey meeting these selection criteria, and of these we observed 81 (those at convenient positions given our observing schedule). The OS sample covers an area of \mbox{$\sim$\,570} square degrees and is listed in Table~\ref{fluxtab}. Unlike the IRS sample, which contained many interacting pairs (most of which were resolved by SCUBA but not by \textit{IRAS}), the OS sample contains just 2 such pairs. \subsection{Observations} \label{obs} We observed the OS sample galaxies using the SCUBA bolometer array at the 15-m James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii, between December 1997 and January 2001, with a handful of additional observations in February 2003 (to replace bad data obtained in the original observations; see Section~\ref{red}). Observational methods and techniques were similar to those for the IRS sample described in D00. We give a brief description of these below.
The SCUBA camera has 2 bolometer arrays (850\hbox{\,$\umu$m } and 450\,\micron, with 37 and 91 bolometers respectively) which operate simultaneously with a field of view of \mbox{$\sim$\,2.3} arcminutes at 850\hbox{\,$\umu$m } (slightly smaller at 450\,\micron). Beamsizes are measured to be $\sim$15 arcsec at 850\hbox{\,$\umu$m } and $\sim$8 arcsec at 450\,\micron. Our observations were made in `jiggle-map' mode which, for sources smaller than the field of view, is the most efficient mapping mode. Since the arrangement of the bolometers is such that the sky is instantaneously undersampled, and since we observed using both arrays, the secondary mirror was stepped in a 64-point jiggle pattern in order to fully sample the sky. The cancellation of rapid sky variations is provided by the telescope's chopping secondary mirror, operating at 7.8\,Hz. Linear sky gradients and the gradual increase or decrease in sky brightness are compensated for by nodding the telescope to the `off' position every 16 seconds. We used a chop throw of 120 arcsec in azimuth, except where the galaxy had a nearby companion, in which case we used a chop direction which avoided the companion. The zenith opacity $\tau$ was measured by performing regular skydips. The observations were carried out under a wide range of weather conditions, with opacities at 850\hbox{\,$\umu$m } $\tau_{850}$ ranging from 0.12 to 0.52. This means that some galaxies were observed in excellent conditions ($\tau_{850}<0.2$) while others were observed in far less than ideal conditions. As a result we obtained useful 450\hbox{\,$\umu$m } data for only a fraction of our galaxies. This is discussed in more detail in Section~\ref{450data}. Our observations were centred on the coordinates taken from the NASA/IPAC Extragalactic Database (NED). We made regular checks on the pointing and found it to be generally good to $\sim$2 arcsec. The integration times depended on source strength and weather conditions. Since most of the OS sample are relatively faint submillimetre sources we typically used $\sim$12 integrations ($\sim$30 mins), although many sources were observed in poorer weather and so required longer integration times. We calibrated our data by making jiggle maps of Uranus and Mars, or, when these planets were unavailable, of the secondary calibrators CRL 618 and HL Tau. We took the planet fluxes from the JCMT {\small {FLUXES}} program, and CRL 618 and HL Tau were assumed to have fluxes of 4.56 and 2.32 \mbox{Jy beam$^{-1}$} respectively at 850\,\micron. \begin{table*} \centering \begin{minipage}{14.5cm} \caption{\label{pairstab}\small{Combined SCUBA fluxes for pairs unresolved by \textit{IRAS}. (Notes on individual objects are listed in Section~\ref{maps}).}} \begin{tabular}{lllrrrrrllr} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ Name & R.A. & Decl. & cz & $S_{60}$ & $S_{100}$ & $S_{850}$ & $\sigma_{850}$ & $T_{dust}$ & $\beta$ & Type\\ & (J2000) & (J2000) & (km\,s$^{-1}$) & (Jy) & (Jy) & (Jy) & (Jy) & (K) & &\\ \hline NGC 3799/3800 & 11 40 11.4 & +15 20 05 & 3312 & 4.81 & 11.85 & 0.135 & 0.035 & 29.8 & 1.5 & 10M\\ NGC 5953/4 & 15 34 33.7 & +15 11 49 & 1966 & 10.04 & 18.97 & 0.273 & 0.034 & 35.2 & 1.1 & 10M\\ \hline \end{tabular}\\ Note. Columns have the same meanings as in Table~\ref{fluxtab}. 
\\ \end{minipage} \end{table*} \subsection{Data reduction} \label{red} The 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } data was reduced using the standard SCUBA-specific tasks in the {\small {SURF}} package (Jenness \& Lightfoot 1998, 2000; Jenness et al. 2002), where possible via the {\small {XORACDR}} automated data reduction pipeline (Economou et al. 2004). The off-nod position was subtracted from the on-nod in the raw beam-switched data and the data was then flat-fielded and corrected for atmospheric extinction. In order to correct SCUBA data for atmospheric extinction we must accurately know the value of the zenith sky opacity, $\tau$. An error in $\tau$ is less crucial at 850\hbox{\,$\umu$m } if the observation is made in good weather ($\tau_{850}<$0.3) and at low airmass, but in worse weather or at 450\hbox{\,$\umu$m } it can severely affect the measured source flux. $\tau$ is most commonly estimated either by performing a skydip or by extrapolating to the required wavelength (using relations given in the JCMT literature and in Archibald et al. (2002)) from polynomial fits to the continuous measurements of $\tau$ at 225\,GHz made at the nearby Caltech Submillimetre Observatory. Since skydips are measured relatively infrequently, the polynomial fits to the CSO $\tau_{225}$ data are recommended in the JCMT literature as the more reliable way of estimating $\tau$ for both SCUBA arrays. As such, for both 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } data we have wherever possible (the large majority of observations) used the derived CSO opacity at 225\,GHz ($\tau_{cso}$). Where $\tau_{cso}$ values were not available the opacities were derived from 850\hbox{\,$\umu$m } skydip measurements (at 450\hbox{\,$\umu$m } using the $\tau_{850}$-to-$\tau_{450}$ relation described in the JCMT literature and Archibald et al. (2002)). Noisy bolometers were noted but not removed at this stage (flagging a noisy bolometer as `bad' was frequently found to create even worse noise spikes in the final map around the position of the removed bolometer data). Large spikes were removed from the data using standard {\small {SURF}} programs. The nodding and chopping should remove any noise which is correlated between the different bolometers. In reality, since the data was not observed in the driest and most stable conditions, the signal on different bolometers was often highly correlated due to incomplete sky subtraction. In the majority of cases we used the {\small SURF} task {\small REMSKY}, which takes a set of user-specified bolometers to estimate the sky variation as a function of time. More explicitly, in each time step {\small REMSKY} takes the median signal from the specified sky bolometers and subtracts it from the whole array. To ensure that the sky bolometers specified were looking at sky alone and did not contain any source emission we used a rough SCUBA map together with optical (\textit{Digitised Sky Survey}\footnote{The Digitised Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAGW-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present digital form with the permission of these institutions.} (DSS)) images as a guide when choosing the bolometers, though in this sample there are so few bright sources that in the majority of cases all bolometers could be safely used. 
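For illustration, this median sky-removal step can be summarised by the following minimal sketch (Python; this is our own schematic of the approach just described, not the actual {\small SURF} code, and the array layout and function name are assumptions):

\begin{verbatim}
import numpy as np

def remove_sky(data, sky_bolometers):
    # data: array of shape (n_timesteps, n_bolometers)
    # sky_bolometers: indices of bolometers assumed to see blank sky
    # At each time step the median signal of the sky bolometers is
    # taken as the sky level and subtracted from the whole array,
    # as REMSKY does.
    sky = np.median(data[:, sky_bolometers], axis=1)
    return data - sky[:, np.newaxis]
\end{verbatim}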
Even after this step, however, due to the relatively poor conditions in which much of the data was observed the residual sky level was sometimes found to vary linearly across the array, giving a `tilted plane' on the array. Moreover, in a number of cases a noisy `striped' sky (due possibly to some short-term instrumentation problem) was found. Though the {\small SURF} task {\small {REMSKY}} was designed to remove the sky noise, it is relatively simplistic and cannot remove such spatially varying `tilted' or `striped' sky backgrounds. In these cases, as for the IRS sample (D00), we used one of our own programs in place of {\small REMSKY}. In a handful of cases the `striped' sky was so severe that it could not be removed, so these objects were re-observed in February 2003. \begin{table*} \centering \begin{minipage}{11cm} \caption{\label{450tab}\small{450\hbox{\,$\umu$m } flux densities and two-component SED parameters. (Notes on individual objects are listed in Section~\ref{maps}). }} \begin{tabular}{lrrrllrcc} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\ Name & $S_{450}$ & $\sigma_{450}$ & $\frac{S_{450}}{S_{850}}$ & $T_{w}$ & $T_{c}$ & $\frac{N_{c}}{N_{w}}$ & $M_{d2}$ & $L_{fir}$\\ & (Jy) & (Jy) & & (K) & (K) & & (log $M_{\odot}$) & (log $L_{\odot}$)\\ \hline UGC 148$\dag$ & 0.944 & 0.236 & 17.18 & 34 & 18 & 37 & ... & 10.33\\ NGC 99 & 0.490 & 0.182 & 7.73 & 47 & 17 & 542 & 7.72 & 10.08\\ NGC 803$\ddag$ & 0.631 & 0.196 & 6.79 & 33 & 18 & 92 & 7.02 & 9.46\\ NGC 3689$^{\ast}$ & 1.045 & 0.357 & 10.30 & 59 & 23 & 910 & 7.13 & 10.16\\ PGC 35952 & 0.421 & 0.116 & 8.26 & 58 & 18 & 1859 & 7.31 & 9.77\\ NGC 3987 & 1.110 & 0.319 & 5.98 & 44 & 22 & 279 & 7.85 & 10.78\\ IC 979$\S$ & 0.874 & 0.341 & 15.39 & ... & ... & ... & ... & ...\\ NGC 5953/4 & 2.879 & 0.683 & 10.54 & 54 & 21 & 277 & 7.33 & 10.28\\ NGC 5980 & 1.398 & 0.495 & 5.53 & 43 & 18 & 321 & 8.06 & 10.53\\ NGC 6090 & 0.803 & 0.180 & 8.82 & 55 & 22 & 122 & 8.09 & 11.29\\ NGC 6120 & 0.528 & 0.127 & 8.08 & 45 & 24 & 76 & 7.96 & 11.17\\ NGC 6155$^{\ast}$ & 0.381 & 0.135 & 3.30 & 30 & 20 & 7 & 6.92 & 9.80\\ NGC 6190$^{\ast}$ & 0.880 & 0.308 & 8.89 & 56 & 18 & 2684 & 7.16 & 9.85\\ IC 5090 & 1.018 & 0.240 & 8.66 & 52 & 21 & 346 & 8.28 & 11.19\\ IC 1368 & 0.425 & 0.137 & 9.10 & 55 & 23 & 110 & 7.10 & 10.37\\ NGC 7081 & 0.241 & 0.067 & 5.43 & 32 & 20 & 6 & 6.98 & 9.93\\ NGC 7442 & 0.410 & 0.099 & 9.02 & 54 & 20 & 665 & 7.70 & 10.45\\ UGC 12519 & 0.408 & 0.108 & 5.54 & 28 & 17 & 12 & 7.57 & 10.02\\ NGC 7722 & 0.595 & 0.148 & 9.78 & 54 & 20 & 1224 & 7.33 & 10.04\\ \hline \end{tabular}\\ (1) Most commonly used name. \\ (2) 450\hbox{\,$\umu$m } flux (this work). \\ (3) Error on 450\hbox{\,$\umu$m } flux, calculated as described in Section~\ref{errors}. \\ (4) Ratio of 450- to 850-$\micron$ fluxes. \\ (5) Warm temperature using $\beta=2$. \\ (6) Cold temperature using $\beta=2$. \\ (7) Ratio of cold-to-warm dust. \\ (8) Dust mass calculated using parameters in columns (5)--(7). \\ (9) FIR luminosity (40--1000\micron) integrated under the two-component SED. 
\\ $\ast$ Some caution is advised (see Section~\ref{maps}).\\ $\S$ The data could not be fitted with a 2-component model (see Section~\ref{maps}).\\ $\dag$ Not well-fitted by two-component model using 850\hbox{\,$\umu$m } data point; fitted parameters here are from 2-component fit to the 60, 100, 450 and 170\hbox{\,$\umu$m } (ISO) data points (see Section~\ref{maps}).\\ $\ddag$ The data for NGC 803 are also well-fitted by the parameters: \mbox{$T_{w}$=60\,K}, \mbox{$T_{c}$=19\,K}, \mbox{$\frac{N_{c}}{N_{w}}$=2597}, \mbox{log $M_{d2}$=6.99} and \mbox{log $L_{fir}$=9.48} (see Section~\ref{maps}).\\ \end{minipage} \end{table*} Once the effects of the sky were removed the data was despiked again and the final map was produced by re-gridding the data on to a grid of 1-arcsec pixels. Where there were multiple data sets for a given source they were binned together into a co-added final map. In these cases each data set was weighted prior to co-adding using the {\small SURF} task {\small SETBOLWT}, which calculates the standard deviation for each bolometer and then calculates weights relative to the reference bolometer (the central bolometer in the first input map). This method is therefore only suitable if there are no very bright sources present in the central bolometer (if a bright source was present we weighted each data set using the inverse square of its measured average noise). This step also ensures that noisy bolometers contribute to the final map with their correct statistical weight. \subsection{850\hbox{\,\boldmath{$\umu$}m} flux measurement} \label{data-red:flux} The fluxes were measured from the SCUBA maps by choosing a source aperture over which to integrate the flux, such that the signal-to-noise ratio was maximised. The extent of the galaxy in the optical (DSS) images and the extent of the submillimetre source on the S/N map (see Section~\ref{snmaps}) were used to select an aperture that included as much of the submillimetre flux of the galaxy as possible while minimising the amount of sky included. Note that the optical images in Figure~\ref{egmaps} are shown stretched for optimum contrast; however, apertures for flux measurement were drawn for a more modest optical extent, as seen at a standard level of contrast. Conversion of the measured aperture flux in volts to janskys was carried out by measuring the calibrator flux for that night using the same aperture as for the object. The orientation of the aperture (relative to the chop throw) was also kept the same as for the object, as this has a significant effect, particularly for more elliptical apertures. Objects are said to be detected at $>3\sigma$ if either: (a) the peak S/N in the S/N map was $>3\sigma$ or (b) the flux in the aperture was greater than 3 times the noise in that aperture (where the noise is defined as described in Section~\ref{errors}). \subsection{450\hbox{\,\boldmath{$\umu$}m} data} \label{450data} Due to the increased sensitivity to weather conditions at 450\,\micron, sources emitting at 450\hbox{\,$\umu$m } will only be detected if they are relatively bright at 850\,\micron. This, together with the wide range of observing conditions for this sample, meant that we found useful 450\hbox{\,$\umu$m } data for only 19 objects. Where possible the 450\hbox{\,$\umu$m } emission was measured in an aperture the same size as used for the 850\hbox{\,$\umu$m } data. 
In some cases a smaller aperture had to be used for the 450\hbox{\,$\umu$m } data, and these individual cases are discussed in Section~\ref{maps}. \subsection{Error analysis} \label{errors} The error on the flux measurement is made up of three components: \begin{itemize} \item{A background sky subtraction error $\sigma_{sky}$ due to the uncertainty in the sky level.} \item{A shot (Poisson) noise term $\sigma_{shot}$ due to pixel-to-pixel variations within the sky aperture. Unlike in CCD images, the signal in adjoining pixels of SCUBA maps is correlated; this correlated noise depends on a number of factors, including the method by which the data is binned at the data reduction stage. This has been discussed in some detail by D00, who find that a correction factor is required for each array to account for the fact that pixels are correlated; they find the factor to be 8 at 850\hbox{\,$\umu$m } and 4.4 at 450\,\micron.} \item{A calibration error term $\sigma_{cal}$ which for SCUBA observations at 850\hbox{\,$\umu$m } is typically less than 10\%. We have therefore assumed a conservative calibration factor of 10\% at 850\,\micron. The calibration error at 450\hbox{\,$\umu$m } was taken to be 15\%, following DE01.} \end{itemize} The relationships used to calculate the noise terms are as follows: \[ \sigma_{sky}=\sigma_{ms}N_{ap} \] and \[ \sigma_{shot}=8\sigma_{pix}\sqrt{N_{ap}} \qquad \textrm{or} \qquad \sigma_{shot}=4.4\sigma_{pix}\sqrt{N_{ap}} \] for 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } flux measurements respectively. The error in the mean sky $\sigma_{ms}=S.D./\sqrt{n}$, where S.D. is the standard deviation of the mean sky values in \textit{n} apertures placed on off-source regions of the map. ${N_{ap}}$ is the number of pixels in the object aperture; $\sigma_{pix}$ is the mean standard deviation of the pixels within the sky apertures. The total error for each flux measurement is then given by \begin{equation} \sigma_{tot}=(\sigma_{sky}^{2}+\sigma_{shot}^{2}+\sigma_{cal}^{2})^{1/2} \label{eq1} \end{equation} as for the IRS sample. This error analysis is discussed in detail in D00 and DE01. 850\hbox{\,$\umu$m } fluxes were found to have total errors $\sigma_{tot}$ typically in the range \mbox{15--30\,\%}, and 450\hbox{\,$\umu$m } fluxes typically in the range \mbox{25--35\,\%}. Note that the $\sigma_{tot}$ used to determine whether a source was detected at the 3$\sigma$ level is defined as in Equation~\ref{eq1} but without the calibration error term. \subsection{S/N maps} \label{snmaps} Unlike the IRS sources, the OS sources were not selected on the basis of their dust content. Many of the OS sources, especially the early types, are close to the limit of detection. Also, it is often hard to assess whether a source is detected, or whether some feature of the source is real, due to the variability of the noise across the array. This is due both to an increase in the noise towards the edge of each map, caused by a decrease in the number of bolometers sampling each sky point, and to individual noisy bolometers. For this reason we used the method described in D00 to generate artificial noisemaps, which we used with our real maps to produce signal-to-noise maps. The real maps and the artificial maps were first smoothed (using a 12-pixel FWHM) before creating the S/N map. We used these S/N maps to aid in choosing the aperture for measuring the 850\hbox{\,$\umu$m } flux (Section~\ref{data-red:flux}). 
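As a concrete illustration of the error budget defined in Section~\ref{errors}, the following minimal sketch (Python; the function and variable names are ours, with the correlated-noise factors and calibration fractions quoted above) combines the three terms:

\begin{verbatim}
import numpy as np

def flux_error(sky_means, sigma_pix, n_ap, flux,
               wavelength=850, with_cal=True):
    # sky_means: mean sky values in n off-source apertures
    # sigma_pix: mean std dev of pixels within the sky apertures
    # n_ap:      number of pixels in the object aperture
    # flux:      measured aperture flux (Jy)
    sigma_ms = np.std(sky_means) / np.sqrt(len(sky_means))
    sigma_sky = sigma_ms * n_ap
    # correlated-noise factors from D00: 8 at 850um, 4.4 at 450um
    corr = 8.0 if wavelength == 850 else 4.4
    sigma_shot = corr * sigma_pix * np.sqrt(n_ap)
    # calibration term: 10% at 850um, 15% at 450um; omitted when
    # testing the 3-sigma detection criterion
    cal_frac = 0.10 if wavelength == 850 else 0.15
    sigma_cal = cal_frac * flux if with_cal else 0.0
    return np.sqrt(sigma_sky**2 + sigma_shot**2 + sigma_cal**2)
\end{verbatim}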
We have also presented S/N maps of each source (see Section~\ref{results}), as this makes it easier to assess the reality of any features in the maps. \subsection{\textit{IRAS} fluxes} \label{iras-fluxes} \textit{IRAS} 100\hbox{\,$\umu$m } and 60\hbox{\,$\umu$m } fluxes, where available, were taken from the \textit{IRAS Faint Source Catalogue} (Moshir et al. 1990; hereafter FSC) via the NED database. Where literature fluxes were unavailable the NASA/IPAC Infrared Science Archive (IRSA) SCANPI (previously ADDSCAN) scan coadd tool was used to measure a flux from the \textit{IRAS} survey data. The small number of SCANPI fluxes are indicated by `s' in Table~\ref{fluxtab}, and any special cases are discussed individually in Section~\ref{maps}. We take SCANPI fluxes to be detections if the measurements are formal detections at $>4.5\sigma$ at 100\hbox{\,$\umu$m } or $>4\sigma$ at 60\,\micron, which Cox et al. (1995) conclude are actually detections at the 98\% confidence level. Otherwise we give a 98\% confidence upper limit (4.5$\sigma$ at 100\hbox{\,$\umu$m } or 4$\sigma$ at 60\,\micron) using the 1$\sigma$ error found from SCANPI (again following Cox et al. (1995)). If both fluxes are SCANPI measurements we mark the subsequent fitted values by `$\ast$' if there is any doubt as to their viability (for example possible source confusion, confusion with galactic cirrus, or no literature \textit{IRAS}\/ fluxes in NED for either band). \section{Results} \label{results} We detected 52 of the 81 galaxies in the OS sample. Table~\ref{fluxtab} lists the 850\hbox{\,$\umu$m } fluxes and other parameters. For interacting systems resolved by SCUBA but not resolved by \textit{IRAS} the 850\hbox{\,$\umu$m } fluxes given are for the individual galaxies; the 850\hbox{\,$\umu$m } fluxes measured for the combined system are listed in Table~\ref{pairstab} along with the \textit{IRAS} fluxes. Table~\ref{450tab} lists the 450\hbox{\,$\umu$m } fluxes for the 19 galaxies which are also detected at the shorter wavelength. The galaxies detected in the OS sample are shown in Figure~\ref{egmaps}, with our 850\hbox{\,$\umu$m } SCUBA S/N maps overlaid onto optical (DSS) images. Comments on the individual maps are given in Section~\ref{maps}. The 850\hbox{\,$\umu$m } images have several common features. Firstly, we find that many spiral galaxies exhibit two peaks of 850\hbox{\,$\umu$m } emission, seemingly coincident with the spiral arms. This is most obvious for the more face-on galaxies (for example NGC 99 and NGC 7442), but it is also seen for more edge-on spirals (e.g. NGC 7047 and UGC 12519). This `two-peak' morphology is not seen for all the spirals, however. Some, for example NGC 3689, are core-dominated and exhibit a single central peak of submillimetre emission, while others (NGC 6131 and NGC 6189 are clear examples) exhibit a combination of these features, with both a bright nucleus and peaks coincident with the spiral arms. In a number of cases the 850\hbox{\,$\umu$m } peaks clearly follow a prominent dust lane (e.g. NGC 3987 and NGC 7722). These results are consistent with the results of numerous mm/submm studies. For example, Sievers et al. (1994) observe 3 distinct peaks in NGC 3627 and note that the two outer peaks are coincident with the transition region between the central bulge and the spiral arms -- they also observe dust emission tracing the dust lanes of the spiral arms; Gu\'elin et al. (1995), Bianchi et al. (2000), Hippelein et al. (2003) and Meijerink et al. 
(2005) observe a bright nucleus together with extended dust emission tracing the spiral arms. Many of the features seen in our OS sample 850\hbox{\,$\umu$m } maps are also found by Stevens et al. (2005) in their SCUBA observations of nearby spirals. Secondly, we find that a number of galaxies appear to be extended at 850\hbox{\,$\umu$m } compared to the optical emission seen in the DSS images. In many cases this extended 850\hbox{\,$\umu$m } emission appears to correspond to very faint optical features, as can be seen for NGC 7081 and NGC 7442 in Figure~\ref{egmaps}. In order to investigate this further we have already carried out follow-up optical imaging for $\sim$ half the sample detected at 850\,\micron, to obtain deeper images than available from the DSS. The results and discussion of this deeper optical data will be the subject of a separate paper (Vlahakis et al., in preparation). \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_ugc148.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc99.ps}\\[-1ex] \hfill \vfill \includegraphics[angle=0, width=8cm]{fig1_pgc3563.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc786.ps}\\[-1ex] \hfill \vfill \includegraphics[angle=0, width=8cm]{fig1_ngc803.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ugc5342.ps} \hfill \caption{\label{egmaps}{The optically-selected SLUGS: 850\hbox{\,$\umu$m } SCUBA S/N maps (produced as described in Section~\ref{snmaps}; 1$\sigma$ contours) overlaid onto DSS optical images ($2\arcmin\!\times\!2\arcmin$, except for NGC 803 and NGC 6155 which are $3\arcmin\!\times\!3\arcmin$). (Optical images are shown here with a contrast that optimises the optical features, however when used as a guide for drawing SCUBA flux measurement apertures a more conservative stretch was applied).}} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_ngc3270.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc3323.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc3689.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_pgc35952.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc3800.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc3815.ps} \hfill \contcaption{} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_ngc3920.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc3987.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ugc7115.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ic797.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ic800.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc4712.ps} \hfill \contcaption{} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_mrk1365.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ugc8902.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ic979.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc5522.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc5953-4.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc5980.ps} \hfill \contcaption{} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_ic1174.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ugc10205.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6090.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6103.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ic1211.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ugc10325.ps} 
\hfill \contcaption{} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_ngc6127.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6120.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6126.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6131.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6137.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6146.ps} \hfill \contcaption{} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_ngc6155.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ugc10407.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6166.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6189.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc6190.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ic5090.ps} \hfill \contcaption{} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_ic1368.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc7047.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc7081.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc7442.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc7463.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ugc12519.ps} \vfill \contcaption{} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=0, width=8cm]{fig1_ngc7653.ps} \hfill \includegraphics[angle=0, width=8cm]{fig1_ngc7722.ps} \hfill \contcaption{} \end{center} \end{figure*} \subsection{Notes on individual objects} \label{maps} In the following discussion of individual objects we note that since the number of bolometers sampling each sky point decreases towards the edges of the submillimetre maps the noise increases towards the edge of the maps. Although the S/N maps in Figure~\ref{egmaps} were produced using artificial noisemaps (Section~\ref{snmaps}) which should normally account for this effect there are certain circumstances, such as a `tilted-sky' (see Section~\ref{red}) or the very noisiest bolometers, where residual `noisy' features may remain in the S/N maps. This means that any submillimetre emission in Figure~\ref{egmaps} seen beyond the main optical extent (and away from the centre of the map) should be regarded with some caution. However, in order to aid distinction between probable residual noisy features in the S/N map and potential extended submillimetre emission we have made a thorough investigation of each individual map. In the following discussion, unless otherwise stated we have found all \mbox{$\ge$\,2$\sigma$} submillimetre peaks away from the main optical galaxy to be associated with noisy bolometers or a tilted sky. \textbf{UGC 148}. Data points for this object are not well-fitted by the two-component dust model (Section~\ref{sed-fits}), probably due to the 850\hbox{\,$\umu$m } flux having been underestimated -- the 850\hbox{\,$\umu$m } S/N contours shown in Figure~\ref{egmaps} for this object show evidence of a residual tilted sky plane (the sky is more positive on one side of the map than the other), suggesting that sky removal techniques may have been inadequate in this case and that therefore the source flux may have been under- (or over-) estimated. Also, since the E-NE part of the galaxy is coincident with noisy bolometers in the 850\hbox{\,$\umu$m } map the flux-measurement aperture was drawn to avoid this region, so the 850\hbox{\,$\umu$m } flux may be underestimated. 
However, an additional data point at 170\hbox{\,$\umu$m } (ISO) is available from the literature (Stickel et al. 2000, 2004). Using the 60, 100, 170 and 450\hbox{\,$\umu$m } fluxes we find that the data points are well-fitted by the two-component dust model (we take an average of all 170\hbox{\,$\umu$m } fluxes available, see Section~\ref{sed-fits}), and these results are listed in Table~\ref{450tab} and the SED is shown in Figure~\ref{2compSEDfig}. \textbf{NGC 99}. The submillimetre emission follows the spiral arm structure. The 2$\sigma$ peak to the NE of the galaxy is not associated with any noisy bolometers. \textbf{NGC 786}. The submillimetre emission to the SE of the galaxy is not associated with any noisy bolometers but we note that this object was observed in \textit{very} poor weather. \textbf{NGC 803}. None of the submillimetre peaks are associated with noisy bolometers. Due to the high ratio of $S_{25}/S_{60}$ good two-component SED fits (Section~\ref{sed-fits}) to the 4 data points (60, 100, 450 and 850\,\micron) can be achieved for two quite different values of the warm component temperature. In addition to the parameters listed in Table~\ref{450tab} a good fit is also found with the following parameters: $T_{w}$=60\,K, $T_{c}$=19\,K, \mbox{$\frac{N_{c}}{N_{w}}$=2597}, \mbox{log $M_{d2}$=6.99} and \mbox{log $L_{fir}$=9.48}. \\[-2.2ex] \textbf{UGC 5342}. This observation had a tilted sky. \textbf{NGC 3270}. All the submillimetre emission away from the main optical extent, and the emission to the N of the galaxy, is associated with noisy bolometers. This observation also suffered from a tilted sky, which may explain much of the submillimetre emission in the N part of the map. However, the emission towards the centre of the map, coinciding with the main optical galaxy, is not associated with any noisy bolometers. This emission seems to occur where the galaxy bulge ends and the inter-arm region begins, as was found for NGC 3627 by Sievers et al. (1994). \textbf{NGC 3689}. Very few scans were available via SCANPI, so $\textit{IRAS}$ fluxes (and corresponding fitted parameters) for this object should be used with caution. \textbf{PGC 35952}. The submillimetre emission to the S and SW is not associated with any noisy bolometers. The submillimetre peaks coincident with the main optical extent of the galaxy appear to follow the spiral arm structure, as for NGC 6131, and it is possible the extended submillimetre emission to the S relates to the very extended faint spiral arms seen in the optical. \textbf{NGC 3799/3800}. NGC 3799 and NGC 3800 were observed in separate maps. NGC 3799 individually is not detected at the 3$\sigma$ level (although we measure flux at the 2$\sigma$ level). For NGC 3800 (shown in Figure~\ref{egmaps}) most of the integrations for the bolometers to the S of the map were unusable, and consequently this part of the map is very much noisier. Generally this observation is very noisy, and especially bad at 450\,\micron; thus no 450\hbox{\,$\umu$m } flux is available. In fact only the main region of submillimetre emission at the centre of the map is in an area free from noisy bolometers, and it is this region over which we have measured the submillimetre flux of the galaxy. The S$_{850}$ listed for NGC 3799/3800 in Table~\ref{pairstab} is a conservative measurement of the 850\hbox{\,$\umu$m } emission from the system, the sum of the separately measured fluxes from each of the two component galaxies. 
In coadding the two maps there appears to be a `bridge' of 850\hbox{\,$\umu$m } emission between the two galaxies, consistent with emission seen in the optical (NGC 3799 is to the SW of NGC 3800 in Figure~\ref{egmaps}). However, since this region of the map has several noisy bolometers we only measure fluxes for the main optical extent of the galaxies. \textbf{NGC 3815}. The submillimetre emission to the NE and W of the galaxy is associated with noisy bolometers. However, the arm-like submillimetre structures seen extending from the galaxy to the N and S are \textit{not}. Both of these `arms' extend in the direction of faint optical features seen in the DSS and 2MASS images. The optical images also show evidence of extended optical emission around the main optical extent (just visible to the NE in Figure~\ref{egmaps}), but 2MASS (JHK) images show a band of emission stretching E-W between two nearby galaxies on either side of NGC 3815. It seems clear that some kind of interaction is taking place in this system, and it is therefore plausible that there is also significantly extended submillimetre emission. \textbf{NGC 3920}. The submillimetre emission to the E and S is not associated with any noisy bolometers. \textbf{NGC 3987}. This edge-on galaxy has a prominent dust lane in the optical. Though the submillimetre emission follows the dust lane, it is slightly offset from it. A similar result was found for another edge-on spiral by Stevens et al. (2005), who conclude that this is simply an effect of the inclination of the galaxy on the sky. \textbf{NGC 3997}. The FSC gives an upper limit \mbox{$<$3.101\,Jy} for S$_{100}$, likely due to possible source confusion with NGC 3993. The S$_{100}$ we measure with SCANPI should therefore be used with some caution. \textbf{NGC 4005}. This object is detected at 850\hbox{\,$\umu$m } at only the 2.5$\sigma$ level. It is unresolved by $\textit{IRAS}$ and may be confused with $\textit{IRAS}$ source IRASF11554+2524 (NGC 4000). \textbf{UGC 7115}. With the exception of the submillimetre emission to the SE, none of the submillimetre emission in this map is associated with noisy bolometers. However, we found this SCUBA observation to have a tilted sky. We estimate that as much as 80\% of the 850\hbox{\,$\umu$m } flux from this elliptical may be due to synchrotron contamination from a radio source associated with the galaxy (Vlahakis et al., in prep.). \textbf{IC 800}. The submillimetre emission to the E of this galaxy corresponds to a region of the map which is only slightly noisy, making it unlikely that any residual features of this noise remain in the S/N map (Section~\ref{snmaps}). However, we also note that this object was observed in poor weather. \textbf{NGC 4712}. None of the submillimetre emission is associated with noisy bolometers, with the exception of the 2$\sigma$ peak closest to the galaxy in the arm-like structure extending to the E. However, we note that this arm-like feature, though faint, is also seen in the optical (though not reproduced in the optical image in Figure~\ref{egmaps}); it appears to originate from the main galaxy extent, where there is evidently a significant amount of dust obscuration. \textbf{UGC 8902}. The regions of submillimetre emission to the E, far SW and far S are all associated with noisy bolometers. However, the 4$\sigma$ submillimetre peak lying to the S/SE beyond the main optical extent, at a similar declination to the small galaxy to the SE, is not associated with any noisy bolometers. 
This submillimetre emission is consistent with the fact that the overall emission associated with the galaxy is offset to the S/SE with respect to the optical. We also note that this region in the optical contains a number of faint condensations in the direction of the small galaxy to the SE. \textbf{IC 979}. Although this galaxy is detected with relatively low S/N at 850\hbox{\,$\umu$m } it is also detected at 450\,\micron. We allocate a higher 450\hbox{\,$\umu$m } calibration error (25\%) for this source, since there were no good 450\hbox{\,$\umu$m } calibrator observations that night (calibration was achieved by taking the mean results from a number of calibrators observed that and the previous night). Note also that no two-component fit could be made to the data since the 450\hbox{\,$\umu$m } data point is higher than the 100\hbox{\,$\umu$m } value, possibly due to the problems with calibrating the 450\hbox{\,$\umu$m } data but more likely due to an underestimate of the 100\hbox{\,$\umu$m } flux. We note this object was observed in poor weather. \textbf{UGC 9110}. There appears to be flux present at both 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } at the 2$\sigma$ level, but the maps are very noisy and several integrations were unusable, most likely due to unstable and deteriorating weather conditions during the observation. This object is unresolved by \textit{IRAS}: \textit{IRAS}\/ source IRASF14119+1551 (FSC fluxes \mbox{S$_{100}$=2.341\,Jy} and \mbox{S$_{60}$=0.802\,Jy}) is likely a blend of UGC 9110 and companion CGCG103-124 (Condon, Cotton \& Broderick 2002). \textbf{NGC 5522}. The 850\hbox{\,$\umu$m } emission to the SE of NGC 5522 is associated with a region of the map which is slightly noisy and where there are a number of spikes in the data. We note that this observation was carried out in poor weather. \begin{figure*} \begin{center} \subfigure[UGC 148: T=(34,18), n=37]{ \includegraphics[angle=270, width=5.5cm]{fig2_ugc148.ps}} \subfigure[NGC 99: T=(47,17), n=542]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc99.ps}} \subfigure[NGC 803: T=(33,18), n=92 (shown) \textit{or} T=(60,19), n=2597]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc803.ps}} \subfigure[NGC 3689: T=(59,23), n=910]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc3689.ps}} \subfigure[PGC 35952: T=(58,18), n=1859]{ \includegraphics[angle=270, width=5.5cm]{fig2_pgc35952.ps}} \subfigure[NGC 3987: T=(44,22), n=279]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc3987.ps}} \subfigure[NGC 5953/4: T=(54,21), n=277]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc5953-4.ps}} \subfigure[NGC 5980: T=(43,18), n=321]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc5980.ps}} \subfigure[NGC 6090: T=(55,22), n=122]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc6090.ps}} \subfigure[NGC 6120: T=(45,24), n=76]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc6120.ps}} \subfigure[NGC 6155: T=(30,20), n=7]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc6155.ps}} \subfigure[NGC 6190: T=(56,18), n=2684]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc6190.ps}} \caption{\label{2compSEDfig}{Best-fitting two-component SEDs assuming $\beta=2$, fitted to the 60, 100, 450 and 850\hbox{\,$\umu$m } fluxes (with the exception of (a), see Section~\ref{maps}). Solid lines represent the composite two-component SED and dot-dash lines indicate the warm and cold components. Any additional 170\hbox{\,$\umu$m } (ISO) fluxes from the literature (Stickel et al. 2000, 2004) are also plotted, though (with the exception of (a)) not fitted. 
Note, captions show the fitted parameters as listed in Table~\ref{450tab}, so may be the averaged values (see Section~\ref{sed-fits}).}} \end{center} \end{figure*} \begin{figure*} \begin{center} \setcounter{subfigure}{12} \subfigure[IC 5090: T=(52,21), n=346]{ \includegraphics[angle=270, width=5.5cm]{fig2_ic5090.ps}} \subfigure[IC 1368: T=(55,23), n=110]{ \includegraphics[angle=270, width=5.5cm]{fig2_ic1368.ps}} \subfigure[NGC 7081: T=(32,20), n=6]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc7081.ps}} \subfigure[NGC 7442: T=(54,20), n=665]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc7442.ps}}\hfill \subfigure[UGC 12519: T=(28,17), n=12]{ \includegraphics[angle=270, width=5.5cm]{fig2_ugc12519.ps}}\hfill \subfigure[NGC 7722: T=(54,20), n=1224]{ \includegraphics[angle=270, width=5.5cm]{fig2_ngc7722.ps}} \hfill \contcaption{} \end{center} \end{figure*} \textbf{NGC 5953/4} is also in the IRS SLUGS sample. While D00 used colour-corrected \textit{IRAS} fluxes as listed in the \textit{IRAS} BGS (from Soifer et al. (1989)) we present here, as for all the OS sample, fluxes from the \textit{IRAS} FSC. \textbf{NGC 5980}. This observation suffered from a tilted sky, which potentially explains the extended 850\hbox{\,$\umu$m } emission to the W of the galaxy. The aperture used to measure the 450\hbox{\,$\umu$m } flux was smaller than at 850\hbox{\,$\umu$m } in order to avoid a noisy bolometer, thus source flux at 450\hbox{\,$\umu$m } may be underestimated. \textbf{IC 1174}. This source is \textit{just} detected at 850\hbox{\,$\umu$m } at the 3$\sigma$ level. \textbf{UGC 10205}. The 850\hbox{\,$\umu$m } map in Figure~\ref{egmaps} is a coadd of two observations. Since emission at the optical galaxy position is very clear in one observation (both at 850\hbox{\,$\umu$m } \textit{and} 450\,\micron) and not in the other observation, and since we find no explanation for this, we simply coadd the two observations (Section~\ref{obs}). The submillimetre emission coincident with the main optical galaxy extent, and also the peak to the S, are not associated with any noisy bolometers. Peaks to the N and W of the galaxy lie in a region of the map which is slightly noisy. We note that this observation was carried out in less than ideal weather. \textbf{NGC 6090} is a closely interacting/merging pair, and is also in the IRS SLUGS sample. \textbf{NGC 6103}. The submillimetre emission to the S of the galaxy is not associated with any noisy bolometers. This region contains a number of faint features seen in optical (DSS) and 2MASS images. We note that this object was observed in less than ideal weather. \textbf{IC 1211}. We find in the literature no known radio sources associated with this elliptical galaxy (NVSS 1.4GHz 3$\sigma$ upper limit is \mbox{$<$1.2\,mJy}), and therefore cannot attribute the 850\hbox{\,$\umu$m } flux detected here to contamination from synchrotron radiation. \textbf{UGC 10325 (NED01)} is one galaxy of the pair \mbox{UGC 10325}. The SCUBA map is centred on this galaxy (NED01), but NED02 can be seen at the SE edge of the DSS image in Figure~\ref{egmaps}. Thus all fluxes given are for the individual galaxy \mbox{UGC 10325 NED01}. \textbf{NGC 6127}. We find in the literature no known radio sources associated with this elliptical galaxy (NVSS 1.4GHz 3$\sigma$ upper limit is $<$1.2\,mJy), and therefore cannot attribute the 850\hbox{\,$\umu$m } flux detected here to contamination from synchrotron radiation. 
The 4$\sigma$ submillimetre peak to the W of the galaxy, coincident with a knot in the optical, is not associated with any noisy bolometers. \textbf{NGC 6120}. The submillimetre emission to the W of the galaxy, and at the S of the map, is associated with noisy bolometers. \textbf{NGC 6126}. The submillimetre source (which we measured as a point source) is offset to the S of the optical extent of the galaxy. We note that, at minimum contrast, a small satellite/companion object can be seen in this region in the DSS and 2MASS images. The 3$\sigma$ peak to the S of the map is not associated with any noisy bolometers and is coincident with a small object visible in the optical. This observation, however, was carried out in poor weather. \textbf{NGC 6131}. The submillimetre emission to the very NW of the galaxy (beyond the main optical extent) may be associated with a noisy bolometer. \textbf{NGC 6137}. We estimate that $\sim$\,20\% of the 850\hbox{\,$\umu$m } flux from this elliptical galaxy could be due to synchrotron contamination from a radio source associated with the galaxy (Vlahakis et al., in prep.). Although only the submillimetre emission to the W of the galaxy coincides with a noisy bolometer we note that this observation had a tilted sky. \textbf{NGC 6146}. We estimate that as much as 80\% of the 850\hbox{\,$\umu$m } flux from this elliptical may be due to synchrotron contamination from a radio source associated with the galaxy (Vlahakis et al., in prep.). \textbf{NGC 6155}. The submillimetre map shows extended emission to the S and SE of the galaxy at 850\,\micron, coincident with a number of small galaxies/condensations in the optical. M\'arquez et al. (1999) find that one of the spiral arms in this galaxy is directed towards the SE. None of the submillimetre peaks in this map are associated with noisy bolometers. A large aperture was used to measure all the flux associated with this object, and these results are listed in Table~\ref{fluxtab}. However, at 450\hbox{\,$\umu$m } any flux appears confined to the main optical extent (though the map at 450\hbox{\,$\umu$m } is very noisy), and thus the flux measurement at 450\hbox{\,$\umu$m } was made using a smaller aperture. Using these values of the 850\hbox{\,$\umu$m } and 450\hbox{\,$\umu$m } flux we found that a two-component SED could not be fitted (Section~\ref{sed-fits}); the $S_{450}/S_{850}$ ratio is simply too low, most likely because we have measured extended emission at 850\,\micron. Thus we also measured $S_{850}$ using a smaller aperture, the same size as that used at 450\,\micron, and find \mbox{S$_{850}=0.069\pm0.013$\,Jy}. For this smaller aperture we find that a two-component model can just be fitted to the data, and we list those parameters in Table~\ref{450tab}. \textbf{NGC 6166} is an elliptical and is located in a very busy field -- it is the dominant galaxy in the cluster Abell 2199. The presence of dust lanes is well documented in the literature. We note that our SCANPI measurements are in good agreement with those of Knapp et al. (1989). 
Using all available radio fluxes from the literature we estimate that as little as 4\% or as much as 100\% of the 850\hbox{\,$\umu$m } flux from this elliptical may be due to synchrotron contamination from a radio source associated with the galaxy (depending on whether a spectral index is assumed constant over the whole galaxy or whether it is assumed to have a flatter core) -- this is a preliminary analysis, and discussion of this and the other five ellipticals detected in the OS sample will be the subject of a separate paper (Vlahakis et al., in prep.). \textbf{NGC 6173}. We measure \mbox{S$_{100}$=0.20\,Jy} with SCANPI but the detection is unconvincing since the coadds do not agree. Therefore we give an upper limit at 100\hbox{\,$\umu$m } in Table~\ref{fluxtab}. \textbf{NGC 6190}. Some of the data for this object was very noisy and unusable. Consequently the remaining data may not be reliable. The submillimetre emission to the W of the galaxy lies in a region where there is a noisy bolometer in the 850\hbox{\,$\umu$m } flux map. Thus the apertures used also unavoidably encompass some noisy bolometers, particularly at 450\,\micron, and results for this object should be used with caution. However, the rest of the 850\hbox{\,$\umu$m } emission in the map is not associated with any noisy bolometers, so while the flux measurements may be unreliable this does not apply to the emission extent, which appears to follow the outer spiral arm structure. \textbf{NGC 7081}. The submillimetre emission to the E and W of this galaxy is not associated with any noisy bolometers. The emission to the N and SE is coincident with regions of the map which are only slightly noisy, and since this observation was carried out in very good weather it is unlikely that any residual features of this noise remain in the S/N map (Section~\ref{snmaps}). Though from the optical DSS image only the central region of the galaxy (coincident with the main submillimetre peak) is clearly visible, there is evidence that this spiral has a very faint extended spiral arm structure. This is confirmed by optical images from SuperCOSMOS which clearly show very knotty and irregular faint spiral arms coincident with the peaks of submillimetre emission to the N and W of the galaxy. \textbf{NGC 7442}. The 3$\sigma$ submillimetre peak to the NW of the main optical extent is coincident with faint optical knots and (unlike the 2$\sigma$ peaks elsewhere in the submillimetre map) is not associated with any noisy bolometers. \textbf{NGC 7463}. This galaxy is part of a triple system with NGC 7464 (to the S of NGC 7463) and NGC 7465 (not shown in Figure~\ref{egmaps}). At 850\hbox{\,$\umu$m } we clearly detect emission from both NGC 7463 and NGC 7464, which seem to be joined by a bridge of submillimetre emission. The flux listed in Table~\ref{fluxtab} is for NGC 7463 alone, measured in an aperture corresponding to its main optical extent. Unfortunately a very noisy bolometer to the SE prevents us from measuring the flux from the eastern half of NGC 7464, but excluding this region we measure a flux for the pair of \mbox{0.051$\pm$0.012\,Jy}, though this is obviously a lower limit. An $\textit{IRAS}$ source is associated with NGC 7465, which is resolved from the other members of the system at 60\hbox{\,$\umu$m } (HIRES; Aumann, Fowler \& Melnyk 1990). Dust properties of this system (using SCUBA data observed as part of SLUGS) are studied in detail by Thomas et al. (2002). \textbf{UGC 12519}. 
The 850\hbox{\,$\umu$m } emission to the NE of this galaxy is coincident with a number of small objects seen in the optical and is not associated with any noisy bolometers. Although UGC 12519 is also detected at 450\hbox{\,$\umu$m } the slightly smaller field of view of the short array means these NE objects lie just outside the 450\hbox{\,$\umu$m } map. \textbf{NGC 7722}. Along with a very high ratio of cold-to-warm dust this object also has a very prominent dust lane, extending over most of the NE `half' of the galaxy. The 850\hbox{\,$\umu$m } emission clearly follows the dust lane evident in the optical. The 2$\sigma$ submillimetre peak to the NW of the galaxy is not associated with any noisy bolometers. \subsection{Spectral fits} \label{sed-fits} In this section we describe the dust models we fit to the spectral energy distributions (SEDs) of the OS sample galaxies and present the results of these fits. Comparison of the results of the OS and IRS samples is discussed in Section~\ref{properties}. D00 found that for the IRS sample the 60\,\micron, 100\hbox{\,$\umu$m } and 850\hbox{\,$\umu$m } fluxes could be fitted by a single-temperature dust model. However, with the addition of the 450\hbox{\,$\umu$m } data (DE01) they found that a single dust emission temperature no longer gives an adequate fit to the data, and that in fact two temperature components are needed, in line with the paradigm that there are two main dust components in galaxies (Section~\ref{cold-dust}). For the OS galaxies we have fitted a two-component dust model where there is a 450\hbox{\,$\umu$m } flux available. Since only 19 of the galaxies have 450\hbox{\,$\umu$m } flux densities we have also fitted an isothermal model to the data for all the galaxies in the OS sample which were detected at 850\,\micron. We fitted two-component dust SEDs to the 60\,\micron, 100\hbox{\,$\umu$m } (\textit{IRAS}), 450\hbox{\,$\umu$m } \& 850\hbox{\,$\umu$m } (SCUBA) fluxes, by minimising the sum of the $\chi^2$ residuals. This two-component model expresses the emission at a particular frequency as the sum of two modified Planck functions (`grey-bodies'), each having a different characteristic temperature, such that \begin{equation} \label{eq:2comp} S_{\nu}=N_{w} \times \nu^{\beta}B(\nu,T_{w})+ N_{c} \times \nu^{\beta}B(\nu,T_{c}) \end{equation} for the optically thin regime. Here $N_{w}$ and $N_{c}$ represent the relative masses in the warm and cold components, $T_{w}$ and $T_{c}$ the temperatures, $B(\nu,T)$ the Planck function, and $\beta$ the dust emissivity index. DE01 used the high value for the ratio of $S_{450}/S_{850}$ and the tight correlation between $S_{60}/S_{450}$ and $S_{60}/S_{850}$ for the IRS galaxies to argue that $\beta\approx2$. The OS galaxies follow a similar tight correlation (Section~\ref{prop:ir-opt}). For the OS sample we find the mean $S_{450}/S_{850}$=8.6 with $\sigma$=3.3, which is slightly higher than found for the IRS sample (where $S_{450}/S_{850}$=7.9 with $\sigma$=1.6) and with a slightly less tight distribution (though still consistent with being produced by the uncertainties in the fluxes). Both the OS and IRS values are somewhat higher than that found for the Stevens et al. (2005) sample of 14 local spiral galaxies, where the mean $S_{450}/S_{850}$=5.9 and $\sigma$=1.0. Since the OS galaxies have a similar high value for this ratio to the IRS galaxies we follow DE01 in assuming $\beta$=2. 
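The fitting procedure can be illustrated with a minimal sketch (Python; the flux values are purely illustrative rather than taken from our tables, and the temperature grids and positivity checks are our own choices -- for fixed temperatures the model of Equation~\ref{eq:2comp} is linear in $N_{w}$ and $N_{c}$, so those can be solved for directly; the real fits additionally constrain $T_{w}$ with the \textit{IRAS} 25\hbox{\,$\umu$m } flux, as described below):

\begin{verbatim}
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8           # SI units

def planck(nu, T):
    return 2.0*h*nu**3 / c**2 / np.expm1(h*nu / (k*T))

beta = 2.0                                        # fixed, following DE01
wav = np.array([60., 100., 450., 850.]) * 1e-6    # metres
nu = c / wav
S = np.array([4.1, 9.5, 0.9, 0.11])               # Jy (illustrative only)
err = 0.2 * S

best = (np.inf, None)
for Tw in np.arange(25., 80., 0.5):               # warm-component grid
    for Tc in np.arange(10., Tw, 0.5):            # cold must be colder
        # weighted design matrix for the two grey-bodies
        A = np.column_stack([nu**beta * planck(nu, Tw) / err,
                             nu**beta * planck(nu, Tc) / err])
        (Nw, Nc), *_ = np.linalg.lstsq(A, S/err, rcond=None)
        if Nw <= 0 or Nc <= 0:
            continue
        chi2 = np.sum((S/err - A @ np.array([Nw, Nc]))**2)
        if chi2 < best[0]:
            best = (chi2, (Tw, Tc, Nc/Nw))

chi2, (Tw, Tc, ratio) = best
print(f"Tw={Tw:.1f} K, Tc={Tc:.1f} K, Nc/Nw={ratio:.0f}")
\end{verbatim}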
We constrained $T_{w}$ by the \textit{IRAS} 25\hbox{\,$\umu$m } flux (the fit was not allowed to exceed this value), though we did not actually fit this data point, and we allowed $T_{c}$ to take any value lower than $T_{w}$. This method is the same as that used in DE01 for the IRS sample, but while many of the IRS galaxies with 450\hbox{\,$\umu$m } data also had fluxes at several other wavelengths in the literature we note that for the OS sample galaxies we have only four data points to fit. Since this is not enough data points to provide a well-constrained fit the values of $\chi^{2}_{min}$ may be unrealistically low. In Table~\ref{450tab} we list the parameters producing the best fits or, where more than one set of parameters produces an acceptable fit, we list an average of all those parameters. In practice we find that it is only $T_{w}$ (and hence $N_{c}$/$N_{w}$) for which there is sometimes a fairly large range of acceptable values, and this is likely due to our not fitting any data points below 60\,\micron. We show all our fitted two-component SEDs in Figure~\ref{2compSEDfig} (for the best-fitting, not averaged, parameters). Any additional 170\hbox{\,$\umu$m } (ISO) fluxes available from the literature (Stickel et al. 2000, 2004) are also plotted in Figure~\ref{2compSEDfig}, though (with the exception of UGC 148 (see Section~\ref{maps})) \textit{not} fitted; where there are several 170\hbox{\,$\umu$m } measurements available we plot the mean value. \begin{figure} \begin{center} \includegraphics[angle=270, width=7.5cm]{fig3_ngc99.ps}\\[4ex] \vfill \includegraphics[angle=270, width=7.5cm]{fig3_ngc3987.ps} \vfill \caption{\label{SEDfig}{Two representative isothermal SEDs.}} \end{center} \end{figure} We find a mean warm component temperature \mbox{$T_{w}=47.4\pm2.4$\,K} and a mean cold component temperature \mbox{$T_{c}=20.2\pm0.5$\,K}. The fitted warm component temperatures are in the range \mbox{$28<T_{w}<59$\,K} and cold component temperatures are in the range \mbox{$17<T_{c}<24$\,K}. Thus, the cold component temperature is close to that expected for dust heated by the general ISRF (Cox et al. 1986), one of the two components in the current paradigm (Section~\ref{cold-dust}). We find a mean \mbox{$N_{c}/N_{w}=532\pm172$} (or higher if we include the higher value found for NGC 803 (see notes to Table~\ref{450tab} and Section~\ref{maps})). For the IRS sample DE01 found a large variation in the relative contribution of the cold component to the SEDs (described by the parameter $N_{c}$/$N_{w}$); for the OS sample we find an even larger variation. Objects NGC 6190 and PGC 35952 in Figure~\ref{2compSEDfig}, for example, clearly exhibit very `cold' SEDs with a strikingly prominent cold component (with \mbox{$\approx$\,2000} times as much cold dust as warm dust). Comparison of the two samples is discussed in detail in Section~\ref{properties}. \begin{table*} \centering \begin{minipage}{12cm} \caption{\label{lumtab}\small{Luminosities and masses.}} \begin{tabular}{lrrrrrr} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7)\\ Name & log $L_{60}$ & log $L_{850}$ & log $L_{fir}$ & log $M_{d}$ & log $M_{HI}$ & log $L_{B}$ \\ & (W\,Hz$^{-1}$sr$^{-1}$) & (W\,Hz$^{-1}$sr$^{-1}$) & ($L_{\odot}$) & ($M_{\odot}$) & ($M_{\odot}$) & ($L_{\odot}$)\\ \hline UGC 148 & 22.80 & 21.20 & 10.22 & 7.05 & 9.82 & 10.39 \\ NGC 99 & 22.57 & 21.46 & 9.94 & 7.17 & 10.29 & 10.37 \\ PGC 3563 & 22.24 & 21.13 & 9.76 & 6.99 & ... & 10.03 \\ NGC 786 & 22.56 & 21.34 & 9.99 & 7.14 & ... 
& 9.94 \\ NGC 803 & 21.69 & 20.82 & 9.37 & 6.76 & 9.78 & 10.13 \\ UGC 5129 & 21.86 & $<$20.96 & $<$9.46 & $<$7.09 & 9.34 & 10.01 \\ NGC 2954 & $<$21.60 & $<$20.81 & $<$9.23 & ... & 8.09 & 10.25 \\ UGC 5342 & 22.46 & 21.03 & 9.84 & 6.81 & ... & 10.40 \\ PGC 29536 & $<$22.40 & $<$21.76 & $<$9.94 & ... & ... & 10.74 \\ NGC 3209 & $<$21.99 & $<$21.13 & $<$9.66 & ... & ... & 10.45 \\ NGC 3270 & 22.58 & 21.58 & 10.23 & 7.53 & 10.49 & 10.78 \\ NGC 3323 & 22.81 & 21.48 & 10.23 & 7.30 & 9.68 & 10.12 \\ NGC 3689 & $^{\displaystyle s}$22.54 & 21.09 & 10.09 & 7.04 & 9.14 & 10.29 \\ UGC 6496 & ... & ... & ... & ... & ... & 10.10 \\ PGC 35952 & 22.08 & 21.11 & 9.61 & 6.96 & 9.73 & 10.07 \\ NGC 3799/3800$^{\scriptstyle p}$ & 22.93 & 21.38 & 10.39 & 7.27 & 9.34 & ... \\ NGC 3812 & $<$21.68 & $<$20.91 & $<$9.17 & ... & ... & 9.95 \\ NGC 3815 & 22.19 & 20.96 & 9.70 & 6.83 & 9.62 & 10.15 \\ NGC 3920 & 22.21 & 20.86 & 9.63 & 6.68 & ... & 9.87 \\ NGC 3987 & 23.20 & 21.79 & 10.74 & 7.72 & 9.75 & 10.63 \\ NGC 3997 & 22.63 & $<$20.93 & $<$9.96 & $<$7.06 & 9.83 & 10.30 \\ NGC 4005 & ... & $<$20.69 & ... & $<$6.82 & 9.22 & 10.35 \\ NGC 4015 & 21.88 & $<$21.18 & $<$9.49 & $<$7.31 & ... & ... \\ UGC 7115 & $<$22.17 & 21.59 & $<$9.80 & $\dag$7.71 & ... & 10.45 \\ UGC 7157 & $<$22.15 & $<$21.27 & $<$9.65 & ... & ... & 10.32 \\ IC 797 & 21.72 & 20.78 & 9.27 & 6.63 & 8.50 & 9.77 \\ IC 800 & 21.52 & 20.82 & 9.07 & 6.63 & 8.51 & 9.67 \\ NGC 4712 & 22.18 & 21.50 & 9.87 & 7.43 & 10.18 & 10.50 \\ PGC 47122 & $<$21.95 & $<$21.46 & $<$9.71 & $<$7.58 & ... & 10.32 \\ MRK 1365 & 23.32 & 21.20 & 10.61 & 7.00 & 9.23 & 10.00 \\ UGC 8872 & $<$22.04 & $<$21.02 & $<$9.44 & ... & ... & 10.29 \\ UGC 8883 & 22.36 & $<$21.31 & $<$9.86 & $<$7.44 & ... & 10.00 \\ UGC 8902 & 23.07 & 21.81 & 10.57 & 7.69 & ... & 10.80 \\ IC 979 & $^{\displaystyle s \ast}$22.27 & 21.75 & $^{\ast}$9.85 & $^{\ast}$7.56 & ... & 10.64 \\ UGC 9110 & U & $<$21.21 & ... & ... & 9.72 & 10.27 \\ NGC 5522 & 22.84 & 21.39 & 10.22 & 7.17 & 9.77 & 10.51 \\ NGC 5953/4$^{\scriptstyle p}$ & 22.80 & 21.22 & 10.17 & 7.03 & 9.32 & ... \\ NGC 5980 & 22.97 & 21.84 & 10.44 & 7.65 & ... & 10.53 \\ IC 1174 & $<$21.80 & 20.95 & $<$9.19 & $\dag$7.08 & ... & 10.18 \\ UGC 10200 & 21.95 & $<$20.10 & $<$9.21 & $<$6.22 & 9.54 & 9.05 \\ UGC 10205 & 22.44 & 21.61 & 10.10 & 7.53 & 9.57 & 10.55 \\ NGC 6090 & 23.93 & 22.06 & 11.20 & 7.78 & 8.82 & 10.73 \\ NGC 6103 & 22.97 & 21.88 & 10.45 & 7.71 & ... & 10.83 \\ NGC 6104 & 22.77 & $<$21.59 & $<$10.35 & $<$7.71 & ... & 10.62 \\ IC 1211 & $<$21.78 & 21.16 & $<$9.52 & $\dag$7.29 & ... & 10.37 \\ UGC 10325 & 22.92 & 21.34 & 10.35 & 7.20 & 9.92 & ... \\ NGC 6127 & $<$21.57 & 21.51 & $<$9.17 & $\dag$7.64 & ... & 10.66 \\ NGC 6120 & 23.74 & 21.95 & 11.11 & 7.80 & ... & 10.73 \\ NGC 6126 & $<$22.37 & 21.56 & $<$9.89 & $\dag$7.69 & ... & 10.61 \\ NGC 6131 & 22.49 & 21.36 & 10.07 & 7.27 & 9.83 & 10.37 \\ NGC 6137 & $<$22.41 & 21.62 & $<$9.94 & $\dag$7.75 & ... & 11.04 \\ NGC 6146 & $<$22.18 & 21.55 & $<$9.83 & $\dag$7.68 & ... & 11.01 \\ NGC 6154 & $<$21.95 & $<$21.37 & $<$9.44 & ... & 9.86 & 10.38 \\ NGC 6155 & 22.25 & 21.04 & 9.78 & 6.93 & 8.95 & 9.82 \\ UGC 10407 & 23.28 & 21.48 & 10.63 & 7.32 & ... & 10.62 \\ NGC 6166 & $^{\displaystyle s}$22.14 & 22.00 & 10.03 & 7.96 & ... & 11.30 \\ NGC 6173 & $<$22.33 & $<$21.48 & $<$9.63 & ... & ... 
& 11.14 \\ NGC 6189 & 22.59 & 21.57 & 10.19 & 7.48 & 10.07 & 10.68 \\ NGC 6190 & 22.02 & 21.24 & 9.69 & 7.18 & 9.48 & 9.97 \\ \end{tabular} \end{minipage} \end{table*} \begin{table*} \begin{minipage}{12cm} \contcaption{} \begin{tabular}{lrrrrrr} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7)\\ Name & log $L_{60}$ & log $L_{850}$ & log $L_{fir}$ & log $M_{d}$ & log $M_{HI}$ & log $L_{B}$ \\ & (W\,Hz$^{-1}$sr$^{-1}$) & (W\,Hz$^{-1}$sr$^{-1}$) & ($L_{\odot}$) & ($M_{\odot}$) & ($M_{\odot}$) & ($L_{\odot}$)\\ \hline NGC 6185 & 22.47 & $<$21.72 & $<$10.06 & $<$7.85 & ... & 10.96 \\ UGC 10486 & $<$22.06 & $<$21.24 & $<$9.63 & ... & ... & 10.31 \\ NGC 6196 & $<$22.24 & $<$21.53 & $<$9.86 & ... & ... & 10.85 \\ UGC 10500 & $^{\displaystyle s \ast}$21.85 & $<$21.09 & $<$9.56 & $<$7.22 & ... & 10.20 \\ IC 5090 & 23.64 & 22.23 & 11.09 & 8.08 & ... & 10.46 \\ IC 1368 & 23.00 & 21.07 & 10.30 & 6.83 & ... & 10.14 \\ NGC 7047 & 22.35 & 21.45 & 9.98 & 7.38 & 9.08 & 10.50 \\ NGC 7081 & 22.49 & 20.88 & 9.89 & 6.72 & 9.51 & 9.75 \\ NGC 7280 & $<$20.80 & $<$20.34 & $<$8.51 & ... & 8.16 & 9.85 \\ NGC 7442 & 22.83 & 21.60 & 10.32 & 7.47 & 9.75 & 10.48 \\ NGC 7448 & 22.84 & 21.19 & 10.19 & 7.04 & 9.75 & 10.39\\ NGC 7461 & $<$21.70 & $<$20.81 & $<$9.33 & ... & ... & 9.90 \\ NGC 7463 & U & 20.60 & ... & $^{\scriptscriptstyle T}$6.73 & 9.33 & 10.12 \\ III ZW 093 & 23.26 & $<$21.99 & $<$11.06 & $<$8.12 & 9.95 & 11.60 \\ III ZW 095 & $<$21.92 & $<$21.24 & $<$9.93 & ... & ... & 9.90 \\ UGC 12519 & 22.37 & 21.36 & 9.95 & 7.26 & 9.53 & 10.27 \\ NGC 7653 & 22.59 & 21.52 & 10.17 & 7.43 & ... & 10.49 \\ NGC 7691 & 22.15 & $<$20.82 & $<$9.68 & $<$6.95 & 9.59 & 10.24 \\ NGC 7711 & $<$21.60 & $<$20.86 & $<$9.20 & ... & ... & 10.57 \\ NGC 7722 & 22.31 & 21.20 & 9.94 & 7.15 & 9.51 & 10.48 \\ \hline \end{tabular} \\ (1) Most commonly used name. \\ (2) 60\hbox{\,$\umu$m } luminosity. \\ (3) 850\hbox{\,$\umu$m } luminosity. \\ (4) FIR luminosity, calculated by integrating measured SED from 40--1000\,\micron. \\ (5) Dust mass, calculated using a single temperature, $T_{d}$, as listed in Table~\ref{fluxtab} ($T_{d}$ derived from fitted SED to the 60, 100 and 850\hbox{\,$\umu$m } data points). Upper limits are calculated using $T_{d}$=20\,K.\\ (6) HI mass; refs.: Chamaraux, Balkowski \& Fontanelli (1987), Haynes $\&$ Giovanelli (1988, 1991), Huchtmeier $\&$ Richter (1989), Giovanelli $\&$ Haynes (1993), Lu et al. (1993), Freudling (1995), DuPrie $\&$ Schneider (1996), Huchtmeier (1997), Theureau et al. (1998), Haynes et al. (1999). \\ (7) Blue luminosity, calculated from corrected blue magnitudes taken from the LEDA database. \\ \vspace{0.5pt}\\ $^{\scriptstyle p}$ A close or interacting pair which was resolved by SCUBA; parameters given refer to the combined system, as in Table~\ref{pairstab}.\\ $^{\ast}$ Values should be used with caution (see $^{\ast}$ notes to Table~\ref{fluxtab}).\\ $\dag$ Object was only detected at 850\hbox{\,$\umu$m } (and not in either $\textit{IRAS}$ band), so these dust masses should be used with caution; these are all early types; dust masses calculated using $T_{d}$=20\,K. \\ $^{\scriptscriptstyle T}$ Dust mass calculated using $T_{d}$=20\,K, since no fitted value of $T_{d}$.\\ U Unresolved by \textit{IRAS}.\\ \vspace{0pt}\\ Notes on HI fluxes:-\\ NGC 7463 Giovanelli $\&$ Haynes (1993) note confused HI profile, many neighbours.\\ UGC 12519 HI flux from Giovanelli $\&$ Haynes (1993) gives HI mass of \mbox{log $M_{\odot}$=9.65}.\\ NGC 7691 Haynes et al. 
(1999) gives \mbox{log $M_{\odot}$=9.88}.\\
NGC 5953/4 HI flux from Freudling (1995); Huchtmeier $\&$ Richter (1989) give \mbox{log $M_{\odot}$} in the range \mbox{8.82 -- 9.20}.\\
NGC 3799/3800 flux for NGC\,3800 but sources may be confused (Lu et al. 1993).\\
NGC 803 Giovanelli $\&$ Haynes (1993) give \mbox{log $M_{\odot}$=9.68}, and note optical disk larger than beam.\\
NGC 6090 Huchtmeier $\&$ Richter (1989) give values up to \mbox{log $M_{\odot}$=10.24}.\\
NGC 6131 confused with neighbour.\\
NGC 6189 Haynes et al. (1999) gives \mbox{log $M_{\odot}$=10.26}.\\
NGC 7081 Huchtmeier $\&$ Richter (1989) give values up to \mbox{log $M_{\odot}$=9.69}.\\
NGC 4712 Huchtmeier $\&$ Richter (1989) give \mbox{log $M_{\odot}$=9.91}.\\
\end{minipage}
\end{table*}

\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=14cm]{fig4.ps}
\caption{\label{colplot}{Colour-colour plot: $S_{60}/S_{100}$ versus $S_{60}/S_{850}$ colours for the optically-selected (this work) and \textit{IRAS}-selected (D00) SLUGS (filled and open points respectively).}}
\end{center}
\end{figure*}

Since 450\hbox{\,$\umu$m } fluxes are available for only $\sim$\,one third of the sample, we have in addition fitted single-component SEDs for all sources in the OS sample. In these fits we have allowed $\beta$ to vary as well as $T_{d}$, as it is rarely possible to get an acceptable fit with $\beta$=2. The best-fitting $T_{d}$ and $\beta$ are listed in Table~\ref{fluxtab}. We include fitted parameters only for those objects with detections in all 3 wavebands (60\,\micron, 100\hbox{\,$\umu$m } and 850\,\micron). The sample mean and error in the mean for the best-fitting temperature is \mbox{$\bar{T}_{d}=31.6\pm0.6$\,K} and for the dust emissivity index $\bar{\beta}=1.11\pm0.05$. Figure~\ref{SEDfig} shows two representative isothermal SEDs. As an example of the potential dangers of fitting single-temperature SEDs, we note that one of these objects (NGC 99) is also fitted with the two-component model (shown in Figure~\ref{2compSEDfig}). NGC 99 can clearly be well-fitted by both an isothermal dust model with very flat $\beta$ ($\beta$=0.4 in this case) \textit{and} a two-component model with much steeper $\beta$ ($\beta$=2); this is also the case for NGC 6190 and PGC 35952 described above. We note that the low values of $\beta$ found from the isothermal fits are not the \textit{true} values of $\beta$ but rather are evidence that galaxies across all Hubble types contain a significant proportion of dust that is colder than these fitted temperatures, and it is likely that these objects (as for NGC 99) require a two-component model to adequately describe their SED.

\subsection{Dust masses}
\label{sec:dmass}

Dust masses for the OS galaxies are calculated from the measured 850\hbox{\,$\umu$m } fluxes and the dust temperatures ($T_{d}$) of the isothermal fits (listed in Table~\ref{fluxtab}) using
\begin{equation}
\label{eq:dmass}
M_{d}=\frac{S_{850}D^{2}}{\kappa_{d}(\nu)B(\nu,T_{d})}
\end{equation}
where $\kappa_{d}$ is the dust mass opacity coefficient at 850\,\micron, $B(\nu,T_{d})$ is the Planck function at 850\hbox{\,$\umu$m } for the temperature $T_{d}$, and $D$ is the distance. As discussed in D00 we assume a value for $\kappa_{d}(\nu)$ of 0.077\,m$^{2}$\,kg$^{-1}$, which is consistent with the value derived by James et al. (2002) from the global properties of galaxies. Though the true value of $\kappa_{d}(\nu)$ is uncertain, as long as dust has similar properties in all galaxies our relative dust masses will be correct.
The uncertainties in the relative dust masses then depend only on the errors in $S_{850}$ and $T_{d}$. Values for the dust masses (calculated using $T_{d}$ from our isothermal fits) are given in Table~\ref{lumtab}. We find a mean dust mass \mbox{$\bar{M_{d}}=(2.34\pm0.36)\times{10^{7}}$ M$_{\odot}$} (where the $\pm$ error is the error on the mean), which is comparable to that found for the IRS sample (D00). This, together with the fact that for the OS sample we find significantly lower values of $\beta$, poses a number of issues. As shown by DE01, if more than one temperature component is present our use of a single-temperature model will have given us values of $\beta$ which are lower than the true values, biased our $T_{d}$ estimates towards higher temperatures, and led to underestimates of the dust masses. For those galaxies for which we have made two-component fits (Table~\ref{450tab}) we also calculate the two-component dust mass ($M_{d2}$), using
\begin{equation}
\label{eq:dmass2}
M_{d2}=\frac{S_{850}D^2}{\kappa_{d}}\times\left[\frac{N_{c}}{B(\nu,T_{c})}+\frac{N_{w}}{B(\nu,T_{w})}\right]
\end{equation}
where the parameters are the same as in Equation~\ref{eq:dmass} and $T_{c}$, $T_{w}$, $N_{c}$ and $N_{w}$ are the fitted two-component parameters as in Equation~\ref{eq:2comp} (and listed in Table~\ref{450tab}). The mean two-component dust mass is found to be \mbox{${\bar M_{d2}}=(4.89\pm1.20)\times{10^{7}}$ M$_{\odot}$}, and the two-component dust masses are typically a factor of 2 higher than those found from fitting single-temperature SEDs, though in some cases (such as NGC 99) as much as a factor of 4 higher.

Given the lack of CO measurements for the OS sample galaxies, one potential problem with the above estimates of dust mass is any contribution to the SCUBA 850\hbox{\,$\umu$m } measurements by CO(3-2) line emission. Seaquist et al. (2004) find, for a representative subsample of the IRS SLUGS galaxies from D00, that contamination of 850\hbox{\,$\umu$m } SCUBA fluxes by CO(3-2) reduces the average dust mass by \mbox{25--38\%}, though this does not affect the shape of the dust mass function derived using the IRS SLUGS sample in D00. However, the OS galaxies are relatively faint submillimetre sources compared with the IRS sample. From the fractional contribution of CO(3-2) line emission derived by Seaquist et al. (a linear fit to the plot of SCUBA-equivalent flux produced by the CO line versus SCUBA flux) we estimate that for the OS sample the CO line contribution to the 850\hbox{\,$\umu$m } flux is small and well within the uncertainties on the 850\hbox{\,$\umu$m } fluxes we give in Table~\ref{fluxtab}.

\begin{figure*}
\begin{center}
\includegraphics[angle=0, width=14cm]{fig5.ps}
\caption{\label{colplot-450}{Colour-colour plot: $S_{60}/S_{450}$ versus $S_{60}/S_{850}$ colours for the optically-selected (this work) and \textit{IRAS}-selected (D00) SLUGS (filled and open points respectively).}}
\end{center}
\end{figure*}

\subsection{Gas masses}
\label{sec:gasmass}

The neutral hydrogen masses listed in Table~\ref{lumtab} were calculated from HI fluxes taken from the literature\footnote{See notes to Table~\ref{lumtab}.} using
\begin{equation}
M_{HI}=2.356\times10^5D^2S_{HI}
\end{equation}
where $D$ is in Mpc and $S_{HI}$ is in Jy\,km\,s$^{-1}$. Only a handful of objects in the OS sample had CO fluxes in the literature, and so in this work we will not present any molecular gas masses.
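
As a simple consistency illustration of Equations~\ref{eq:dmass} and~\ref{eq:dmass2} and of the HI mass formula above, the following minimal Python sketch evaluates the three mass estimates. The numerical inputs are illustrative placeholders rather than values from our tables, and the treatment of $N_{c}$ and $N_{w}$ as dimensionless fitted normalizations is our reading of Equation~\ref{eq:2comp}.
\begin{verbatim}
import numpy as np

H, K, C = 6.626e-34, 1.381e-23, 2.998e8    # SI Planck, Boltzmann, c
JY, MPC, MSUN = 1e-26, 3.086e22, 1.989e30  # Jy, Mpc, Msun in SI units
KAPPA = 0.077                              # kappa_d(850um), m^2 kg^-1 (D00)
NU = C / 850e-6                            # frequency of 850um (Hz)

def planck(nu, t):
    # Planck function B(nu,T) in W m^-2 Hz^-1 sr^-1
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K * t))

def dust_mass(s850_jy, d_mpc, t_d):
    # Isothermal dust mass (Msun), Equation (eq:dmass)
    s, d = s850_jy * JY, d_mpc * MPC
    return s * d**2 / (KAPPA * planck(NU, t_d)) / MSUN

def dust_mass_2comp(s850_jy, d_mpc, t_c, t_w, n_c, n_w):
    # Two-component dust mass (Msun), Equation (eq:dmass2);
    # n_c, n_w assumed dimensionless fitted normalizations
    s, d = s850_jy * JY, d_mpc * MPC
    return s * d**2 / KAPPA * (n_c / planck(NU, t_c)
                               + n_w / planck(NU, t_w)) / MSUN

def hi_mass(s_hi_jykms, d_mpc):
    # HI mass (Msun) from an integrated line flux in Jy km/s
    return 2.356e5 * d_mpc**2 * s_hi_jykms

# Placeholder example: S850 = 50 mJy at D = 30 Mpc with T_d = 31.6 K
print("M_d  ~ %.1e Msun" % dust_mass(0.05, 30.0, 31.6))
print("M_HI ~ %.1e Msun" % hi_mass(5.0, 30.0))
\end{verbatim}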
\subsection{Far-infrared luminosities}
\label{sec:fir}

The FIR luminosity ($L_{fir}$) is usually calculated using
\[FIR=1.26\times10^{-14}(2.58 S_{60}+S_{100}) \]
and
\begin{equation}
\label{eq:fir}
L_{fir}=4\pi D^2\times FIR\times C
\end{equation}
as described in the Appendix of \textit{Catalogued Galaxies and Quasars Observed in the IRAS Survey} (Version 2, 1989), where $S_{60}$ and $S_{100}$ are the 60\hbox{\,$\umu$m } and 100\hbox{\,$\umu$m } \textit{IRAS} fluxes, $D$ is the distance, and $C$ is a colour-correction factor dependent on the ratio $S_{60}/S_{100}$ and the assumed emissivity index. The purpose of this correction factor, which is explained by Helou et al. (1988), is to account for emission outside the \textit{IRAS} bands. However, since we have submillimetre fluxes we can instead use our derived $T_{d}$ and $\beta$ to integrate the total flux under the SED out to 1000\,\micron. This method gives more accurate values of $L_{fir}$, since it uses the measured shape of each SED rather than a general colour correction. We list in Table~\ref{lumtab} $L_{fir}$ calculated using this method and our fitted isothermal SEDs; $L_{fir}$ values calculated using our two-component SEDs are listed in Table~\ref{450tab}.

\subsection{Optical luminosities}
\label{sec:optlum}

The blue luminosities given in Table~\ref{lumtab} are converted (using M$_{B\odot}$=5.48) from blue apparent magnitudes taken from the Lyon-Meudon Extragalactic Database (LEDA; Paturel et al. 1989, 2003), which have already been corrected for galactic extinction, internal extinction and k-correction.

\section{The Submillimetre Properties of Galaxies}
\label{properties}

\begin{figure*}
\begin{center}
\subfigure[\label{beta:opt-irs}]{
\includegraphics[angle=0, width=8.7cm]{fig6_a.ps}}
\hfill
\subfigure[\label{temp:opt-irs}]{
\includegraphics[angle=0, width=8.7cm]{fig6_b.ps}}
\hfill
\caption{\label{opt-iras-hist}{Distributions of (a) $\beta$ values and (b) $T_{d}$ values for the optically- and \textit{IRAS}-selected SLUGS (line-filled and shaded histograms respectively).}}
\end{center}
\end{figure*}

\subsection{Optical selection versus IR selection}
\label{prop:ir-opt}

Figures~\ref{colplot} and~\ref{colplot-450} show the OS and IRS galaxies plotted on two-colour diagrams (filled and open symbols respectively). The IRS and OS galaxies clearly have different distributions, and in particular there are OS galaxies in parts of the diagram where there are no IRS galaxies. In Figure~\ref{colplot} $\sim$\,50\% of the OS galaxies are in a region of the colour-colour diagram completely unoccupied by IRS galaxies. This shows that there are galaxies `missing' from IR samples, with important implications for the submillimetre LF (Section~\ref{lumfun}).

Figure~\ref{colplot-450} shows the $S_{60}/S_{450}$ versus $S_{60}/S_{850}$ colour-colour plot for the OS sample objects and the IRS sample objects which have 450\hbox{\,$\umu$m } fluxes. We confirm the very tight correlation found by DE01 (here the correlation coefficient \mbox{$r_{s}$\,=\,0.96}, \mbox{significance\,=\,9.20e-21}), and the scatter for the OS sample may be completely explained by the uncertainties on the fluxes. Importantly, this relationship holds for all the objects in the OS sample for which we have 450\hbox{\,$\umu$m } fluxes, which include a wide range of galaxy types \mbox{(t-type=0 to 10)} and with $L_{fir}$ ranging over 2 orders of magnitude.
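
For reference, the correlation statistic and the least-squares line quoted here and below could be computed as in the following minimal sketch; the flux arrays are synthetic stand-ins for the measured values, not our data.
\begin{verbatim}
import numpy as np
from scipy import stats

# Synthetic stand-ins for the measured 60, 450 and 850um fluxes (Jy)
rng = np.random.default_rng(1)
s60 = rng.uniform(0.5, 50.0, 40)
s850 = s60 / 10 ** rng.normal(2.3, 0.2, 40)
s450 = s850 * 10 ** rng.normal(0.92, 0.05, 40)  # ~constant S450/S850

x = np.log10(s60 / s850)          # abscissa of the colour-colour plot
y = np.log10(s60 / s450)          # ordinate
r_s, p = stats.spearmanr(x, y)    # rank correlation and significance
m, c = np.polyfit(x, y, 1)        # least-squares best-fitting line
print("r_s = %.2f (p = %.1e); fit: y = %.2f x + %.2f" % (r_s, p, m, c))
\end{verbatim}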
The (least-squares) best-fitting line to the \textit{combined} \mbox{OS + IRS} samples shown in Figure~\ref{colplot-450} is given by
\[ \mathrm{log(S_{60}/S_{450})=(1.03\pm0.05)\,log(S_{60}/S_{850})-(0.955\pm0.070)} \]
(or, re-written, $S_{60}/S_{450}=0.119(S_{60}/S_{850})^{1.03}$) and is very similar to that found by DE01, confirming the finding for the IRS sample that, within the uncertainties, the ratio $S_{450}/S_{850}$ is constant. DE01 conclude, from the results of simulations of the 450/850\hbox{\,$\umu$m } flux ratio and from the fitted $\beta$ values for those galaxies whose SEDs require a cold component, that $\beta\sim2$ for all galaxies, and that therefore the cold dust component in all galaxies has a similar temperature \mbox{($T_{c}\sim$\,20--21\,K)}. The fact that we also find the $S_{450}/S_{850}$ ratio constant for the OS sample suggests that these conclusions are true for all Hubble types (only \mbox{t-types$<$0} are unrepresented in the OS sub-sample with 450\hbox{\,$\umu$m } data).

The positions of the OS galaxies in the colour diagrams suggest there is more cold dust in the OS galaxies than in the IRS galaxies. We can investigate this further with the results of our spectral fits. Figure~\ref{beta:opt-irs} shows the comparison between the distributions of $\beta$ values (found from the isothermal fits) for the OS and IRS samples. We find OS sample galaxies with $\beta$ values lower than any found in the IRS sample. The two-sided Kolmogorov--Smirnov (K-S) test shows that the distributions of the two samples are significantly different (the probability that the two samples come from the same distribution function is only 1.8e-5). Though this clearly demonstrates that the properties of the dust in the OS and IRS samples are different, we interpret this not as a physical difference in the emissivity behaviour of the grains ($\beta$) but rather as a difference in the two samples' ratios of cold to warm dust.

Figure~\ref{temp:opt-irs} shows the comparison between the distributions of dust temperatures (from the isothermal fits) for the OS and IRS samples. We note that the OS sample has consistently colder $T_{d}$ than the IRS sample. Once again using a K-S test we find that the OS and IRS sample dust temperatures do not have the same distribution, the probability of the two samples coming from the same distribution function being only 1.41e-4.

For those objects in the OS and IRS samples for which two-component fits were possible, the distributions of the warm and cold component temperatures ($T_{w}$ and $T_{c}$) for the two samples are shown in Figures~\ref{temp-warm} and~\ref{temp-cold} respectively. The distributions of $T_{c}$ for the OS and IRS samples are statistically indistinguishable, whereas the distributions of $T_{w}$ are not similar (the probability of the same distribution is 0.03). While the mean cold component temperature for the OS sample (\mbox{$\bar T_{c}=20.2\pm0.5$\,K}) is very similar to the value found for the IRS sample (mean \mbox{$\bar T_{c}=20.1\pm0.4$\,K}), the mean warm component temperature is rather higher (\mbox{$\bar T_{w}=47.4\pm2.4$\,K} for the OS sample as opposed to \mbox{$\bar T_{w}=39.3\pm1.4$\,K} for the IRS sample).

Figure~\ref{norm-ratio} shows the distribution of $N_{c}/N_{w}$ for the OS and IRS samples. The OS and IRS samples clearly have different distributions -- the (K-S test) probability of the two samples having the same distribution is 8.4e-4.
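
A minimal sketch of the two-sided K-S comparisons used throughout this section is given below; the input arrays are synthetic placeholders, not our fitted values.
\begin{verbatim}
import numpy as np
from scipy import stats

# Synthetic placeholders for log(N_c/N_w) in the two samples
rng = np.random.default_rng(2)
ratio_os = rng.normal(2.0, 0.8, 18)    # OS two-component fits
ratio_irs = rng.normal(1.2, 0.6, 30)   # IRS two-component fits

d_stat, p = stats.ks_2samp(ratio_os, ratio_irs)   # two-sided by default
print("K-S D = %.2f, P(same parent distribution) = %.1e" % (d_stat, p))
\end{verbatim}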
For the OS sample the mean \mbox{$N_{c}/N_{w}=532\pm172$} (or higher, see Section~\ref{sed-fits}); for the IRS sample the mean \mbox{$N_{c}/N_{w}=38\pm11$}. For the OS sample there is a much larger range of $N_{c}/N_{w}$ than for the IRS sample. Interestingly, few of the OS objects have a $N_{c}/N_{w}$ low enough even to fall within the range found for the IRS sample, strongly suggesting a prevalence of cold dust in the OS sample compared to the IRS sample.

\begin{figure}
\begin{center}
\subfigure[\label{temp-warm}]{
\includegraphics[angle=0, width=8.35cm]{fig7_a.ps}}\\
\vfill
\subfigure[\label{temp-cold}]{
\includegraphics[angle=0, width=8.35cm]{fig7_b.ps}}
\vfill
\caption{\label{temp-warm-cold}{Distributions of warm component (a) and cold component (b) temperatures for the OS and IRS samples (line-filled and shaded histograms respectively).}}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig8.ps}
\caption{\label{norm-ratio}{Distribution of log($N_{c}/N_{w}$) for the OS and IRS samples (line-filled and shaded histograms respectively).}}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig9.ps}
\caption{\label{norm-lum60}{$N_{c}/N_{w}$ versus 60\hbox{\,$\umu$m } luminosity for the OS and IRS samples (filled and open points respectively).}}
\end{center}
\end{figure}

The large difference between the distributions of $N_{c}/N_{w}$ for the OS and IRS galaxies implies that most OS sample galaxies contain much larger proportions of cold dust relative to warm dust than found for the IRS galaxies, additional evidence that \textit{IRAS} missed a population of cold-dust-dominated objects.

The similarity of the temperature of the cold component for the OS and IRS samples and the difference in the distribution of $N_{c}/N_{w}$ support the current paradigm for dust in galaxies. An alternative model would be one in which the general interstellar radiation field (ISRF) of \textit{IRAS}\/ galaxies is more intense, and therefore the majority of their dust is hotter. The similarity of $T_{c}$ for the different samples argues against this and suggests that most dust in all galaxies is relatively cold and has a similar temperature. The temperature differences between galaxies then arise from a second dust component, presumably the dust in regions of intense star formation. Our results for the OS sample indicate that the ratio of the mass of dust in this second component to the mass of dust in the first component can vary by roughly a factor of 1000.

There are two other pieces of evidence in favour of the two-component model. First, the ISO 170\hbox{\,$\umu$m } flux densities that exist for 3 of our two-component-fitted galaxies (Stickel et al. 2004; Section~\ref{sed-fits}) agree very well with our model SEDs (we did not use these data in making our fits, with one exception; see Section~\ref{sed-fits}). Second, the ratio of the mass of cold dust to the mass of warm dust correlates inversely with 60\hbox{\,$\umu$m } luminosity (Figure~\ref{norm-lum60}; \mbox{$r_{s}$\,=\,$-$0.41}, \mbox{significance\,=\,1.24e-2}); in the two-component model one might expect the most luminous \textit{IRAS}\/ sources to be dominated by the warm component.

The difference in the distributions of $T_{w}$ does not, however, fit in with this general picture. In the two-component model one would expect $T_{w}$ and $T_{c}$ to be constants, with the only thing changing between galaxies being the proportion of cold and warm dust.
The difference in the distributions of $T_{w}$ may indicate that this model is too simplistic. Two things may be relevant here. First, as can be seen in Figure~\ref{2compSEDfig}, it is those OS galaxies with very prominent cold components which typically account for the highest warm component temperatures (for example PGC 35952 or NGC 6090). Second, the model SEDs with high values of $T_{w}$ also generally provide a good fit to the 25\hbox{\,$\umu$m } flux density, whereas the models with low values of $T_{w}$ tend to underestimate the 25\hbox{\,$\umu$m } flux density. This last point suggests that to fully understand dust in galaxies one cannot ignore the measurements at wavelengths $<$\,60\,\micron; however, if we did include these measurements we would then definitely need more than two dust components. This is clearly demonstrated by Sievers et al. (1994) who, for NGC 3627, fit a three-component model. A two-component model is nonetheless adequate for our purposes, since we are interested in the cold component rather than a third hot component.

\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig10.ps}
\caption{\label{type-hist}{Distribution of Hubble types for the OS sample 850\hbox{\,$\umu$m } detections (upper panel) and non-detections (lower panel).}}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8cm]{fig11.ps}
\caption{\label{steve-lum-plot}{Cumulative luminosity distributions for early-type galaxies (solid line) and late-type galaxies (dot-dashed). The maximum values for both samples are less than one because of the upper limits that fall below the lowest actual measurement.}}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig12.ps}
\caption{\label{850-opt}{850\hbox{\,$\umu$m } luminosity versus optical luminosity $L_{B}$ for the OS sample, with different Hubble types indicated by different symbols: E-S0 (t=-5 to 0), Early-type spirals (t=1 to 4), S? (t=5), Late-type spirals (t=6 to 10): circles, triangles, stars, and squares respectively. The 6 detected ellipticals are highlighted as open circles.}}
\end{center}
\end{figure}

\subsection{Submillimetre properties along the Hubble sequence}
\label{prop:hubble}

In this section we investigate the submillimetre properties of galaxies as a function of Hubble type (\textit{t}). We first compare the distributions of Hubble type for the detections (D) and non-detections (ND) in our OS sample (Figure~\ref{type-hist}). We use the K-S test to find that the probability of their having the same distribution is $\simeq2\%$. Thus Figure~\ref{type-hist} suggests that early-type galaxies are less likely to be submillimetre sources than later types.

\begin{figure*}
\begin{center}
\subfigure[\label{bhist-split}]{
\includegraphics[angle=0, width=8.7cm]{fig13_a.ps}}
\subfigure[\label{temp-type-m}]{
\includegraphics[angle=0, width=8.7cm]{fig13_b.ps}}
\caption{\label{type-beta-temp}{Distribution of (a) $\beta$ values and (b) $T_{d}$ values for the OS SLUGS, with different Hubble types (as given in LEDA according to the RC2 code, and listed in Table~\ref{fluxtab}) indicated by different shaded regions: E-S0 (t=-5 to 0), Early-type spirals (t=1 to 4), S? (t=5), Late-type spirals (t=6 to 10).
((b): Inserted panel: mean $T_{d}$ for the OS SLUGS for different Hubble types, with error bars showing the error on the mean (bins with only 1 source are not plotted)).}}
\end{center}
\end{figure*}

To investigate this apparent morphological difference further, we estimated the submillimetre luminosity distributions of early- and late-type galaxies. A major complication is the large number of upper limits. We used the Kaplan-Meier estimator (Wall \& Jenkins 2003) to incorporate information from both the upper limits and the measurements. We defined early-type galaxies as all those with $t\,\leq\,1$ and late-type galaxies as those with $t\,>\,1$. We used this division because the greatest difference between the cumulative distributions of Hubble type for detected and non-detected galaxies (Figure~\ref{type-hist}) was found at t=1. Figure~\ref{steve-lum-plot} shows the cumulative luminosity distributions estimated in this way for the early-type and late-type galaxies. There appears to be a tendency for the late-type galaxies to be more luminous submillimetre sources. However, the tendency is not very strong. We also used the ASURV statistical package for censored data (Feigelson \& Nelson 1985) to compare the results for the two samples, using the Gehan test and the log-rank test (see Wall \& Jenkins 2003). We found a marginally significant (10\%) difference using the log-rank test but no significant difference using the Gehan test.

Figure~\ref{850-opt} shows a plot of 850\hbox{\,$\umu$m } luminosity versus optical luminosity. For clarity we simply divide our sample into 4 broad groups based on the galaxies' t-type parameter given in LEDA (which uses the standard numerical codes for the de Vaucouleurs morphological type, as defined in RC2): E-S0 \mbox{(t=-5 to 0)}, Early-type spirals \mbox{(t=1 to 4)}, S? (t=5) and Late-type spirals \mbox{(t=6 to 10)}. The different Hubble types show similar relationships. On further inspection of the data, the more marked dependence on Hubble type visible in Figure~\ref{type-hist} appears to be at least partly caused by the early-type galaxies having been observed in worse conditions. In summary, there appears to be some difference in submillimetre properties as one moves along the Hubble sequence, but it is not very strong.

We can also use the results of our spectral fits to investigate whether there are any trends with Hubble type. As above, we simply divide our sample into 4 broad groups based on the galaxies' t-type. Figures~\ref{bhist-split} and \ref{temp-type-m} show the distributions of $\beta$ and $T_{d}$ (derived from our single-component fits) for the OS sample. We note that the objects of each type appear fairly evenly distributed across the bins from \mbox{$\beta$\,=\,0 to 2} (Figure~\ref{bhist-split}), and in order to test this statistically we divide the sample into two broad groups: early types ($-5\leq \textrm{t-type} \leq4$) and late types ($5\leq \textrm{t-type} \leq10$), and perform a K-S test on the two groups. We find that the distributions of the early and late type groups are not significantly different. The distribution of isothermal dust temperatures appears similar for all Hubble types (Figure~\ref{temp-type-m}); we find no significant differences between the early and late types.

We also investigated the distributions of the warm and cold component temperatures found from our two-component fits to look for any differences between early and late types; for example, Popescu et al.
(2002) find a tendency for the temperatures of the cold dust component to become colder for later types. We divided our 18 two-component fitted temperatures into Hubble types as in Popescu et al. (2002) and, due to our smaller number of sources, also into two broad groups of early ($0\leq \textrm{t-type} \leq4$) and late ($6\leq \textrm{t-type} \leq10$) types, and compared the overall distributions and the median $T_{c}$ for each type grouping. We found no differences between either the overall distributions or the median values of $T_{c}$ or $T_{w}$ for the early and late types, though we note the limitations of such a small sample.

\begin{figure}
\begin{center}
\includegraphics[angle=0, width=8.45cm]{fig14.ps}
\caption{\label{d-HI-mass}{Dust mass versus HI mass for the OS and IRS samples (filled and open circles respectively).}}
\end{center}
\end{figure}

\subsection{Ellipticals}
\label{ellipticals}

It was once thought that ellipticals were entirely devoid of dust and gas, but optical absorption studies now show that dust is usually present (Goudfrooij et al. 1994; van Dokkum \& Franx 1995). Furthermore, dust masses for the \mbox{$\sim$\,15\%} of ellipticals detected by \textit{IRAS} (Bregman et al. 1998) have been found to be as much as a factor of 10--100 higher when estimated from their FIR emission than when estimated from optical absorption (Goudfrooij \& de Jong 1995). At 850\hbox{\,$\umu$m } we detect 6 ellipticals, out of a total of 11 ellipticals in the OS sample, and find them to have dust masses in excess of \mbox{$10^{7}$ $M_{\odot}$}. However, a literature search revealed that for 4 of the 6 detections there are radio sources. We have used the radio data to estimate the contribution of synchrotron emission at 850\,\micron. These estimates are often very uncertain because of the limited number of flux measurements available between 1.4\,GHz and 850\hbox{\,$\umu$m } (353\,GHz). However, in some cases (Section~\ref{maps}) it is clear that some or all of the 850\hbox{\,$\umu$m } emission may be synchrotron radiation. We are currently investigating ellipticals further with SCUBA observations of a larger sample. This will be the subject of a separate paper (Vlahakis et al., in prep.).

\subsection{The relationship between gas and dust}
\label{prop:gas-dust}

In D00 we found that both the mass of atomic gas and the mass of molecular gas are correlated with dust mass, but the correlation is tighter for the molecular gas. There are virtually no CO measurements for the OS sample, so here we have only estimated the mass of atomic gas. We compared the dust mass ($M_{d}$, calculated using dust temperatures from the isothermal fits) to the HI mass for the OS sample (Figure~\ref{d-HI-mass}) and find a very weak correlation. Though the correlation for the OS sample alone is very weak, it is nonetheless consistent with the correlation found by D00 for the IRS sample; most of the OS points lie within the region covered by the IRS points and span the same range of HI masses, but we note that we have no HI masses for the OS sample objects with the highest dust masses. The weakness of the correlation for the OS sample is therefore likely due simply to the small number of OS sample 850\hbox{\,$\umu$m } detections for which we have HI data (28 objects). The mean neutral gas-to-dust ratio for the OS sample is $M_{HI}/M_{d}$=395$\pm71$, where the error given is the error on the mean.
The (neutral $+$ molecular) gas-to-dust ratios for the IRS SLUGS sample and the Devereux \& Young (1990; herein DY90) sample of spiral galaxies are respectively $M_{{H_{2}}+HI}/M_{d}$=581$\pm43$ and $M_{{H_{2}}+HI}/M_{d}$=1080$\pm70$, but since for the OS sample we have no CO measurements, and therefore no measure of the mass of molecular hydrogen, we can at this stage only compare the neutral gas-to-dust ratio for the OS sample. We therefore compare our OS value to mean neutral gas-to-dust ratios which we calculate, for the IRS sample and the DY90 sample respectively, to be $M_{HI}/M_{d}$=305$\pm24$ and $M_{HI}/M_{d}$=2089$\pm341$. There is a large difference between the values for both SLUGS samples and the value determined by DY90. This is almost certainly due to the fact that the DY90 dust masses were estimated from \textit{IRAS}\/ fluxes and therefore, for the reasons described in Section~\ref{intro}, will have `missed' the cold dust.

There is also a difference between the SLUGS values and the Galactic value of 160 for the (neutral $+$ molecular) gas-to-dust ratio (the value derived from Sodroski et al. (1994) by D00). The neutral gas-to-dust ratios for both the SLUGS samples are at least a factor of 2 larger than this Galactic value, and as shown by D00, when the molecular gas is included the value of the gas-to-dust ratio for the IRS sample is more than 3 times larger than the Galactic value. D00 attribute this discrepancy to a missed `cold dust' component \mbox{($T_{d}\le 20$\,K)} in the IRS sample. We have already noted in this paper that the single-temperature fits lead to dust masses approximately a factor of 2 lower than the more realistic two-component fits (Section~\ref{sec:dmass}). Using the dust masses calculated from our two-component fits ($M_{d2}$; Table~\ref{450tab}), for the 13 galaxies for which there are HI masses we find that the mean neutral gas-to-dust ratio for the OS sample is then $M_{HI}/M_{d2}$=192$\pm44$. This is in good agreement with the Galactic value, although if there is a significant amount of molecular gas this value would obviously be higher.

\section{Luminosity and Dust Mass Functions}
\label{lumfun}

The `accessible volume' method (Avni \& Bahcall 1980) will, in principle, produce unbiased estimates of the submillimetre luminosity function (LF) and dust mass function (DMF) provided that no population of galaxies is unrepresented by the sample used to derive them. In Paper I (D00) we produced a first estimate of the LF and DMF from the IRS sample. However, since our new observations of the OS sample have shown the existence of a population of galaxies with low values of the $S_{60}/S_{100}$ and $S_{60}/S_{850}$ flux ratios (Figure~\ref{colplot} and discussion in Section~\ref{sed-fits}) \textit{of which there is not a single representative in the IRS sample}, our earlier estimates of the LF and DMF are likely to be biased. In this section we use our new (OS sample) results to produce new estimates of the submillimetre LF and DMF.

\subsection{Method}
\label{lumfun:method}

We derive the local submillimetre LF and DMF by two different methods: 1) directly from the OS SLUGS sample, and 2) by extrapolating the spectral energy distributions of the galaxies in the \textit{IRAS} PSCz catalogue out to 850\,\micron. The PSCz catalogue (Saunders et al. 2000) is a complete redshift survey of $\sim$15000 \textit{IRAS}\/ galaxies in the \textit{IRAS}\/ Point Source Catalogue.
Serjeant \& Harrison (2005; herein SH05) used the PSCz galaxies and the IRS SLUGS submm:far-IR two-colour relation to extrapolate the SEDs of the PSCz galaxies out to 850\hbox{\,$\umu$m } and produce an 850\hbox{\,$\umu$m } LF. Importantly, this method allows us to probe a wider range of luminosities than probed directly by the SLUGS samples. We estimate the LF for both methods using \begin{equation} \label{accvol} \Phi(L)\Delta L=\sum_{i}\frac{1}{V_{i}} \end{equation} (Avni $\&$ Bahcall 1980). Here $\Phi(L)\Delta L$ is the number density of objects (Mpc$^{-3}$) in the luminosity range $L$ to $L+\Delta L$, the summation is over all the objects in the sample lying within this luminosity range, and $V_{i}$ is the accessible volume of the $i$th object in the sample. Throughout we use an $H_{0}$ of \mbox{75 km\,s$^{-1}$Mpc$^{-1}$} and a `concordance' universe with $\Omega_{M}$=0.3 and $\Omega_{\Lambda}$=0.7. We estimate the dust mass function (the space density of galaxies as a function of dust mass) in the same way as the LF, substituting dust mass for luminosity in Equation~\ref{accvol}. The details of these two methods, hereafter referred to as `directly measured' and `PSCz-extrapolated', are discussed in Sections~\ref{method:850LF} and~\ref{method:pscz} respectively. \begin{table} \caption{\label{optlf}\small{Directly measured OS SLUGS luminosity and dust mass functions}} \begin{tabular}{cccc} \hline \multicolumn{4}{c}{850\hbox{\,$\umu$m } luminosity function} \\ \smallskip \\ $log L_{850}$ & $\phi$(L) & $\sigma_{\phi}$ & \\ (W\,Hz$^{-1}$sr$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) & \\ \smallskip \\ 20.75 & 9.17e-3 & 3.47e-3 & \\ 21.01 & 3.83e-3 & 1.15e-3 & \\ 21.27 & 2.10e-3 & 6.32e-4 & \\ 21.52 & 1.20e-3 & 3.10e-4 & \\ 21.78 & 6.03e-4 & 2.46e-4 & \\ 22.04 & 9.14e-5 & 5.28e-5 & \\ \medskip \\ $\alpha$ & $L_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\ & (W\,Hz$^{-1}$sr$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) \\ \smallskip \\ $-$1.71$^{+0.60}_{-0.57}$ & $4.96^{+6.1}_{-2.5}\times10^{21}$ & 1.67$^{+5.21}_{-1.18}\times10^{-3}$ & 0.31 \\ \medskip\\ \multicolumn{4}{c}{850\hbox{\,$\umu$m } dust mass function} \\ \smallskip \\ $log M_{d}$ & $\phi$(M) & $\sigma_{\phi}$ & \\ ($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) & \\ \smallskip \\ 6.75 & 9.08e-3 & 3.03e-3 & \\ 6.99 & 3.99e-3 & 1.33e-3 &\\ 7.23 & 3.09e-3 & 8.57e-4 & \\ 7.48 & 9.25e-4 & 3.08e-4 &\\ 7.72 & 8.14e-4 & 2.45e-4 &\\ 7.96 & 5.69e-5 & 4.02e-5 &\\ \medskip \\ $\alpha$ & $M_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\ & ($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) \\ \smallskip \\ $-$1.67$^{+0.24}_{-0.25}$ & 3.09$^{+1.09}_{-0.64}\times10^{7}$ & 3.01$^{+1.62}_{-1.38}\times10^{-3}$ & 1.17 \\ \medskip \\ \hline \medskip \end{tabular} \end{table} \begin{table} \caption{\label{PSCZlf}\small{PSCz-extrapolated luminosity function}} \begin{tabular}{cccc} \hline \smallskip \\ log $L_{850}$ & $\phi$(L) & $\sigma_{\phi}^{down}$ & $\sigma_{\phi}^{up}$ \\ (W\,Hz$^{-1}$sr$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) & \multicolumn{2}{c}{(Mpc$^{-3}$dex$^{-1}$)} \\ \smallskip \\ 18.52 & 3.42e-02 & 2.42e-02 & 2.74e-02 \\ 18.75 & 6.30e-02 & 2.38e-02 & 2.83e-02 \\ 18.99 & 3.90e-02 & 1.62e-02 & 9.45e-03 \\ 19.23 & 3.20e-02 & 1.06e-02 & 6.17e-03 \\ 19.47 & 2.35e-02 & 3.50e-03 & 7.86e-03 \\ 19.70 & 3.08e-02 & 8.14e-03 & 3.42e-03 \\ 19.94 & 1.85e-02 & 2.81e-03 & 5.72e-03 \\ 20.18 & 1.26e-02 & 1.98e-03 & 1.34e-03 \\ 20.42 & 1.16e-02 & 1.14e-03 & 6.74e-04 \\ 20.65 & 1.02e-02 & 1.70e-03 & 4.41e-04 \\ 20.89 & 6.67e-03 & 1.12e-03 & 8.25e-04 \\ 21.13 & 
4.30e-03 & 6.77e-04 & 3.01e-04 \\
21.36 & 2.73e-03 & 7.59e-04 & 1.63e-04 \\
21.60 & 1.34e-03 & 4.61e-04 & 1.00e-04 \\
21.84 & 4.43e-04 & 1.75e-04 & 1.36e-04 \\
22.08 & 1.17e-04 & 6.67e-05 & 2.36e-05 \\
22.31 & 1.85e-05 & 7.86e-06 & 1.45e-05 \\
22.55 & 4.27e-06 & 2.64e-06 & 3.81e-07 \\
22.79 & 2.91e-07 & 1.28e-07 & 9.39e-07 \\
23.03 & 9.86e-08 & 5.62e-08 & 3.49e-08 \\
\medskip \\
$\alpha$ & $L_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\
 & (W\,Hz$^{-1}$sr$^{-1}$) & (Mpc$^{-3}$dex$^{-1}$) \\
\smallskip \\
$-$1.38$^{+0.02}_{-0.03}$ & 3.73$^{+0.29}_{-0.32}\times10^{21}$ & 4.17$^{+0.41}_{-0.45}\times10^{-3}$ & 1.0 \\
\smallskip \\
\hline
\medskip
\end{tabular}
\end{table}

\subsubsection{Directly measured 850\,$\mu$m luminosity function and dust mass function}
\label{method:850LF}

We calculated the directly measured LF and DMF from the 52 objects in the OS sample which were detected at 850\,\micron. For the DMF we use the dust masses listed in Table~\ref{lumtab}, which were calculated using the isothermal SED-fitted temperatures or, where no fit was made (11 objects), using a dust temperature of 20\,K. For the OS sample the accessible volume is the maximum volume in which the object would still be detected at 850\hbox{\,$\umu$m } and still be included in the CfA sample. Since objects with \mbox{$cz<1900$\,km\,s$^{-1}$} were excluded from our sample this volume is not included in our calculation of $V_{i}$. When calculating the maximum redshift at which an object would still be detected at 850\hbox{\,$\umu$m } we used the noise appropriate for the observation of that object. We corrected the LF by the factor 97/81 to account for the CfA galaxies we did not observe at all at 850\hbox{\,$\umu$m } (Section~\ref{sample}). The corrected directly measured 850\hbox{\,$\umu$m } LF and DMF are shown as star symbols in Figures~\ref{lumfun-plot} and~\ref{dmfun} respectively, and are given in tabular form in Table~\ref{optlf}. The errors on the directly measured LF and DMF are standard Poisson errors. One effect that may make our estimates of the LF and DMF slight underestimates is that the OS galaxies not detected at 850\hbox{\,$\umu$m } were generally observed under worse weather conditions than the sources that were detected.

\subsubsection{\textit{IRAS} PSCz-extrapolated 850\,$\mu$m luminosity function and dust mass function}
\label{method:pscz}

\begin{figure*}
\begin{center}
\includegraphics[angle=270, width=13cm]{fig15.ps}
\caption{\label{lumfun-plot}{PSCz-extrapolated 850\hbox{\,$\umu$m } luminosity function (filled circles) with best-fitting Schechter function (solid line). The parameters for the Schechter function are $\alpha=-1.38$, $L_{\ast}=3.7\times10^{21}$ W\,Hz$^{-1}$sr$^{-1}$. Also shown are the directly measured 850\hbox{\,$\umu$m } luminosity function for the OS SLUGS sample (filled stars) with best-fitting Schechter function (dashed line) and the results for the IRS SLUGS sample from Dunne et al. (2000) (open triangles and dotted line).}}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\subfigure[]{\label{dmfun-a}
\includegraphics[angle=270, width=13cm]{fig16_a.ps}}\\
\subfigure[]{
\includegraphics[angle=270, width=13cm]{fig16_b.ps}}
\caption{\label{dmfun}{(a) PSCz-extrapolated dust mass function (filled circles) with best-fitting Schechter function (solid line). The dust masses were calculated using $T_{d}$ derived from the \textit{IRAS} 100/60 colour and $\beta$=2.
The parameters for the Schechter function are $\alpha=-1.34$, $M_{\ast}=2.7\times10^7 M_{\odot}$. The dashed line and open circles are for a `cold' dust mass function in which dust masses are calculated using $T_{d}$=20\,K and $\beta$=2. The best-fitting Schechter parameters are $\alpha=-1.39, M_{\ast}=5.3\times10^7 M_{\odot}$. (b) Directly measured dust mass function for the OS SLUGS sample (filled stars) with best-fitting Schechter function (dashed line). The dust masses were calculated using $T_{d}$ from isothermal SED fitting. Also shown are the results for the IRS SLUGS sample from Dunne et al. (2000) (open triangles and dotted line). The filled circles and solid line show the PSCz-extrapolated dust mass function as in (a).}}
\end{center}
\end{figure*}

In order to better constrain the LF at the lower luminosity end, more data points are needed, probing a wider range of luminosities than probed directly by the SLUGS samples. We achieve this using a method described by SH05, whereby the 850\hbox{\,$\umu$m } LF is determined by extrapolating the spectral energy distributions of the $\sim$15000 \textit{IRAS} PSCz survey galaxies (Saunders et al. 2000) out to 850\,\micron. Since for the two SLUGS samples we find a strong correlation between the $S_{60}/S_{100}$ and $S_{60}/S_{850}$ colours (Figure~\ref{colplot}), we can use a linear fit to this colour-colour relation to make the extrapolation from 60\hbox{\,$\umu$m } to 850\hbox{\,$\umu$m } flux density (the inversion is illustrated in the sketch below). SH05 derived the submm:far-IR two-colour relationship from the IRS SLUGS sample. However, we have shown in this paper that the OS and IRS samples have quite different properties. In order to determine the sensitivity of the LF/DMF to the colour relationship we have derived colour relationships for the combined \mbox{OS + IRS} sample, the OS sample alone, and the IRS sample alone (Table~\ref{colplot-params}).

\begin{table}
\caption{\label{colplot-params}\small{Linear fit parameters for the SLUGS colour-colour plot (log($S_{60}/S_{100}$) vs log($S_{60}/S_{850}$)) shown in Figure~\ref{colplot}.}}
\begin{tabular}{ccc}
\hline
SLUGS data fitted & \multicolumn{2}{c}{linear fit (y=mx+c)} \\
 & m & c \\
\hline
OPT+\textit{IRAS} & $0.365\pm0.014$ & $-0.881\pm0.024$\\
OPT & $0.296\pm0.031$ & $-0.797\pm0.039$\\
\textit{IRAS} & $0.421\pm0.023$ & $-0.981\pm0.042$\\
\hline
\end{tabular}
\end{table}

In order to produce unbiased estimates of the LF and DMF we have excluded some PSCz galaxies. Firstly, we exclude all those objects that do not have redshifts, those that have velocities \mbox{$<300$\,km\,s$^{-1}$} (to ensure that peculiar velocities are unimportant), and those that have redshifts $>0.2$ (thus excluding any ultra-luminous, high-redshift objects). We then exclude all objects with upper limits at 100\,\micron, since for these objects we cannot apply the SH05 method. Finally, we use the \textit{IRAS} Point Source Catalogue flags, as listed in the PSCz catalogue, to exclude sources which are likely to be either solely or strongly contaminated by Galactic cirrus. It is important to exclude these sources because they are very cold sources and so can potentially have a large effect on the 850\hbox{\,$\umu$m } LF. If two or more of the flags indicate Galactic cirrus (using flag value limits indicated in the \textit{IRAS} Explanatory Supplement) we exclude that object.
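
To make the extrapolation explicit, the following sketch inverts the fitted colour relation to predict an 850\hbox{\,$\umu$m } flux density from the two \textit{IRAS} fluxes; the coefficients are the combined-sample values from Table~\ref{colplot-params}, and the function name is ours.
\begin{verbatim}
import numpy as np

# Combined OS + IRAS fit (Table colplot-params):
#   log(S60/S100) = m * log(S60/S850) + c
M_FIT, C_FIT = 0.365, -0.881

def s850_from_iras(s60, s100):
    # Invert the colour-colour relation to extrapolate S850
    # (returned in the same units as s60, e.g. Jy)
    log_s60_s850 = (np.log10(s60 / s100) - C_FIT) / M_FIT
    return s60 / 10 ** log_s60_s850

# Example: S60 = 5 Jy, S100 = 10 Jy  ->  S850 ~ 0.13 Jy
print("%.3f" % s850_from_iras(5.0, 10.0))
\end{verbatim}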
As a check on the validity of this flag-based exclusion we inspected by eye (using the IRSA ISSA Image Server) a sample of $\sim$40 objects randomly chosen from those excluded as Galactic cirrus, and a further sample of $\sim$40 objects randomly chosen from those that made it into our final sample. We found that 98\% of the sources with cirrus flags and 7\% of the sources without cirrus flags showed signs of significant cirrus, although for two thirds of the sources with cirrus flags there still appeared to be a genuine source present. In total, from the $\sim$14500 galaxies with redshifts in the \textit{IRAS} PSCz catalogue we exclude $\sim$4300 objects because of either 100\hbox{\,$\umu$m } upper limits or Galactic cirrus. This leaves 10252 galaxies in our PSCz-selected sample.

\begin{table}
\caption{\label{PSCZdmf}\small{PSCz-extrapolated dust mass functions}}
\begin{tabular}{cccc}
\hline
\multicolumn{4}{c}{PSCz-extrapolated single-temperature dust mass function} \smallskip \\
log $M_{d}$ & $\phi$(M) & $\sigma_{\phi}^{down}$ & $\sigma_{\phi}^{up}$ \\
($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) & \multicolumn{2}{c}{(Mpc$^{-3}$dex$^{-1}$)} \\
\smallskip \\
4.30 & 3.26e-02 & 2.30e-02 & 2.30e-02 \\
4.55 & 1.62e-02 & 7.26e-03 & 4.37e-02 \\
4.80 & 6.78e-02 & 4.22e-02 & 1.70e-02 \\
5.05 & 3.11e-02 & 8.74e-03 & 9.10e-03 \\
5.30 & 2.58e-02 & 6.36e-03 & 3.77e-03 \\
5.54 & 2.36e-02 & 5.72e-03 & 8.87e-03 \\
5.79 & 2.68e-02 & 8.69e-03 & 2.48e-03 \\
6.04 & 1.46e-02 & 1.62e-03 & 3.60e-03 \\
6.29 & 1.19e-02 & 2.41e-03 & 6.62e-04 \\
6.54 & 9.29e-03 & 3.93e-04 & 8.97e-04 \\
6.79 & 7.68e-03 & 1.68e-03 & 2.53e-04 \\
7.04 & 4.67e-03 & 8.33e-04 & 3.80e-04 \\
7.29 & 2.80e-03 & 7.87e-04 & 1.03e-04 \\
7.54 & 1.38e-03 & 4.83e-04 & 1.94e-04 \\
7.79 & 4.51e-04 & 2.12e-04 & 1.87e-04 \\
8.04 & 1.06e-04 & 6.10e-05 & 3.12e-05 \\
8.29 & 1.44e-05 & 7.21e-06 & 1.48e-05 \\
8.54 & 3.10e-06 & 2.22e-06 & 1.18e-06 \\
8.79 & 4.82e-07 & 4.21e-07 & 5.94e-07 \\
9.04 & 5.24e-08 & 5.64e-08 & 5.24e-08 \\
\medskip \\
$\alpha$ & $M_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\
 & ($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) \\
\smallskip \\
$-$1.34$^{+0.13}_{-0.08}$ & 2.74$^{+1.23}_{-1.13}\times10^{7}$ & 5.16$^{+3.90}_{-1.74}\times10^{-3}$ & 0.65 \\
\medskip \\
\multicolumn{4}{c}{PSCz-extrapolated 20\,K `cold' dust mass function} \smallskip \\
log $M_{d}$ & $\phi$(M) & $\sigma_{\phi}^{down}$ & $\sigma_{\phi}^{up}$ \\
\smallskip \\
4.65 & 3.41e-02 & 2.41e-02 & 2.76e-02 \\
4.88 & 6.28e-02 & 2.37e-02 & 2.82e-02 \\
5.12 & 3.88e-02 & 1.61e-02 & 9.42e-03 \\
5.36 & 3.20e-02 & 1.06e-02 & 6.16e-03 \\
5.60 & 2.59e-02 & 4.59e-03 & 6.08e-03 \\
5.84 & 2.86e-02 & 4.67e-03 & 3.24e-03 \\
6.07 & 1.85e-02 & 2.60e-03 & 4.39e-03 \\
6.31 & 1.29e-02 & 1.82e-03 & 1.13e-03 \\
6.55 & 1.17e-02 & 1.11e-03 & 6.77e-04 \\
6.79 & 1.02e-02 & 1.58e-03 & 4.37e-04 \\
7.03 & 6.75e-03 & 1.21e-03 & 5.96e-04 \\
7.26 & 4.29e-03 & 6.15e-04 & 4.09e-04 \\
7.50 & 2.79e-03 & 7.83e-04 & 1.07e-04 \\
7.74 & 1.35e-03 & 4.65e-04 & 1.07e-04 \\
7.98 & 4.54e-04 & 1.81e-04 & 1.37e-04 \\
8.22 & 1.19e-04 & 6.64e-05 & 2.08e-05 \\
8.45 & 1.91e-05 & 7.57e-06 & 1.47e-05 \\
8.69 & 4.56e-06 & 3.07e-06 & 3.27e-07 \\
8.93 & 2.87e-07 & 9.82e-08 & 9.94e-07 \\
9.17 & 1.03e-07 & 5.48e-08 & 3.26e-08 \\
\medskip \\
$\alpha$ & $M_{\ast}$ & $\phi_{\ast}$ & $\chi^2_{\nu}$ \\
 & ($M_{\odot}$) & (Mpc$^{-3}$dex$^{-1}$) \\
\smallskip \\
$-$1.39$^{+0.03}_{-0.02}$ & 5.28$^{+0.45}_{-0.55}\times10^{7}$ & 4.04$^{+0.74}_{-0.50}\times10^{-3}$ & 1.28 \\
\medskip \\
\hline
\end{tabular}
\end{table}

For the PSCz-extrapolated sample the accessible volume
is the maximum volume in which the object could still be detected and still be included in the \textit{IRAS} PSCz catalogue. Since objects with \mbox{$cz<300$\,km\,s$^{-1}$} were excluded from our sample this volume is not included in our calculation of $V_{i}$. For the PSCz-extrapolated DMF the dust masses were calculated using $T_{d}$ derived from the \textit{IRAS} 100/60 colour and $\beta$=2.

For completeness, the effect of excluding real 60\hbox{\,$\umu$m } sources must be taken into account by applying a correction factor to the LF and DMF. This correction factor will be uncertain, since some excluded sources will be real and some not. We therefore correct using our best estimate of the number of real sources, as follows. We corrected for two thirds of the sources we excluded as being contaminated by cirrus. The appropriate correction factor for the sources that were excluded because they have 100\hbox{\,$\umu$m } upper limits is even more uncertain. These are probably all genuine sources, but they will generally have warmer colours than the sources that were not excluded. We arbitrarily corrected for 50\% of these. Including the correction for $\sim$100 sources without redshifts, the final correction factor for excluded sources is 1.27. This is obviously very uncertain; however, at most it could be 1.43 and at least 1.00. This produces maximum errors of $+$13\% and $-$21\% on the LF and DMF in addition to the errors described below. We made a correction for evolution out to z=0.2 using a density evolution $\propto (1+z)^{7}$ (Saunders et al. 1990). We confirmed that the strength assumed for the evolution made virtually no difference to our results.

The PSCz-extrapolated 850\hbox{\,$\umu$m } LF and DMF are shown as filled circles in Figures~\ref{lumfun-plot} and~\ref{dmfun} respectively, and are given in tabular form in Tables~\ref{PSCZlf} and~\ref{PSCZdmf}. For comparison we also produce a `cold' PSCz-extrapolated DMF, produced as above but with dust masses calculated using $T_{d}$=20\,K and $\beta$=2; this is shown as open circles in Figure~\ref{dmfun-a} and listed in Table~\ref{PSCZdmf}.

While the errors on the directly measured LF and DMF are standard Poisson errors, the errors on the PSCz-extrapolated LF and DMF are derived from a combination of Poisson errors and the errors resulting from the fact that the 850\hbox{\,$\umu$m } luminosities have been derived using the best-fitting linear relation to our SLUGS colour-colour plot (Figure~\ref{colplot}). In order to take into account how our choice of linear fit affects the LF we produce, we additionally generate two `extremes' of the PSCz-extrapolated LF and DMF using two alternative fits to the SLUGS colour-colour plot: 1) a fit to the OS data only, and 2) a fit to the IRS data only (linear fit parameters listed in Table~\ref{colplot-params}). We then use the maximum difference between these `extreme' LF values and our actual PSCz-extrapolated LF data points as the errors on our LF due to our choice of colour-colour linear relation. We then also take into account the number statistics, and thus add in quadrature the standard Poisson errors and the `choice of colour-colour fit' errors to obtain the total errors listed in Tables~\ref{PSCZlf} and~\ref{PSCZdmf}. In addition to these errors there are, at most, upper and lower errors of $+$13\% and $-$21\% from our choice of correction factors.
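
Equation~\ref{accvol} translates into a short numerical procedure; the sketch below is our own minimal illustration, with placeholder inputs rather than the actual per-object luminosities and accessible volumes.
\begin{verbatim}
import numpy as np

def vmax_lf(log_l, v_acc, bin_edges):
    # 1/Vmax estimator (Equation accvol): space density per dex,
    # with standard Poisson errors.
    # log_l: log10 luminosities; v_acc: accessible volumes (Mpc^3)
    widths = np.diff(bin_edges)
    phi = np.zeros(len(widths))
    err = np.zeros(len(widths))
    idx = np.digitize(log_l, bin_edges) - 1
    for i in range(len(widths)):
        w = 1.0 / v_acc[idx == i]          # 1/V_i weights in bin i
        phi[i] = w.sum() / widths[i]       # Mpc^-3 dex^-1
        err[i] = np.sqrt((w**2).sum()) / widths[i]
    return phi, err

# Illustrative placeholders for 52 detected objects
rng = np.random.default_rng(3)
log_l = rng.uniform(20.6, 22.2, 52)        # log L_850
v_acc = 10 ** rng.uniform(4.0, 7.0, 52)    # accessible volumes
phi, err = vmax_lf(log_l, v_acc, np.linspace(20.6, 22.2, 7))
\end{verbatim}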
\subsection{Results and discussion}
\label{lumfun:results}

The directly measured OS LF and the PSCz-extrapolated LF agree remarkably well over the range of luminosities covered by the SLUGS samples, yet we find that in comparison the IRS sample of D00 (plotted as triangles in Figures~\ref{lumfun-plot} and~\ref{dmfun}) consistently underestimates the submillimetre LF by a factor of 2 and the DMF by a factor of 4. The fact that we see this underestimate compared to our OS sample, which by definition should be free from any dust temperature selection effects, is strong evidence that a population of `cold' dusty galaxies was indeed `missed' by \textit{IRAS} and that therefore the IRS sample was missing $\sim$half the galaxies. The bigger difference between the DMFs is probably due to the fact that, unlike the IRS sample, for the OS sample we do not have fitted isothermal SEDs for all galaxies and have therefore calculated dust masses using an assumed $T_{d}$=20\,K for $\sim$20\% of the sample (Section~\ref{method:850LF}).

We fit both the directly measured and PSCz-extrapolated 850\hbox{\,$\umu$m } LFs and DMFs with Schechter functions of the form
\[ \Phi(L)\,dL=\phi_{\ast}\left(\frac{L}{L_{\ast}}\right)^{\alpha} e^{-L/L_{\ast}}\,\frac{dL}{L_{\ast}} \]
(Press \& Schechter 1974; Schechter 1975). The best-fitting parameters for the PSCz-extrapolated 850\hbox{\,$\umu$m } LF and DMF are listed in Tables~\ref{PSCZlf} and~\ref{PSCZdmf} respectively, along with the reduced chi-squared values ($\chi^{2}_{\nu}$) for the fits; likewise the best-fitting parameters for the directly measured 850\hbox{\,$\umu$m } LF and DMF are given in Table~\ref{optlf}. We find that both the directly measured and PSCz-extrapolated LFs and DMFs are well fitted by Schechter functions. For the PSCz-extrapolated LF and DMF the best-fitting Schechter function \mbox{($\alpha$\,=\,$-$1.38)} fits the data points extremely well across most of the luminosity range -- however, we note that the PSCz-extrapolated functions are much less well fitted at the high luminosity end.

Investigation of the 3 or 4 highest luminosity bins has revealed several anomalies for the objects in these bins. The most striking is that in each bin there is typically a small number of objects with accessible volumes 2 or 3 orders of magnitude lower than those of the rest of the objects in that bin, and it is these few objects which are the main contributors to the high space density. There are many possible explanations for the excess at the high luminosity end. One possible explanation could be that the objects in these bins are multiple systems. At larger distances \textit{IRAS}\/ galaxies are mostly very luminous starbursts and are frequently in interacting pairs. The density of galaxy pairs at these distances might be substantially higher than the local galaxy density, which may produce an excess in the LF at high luminosities. Several authors find this excess at high luminosities or high masses: for example, Lawrence et al. (1999) find a similar excess in their 60\hbox{\,$\umu$m } LF, as do Garcia-Appadoo, Disney \& West (in preparation) for their HI mass function, who find that the higher HI masses are typically multiple systems. One can also think of ways our use of a global colour-colour relation might have produced a spurious excess if, for example, the galaxies at the highest luminosities have systematically different colours. This would not, however, explain the excess seen in the 60\hbox{\,$\umu$m } LF.
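
For illustration, the Schechter fits described above could be performed as in the following sketch; the per-dex form of the function and all variable names are ours, and the data points are the directly measured LF bins from Table~\ref{optlf}.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def schechter_per_dex(log_l, alpha, log_lstar, phi_star):
    # Schechter form per dex: phi(logL) = ln(10) phi* x^(alpha+1) e^-x,
    # with x = L/L*, obtained from Phi(L)dL via dL/L = ln(10) d(logL)
    x = 10.0 ** (log_l - log_lstar)
    return np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Directly measured 850um LF bins (Table optlf)
log_l = np.array([20.75, 21.01, 21.27, 21.52, 21.78, 22.04])
phi = np.array([9.17e-3, 3.83e-3, 2.10e-3, 1.20e-3, 6.03e-4, 9.14e-5])
sig = np.array([3.47e-3, 1.15e-3, 6.32e-4, 3.10e-4, 2.46e-4, 5.28e-5])

popt, pcov = curve_fit(schechter_per_dex, log_l, phi, sigma=sig,
                       p0=(-1.7, 21.7, 2e-3), absolute_sigma=True)
alpha, log_lstar, phi_star = popt   # compare with Table optlf
\end{verbatim}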
In our earlier work the 850\hbox{\,$\umu$m } LF derived from the IRS sample (D00) was found to have a slope steeper than $-$2 at the low luminosity end, suggesting that the submillimetre sky should be infinitely bright (a submillimetre `Olbers' Paradox'). Using the OS sample we find the slope of the PSCz-extrapolated 850\hbox{\,$\umu$m } LF is $-$1.38, showing that the LF does flatten out at luminosities lower than those probed by the IRS sample, thus resolving the submillimetre `Olbers' Paradox'.

\section{Conclusions}
\label{conc}

Following our previous SCUBA survey of an \textit{IRAS}-selected sample of galaxies we have carried out the first systematic survey of the local submillimetre Universe free from dust temperature selection effects -- a submillimetre survey of a sample of 81 galaxies selected from the CfA optical redshift survey. We obtained the following results:

(i) We detected 52 out of 81 galaxies at 850\hbox{\,$\umu$m } and 19 galaxies at 450\,\micron. Many of these galaxies have 850\hbox{\,$\umu$m } emission which appears extended with respect to the DSS optical emission, and which seems to correspond to very faint optical features.

(ii) We fitted two-component dust spectral energy distributions to the 60, 100, 450 and 850\hbox{\,$\umu$m } flux densities for 18 of the galaxies which were detected at 850\hbox{\,$\umu$m } \textit{and} at 450\,\micron. We find that the \textit{IRAS}\/ and submillimetre fluxes are well-fitted by a two-component dust model with dust emissivity index $\beta$=2. The tight and fairly constant ratio of $S_{450}/S_{850}$ for both the OS galaxies and the IRS galaxies is evidence that $\beta\approx 2$. The temperatures of the warm component range from 28 to 59\,K; the cold component temperatures range from 17 to 24\,K.

(iii) We find that the ratio of the mass of cold dust to the mass of warm dust is much higher for our optically-selected galaxies than was found in our previous work on \textit{IRAS}-selected galaxies (DE01), and can reach values of $\sim$1000. By comparing the results for the \textit{IRAS}- and optically-selected samples we show that there is a population of galaxies containing a large proportion of cold dust that is unrepresented in the \textit{IRAS}\/ sample.

(iv) We also fitted single-temperature dust spectral energy distributions (to the 60, 100 and 850\hbox{\,$\umu$m } flux densities) for the 41 galaxies in the OS sample with detections in all 3 wavebands. The mean best-fitting temperature for the sample is $\bar{T}_{d}=31.6\pm0.6$\,K and the mean dust emissivity index is $\bar{\beta}=1.12\pm0.05$. These values are significantly lower than for the IRS sample. The very low value of $\beta$ is additional evidence that galaxies, across all Hubble types, contain a significant amount of cold dust.

(v) Using our isothermal fits we find a mean dust mass \mbox{$\bar{M_{d}}=(2.34\pm0.36)\times{10^{7}}$ M$_{\odot}$}, which is comparable to that found for the IRS sample. However, using our two-component fits we find a mean dust mass a factor of two higher.

(vi) We find little change in the properties of dust in galaxies along the Hubble sequence, except a marginally significant trend for early-type galaxies to be less luminous submillimetre sources than late types.

(vii) We detect 6 out of 11 ellipticals in the sample and find them to have dust masses in excess of \mbox{$10^{7}$ $M_{\odot}$}. It is possible, however, that for some of these galaxies the submillimetre emission may be synchrotron emission rather than dust emission.
(viii) We have derived local submillimetre luminosity and dust mass functions, both directly from the optically-selected SLUGS sample and by extrapolation from the \textit{IRAS} PSCz survey, and find excellent agreement between the two. By extrapolating the spectral energy distributions of the \textit{IRAS} PSCz survey galaxies out to 850\hbox{\,$\umu$m } we have probed a wider range of luminosities than probed directly by the SLUGS samples. We find the LFs to be well fitted by Schechter functions except at the highest luminosities. We have shown that, whereas the slope of the \textit{IRAS}-selected LF at low luminosities was steeper than $-$2 (a submillimetre `Olbers' Paradox'), the PSCz-extrapolated LF, as expected, flattens out at the low luminosity end and has a slope of \mbox{$-$1.38}.

(ix) We find that, as a consequence of the omission of a population of `cold' dusty galaxies from the \textit{IRAS}\/ sample, the LF presented in our earlier work (D00) is too low by a factor of 2, and the DMF by a factor of 4.

To investigate the properties of dust in galaxies further, follow-up optical imaging (to obtain deeper images than are available from the DSS) of the whole OS sample detected at 850\hbox{\,$\umu$m } is needed, in order to make a full comparison of the optical versus submillimetre emission. This is important since for many of the OS sample galaxies the 850\hbox{\,$\umu$m } emission appears extended with respect to the DSS optical emission. Work on obtaining these data is in progress.

\section*{Acknowledgements}

We thank Diego Garcia-Appadoo for providing information about his HI Mass Function, and Jonathan Davies for his useful comments. We also thank Steve Serjeant for useful discussions. Many of the observations for this survey were carried out as part of the JCMT service programme, so we are grateful to Dave Clements, Rob Ivison and the many other observers and members of the JCMT staff who have contributed to this project in this way. This research has made use of the NASA/IPAC Extragalactic Database (NED) and the NASA/IPAC Infrared Science Archive, which are operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We have also made use of the LEDA and DSS databases. Research by LD and SE is supported by the Particle Physics and Astronomy Research Council.
\section{Introduction} Scattering of ambient light by fast moving flows has been recognized as a conceivable mechanism operating in different classes of objects. This process quite likely contributes to the linear polarization of initially unpolarized soft radiation up-scattered in blazar jets \citep{beg87}, winds from accretion discs \citep{bel98} and in gamma-ray bursts (\citealt{sha95}; \citealt{laz04}; \citealt{lev04}). A significant level of intrinsic linear polarization of $\Pi\sim3$\% was reported in the microquasar GRO J1655-40 \citep{sca97} and similarly for LS\,5039 \citep{com04} in the optical band. A model of magnetized fireball polarization was discussed by \citet{ghi99}, who demonstrate that the expected polarization lightcurve should exhibit two peaks. Variations of the Galactic Centre linear polarization were reported in the millimetre band \citep{bow05}. Recently, \citet{vii04} brought attention to strong-gravity effects in polarimetry of accreting millisecond pulsars. The idea of X-ray polarization studies providing clues to the physics of accreting compact objects was discussed in seminal papers (\citealt{ang69}; \citealt{bon70}; \citealt{lig75}; \citealt{ree75}; \citealt{sun85}). Here we study a simple model with Thomson scattering on electrons ejected outwards from the centre or falling back; a novel point is that we consider strong gravity effects. The source of seed photons can be identified with the surface of a central star lying at an arbitrary (finite) distance from the scatterer. Alternatively, it can represent an axially symmetric accretion disc near a black hole, a quasi-isotropic (ambient) radiation field, or a combination of all these possibilities. In order to avoid numerical complexities we do not discuss other radiation mechanisms that also produce polarization; namely, we do not consider the synchrotron self-Compton process, which is the most likely process wherever magnetic fields interact with relativistic particles (see \citealt{pou94}; \citealt{cel94} for discussion and for further references). In the non-relativistic regime, a conceptually similar problem was examined by \citet{rud78} and \citet{fox94}, who considered polarization of light of a finite-size star due to scattering on free electrons in a fully ionized circumstellar shell. These works showed that the non-negligible solid angle of the source subtended on the local sky of the scattering particles has a depolarizing influence on the observed fractional polarization. This result was discussed by various authors, because the scattering of the stellar continuum is a probable source of net polarization of visible light in early-type stars, for which polarimetry has become a standard observational tool (e.g.\ \citealt{poe76}; \citealt{bro77}). General relativity effects are negligible in these objects; however, even the Newtonian limit is complex enough: the observed signal does not easily allow one to disentangle the contribution of a circumstellar shell, the primary radiation of the star and the effect of the interstellar medium. In our case the situation is further complicated because the system is highly time-dependent and mutual time delays along different light rays must be taken into account. The general relativity signatures, which we discuss in this paper, reach their maximum in a system containing a compact body with radius less than the photon circular orbit, i.e.\ $R_\star<r_{\rm{}ph}$. 
This can arise either as a result of a non-stationary system with $R_\star{\equiv}R_\star(t)$, presumably during a gravitational collapse, or it can represent a static ultra-compact star, if such stars exist in nature. Here we impose spherical symmetry of the gravitational field, and so both situations are identical as far as the form of the spacetime metric is concerned. The Schwarzschild vacuum solution describes the external gravitational field of all types of compact objects within general relativity, provided that their rotation is negligible and the self-gravity of accreting matter does not contribute significantly to the gravitational field. We will assume that these constraints are fulfilled. A sequence of $R_\star=\mbox{const}$ situations can be employed in order to model a collapsing case. The formalism that we apply works for a system with arbitrary compactness, even though the case of $R_\star=\mbox{const}<r_{\rm{}ph}$ represents an unrealistically large compactness per se, likely violating the causality condition for neutron stars. Such a high compactness would thus normally be excluded; however, the situation has not yet been definitively settled (see \citealt{lat04} for a recent review). According to astrophysically realistic equations of state, neutron-star sizes do not reach ultra-compact dimensions; their typical radii should exceed the photon circular orbit (\citealt{lat01}; \citealt{hae03}), and so the gravitational effects are constrained accordingly. However, the possibility of $R_\star$ being slightly less than $r_{\rm{}ph}$ persists, and there now seems to be a growing awareness of the fact that the very high density regime needs to be explored further. The interest in ultra-compact stars has recently been revived mainly in connection with gravitational waves in general relativity (\citealt{cha91}; \citealt{kok04}). One may consider also more exotic options. Strange quark stars (e.g.\ \citealt{alc86}; \citealt{dey98}) can have their radii extending down to the Buchdahl limit (see \citealt{web05} for a recent review on other forms of quark matter and their relevance for compact stars). If nucleons can be confined at a density lower than the nuclear matter density, then Q-stars could exist with a relatively high mass of $\sim10^2M_\odot$ and a radius as small as $R_\star\sim0.9r_{\rm{}ph}$ (e.g.\ \citealt{mil98}). On a more speculative level is the idea of gravastars \citep{maz04}, whose compactness can exceed the Buchdahl limit. Options for detecting the thermal radiation from different kinds of ultra-compact stars have recently been discussed by \citet{mcc04}. In the case of transiently accreting neutron stars, crustal heating has been nominated as one of the relevant mechanisms generating thermal emission from the surface (\citealt{hae90}; \citealt{bro98}). The issue of realistic equations of state for ultra-compact star matter is beyond the scope of the present paper, as is the actual observational information on masses and radii of compact stars (see e.g. \citealt{hae03}). It will be convenient to introduce a dimensionless parameter, $\zeta\equiv1-R_\star/r$, which maps the whole range of radii above the star surface onto the interval $\langle0,1\rangle$, so that the form of the graphs does not change with the star's compactness. We consider the whole range of $0\leq\zeta\leq1$ allowed by general relativity. It is worth noticing that the formalism used below could be readily applied also to the case of an accretion disc as a source of light near a black hole. 
Obviously, a black hole represents a body with the maximum compactness and, at the same time, it is the most conservative option for such an ultra-compact object (the lower, axial symmetry of the disc radiation field limits the possibility of exploring the problem analytically). Our paper takes general relativity effects into account, including the effect of higher-order images if they arise. Although the signal is usually weak in these images, favourable geometrical arrangements are possible and, even more importantly, photons of the higher-order image experience a characteristic delay with respect to photons following a direct course. This delay (examined in detail by \citealt{boz04}; \citealt{cad05}) could help reveal the presence of a strong gravitational field in the system. Our model provides a useful test bed for astrophysically more realistic schemes. In the next section we formulate the model and describe the calculations. Then we show comparisons with previous results of other authors. We build our discussion on the approach of Beloborodov (\citeyear{bel98}, for polarization) and Abramowicz et al.\ (\citeyear{abr90}, for the motion in combined gravitational and radiation fields in general relativity). In these papers the individual components of the whole picture were treated separately, while we connect them together in a consistent scheme. A reader interested only in the main results on the polarization of higher-order images from light scattered on a moving cloudlet can proceed directly to section~\ref{sec:cloudlet}. As a final point of the introduction, it is worth noticing that \citet{ghi04} propose a model of aborted jets in which colliding clouds and shells occur very near a black hole and are embedded in a strong radiation field. According to their scheme, most of the energy dissipation should take place on the symmetry axis of an accretion disc. This would be another suitable geometry, in which a fraction of light is boosted in the direction of the photon circular orbit and eventually redirected to the observer. One may fear that the Thomson scattering approximation is not adequate to describe a turbulent medium in which electrons become very hot \citep{pou94}; however, the accuracy should still be sufficient for energies of several keV, at which planned polarimeters are supposed to operate. The seed photons also have a different distribution when they originate from an accretion disc, but we have examined the variation of the polarization that an observer can expect from this kind of system \citep{hor05}, and our calculations confirm that the expected polarization has a magnitude smaller than, but roughly similar to, the values predicted by the simple model adopted here. \section{The model and calculations} \subsection{The set up of the model and reference frames} We assume a fully ionized, optically thin medium distributed outside the source of seed photons. These primary photons follow null geodesics until they are intercepted by an electron, which itself is moving under the mutual competition between gravity and radiation (we do not consider the effect of magnetic fields on the particle motion and radiation in this paper). Hence, we adopt the approximation of single scattering and we assume the Schwarzschild metric for the gravitational field of a compact body. We consider frequency-integrated quantities. 
The polarization vector of the scattered light is parallel-transported through the gravitational field to a distant observer and, as a consequence, the polarization magnitude $\Pi(\fvec{r},\fvec{n})$ and the redshifted intensity $\tilde{I}\equiv\left(1+\mathcal{Z}\right)^{-4}I(\fvec{r},\fvec{n})$ (expressed here in terms of the redshift $\mathcal{Z}$) are invariant. Polarization is described in terms of the Stokes parameters $I$, $Q$, $U$ and $V$ \citep{cha60,ryb79}: $I$ has the meaning of the intensity along a light ray, $Q$ and $U$ characterize the linear polarization in two orthogonal directions (say $\fvec{e}_X$ and $\fvec{e}_Y$) in the plane perpendicular to the ray, and $V$ is the circularity parameter. The polarization angle is defined by $\tan\,2\chi=U/Q$. It gives the orientation of the major axis of the polarization ellipse with respect to $\fvec{e}_X$. One can form a polarization basis in the local space by supplementing $\fvec{e}_X$ and $\fvec{e}_Y$ with another unit vector, $\fvec{e}_Z$, pointed in the direction of the light ray. Three parameters are necessary to describe a monochromatic beam, for which the condition $I^2=Q^2+U^2+V^2$ holds. In the case of partially polarized light the whole set of four parameters is generally required. It is then customary to define the degree of elliptical polarization, $\Pi\,\equiv\,{I}^{-1}\,(Q^2+U^2+V^2)^{1/2}$, which satisfies $0\leq\Pi\leq1$. \begin{figure*} \begin{center} \hfill~ \includegraphics[width=0.45\textwidth]{fig1a.eps} \hfill~ \includegraphics[width=0.25\textwidth]{fig1b.eps} \hfill~ \end{center} \caption{Geometry of the problem and the definition of angles.} \label{fig1} \end{figure*} A four-dimensional tetrad can be constructed by extending the three base vectors and supplementing them with a purely timelike four-vector. A suitable choice of the tetrad is described below. Let us consider a simple case when the incident radiation field is axially symmetric in the laboratory frame, LF, with the basis $(\fvec{e}_t,\fvec{e}_x,\fvec{e}_y,\fvec{e}_z)$.\footnote{Hereafter we understand three-vectors as spatial projections of their corresponding four-vectors (and we do not introduce special notation for them; there is no danger of confusion). We stick to the conventional formalism of Stokes parameters, but we remark that it can be recast by employing a covariant definition of the polarization tensor, whose components are assembled using suitable combinations of Stokes parameters \citep{bor64}. It has been argued \citep{por05} that the latter approach may be found more elegant and useful for discussing the radiation transfer of polarized light through the medium in general relativity.} Indices of four-vectors with respect to a local-frame basis are manipulated by the flat-space metric, $\mathrm{diag}(-1,1,1,1)$. We orient $\fvec{e}_z$ along the symmetry axis; the other two spatial vectors, $\fvec{e}_x$ and $\fvec{e}_y$, lie in a plane perpendicular to $\fvec{e}_z$. Further, we assume that the scattering electrons are streaming along the symmetry axis with four-velocity $\fvec{u}=u^t\fvec{e}_t+u^z\fvec{e}_z$, where $u^t=c\gamma$ and $u^z=c\gamma\beta$ ($\gamma$ is the Lorentz factor and $\beta$ the velocity in LF in units of the speed of light). Later on we carry out a Lorentz boost to the co-moving frame (CF) of the scatterer, $(\bar{\fvec{e}}_t,\bar{\fvec{e}}_x,\bar{\fvec{e}}_y,\bar{\fvec{e}}_z)$, which is equipped with the timelike four-vector $\bar{\fvec{e}}_t=\fvec{u}$ and three space-like four-vectors $\bar{\fvec{e}}_x=\fvec{e}_x$, $\bar{\fvec{e}}_y=\fvec{e}_y$. 
The spatial part of $\bar{\fvec{e}}_z$ is oriented in the direction of the relative velocity of the two frames. Each incident photon of the ambient unpolarized radiation gets highly polarized when scattered by a relativistically moving electron. The total polarization is eventually obtained by integrating over the incident directions and the distribution of scattering electrons. In order to describe the propagation of scattered photons, we denote four-vectors $\fvec{n}\,\equiv\,\fvec{p}/p^t$ (with respect to LF) and $\bar{\fvec{n}}\,\equiv\,\fvec{p}/\bar{p}^t$ (with respect to CF), where $\fvec{p}$ is the photon four-momentum (a null four-vector). Due to the axial symmetry we can assume $n^y=\bar{n}^y=0$. In addition to the above-defined reference frames LF and CF, we introduce two `polarization' frames: the lab polarization frame (LPF) with basis $(\fvec{e}_t,\fvec{e}_X,\fvec{e}_Y,\fvec{e}_Z)$, and the co-moving polarization frame (CPF) with the basis $(\bar{\fvec{e}}_t,\bar{\fvec{e}}_X,\bar{\fvec{e}}_Y,\bar{\fvec{e}}_Z)$. LPF is defined in such a way that $\fvec{e}_Z$ is the three-space projection of the propagation four-vector $\fvec{n}$, $\fvec{e}_X$ lies in the $(\fvec{e}_x,\fvec{e}_z)$-plane, and $\fvec{e}_Y$ is identical with the LF tetrad vector $\fvec{e}_y$. CPF is defined analogously and is indicated by bars over variables. Our definition of the reference frames is apparent from figure~\ref{fig1}. \subsection{Stokes parameters in terms of the incident radiation stress-energy tensor} We start by calculating the polarization of the scattered radiation in CPF. Conceptually, the model of local polarization is equivalent to the one employed by \citet{bel98}. The incident radiation is unpolarized with intensity $\bar{I}_\mathrm{i}$. It can be imagined as a superposition of two parallel beams of identical intensities, $\bar{I}_\mathrm{i}^{(1)}=\bar{I}_\mathrm{i}^{(2)} =\bar{I}_\mathrm{i}/2$, propagating along the four-vector $\bar{\fvec{n}}_\mathrm{i}$. The two beams are completely linearly polarized in mutually perpendicular directions and the scattered radiation is a mixture of both components. In the adopted choice of reference frames (see figure~\ref{fig1}), the spatial projection of the propagation vector $\bar{\fvec{n}}$ is identical with the spatial projection of $\bar{\fvec{e}}_Z$. Unequal contributions $\bar{I}^{(1)}$ and $\bar{I}^{(2)}$ to the total intensity $\bar{I}$ result in a net linear polarization of the scattered beam. Accordingly, $\bar{I}$, $\bar{Q}$ and $\bar{U}$ are non-zero, whereas the circularity parameter $\bar{V}$ vanishes. Assuming that each scattered photon experiences one scattering event in an optically thin medium ($\tau\ll1$), the non-zero contributions to the Stokes parameters are \citep{cha60} \begin{eqnarray} \delta \bar{I} &=& A \bar{I}_\mathrm{i} \left(1 + \cos^2\omega\right), \\ \delta \bar{Q} &=& -A \bar{I}_\mathrm{i} \,\cos 2\varphi \,\sin^2\omega, \\ \delta \bar{U} &=& -A \bar{I}_\mathrm{i} \,\sin 2\varphi \,\sin^2\omega, \end{eqnarray} where $\omega$ is the scattering angle between $\fvec{n}_\mathrm{i}$ and $\fvec{n}$, and $A\,\equiv\,3\tau/(16\pi)$. The scattering takes place in the plane that forms an angle $\varphi$ with the $\bar{x}$-axis. The angles $\varphi$ and $\omega$ can be expressed using direction cosines, which are defined here as the spatial components of the propagation four-vector $\bar{n}_\mathrm{i}$ of the incident beam, i.e.\ $\bar{n}_\mathrm{i}^X=\cos\varphi\,\sin\omega$, $\bar{n}_\mathrm{i}^Y=\sin\varphi\,\sin\omega$ and $\bar{n}_\mathrm{i}^Z=\cos\omega$. 
We obtain \begin{eqnarray} \delta \bar{I} &=& A \left(1 + \bar{n}_\mathrm{i}^Z \bar{n}_\mathrm{i}^Z\right) \bar{I}_\mathrm{i}, \label{eq:I1} \\ \delta \bar{Q} &=& A \left(\bar{n}_\mathrm{i}^Y \bar{n}_\mathrm{i}^Y - \bar{n}_\mathrm{i}^X \bar{n}_\mathrm{i}^X\right) \bar{I}_\mathrm{i}, \label{eq:Q1} \\ \delta \bar{U} &=& -2A\, \bar{n}_\mathrm{i}^X \bar{n}_\mathrm{i}^Y \bar{I}_\mathrm{i}. \label{eq:U1} \end{eqnarray} This form is useful, as it allows us to integrate the partial contributions over the incident directions conveniently, obtaining \begin{eqnarray} \bar{I} &\!\!\!=\!\!\!& Ac \left(\bar{T}^{tt} + \bar{T}^{ZZ}\right), \label{eq:scattI} \\ \bar{Q} &\!\!\!=\!\!\!& Ac \left(\bar{T}^{YY} - \bar{T}^{XX}\right), \label{eq:scattQ} \\ \bar{U} &\!\!\!=\!\!\!& -2Ac \bar{T}^{XY}, \label{eq:scattU} \end{eqnarray} for the total Stokes parameters of the scattered light. We have denoted by \begin{equation} \bar{T}^{\mu\nu}\equiv \frac{1}{c}\int_{4\pi} \bar{n}_\mathrm{i}^\mu \bar{n}_\mathrm{i}^\nu \bar{I}_\mathrm{i}(\bar{\fvec{n}}_\mathrm{i})\,\mathrm{d}\Omega \end{equation} the stress-energy tensor of the incident radiation field. \begin{figure*} \includegraphics[width=0.45\textwidth]{iso.eps} \hfill \includegraphics[width=0.54\textwidth]{iso-b1.eps} \caption{Left: the magnitude of transversal polarization $\Pi(\bar{\vartheta};\gamma)$ due to up-scattering by a relativistic electron as a function of the observing angle in the local co-moving frame. The case of a locally isotropic ambient radiation field is shown for three different values of the Lorentz factor $\gamma$. In the inset the emission diagram shows the corresponding lab-frame polarization. The lobes become gradually flattened toward the front direction of motion as $\gamma$ increases. One can see from eq.~(\ref{eq:barI}) that the graph of $\Pi(\bar{\vartheta})$ is symmetrical with respect to $\bar{\vartheta}=\pi/2$. This corresponds to the well-known fact that the polarization is maximum for radiation scattered perpendicularly to the axis of symmetry in the CF. Right: Contours of $\Pi(\vartheta,\beta)$ are shown superposed on the density plot of $I(\vartheta,\beta)$. Levels of shading give the intensity (in arbitrary units) and illustrate the progressive beaming towards the $\vartheta=0$ direction in the ultra-relativistic limit (in the lab frame). On the other hand, given a value of $\beta$, the polarization degree $\Pi(\vartheta;\beta)$ as a function of $\vartheta$ reaches its maximum at a non-zero angle, always off axis (dashed line).} \label{fig:iso} \end{figure*} We recall that the incident radiation was assumed to be axially symmetric in the CF; therefore the only non-zero components in this frame are $\bar{T}^{tt}$, $\bar{T}^{tz}$, $\bar{T}^{zz}$, $\bar{T}^{xx}$, and $\bar{T}^{yy}$. These are further constrained by symmetry: $\bar{T}^{xx}=\bar{T}^{yy}=(\bar{T}^{tt}-\bar{T}^{zz})/2$. A relation to the CPF components can be obtained by a rotation about the $\bar{y}$-axis by the angle $\bar{\vartheta}$. Using equations (\ref{eq:scattI})--(\ref{eq:scattU}) we find \begin{eqnarray} \bar{I} &\!\!\!=\!\!\!& \textstyle{\frac{1}{2}} Ac\big[\left(3\bar{T}^{tt}-\bar{T}^{zz}\right)- \left(\bar{T}^{tt}-3\bar{T}^{zz}\right)\cos^2\bar{\vartheta}\,\big], \label{eq:barI} \\ \bar{Q} &\!\!\!=\!\!\!& \textstyle{\frac{1}{2}} Ac\left(\bar{T}^{tt}-3\bar{T}^{zz}\right)\sin^2\bar{\vartheta}. \end{eqnarray} The Stokes parameter $\bar{U}$ vanishes due to the axial symmetry. The scattered radiation is partially polarized either in the $(\bar{x},\bar{z})$-plane or perpendicular to it. 
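As a simple numerical illustration of the two equations above, the CF Stokes parameters and the corresponding degree of polarization can be evaluated in a few lines. The following minimal Python sketch uses illustrative values of the tensor components (they are not taken from any particular model) and sets the normalization $Ac$ to unity:

\begin{verbatim}
import numpy as np

# Illustrative CF components of the incident axisymmetric
# radiation field (arbitrary units, not from a specific model):
Ttt_bar, Tzz_bar = 1.0, 0.2

theta_bar = np.linspace(0.0, np.pi, 181)   # CF observing angle

# I-bar and Q-bar from the two equations above, with A*c = 1:
I_bar = 0.5*((3*Ttt_bar - Tzz_bar)
             - (Ttt_bar - 3*Tzz_bar)*np.cos(theta_bar)**2)
Q_bar = 0.5*(Ttt_bar - 3*Tzz_bar)*np.sin(theta_bar)**2

Pi = np.abs(Q_bar)/I_bar     # degree of polarization (U = V = 0)
print(Pi.max())              # the maximum occurs at theta_bar = pi/2
\end{verbatim}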
Polarization in the $(\bar{x},\bar{z})$-plane will be referred to as longitudinal, and polarization perpendicular to it as transverse. Later on, in Sec.~\ref{sec:relstar}, we will demonstrate that the switch between the two cases can occur also for a group of cold electrons whose bulk motion is determined by the radiation and gravitational fields of a star, as recorded in the lab frame. The degree of polarization is calculated directly from the definition: \begin{equation} \Pi(\bar{\vartheta})=\frac{|\bar{Q}|}{\bar{I}} =\frac{|\Pi_\mathrm{m}|\sin^2\bar{\vartheta}} {1 - \Pi_\mathrm{m}\cos^2\bar{\vartheta}}\,, \label{eq:polCF} \end{equation} where \begin{equation} \Pi_\mathrm{m}\equiv\frac{\bar{T}^{tt}-3\bar{T}^{zz}} {3\bar{T}^{tt}-\bar{T}^{zz}}. \end{equation} This result is equivalent to eqs.~(4)--(5) in \citet{bel98}, who applied the model of Thomson scattering to winds outflowing from a plane-parallel disc slab. Beloborodov also calculated the polarization of the scattered radiation and found that its sign depends on the wind velocity. In our notation the change is captured in $\Pi_\mathrm{m}$, which acquires values in the range $-1\leq\Pi_\mathrm{m}\leq1$. Its meaning is evident from eq.~(\ref{eq:polCF}): the absolute value $|\Pi_\mathrm{m}|$ is the maximum degree of polarization of the scattered light and the sign of $\Pi_\mathrm{m}$ determines the sign of the $\bar{Q}$-parameter. In order to determine the polarization magnitude as seen by an observer in LF we carry out the Lorentz boost (e.g., \citealt{coc72}; \citealt{ryb79}). The angle of observation $\bar{\vartheta}$ is transformed according to: \begin{equation} \sin\bar{\vartheta}=\mathcal{D}\;\sin\vartheta,\quad \cos\bar{\vartheta}=\gamma\mathcal{D}\;(\cos\vartheta-\beta), \end{equation} where $\mathcal{D}\,\equiv\,\gamma^{-1}(1-\beta\cos\vartheta)^{-1}$ is the Doppler factor. The Stokes parameters are transformed in the same manner as the radiation intensity, and the boost leaves the four-vector $\fvec{e}_y$ unchanged: $I=\mathcal{D}^4\bar{I}$ and $Q=\mathcal{D}^4\bar{Q}$. It follows that the polarization magnitude $|\Pi_\mathrm{m}|$ is Lorentz invariant. By transforming all relevant quantities to LF, we obtain the Stokes parameters of the scattered radiation, \begin{eqnarray} Q&=&\textstyle{\frac{1}{2}}Ac\,\mathcal{D}^6\gamma^2 \big[(1-3\beta^2)T^{tt} \nonumber\\ && +4\beta T^{tz} - (3-\beta^2)T^{zz}\big]\sin^2\vartheta, \\ I&=&Ac\mathcal{D}^4\gamma^2\,\big[(1+\beta^2)\left(T^{tt}+T^{zz}\right) - 4\beta T^{tz}\big] + Q. \label{eq:isc} \end{eqnarray} \subsection{Critical velocities} The aim of this subsection is to connect, in a self-consistent manner, the properties of the particle motion through the ambient radiation field with the Stokes parameters of the scattered light. In order to prepare for this discussion it is useful to introduce two critical velocities of the particle motion. Firstly, of particular interest is the velocity at which the polarization of the scattered radiation vanishes \citep{bel98}. The condition for this velocity follows from the requirement \begin{equation} \bar{T}^{tt} - 3\bar{T}^{zz} = 0. \label{eq:pi00} \end{equation} Performing the Lorentz boost to LF we obtain \begin{eqnarray} \left(1-3\beta^2\right)T^{tt} + 4\beta T^{tz} + \left(\beta^2-3\right)T^{zz} = 0. \end{eqnarray} This is a quadratic equation for $\beta$, which has two roots, \begin{eqnarray} \beta_{1,2} = a \pm \sqrt{a^2 + b}\,, \end{eqnarray} where \begin{eqnarray} a\equiv\frac{2T^{tz}}{3T^{tt}-T^{zz}}\,, \quad b\equiv\frac{T^{tt}-3T^{zz}}{3T^{tt}-T^{zz}}\,. 
\end{eqnarray} Clearly, eq.~(\ref{eq:pi00}) can be satisfied independently of the direction of observation. For $\beta\rightarrow\beta_{1,2}$ the polarization changes from longitudinal to transversal. Secondly, we introduce the saturation velocity $\beta_0$ \citep{sik81}. As was shown by various authors under different approximations about the particle cross-section and the form of the gravitational field (see e.g.\ \citealt{abr90}; \citealt{vok91}; \citealt{mel89}; \citealt{fuk99}; \citealt{kea01}), the saturation velocity plays an important role in the dynamics of relativistic jets: particles moving at a velocity smaller/greater than the saturation velocity gain/lose momentum at the expense of the radiation field. In the absence of other acceleration mechanisms and neglecting the inertia of particles, the effect of radiation pressure eventually leads to $\beta\rightarrow\beta_0$ as the terminal speed of the particle motion. The saturation velocity is determined by the requirement of vanishing radiation flux in CF, i.e. \begin{equation} \bar{T}^{tz}=0. \end{equation} This gives another quadratic equation, \begin{equation} \left(1+\beta^2\right)T^{tz}-\beta\left(T^{tt}+T^{zz}\right)=0, \end{equation} with the solution \begin{equation} \beta_0=\frac{1-\sqrt{1-\sigma^2}}{\sigma}\,, \label{eq:beta0} \end{equation} where $\sigma\,\equiv\,2T^{tz}/(T^{tt}+T^{zz})$. We ignore the second solution, as it has no physical meaning. \label{sec:example} As an example let us assume the incident radiation field to be strictly isotropic in the laboratory frame, i.e. \begin{equation} T^{\alpha\beta} = \mathrm{diag}\left(\mathcal{E}, \textstyle{\frac{1}{3}}\mathcal{E}, \textstyle{\frac{1}{3}}\mathcal{E}, \textstyle{\frac{1}{3}}\mathcal{E}\right) \label{eq:iso1} \end{equation} with $\mathcal{E}{\,\equiv\,}T^{tt}$ being the energy density of the radiation. Evaluating the stress-energy tensor in CF we find $\Pi_\mathrm{m}=-\beta^2$. Substituting into equation~(\ref{eq:polCF}) we obtain the polarization degree \begin{equation} \Pi(\bar{\vartheta},\beta) = \frac{\beta^2\sin^2\bar{\vartheta}}{1+\beta^2\cos^2\bar{\vartheta}}\;. \label{eq:pi1} \end{equation} The Lorentz transformation to LF gives \begin{equation} \Pi(\vartheta,\beta) = \frac{\beta^2\sin^2\vartheta}{\left(2\gamma^2-1\right) \left(1-\beta\cos\vartheta\right)^2 - \beta^2\sin^2\vartheta}\;. \label{eq:pi2} \end{equation} Since $\Pi_\mathrm{m}\leq0$, the scattered radiation is polarized transversely. The critical velocities are $\beta_0=\beta_1=\beta_2=0$ in this case. Figure~\ref{fig:iso} shows the dependence of $\Pi$ on the observing angle according to equation~(\ref{eq:pi1}). It is worth noticing that we explore the frequency-integrated model, because this assumption is adequate for clarifying the role of gravitational lensing (discussed in the next section). The same dependence of the Stokes parameters on the scatterer velocity and observer viewing angle is obtained in a frequency-dependent calculation with the spectral index of the incident radiation equal to $-1$ (the case adopted originally by \citealt{beg87}). Our results are consistent with \citet{laz04} provided that appropriate averaging over energy is adopted. It can be seen (in the left panel of Fig.~\ref{fig:iso}) that the resulting curves closely resemble the numerical result of Lazzati et al.\ (\citeyear{laz04}; cp.\ their Fig.~1). 
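Equations (\ref{eq:pi1})--(\ref{eq:pi2}) are simple enough to be checked directly; the following minimal Python sketch (with illustrative values of $\beta$ and $\vartheta$) evaluates both expressions and verifies that the aberration formula quoted above maps one onto the other, as required by the Lorentz invariance of $\Pi$:

\begin{verbatim}
import numpy as np

def Pi_CF(theta_bar, beta):
    # eq. (pi1): polarization degree in the co-moving frame
    return (beta**2*np.sin(theta_bar)**2
            /(1.0 + beta**2*np.cos(theta_bar)**2))

def Pi_LF(theta, beta):
    # eq. (pi2): the same quantity in lab-frame angles
    gamma2 = 1.0/(1.0 - beta**2)
    return (beta**2*np.sin(theta)**2
            /((2*gamma2 - 1)*(1 - beta*np.cos(theta))**2
              - beta**2*np.sin(theta)**2))

beta, theta = 0.9, 0.7                      # illustrative values
gamma = 1.0/np.sqrt(1.0 - beta**2)
D = 1.0/(gamma*(1.0 - beta*np.cos(theta)))  # Doppler factor
theta_bar = np.arctan2(D*np.sin(theta),
                       gamma*D*(np.cos(theta) - beta))
assert np.isclose(Pi_CF(theta_bar, beta), Pi_LF(theta, beta))
\end{verbatim}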
In particular, the curves are identical for $\gamma\gg1$ and they approach the ultra-relativistic limit $\Pi=(1-\cos^2\bar{\vartheta})/(1+\cos^2\bar{\vartheta})$ (originally examined again by \citealt{beg87}, and invoked more recently e.g.\ by \citealt{sha95}). This limit corresponds to the case of a head-on collision, when all photons impinge at incident angles $\bar{\vartheta}_\mathrm{i}\rightarrow\pi$ because of aberration in CF. For moderate Lorentz factors there is some difference between our profile of $\Pi(\bar{\vartheta})$ and the corresponding numerical values plotted in \citet{laz04}. For example, checking the $\gamma=2$ curve, we notice that the relative difference amounts to roughly $13$\%. This apparent discrepancy is explained by realizing that our eq.~(\ref{eq:pi1}) has been derived in terms of bolometric intensities, whereas Lazzati et al.\ employ specific (frequency-dependent) quantities. By integrating their Stokes parameters over frequency we recover precisely the value predicted by eq.~(\ref{eq:pi1}). See \citet{hor05} for detailed comparisons. \begin{figure*} \includegraphics[width=0.49\textwidth]{pol1.eps} \hfill \includegraphics[width=0.49\textwidth]{pol2.eps} \caption{Left: The case of incident radiation originating from an isotropic source of angular radius $\alpha$; see eqs.~(\ref{eq:estar})--(\ref{eq:pstar}). Two branches of the critical velocity are shown, $\beta_1(\alpha)$ and $\beta_2(\alpha)$, at which the total polarization of the scattered light vanishes independently of the observing direction. The saturation curve $\beta_0(\alpha)$ is also plotted, assuming that the radiation drag dominates the particle dynamics. Right: the same as on the left but for a mixture of two components of the incident radiation, i.e.\ the ambient isotropic ($\alpha=\pi$) source plus a stellar (non-isotropic) contribution according to eq.~(\ref{eq:lam1}) with $\lambda_\mathrm{i}=0.001$. In both panels, the regions of longitudinal and transversal polarization are distinguished by shading.} \label{fig4} \end{figure*} \section{Polarization of light scattered near a compact star} \label{sec:relstar} \subsection{The gravitational and radiation fields} The gravitational field of a spherically symmetric star is described by the Schwarzschild metric \citep{cha92}, \begin{equation} {\rm d}s^2 = -c^2\xi\, {\rm d}t^2 + \xi^{-1}{\rm d}r^2 + r^2\,{\rm d}\Omega^2, \label{eq:ds} \end{equation} where ${\mathrm{d}}\Omega$ is the angular part of a spherically symmetric line element, $\xi(r)\,\equiv\,1-R_{\mathrm{S}}/r$ is the redshift function in terms of the Schwarzschild radius, $R_{\mathrm{S}}\,\equiv\,{2GM}c^{-2}\dot{=}2.95\times10^5(M/M_{\sun})$~cm, and $M$ is the mass of the star. Four-vectors and four-tensors will be expressed with respect to a local orthonormal tetrad, $(\fvec{e}^{(t)},\fvec{e}^{(r)},\fvec{e}^{(\theta)},\fvec{e}^{(\phi)})$, with non-vanishing components $e^{(t)}_t=c\xi^{1/2}$, $e^{(r)}_r=\xi^{-1/2}$, $e^{(\theta)}_\theta=r$ and $e^{(\phi)}_\phi=r\sin\theta$. Tetrad components of four-vectors are denoted by bracketed indices and are raised and lowered using the Minkowski metric. Primary photons are emitted from the star and form the ambient radiation field acting on the particle. 
The star of radius $R_\star$ and compactness $R_\star/R_{\mathrm{S}}$ appears to a static observer, located at radius $r$, as a bright disc of angular radius $\alpha_\star=\alpha(r)$, \begin{equation} \sin\alpha(r)=\frac{\tilde{R}}{r}\;\frac{\xi(r)^{1/2}}{\xi(\tilde{R})^{1/2}}, \label{eq:angle} \end{equation} where $\tilde{R}\,\equiv\,\max\{\textstyle{\frac{3}{2}}R_{\mathrm{S}},R_\star\}$. Because of light bending the solid angle subtended by a compact star on the sky is larger than the Euclidean (flat-space) estimate. Formula~(\ref{eq:angle}) was originally discussed by \citet{syn67} and the manifestation of the gravitational self-lensing effect was examined by \citet{win73}. In the case of very high compactness (when $R_\star<\frac{3}{2}R_{\mathrm{S}}$) the rim of the image is formed by photons encircling the perimeter of the star more than once. In spite of the complicated photon trajectories, the surface appears to the observer to radiate with intensity (we neglect limb darkening for simplicity) \begin{equation} I(r) = \frac{\xi(R_\star)^2}{\xi(r)^2}\;I_\star(R_\star). \label{eq:I} \end{equation} Let us take the previous example from eq.~(\ref{eq:iso1}), but assume that the source of primary photons occupies only a fraction of the local sky of the scattering particle. The gradual dilution of the source radiation with distance is described by the function $\alpha(r)$. The limit of $\alpha\rightarrow\pi$ corresponds to strictly isotropic radiation arriving from all directions, whereas for $\alpha\rightarrow0$ we obtain the case of a point-like source; in the former limit we thus recover the results of subsection~\ref{sec:example}. The stress-energy tensor of the stellar radiation field has three independent components, namely, the energy density, the energy flux, and the radial stress. These are given, respectively, by \begin{eqnarray} \mathcal{E}_\star &\equiv& T_\star^{(t)(t)} = \frac{2\pi}{c} I \left(1-\cos\alpha\right), \label{eq:estar} \\ \mathcal{F}_\star &\equiv& cT_\star^{(t)(r)} = \pi I \sin^2\alpha, \\ \mathcal{P}_\star &\equiv& T_\star^{(r)(r)} = \frac{2\pi}{3c} I \left(1-\cos^3\alpha\right). \label{eq:pstar} \end{eqnarray} There are two other non-zero components, $T_\star^{(\theta)(\theta)}=T_\star^{(\phi)(\phi)}$, which can be computed from the condition $T_{\star \sigma}^\sigma=0$. The magnitude of the stellar radiation is characterized by the total luminosity, $L_\star = 4\pi R_\star^2 \mathcal{F}_\star(R_\star)$. Finally, we include another, isotropic component of the radiation field with intensity $I_\mathrm{iso}$ in addition to the stellar light. The corresponding stress-energy tensor is entirely determined by the energy density $\mathcal{E}_\mathrm{iso}=4{\pi}c^{-1}I_\mathrm{iso}$. The stress-energy tensor of the total radiation field is the sum $T^{\alpha\beta}=T_\star^{\alpha\beta} + T_\mathrm{iso}^{\alpha\beta}$. Combining the two contributions allows us to model different situations according to their relative magnitude and the motion of the scattering medium. \begin{figure*} \includegraphics[width=\textwidth]{relstar-i.eps} \caption{Contours of extremal values of the polarization function $\Pi_{\mathrm{m}}(\beta,\zeta)$ for photons Thomson-scattered on an electron, moving with a given velocity $\beta$ through a mixture of stellar and ambient diffuse light. Each panel captures the whole range of radii from $r=R_\star$ ($\zeta=0$) to $r\rightarrow\infty$ ($\zeta=1$). 
The three rows correspond to a progressively increasing luminosity parameter (\ref{eq:lam1}): (i)~$\lambda_{\mathrm{i}}=0$ (top); (ii)~$\lambda_{\mathrm{i}}=0.001$ (middle); and (iii)~$\lambda_{\mathrm{i}}=0.1$ (bottom). The left column is for a highly compact star with $R_\star=1.01R_{\mathrm{S}}$; the right column is for $R_\star=10^3R_{\mathrm{S}}$. Hence, the light-bending effects are significant on the left and negligible on the right. The curve of zero polarization $\Pi=0$ is plotted with a dashed line. Generally, if the star is sufficiently compact then the curve of zero polarization becomes double-valued; its two branches correspond to $\beta\,\equiv\,\beta_{1,2}(\zeta)$ in the previous figure. A separatrix is a particular contour that distinguishes regions of different topology in this graph. The saturation curve $\beta_{\mathrm{}0}(\zeta)$ is also plotted.} \label{fig5} \end{figure*} \subsection{Polarization of scattered light} \label{sec:scattered} We first assume a velocity of the scatterer, $\beta(r)$, and compute the resulting polarization. Figure~\ref{fig4} shows the effect of vanishing and changing polarization, which occurs at a particular value of $\beta$. We compare two situations: the case of a purely stellar component of the primary irradiation, as described in the previous paragraph (in the left panel), versus the case of a sum of the isotropic component and radiation coming from the star surface (on the right). The latter configuration represents an anisotropic irradiation of an electron; it can be parametrized by the mutual ratio of the redshifted radiation intensities received by a distant observer, i.e. \begin{equation} \lambda_{\mathrm{i}}\equiv \frac{\tilde{I}_\mathrm{iso}}{\tilde{I}_{\star}}. \label{eq:lam1} \end{equation} This can be considered a toy model of inverse Compton up-scattering in an illuminated jet, where the intensity of the ambient light is not directly connected with the intensity of the central source. $\lambda_{\mathrm{i}}=\mathrm{const}$ is a free parameter of the model; given its value, the degree of anisotropy depends on the distance from the star in units of $R_{\mathrm{S}}$. It is worth noticing that light bending is taken into account in this calculation automatically, including all higher-order images encircling the star. The polarization is non-zero provided that the particle velocity is not equal to $\beta_{1,2}(r)$ and, indeed, $\Pi$ can reach large values. This is shown in figure~\ref{fig5}, where we plot the extremal value of the function $\Pi_{\mathrm{m}}(\beta,\zeta)$ in the plane of particle velocity versus distance. $|\Pi_{\mathrm{m}}(\beta,\zeta)|$ is equal to the extremal polarization degree measured at a suitably chosen observing angle $\vartheta$. The curve of zero polarization is also plotted, and we notice that it is independent of $\vartheta$. In this figure the primary unpolarized light was assumed to be a mixture of stellar and ambient contributions (the latter component was assumed to be distributed isotropically in the lab frame). The saturation curve $\beta_0(\zeta)$ is shown, and it is worth noticing that, for some values of the model parameters, $\beta_0(\zeta)$ crosses the contour of $\Pi=0$. Therefore, a hypothetical particle moving or oscillating along the saturation curve would exhibit polarization that swings its direction by a right angle. 
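Both critical velocities are elementary to evaluate once the lab-frame components of the stress-energy tensor are specified. The following minimal Python sketch does this for the stellar field of eqs.~(\ref{eq:estar})--(\ref{eq:pstar}) supplemented by an isotropic component; purely for this illustration the mixture is parametrized by the local ratio of energy densities (cf.\ eq.~(\ref{eq:lam2}) below) rather than by eq.~(\ref{eq:lam1}), and the apparent angular radius $\alpha$ of the star, eq.~(\ref{eq:angle}), is taken as an input:

\begin{verbatim}
import numpy as np

def critical_velocities(alpha, lam_e=0.0):
    """beta_1,2 at which the scattered polarization vanishes,
    and the saturation velocity beta_0; alpha is the apparent
    angular radius of the star, lam_e = E_iso/E_star (an
    illustrative parametrization).  Units with I = c = 1."""
    # stellar radiation field, eqs. (estar)-(pstar):
    E = 2.0*np.pi*(1.0 - np.cos(alpha))
    F = np.pi*np.sin(alpha)**2
    P = (2.0*np.pi/3.0)*(1.0 - np.cos(alpha)**3)
    # add the isotropic part (its radial stress is E_iso/3):
    Ttt, Ttz, Tzz = E*(1.0 + lam_e), F, P + lam_e*E/3.0
    # zero-polarization roots, beta_{1,2} = a +/- sqrt(a^2 + b)
    # (NaN if no real root exists):
    a = 2.0*Ttz/(3.0*Ttt - Tzz)
    b = (Ttt - 3.0*Tzz)/(3.0*Ttt - Tzz)
    beta12 = a + np.array([1.0, -1.0])*np.sqrt(a*a + b)
    # saturation velocity, eq. (beta0); beta_0 -> sigma/2 in
    # the isotropic limit sigma -> 0:
    sigma = 2.0*Ttz/(Ttt + Tzz)
    beta0 = sigma/2.0 if abs(sigma) < 1e-8 else \
        (1.0 - np.sqrt(1.0 - sigma**2))/sigma
    return beta12, beta0

print(critical_velocities(np.pi/3.0, lam_e=0.001))
\end{verbatim}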
One can modify the previous example by considering a constant ratio of energy densities, i.e.\ by replacing $\lambda_{\mathrm{i}}$ with another parameter, \begin{equation} \lambda_{\mathrm{e}}\equiv\frac{\mathcal{E}_\mathrm{iso}}{\mathcal{E}_{\star}(r)}. \label{eq:lam2} \end{equation} This definition better captures the case when the ambient light originates from scattering of the central component (perhaps on clumps being accreted onto the star), so that both contributions are linked to each other and their energy densities decrease at an identical rate with distance. We again constructed graphs of $\Pi_\mathrm{m}(\beta,\zeta)$ and found a structure of contours at small radii similar to that shown in Fig.~\ref{fig5}, including the double-valued function $\beta_{1,2}(\zeta)$. However, the saturation velocity $\beta_0(\zeta)$ does not fall to zero for $r\rightarrow\infty$; instead, it generally reaches substantially higher values. Moreover, the critical point (where the separatrix curve self-crosses) is lost, as is the whole structure to the right of the critical point. The polarization of the scattered light obviously depends on the scatterer motion, and the two can be calculated consistently. In the next subsection we finally determine the velocity $\beta(r)$ along the particle trajectories, with the luminosity of the star and its compactness as parameters. \subsection{Polarization along the particle trajectory} \label{sec:dynamics} The four-velocity $\fvec{u}$ of a particle can be found by solving the equation of motion in the form \citep{abr90}, \begin{equation} mu_\rho \fvec{\nabla}^\rho u^\alpha =-\frac{\sigma_{_\mathrm{T}}}{c}\,h^{\alpha}_{\rho}\,T^{\rho\sigma}u_\sigma, \label{eq:eom} \end{equation} where $m$ is the particle rest mass and $h^\nu_\mu\,\equiv\,\delta^\nu_\mu+c^{-2}u^\nu u_\mu$ is a projection tensor. The left-hand side of eq.~(\ref{eq:eom}) includes the effect of gravity ($\fvec{\nabla}$ denotes covariant differentiation with respect to the curved spacetime geometry) and the right-hand side provides the effect of radiation drag -- accelerating or decelerating the particles with respect to free-fall motion. \begin{figure*} \includegraphics[width=\textwidth]{trajectories.eps} \caption{Top row: the particle velocity $\beta(\zeta)$ (thick curve) and the three critical velocities $\beta_0$, $\beta_1$ and $\beta_2$ in the combined radiation and gravitational fields. Each trajectory starts from the star, $R_\star=1.205R_{\mathrm{S}}$ ($\zeta=0$), and proceeds towards infinity ($\zeta=1$). Positive values of $\beta$ correspond to an outflow, negative values to an inflow. Three cases are shown for different values of the dimensionless luminosity: $\Gamma=100$ (left), $\Gamma=2$ (middle), and $\Gamma=0.1$ (right). Bottom row: the polarization magnitude $\Pi(\zeta)$ along the particle trajectories corresponding to the three above-given solutions. In each panel, the curves are labelled with the observing angle $\vartheta$. All curves have a common zero point where they cross each other. The sign of $\Pi$ distinguishes here the case of transversal polarization from the longitudinal one.} \label{fig:trajectories} \end{figure*} The non-zero components of the four-velocity are $u^{(t)}=c\gamma$ and $u^{(r)}=c\gamma\beta$ in the local tetrad of a static observer.\footnote{In this paper we consider purely radial motion. The same formalism can be readily applied to more complicated motion of the scatterer, although it will then hardly be possible to solve both the motion and the resulting polarization analytically. 
Notice that the case of clumps orbiting in the plane of a black hole accretion disc was discussed by \citet{pin77}, \citet{con80} and \citet{bao97}. These authors also pointed out that the effects of general relativity could be discovered by tracing time-variable polarization.} Equation~(\ref{eq:eom}) takes the form \begin{eqnarray} \dot{\beta}&=& \frac{1}{c\gamma^2}\left(\frac{\xi^{1/2}}{\gamma^2}\frac{F^{(r)}_\mathrm{rad}}{m} - \frac{c^2R_{\mathrm{S}}}{2r^2}\right), \label{eq:eom-beta} \\ \dot{r}&=&c\xi\beta, \label{eq:eom-r} \end{eqnarray} where the dot denotes the derivative with respect to the coordinate time $t$ and $F^{(r)}_\mathrm{rad}$ is the radial component of the radiation force, \begin{equation} F^{(r)}_\mathrm{rad} = \sigma_{_\mathrm{T}}\gamma^3 \left[\left(1+\beta^2\right)T^{(t)(r)} - \beta\left(T^{(t)(t)} + T^{(r)(r)}\right)\right]. \label{eq:eom-F} \end{equation} The effect of radiation on the motion is expressed by the first term in the parentheses on the right-hand side of eq.~(\ref{eq:eom-beta}), whereas the other term can be considered as the contribution of gravity. Hence, the particle dynamics depends on the relative strength of the radiation and gravitational fields. Because of the redshift factor near a compact star these two influences do not obey the same simple Newtonian law, and so a rich set of possible results emerges. These can be parametrized by the Eddington luminosity $L_\mathrm{E}$, which follows from the condition of zero acceleration for matter hovering at radius $r=R_{\star}$. The radiation force becomes $F^{(r)}_\mathrm{rad}=(\sigma_{_\mathrm{T}}L_\star)/(4{\pi}R_{\star}^2c)$ and equation (\ref{eq:eom-beta}) with $\dot{\beta}=0$ gives \begin{equation} L_\mathrm{E}=\frac{2\pi mc^3R_{\mathrm{S}}}{\sigma_{_\mathrm{T}}\xi(R_\star)^{1/2}}. \end{equation} We note that the acceleration depends on the radial distance from the centre as well as on the particle velocity. The relative importance of radiation and gravity is characterised by the dimensionless factor \begin{equation} \Gamma \equiv \frac{L_\star}{L_\mathrm{E}}. \end{equation} The radiation term in the acceleration is regulated by the interplay of relativistic aberration and Doppler boosting, which tends to establish the saturation velocity. At this point further acceleration vanishes, i.e.\ $\dot{\beta}_0(r)=0$. Considering only radiation from the star and expressing the explicit form of the stress-energy tensor, eqs.~(\ref{eq:eom-beta})--(\ref{eq:eom-r}) reduce to eq.~(2.3) of \cite{abr90}. With the gravitational attraction of the centre taken into account, the possibility arises of an equilibrium point $\zeta_{\rm{}eq}\,\equiv\,\zeta(r=R_{\rm{}eq})$, where a particle can reside. By setting $\beta=0$ and $\dot{\beta}=0$ in eqs.~(\ref{eq:eom-beta})--(\ref{eq:eom-F}), one can find that the equilibrium radius ranges from $R_{\rm{}eq}(\Gamma){\rightarrow}R_\star$ (i.e.\ $\zeta_{\rm{}eq}\rightarrow0$) for $\Gamma\rightarrow1$ up to $R_{\rm{}eq}(\Gamma)\rightarrow\infty$ (i.e.\ $\zeta_{\rm{}eq}\rightarrow1$) for $\Gamma\rightarrow\sqrt{3}$. Equations (\ref{eq:eom-beta})--(\ref{eq:eom-r}) allow for a finite set of topologically different solutions. These can be classified into different categories (\citealt{abr90}; see also \citealt{kea01}) according to the behaviour of the saturation curves in the $(\beta,\zeta)$-plane. Notice that we have already examined one of these solutions, i.e.\ the saturation curve (\ref{eq:beta0}) for a very high luminosity of the star and a negligible particle mass, $\Gamma\rightarrow\infty$. 
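For finite $\Gamma$, the system (\ref{eq:eom-beta})--(\ref{eq:eom-r}) is easily integrated numerically. A minimal Python sketch under simplifying assumptions -- geometrized units $c=R_{\mathrm{S}}=1$, purely stellar illumination, and an illustrative star radius $R_\star=2R_{\mathrm{S}}$ chosen so that $\alpha\leq\pi/2$ everywhere above the surface (the positive branch of $\cos\alpha$ then applies) -- could read:

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Rstar, Gamma = 2.0, 2.0      # star radius, L_star/L_E (illustrative)

def xi(r):                    # Schwarzschild redshift function
    return 1.0 - 1.0/r

def rhs(t, y):
    beta, r = y
    gamma = 1.0/np.sqrt(1.0 - beta**2)
    # apparent angular radius of the star, eq. (angle):
    Rt = max(1.5, Rstar)
    sa = min(1.0, (Rt/r)*np.sqrt(xi(r)/xi(Rt)))
    ca = np.sqrt(1.0 - sa*sa)         # assumes alpha <= pi/2
    # stellar-field tetrad components in units of 2*pi*I(r)/c,
    # eqs. (estar)-(pstar):
    Ttr, Ttt, Trr = 0.5*sa*sa, 1.0 - ca, (1.0 - ca**3)/3.0
    # sigma_T*2*pi*I(r)/(m c), fixed by the Eddington condition:
    k = Gamma*xi(Rstar)**1.5/(Rstar**2*xi(r)**2)
    frad = k*((1.0 + beta**2)*Ttr - beta*(Ttt + Trr))
    dbeta = (np.sqrt(xi(r))*gamma*frad - 0.5/r**2)/gamma**2
    return [dbeta, xi(r)*beta]

# start essentially at rest just above the surface:
sol = solve_ivp(rhs, (0.0, 300.0), [0.01, Rstar + 1e-3], rtol=1e-8)
print(sol.y[0, -1], sol.y[1, -1])     # final velocity and radius
\end{verbatim}

With $\Gamma>1$ the particle is pushed outward and approaches the saturation curve, while for $\Gamma\ll1$ it falls back, in accordance with the three cases discussed below.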
In the limit $\Gamma\rightarrow\infty$ the motion is governed solely by radiation drag. We select this condition because it is particularly relevant for the discussion of the resulting polarization of the scattered light. Its role can be inferred also from Fig.~\ref{fig5}, where the $\beta_0$-curve for this case passes through the critical point of the contour lines of $\Pi(\beta,\zeta)$. The limit of $\Gamma\rightarrow\infty$ is an extreme case. A different profile $\beta_0(\zeta)$ applies to moderate values of the luminosity parameter, $\Gamma<\infty$, when particles do not strictly maintain the saturation velocity because of the inertial effects acting on them. The different categories of particle motion then provide a natural framework also for the discussion of the resulting polarization. The three most important cases are recorded in figure~\ref{fig:trajectories}. In this example only the stellar radiation is taken into account; the component $I_\mathrm{iso}$ is set to zero for simplicity. The cases shown here correspond to the situation when (i)~the radiation field dominates over gravity and the electron is therefore pushed away to an infinite radius (see the left panel); (ii)~a moderate value of the luminosity allows the scattering electrons to reach an equilibrium position at $\zeta\,\equiv\,\zeta_{\rm{}eq}=0.62$ (middle); (iii)~the luminosity is very small and the particles are almost free-falling in the gravitational field (right). Particles start from $\zeta=0$ and quickly adhere to the saturation curve $\beta_0(\zeta;\Gamma)$, provided that radiation is dynamically important, i.e.\ in cases (i) and (ii). This occurs independently of the initial velocity; the motion then follows a curve adjacent to, but slightly different from, the saturation curve. On the other hand, in case (iii) gravitation governs the motion; the trajectory $\beta(\zeta)$ is only slightly asymmetric with respect to the $\beta=0$ line due to the weak influence of radiation. By coupling the equations of particle motion with the polarization equations of sec.~\ref{sec:scattered} we obtain the Stokes parameters of the scattered light along each particle trajectory. The bottom panels of Fig.~\ref{fig:trajectories} show the resulting magnitude of polarization. Notice how it crosses the zero level at a certain distance of the scatterer from the stellar surface. At this point the polarization changes direction from transverse to longitudinal. The points of intersection of the curves $\beta_{1,2}(\zeta)$ with the particle motion $\beta(\zeta)$ determine the radial location of the point of vanishing polarization (indicated by dotted vertical lines in the plot). We will now assume that a small cloudlet is formed by a group of electrons. We can distinguish three cases, depending on the bulk velocity of the cloudlet. These are discussed in subsections \ref{sec:outward}--\ref{sec:inward} below. The predicted time dependence offers a way to test the model. \section{Polarization from a cloudlet} \label{sec:cloudlet} \subsection{The case of fast ejection ({\mbox{\protect\boldmath{$\gamma\gg1$}}})} \label{sec:outward} Let us denote by $R_\mathrm{cl}\ll R_{\mathrm{S}}$ the radius of the cloud and by $\psi{\sim}R_\mathrm{cl}/z$ its angular radius as seen from the centre. We assume that the cloud has a small optical depth, $\tau_\mathrm{cl}\ll1$, and that it is ejected along the $z$-axis, with bulk velocity $\beta(z)\gg0$ directed approximately toward an observer (the inclination angle of the observer is denoted $\theta_{\rm{}o}$). Clearly, $\gamma\gg1$ implies that the scattered photons are boosted in the direction of motion. 
Although light bending increases the apparent size of the star on the particle's local sky, general relativity effects are quite negligible for scattered photons moving straight away from the centre. Only a few photons are scattered backwards, and therefore the direct image greatly dominates the signal received by an observer. \begin{figure*} \includegraphics[width=\textwidth]{frac.eps} \caption{Left: the variation of the total normalized radiation flux $S_\mathrm{tot}(t)$ and the corresponding degree of polarization $\Pi_\mathrm{tot}(t)$. The case of ultra-relativistic ejection starting with initial $\gamma(R_\star)=10$. Right: The corresponding velocity profile $\beta(\zeta)$ and critical velocities $\beta_{1,2}(\zeta)$ are shown. Parameters of the plot are $\theta_{\rm{}o}=17$~deg, $R_\star=1.2R_{\mathrm{S}}$. Polarization vanishes at the point \textsf{A} when $\beta(\zeta)=\beta_1(\zeta)$; the corresponding time is $t=t_\mathrm{A}$ in the left panel. Polarization vanishes once again at a later time, when $\beta(\zeta)=\beta_2(\zeta)$.} \label{fig:frac} \end{figure*} The measured radiation flux $S_\mathrm{tot}$ has two components: the primary unpolarized flux $S_\star$ and the flux of partially polarized radiation scattered in the cloud, $S_\mathrm{cl}$. Their ratio can be given in terms of the redshifted intensities $\tilde{I}_\star$ and $\tilde{I}_\mathrm{cl}$ of the star and of the cloud, and of the ratio of the solid angles occupied by the cloud and by the star on the observer's sky. This provides us with an estimate of the expected fractional polarization of the total signal. The ratio of fluxes is \begin{equation} s\equiv\frac{S_\mathrm{cl}}{S_\star} = \left(\frac{R_\mathrm{cl}}{R_\star}\right)^2 \frac{\tilde{I}_\mathrm{cl}}{\tilde{I}_\star}. \label{eq:s} \end{equation} The intensity arriving from the cloud is $\tilde{I}_\mathrm{cl}=\xi^2(r)I_\mathrm{sc}$, where $I_\mathrm{sc}(r)$ is the locally emitted intensity, as given by eq.~(\ref{eq:isc}). The scattered component is polarized with the magnitude $\Pi_\mathrm{cl}$, as derived in the previous section. The total flux and the total polarization are \begin{equation} S_\mathrm{tot}=(1+s)S_\star,\quad \Pi_\mathrm{tot}=\frac{s\Pi_\mathrm{cl}}{1+s}. \end{equation} Substituting equation (\ref{eq:isc}) into (\ref{eq:s}) one can verify that the flux ratio $s$ does not depend on the star intensity $I_\star$, and hence \begin{equation} s=\kappa f(\beta,r,\vartheta), \end{equation} where $\kappa\,\equiv\,\tau_\mathrm{cl}(R_\mathrm{cl}/R_\star)^2$ depends on the size and density of the cloud, and $f$ includes the geometry of the radiation field and the beaming/aberration effects arising from the cloud motion. We consider situations when $\kappa$ is small; then the two contributions to the radiation intercepted by the observer become comparable only in the case of strong beaming, which leads to $f\gg1$ for small observing angles. The moment of observation, i.e.\ the arrival time of photons $t_\mathrm{obs}(r\rightarrow\infty)$, is related to the moment of emission $t$ by \begin{equation} t_\mathrm{obs} \simeq t-\frac{1}{c}\big[z(t)-z_0\big]\cos\theta, \end{equation} where we set $t(z_0)=0$ for the initial time and $t_\mathrm{obs}=0$ for the moment when the signal arrives at the observer. 
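These relations are trivial to implement for the direct image; a short Python sketch (the function and argument names are ours, chosen for this illustration; units with $c=1$):

\begin{verbatim}
import numpy as np

def direct_image(t, z, s, Pi_cl, theta_o):
    """Arrival times and diluted polarization of the direct
    image; t, z, s and Pi_cl are arrays sampled along the cloud
    trajectory, s = S_cl/S_star as in eq. (s)."""
    t_obs = t - (z - z[0])*np.cos(theta_o)   # retarded time
    S_tot = 1.0 + s                          # in units of S_star
    Pi_tot = s*Pi_cl/(1.0 + s)               # diluted polarization
    return t_obs, S_tot, Pi_tot
\end{verbatim}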
Notice that this estimate is sufficient for the direct-image photons discussed in this subsection, but it would not be appropriate for the higher-order image photons in the next subsection (in the Schwarzschild geometry one can express the time of arrival in terms of elliptic integrals, proceeding in the same analytical manner as above in the calculation of the ray trajectory; see also \citealt{boz04}; \citealt{cad05}). The temporal behaviour is shown in figure~\ref{fig:frac}. We assume that the cloud has been pre-accelerated to a large initial speed $\beta(t=0)$ near the star surface. The graph captures the subsequent phase of gravitational and radiative deceleration. The scattered light contributes significantly to the total signal only for a short initial phase (a peak occurs in the graph). The local maxima of the radiation flux (at $t=t_\mathrm{I}$) and of the polarization (at $t=t_\mathrm{P}$) can be understood in terms of beaming: most of the radiation from the cloud is emitted in a cone with opening angle $\sim1/\gamma$ about the direction of motion. For a small viewing angle ($\theta_{\rm{}o}\lesssim13$~deg) the observer was initially located outside this cone but, as time goes on, the electron decelerates, the cone opens up and the observer intercepts more radiation. The maximum observed polarization occurs with a certain delay $t_\mathrm{P}-t_\mathrm{I}$ (proportional to $M$) after the peak of the radiation flux. The subsequent decay of the signal is connected with the diminished scattering power of the cloud and the overall dilution of the radiation field. The observed polarization and flux lag one another and are sensitive to the angle of observation. This behaviour is clearly seen also in figure~\ref{fig:frac3}, where we assumed several different viewing angles. \begin{figure} \includegraphics[width=0.5\textwidth]{frac3.eps} \caption{The relation between the normalized radiation flux and the total polarization. The magnitude of polarization $\Pi_{\rm{}tot}$ reaches up to $\sim65$\% for suitable viewing angles. Values of $\theta_{\rm{}o}$ (in degrees) are given with the curves. Other parameters are the same as in the previous figure.} \label{fig:frac3} \end{figure} We selected a large initial velocity in this example; otherwise the effects of aberration and Doppler boosting would be less prominent, the time-scales longer, and the effect of the fractional polarization crossing the zero point would disappear. The time span of this plot can be scaled according to the light-crossing time in physical units, \begin{equation} t\simeq1.5\,\frac{R_{\mathrm{S}}}{c}=1.5\times10^{-4}\frac{M}{10M_\odot}\quad\mathrm{[sec]}, \end{equation} i.e.\ proportionally to the central mass. The polarization magnitude is correlated with the intensity (this correlation was already noticed for the isotropic radiation in the right panel of Fig.~\ref{fig:iso}). \subsection{A cloudlet at rest ({\mbox{\protect\boldmath{$\gamma=1$}}})} \label{sec:static} An interplay between gravity and the ambient radiation stalls the bulk motion, $\beta(t)\rightarrow0$. Scattered photons are then no longer boosted in the outward direction, and so the higher-order (highly bent) rays can provide a non-negligible contribution to the observed light after encircling the star. This of course requires large compactness; we set $R_\star=\frac{3}{2}R_{\mathrm{S}}$ hereafter. Again we assume an observer near the $z$-axis and a cloudlet of small size, $\psi\ll1$. 
Unlike more traditional applications of the lens geometry, the cloudlet is placed at an arbitrary finite distance $z{\,\equiv\,}z(t)$ above the star and the deflection angle does not have to be small. Let us consider rays making a single round (through the angle $\Theta=2\pi\pm\theta_{\rm{}o}$), with a radial turning point at the pericentre $r=r_{\rm{}p}$ just above the photon circular orbit. As mentioned in subsect.~\ref{sec:dynamics}, the equilibrium radius depends on the star luminosity. Once the cloudlet settles at the equilibrium point $r=R_{\rm{}eq}$, scattered photons are no longer boosted to high energies and the collimation effect disappears. In this situation relatively more light is backscattered in the direction toward the photon orbit. Some of these photons form a retrolensed image \citep{hol02}, which may also reach the observer. We thus now calculate the (de)magnification of light also for the two first-order images, which give the most significant contribution and may influence the net polarization at infinity. To this end we need to consider rays starting just above the star, passing through the pericentre and eventually escaping to infinity (the retrolensing geometry; see \citealt{oha87}; \citealt{vir00}; \citealt{boz02}, and references cited therein). \begin{figure*} \includegraphics[width=0.49\textwidth]{tracing-M-i2.eps} \hfill \includegraphics[width=0.49\textwidth]{static-GI.eps} \caption{Left: the gain factor $\mathcal{M}(\Theta,\zeta)$ according to the approximation formula (\ref{eq:mu}). Two curves are shown as a function of the source distance $\zeta$, for $\Theta=358^{\rm{}o}$ (solid line) and $\Theta=362^{\rm{}o}$ (dashed line). For comparison, exact (numerically computed) values are also plotted with triangles and circles. Right: the normalized intensity of the direct image (thin solid line) and the two retrolensed images (dashed and dotted lines) of light scattered from a particle residing at the equilibrium point $z=R_{\rm{}eq}$, as a function of the Eddington parameter. Notice that the contribution of the two retrolensed images is almost identical and amounts to as much as $\sim20$\% of the total signal (thick solid line). The inclination is $\theta_{\rm{}o}=2^{\rm{}o}$ in both panels.} \label{fig:mu} \end{figure*} Two arcs are formed, which merge together into the Einstein ring (with radius just above $b_{\rm{}c}=\frac{3}{2}\,\sqrt{3}\,R_{\mathrm{S}}$) if the observer is aligned with the source. By integrating null geodesics, expanding the elliptic integrals near the pericentre $r_{\rm{}p}\sim\frac{3}{2}R_{\mathrm{S}}$, assuming a deflection angle close to $\Theta\sim2\pi$, and keeping only the leading terms, we obtain the desired width of the two retrolensing images, \begin{equation} \delta{b}(\phi)=2\,\delta{b_0}\,\left(\psi^2-\theta^2_{\rm{}o}\sin^2\phi\right)^{1/2}, \end{equation} where $\delta{b_0}=K(z)\,e^{-2\pi}$, $|\phi|\leq\arcsin(\bar{\psi}/\theta_{\rm{}o})$, $\bar{\psi}=\mbox{Min}\{\theta_{\rm{}o},\psi\}$, and \begin{equation} K=\frac{6^3\,3\,\sqrt{3}}{2}\,\frac{\sqrt{3}-1}{\sqrt{3}+1}\, \frac{\sqrt{3}-\sqrt{1+3u}}{\sqrt{3}+\sqrt{1+3u}},\quad u(z)\equiv\frac{R_{\mathrm{S}}}{z}. \end{equation} We remark that $\Theta\sim2\pi$ was assumed for simplicity only. The case of arbitrary $\Theta$ can be treated in a similar way. The time dependence of the arcs is caused by the scatterer's motion, $z=z(t)$. We integrate over the cross-section of the arc images to derive their total luminosity and polarization at each moment of time. 
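For orientation, these two expressions are easy to evaluate; a minimal Python sketch (units $R_{\mathrm{S}}=1$; the function names are ours):

\begin{verbatim}
import numpy as np

def K(z):
    """Prefactor of the retrolensing image width (R_S = 1),
    with u = R_S/z."""
    s3, q = np.sqrt(3.0), np.sqrt(1.0 + 3.0/z)
    return ((6**3*3*s3/2.0)*((s3 - 1.0)/(s3 + 1.0))
            *((s3 - q)/(s3 + q)))

def arc_width(phi, z, psi, theta_o):
    """Radial width delta-b of the arcs at position angle phi;
    valid for |phi| <= arcsin(min(theta_o, psi)/theta_o)."""
    db0 = K(z)*np.exp(-2.0*np.pi)
    return 2.0*db0*np.sqrt(np.maximum(
        psi**2 - (theta_o*np.sin(phi))**2, 0.0))
\end{verbatim}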
Higher-order images suffer from the de-magnifying influence of the light bending, which reduces their luminosity, unless a special geometrical alignment of the source and the observer occurs and favours the opposite effect of a caustic. This can be quantified by the gain factor, ${\mathcal{M}}$, which determines the ratio of fluxes received in the retrolensed and direct images. The problem translates to evaluating the ratio of solid angles, $\mathcal{M}\,\equiv\,\frac{{\rm{}d}\Omega_{\rm{}i}}{{\rm{}d}\Omega_{\rm{}o}}$, where the indices ``i/o'' refer to the angular size of the source with/without taking the light bending into account. In the case of a small (but finite) cloudlet, we find \begin{equation} {\mathcal{M}}(z)=6\,\sqrt{3}\,K(z)\,e^{-2\pi}\psi^{-1}\, \Lambda(\theta_{\rm{}o}/\psi), \end{equation} where the term \begin{equation} \Lambda(k)\,\equiv\,\frac{2}{\pi}\,E(\Phi,k),\quad \Phi\,\equiv\,\arcsin\left[\mbox{Min}\left\{k^{-1},1\right\}\right], \label{eq:phim} \end{equation} arises from the integration over the Einstein arcs. For $\psi\ll\theta_{\rm{}o}$ the gain function is \begin{equation} \mathcal{M}(z,\Theta)\simeq{\textstyle{\frac{3}{2}}}\,\sqrt{3}\,K(z)\, \frac{R_{\mathrm{S}}^2}{z^2}\;\frac{\exp(-\Theta)}{|\sin\Theta|}. \label{eq:mu} \end{equation} Formula (\ref{eq:mu}) reduces to eq.~(21) of \citet{oha87} for $z\rightarrow\infty$, $\zeta(z)\rightarrow1$. In our situation, eq.~(\ref{eq:mu}) requires that the cloudlet be sufficiently small and that its motion be directed somewhat sideways with respect to the observer's line of sight. Figure~\ref{fig:mu} compares ${\mathcal{M}}(\zeta,\Theta)$ with the corresponding result of a numerical integration, showing that the approximation is sufficiently accurate for our purposes. \begin{figure*} \includegraphics[width=0.49\textwidth]{static-GP.eps} \hfill \includegraphics[width=0.49\textwidth]{static-iP.eps} \caption{Polarization magnitude from scattering on a particle at rest at $z=R_{\rm{}eq}$. This represents a cloudlet of angular radius $\psi$ on the local sky of the star. Left: the observed polarization magnitude as a function of the Eddington parameter $\Gamma$. Notice that $R_{\rm{eq}}$ is a function of $\Gamma$, and so the graph covers the whole range of radii from the star surface to infinity. The observer inclination is $\theta_{\rm{}o}=2^{\rm{}o}$. Right: the corresponding polarization magnitude for different inclinations and constant $\Gamma=1.6$ (in the case of precise alignment, $\theta_{\rm{}o}=0^{\rm{}o}$, the polarization vanishes by symmetry).} \label{fig:static} \end{figure*} \begin{figure*} \includegraphics[width=0.4\textwidth]{einstein-rings.eps} \hfill \includegraphics[width=0.58\textwidth]{einstein-P.eps} \caption{Left: the form of the Einstein arcs (a--c) and the ring (d) corresponding to the retrolensing images in polar coordinates $(b,\phi)$ in the observer plane. The source is supposed to be a circular target of angular radius $\psi=2^{\rm{}o}$, located on the $z$-axis at distance $z=3R_{\mathrm{S}}$. The observer is at $r\rightarrow\infty$ and has a small angular offset from perfect alignment -- (a)~$\theta_{\rm{}o}=10^{\rm{}o}$, (b)~$\theta_{\rm{}o}=2^{\rm{}o}$, (c)~$\theta_{\rm{}o}=1^{\rm{}o}$, (d)~$\theta_{\rm{}o}=0^{\rm{}o}$. Right: the contribution to the polarization produced by the Einstein arcs. A detail of the normalized magnitude $p$ is plotted for small values of the inclination; see eq.~(\ref{eq:piret}) for the definition of the function $p(\theta_{\rm{}o}/\psi)$.
For large $\theta_{\rm{}o}$ the magnitude of polarization saturates at a roughly constant level, equal to the polarization scattered in the direction toward the photon circular orbit. The two curves are parametrized by the angular size $\psi$ of the cloudlet, as indicated in the plot.} \label{fig:static2} \end{figure*} Adding the contributions from different parts of the source has a depolarizing effect on the final signal, which we illustrate in figure~\ref{fig:static}. For the total magnitude of polarization of the retrolensing images we find \begin{equation} \Pi_{\rm{}ret}=\Pi(\vartheta_{\rm{}ph})\,p(\theta_{\rm{}o}/\psi), \label{eq:piret} \end{equation} where $\Pi(\vartheta_{\rm{}ph})$ is the polarization magnitude of light scattered in the direction towards the photon circular orbit and \begin{equation} p(k)\,\equiv\,\frac{2}{\pi\Lambda(k)}\int_0^{\Phi}\cos2\phi\, \sqrt{1-k^2\sin^2\phi}\;{\rm{}d}\phi. \end{equation} The functions $\Lambda$ and $\Phi$ were defined in eq.~(\ref{eq:phim}). The function $p$ reflects the shape of the retrolensing images in the observer plane; see figure~\ref{fig:static2} (we resolve the narrow trace of these arcs on the observer's sky by enlarging their separation $\delta{b}\,\equiv\,b-b_{\rm{}c}$ from the critical radius $b_{\rm{}c}$). The polarization vectors of all three images have the same orientation, but the photons experience different time delays and different lensing along each trajectory. The contribution of the retrolensed images is now evident, and quite significant. Notice that the angle $\vartheta_{\rm{}ph}$ is the apparent angular size of the photon orbit as seen on the local sky of the cloudlet. It enters eq.~(\ref{eq:piret}) because the higher-order images are formed almost exclusively by light scattered onto the photon circular orbit. In our case, $\vartheta_{\rm{}ph}=\alpha_\star(z)$. The polarization magnitude drops sharply if the observer inclination is less than the angular size of the cloudlet. \begin{figure*} \includegraphics[width=0.49\textwidth]{ael_b07_r2_i5-xb.eps} \hfill \includegraphics[width=0.49\textwidth]{ael_b07_r2_i5-ti.eps} \caption{An outward-moving particle decelerating in the gravitational and weak radiation fields. Left: the trajectory in the ($\beta,\zeta$)-plane, i.e.\ dimensionless velocity versus distance (thick solid line). The velocity changes in the direction of the arrow. Right: the radiation flux of scattered light as a function of time for the direct and two retrolensed images. The initial condition is $\beta=0.7$ at $r=2R_{\mathrm{S}}$. The observer inclination is $\theta_{\rm{}o}=5^{\rm{}o}$.} \label{fig:ael1} \end{figure*} \begin{figure*} \includegraphics[width=0.49\textwidth]{ael_b-099_r20_i2-xb.eps} \hfill \includegraphics[width=0.49\textwidth]{ael_b-099_r20_i2-ti.eps} \caption{The same as in Fig.~\ref{fig:ael1}, but for fast inward motion of the scatterer and strong radiation of the star ($\Gamma=10$). We set $\beta=-0.99$, $r=20R_{\mathrm{S}}$ as the initial condition at $t=0$. The signal of the higher-order image flashes for a brief moment around dimensionless time $t\simeq60$. The intense radiation of the star reverses the particle motion to the outward direction at later stages.
The observer inclination is $\theta_{\rm{}o}=2^{\rm{}o}$.} \label{fig:ael2} \end{figure*} \begin{figure*} \includegraphics[width=0.49\textwidth]{ael_b07_r2_i5-tp.eps} \hfill \includegraphics[width=0.49\textwidth]{ael_b-099_r20_i2-tp.eps} \caption{The magnitude of polarization corresponding to the case shown in Fig.~\ref{fig:ael1} (left panel) and in Fig.~\ref{fig:ael2} (right panel). The retrolensed signals are delayed with respect to the direct signal. The delay is characteristic of the light-travel time along the photon circular orbit (it scales proportionally to the mass of the central body). The total polarization is suppressed or enhanced depending on the relative orientation of the polarization of the direct and retrolensed photons.} \label{fig:ael3} \end{figure*} \subsection{Comparison between an outflow and an inflow} \label{sec:inward} Now we consider an intermediate situation with a moderate velocity of the bulk motion (either an outflow or an inflow, i.e.\ $\beta\gtrless0$). For a moderate outflow velocity, the result is shown in figure~\ref{fig:ael1}. We consider a particle on the decelerating branch of the trajectory in a weak radiation field, $\Gamma\rightarrow0$, which eventually reaches the turning point (and starts falling afterwards). The two retrolensed images contribute about $10$\% of the scattered flux at maximum, and the trajectory crosses the $\beta=\beta_2(\zeta)$ curve, where the polarization vector swings its direction. The outcome is quite different for matter infalling onto the star, because the scattered photons are boosted in the downward direction and a considerable amount of light is then directed onto the photon orbit. As a result, the retrolensed images are more pronounced and they cause a brief flash of light. The resulting signal is shown in figure~\ref{fig:ael2}. The effect of the retrolensed images is clearly visible in the polarization curve; see figure~\ref{fig:ael3}. The case shown in the left panel exhibits a brief drop of the polarization magnitude when the velocity crosses $\beta=\beta_2(\zeta)$ (the direct image arrives at $t\sim1$ in dimensionless units). At this moment the polarization switches between the transversal and the longitudinal orientation. Then the signal recovers to a non-zero value, and the same behaviour repeats when the retrolensed image arrives after a certain delay (at $t\sim21$). The case shown in the right panel exhibits a similar flash, also caused by the contribution of the retrolensed photons. However, now we observe a single fluctuation, which is actually an increase of the polarization magnitude; this is because the case shown here corresponds to transversal polarization during the whole observation and the trajectory does not cross any of the critical curves $\beta=\beta_{1,2}$. \section{Conclusions} Our calculation here is self-consistent in the sense that the motion of the blob, the motion of the photons, and the resulting polarization are mutually connected. We concentrated on gravitational effects and neglected other intervening processes, first of all the effect of magnetic fields, to which polarization is sensitive (see e.g.\ \citealt{ago96}). This allowed us to compare the polarization magnitudes of the direct and retrolensed images, which could point to the presence of a highly compact body. We have noticed the mutual delay between the signals formed by photons of different order. The delay is characteristic of the effect and has a value proportional to the central mass.
Polarimetric properties are susceptible to large changes depending on the detailed physics and geometry of the source; this is not only an advantage, which could help us to trace how different objects function, but also a complication. In particular, the polarization is sensitive to the source orientation and its magnitude fluctuates from case to case. Our results are useful for testing more complex and astrophysically realistic models with up-scattering of soft photons in jets and fast flows around black holes. In the black hole case the primary photons would be provided by an accretion disc rather than the star surface and, hence, one can no longer take advantage of its spherical symmetry, which helped us to simplify our calculations here. Nonetheless, the same formalism can be employed and similar features of the time-dependent polarization lightcurves are expected; strong gravity plays a vital role again. Putting this another way: provided a `realistic' equation of state implies neutron star radii greater than the photon circular orbit, detection of the signatures of the retrolensing images, which we discussed above, would exclude a neutron star as a candidate for the central body. \section*{Acknowledgments} The authors thank John Miller for his advice on the current status of neutron star modelling, and an anonymous referee for critical comments on the first version of the paper. VK appreciates fruitful discussions with participants of the Aspen Center for Physics 2005 workshop Revealing Black Holes. The financial support for this research has been provided by the Academy of Sciences (grant IAA\,300030510) and by the Czech Science Foundation (grant 205/03/0902). The Astronomical Institute has been operated under the project AV0Z10030501.
\section{Introduction} One of the many possible uses of gravitational lens systems is the measurement of $H_0$ \citep{refsdal}. However, this method has been limited by a lack of knowledge about the mass distribution in the lens, which leads to degeneracies between model parameters and $H_0$. Perhaps the most important degeneracy is that between the radial slope of the lensing mass profile and $H_0$. However, strong limits can be placed on the mass slope in systems in which measurements of stellar velocity dispersions can be made \citep[e.g.,][]{tk_galevol}, more than one component of the source is multiply-imaged \citep[e.g.,][]{cohn1933}, an Einstein ring is seen \citep[e.g.,][]{cskrings}, or VLBI structure in the lensed images can be clearly mapped from one image to another \citep[e.g.,][]{rusin1152}. Another major degeneracy is that caused by an extended mass distribution that is associated with the lens. This is the famous ``mass-sheet degeneracy'' \citep[e.g.,][]{mass_sheet} and can be caused by a cluster or group along the line of sight to the main lensing galaxy. The problem is that the standard lens observables, namely the locations and fluxes of the lensed images and the location of the lensing galaxy, do not indicate how much of the lensing mass surface density is due to the mass sheet. In fact, with the exception of very large separation lenses \citep[e.g., SDSS J1004+4112 and Q0957+561;][]{1004oguri,0957walsh}, it is difficult to know from the standard lensing observables whether or not there even exists an associated group or cluster. Thus, it is necessary to search for such structures by other means. We are conducting a survey of lenses in which time delays have been measured, with the aim of detecting groups or clusters which can affect the determination of $H_0$ from those systems. The \ifsubmode \object{CLASS B1608+656} \else CLASS B1608+656 \fi system \citep{stm1608} is an excellent target for our survey. The redshifts of the lens and background source have been measured to be $z=0.630$ and $z = 1.39$, respectively \citep{stm1608,zs1608}. It remains the only four-image lens system for which robust and high-precision measurements of all three independent time delays have been made \citep{1608delays2}. The lens has been subjected to intensive modeling, which incorporated information from the measured stellar velocity distribution and the Einstein ring shape. With an advanced modeling code, strong limits were placed on the slope of the mass density profile, yielding a measurement of $H_0 = 75^{+7}_{-6}$~km\ s$^{-1}$\ Mpc$^{-1}$\ \citep{1608H0}. Furthermore, the mass models for this system require a relatively large external shear of nearly 0.1 \citep{1608H0}, indicating the presence of nearby mass. In this paper, we provide evidence for a group of galaxies associated with the primary lensing galaxy and discuss the effect that it and other structures along the line of sight have on the determination of $H_0$ from this system. Throughout this paper we assume $\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$, and, unless otherwise stated, we will express the Hubble Constant as $H_0 = 100 h~{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$. \section{Observations and Data Reduction} We have conducted a spectroscopic survey of the B1608+656 field as part of our ongoing program to find compact groups of galaxies associated with gravitational lenses. The field was imaged in three bands, Gunn $g$, $r$, and $i$ \citep{gunnstd}, with the Palomar 60-Inch Telescope.
Spectroscopic targets were selected based on their colors and distances from the lens system. The highest priority targets were those close to the lens system and with $(r - i)$ colors in the range 0.45 to 0.65, i.e., close to those expected for early-type galaxies at the redshift of the lensing galaxy. All of the spectroscopic targets had $r \leq 23$. In addition to the high priority targets, several other galaxies were observed in order to make efficient use of the slitmasks that were used for the bulk of the spectroscopy. The spectroscopic observations were made with the Low Resolution Imaging Spectrograph \citep[LRIS;][]{lris}, in both longslit and multislit modes, and the Echellette Spectrograph and Imager \citep[ESI;][]{esi} on the W. M. Keck Telescopes. The spectroscopic data were reduced using scripts based on standard {\sc iraf}\footnote{IRAF (Image Reduction and Analysis Facility) is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy under cooperative agreement with the National Science Foundation.} tasks. Additional IDL scripts were used to process the ESI data. The spectroscopic and photometric data on all of the surveyed galaxies, along with the full details of the data reduction procedures, will be presented in a future paper dealing with the properties of the groups and the galaxies within them. \section{Mass Along the Line of Sight to B1608+656} The spectroscopic observations produced redshifts for 97 galaxies in the field. The typical uncertainties in the redshifts, based on the scatter of redshifts calculated from individual lines in each spectrum, were $\sigma_z \sim 0.0003$. The distribution of redshifts, in which several spikes are seen, is shown in Figure~\ref{fig_zhist}. We select group or cluster candidates by looking for structures that are concentrated both spatially and in redshift space, using the iterative method described by \citet{wilmangrp1}. We investigated structures with at least five members in one redshift bin, where the bin size was $\Delta z = 0.005$. To characterize each group, we estimate its line-of-sight velocity dispersion ($\sigma_v$) utilizing the ROSTAT package \citep{bee90}. This package avoids assumptions of gaussianity in the underlying velocity distributions and includes bootstrap and jackknife error estimation for the group redshifts and dispersions. Many of these estimators are significantly more resistant to non-gaussianity in the sample than the traditional Gaussian estimator; large differences between the different scale measures can be a sign of deviations from gaussianity in the sample, either in shape or due to outliers. In particular, we use three estimators. The first, $\sigma_{gap}$, is based on the gapper algorithm, which is recommended for very small datasets \citep{bee90}. The error on this estimator is the jackknifed biweight estimate. The second is the biweight scale estimate, $\sigma_{biwt}$, which is recommended for datasets with $\sim10$ members. Its confidence interval is taken from the jackknifed gapper. Finally, we use $\sigma_{gauss}$, the Gaussian estimator, and its 1-sigma Student's $t$ error. Consistency among the various dispersion estimates for a small redshift sample lends credence to the measurement. However, we note that velocity dispersions based on a small number of redshifts can be highly uncertain \citep[e.g.,][]{zm98}.
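Of the three scale estimators, the gapper is simple enough to sketch explicitly. Below is a minimal Python implementation of the standard gapper formula of \citet{bee90}; it is not the ROSTAT code itself, and the toy velocities are purely illustrative:
\begin{verbatim}
import numpy as np

def sigma_gapper(velocities):
    """Gapper estimate of the line-of-sight velocity dispersion.

    Sort the velocities, weight each gap g_i = v_(i+1) - v_(i) by
    i * (n - i), and apply the sqrt(pi) / (n * (n - 1)) normalization.
    """
    v = np.sort(np.asarray(velocities, dtype=float))
    n = v.size
    i = np.arange(1, n)            # gap index, i = 1 .. n-1
    gaps = np.diff(v)              # g_i
    return np.sqrt(np.pi) / (n * (n - 1)) * np.sum(i * (n - i) * gaps)

# Toy example: eight velocities (km/s) scattered about a common group mean
rng = np.random.default_rng(0)
print(sigma_gapper(rng.normal(0.0, 150.0, size=8)))
\end{verbatim}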
\subsection{The Group Associated with B1608+656} The most prominent spike in the redshift distribution consists of eight galaxies with redshifts of $z \sim 0.63$. This spike includes the lensing galaxy at $z_\ell = 0.630$. However, we have not included the second lensing galaxy within the ring of images \citep[G2;][]{1608H0,1608surpi}. All indications are that G2 is merging with G1, and therefore that this group consists of at least nine members. As one test of the likelihood that this redshift spike represents a real group, we have plotted the spatial distribution of the galaxies in the spike (Figure~\ref{fig_group1_zgals}). We find that the galaxies are spatially concentrated and centered roughly on the position of the lens. Seven of the eight galaxies in the redshift spike, or eight out of nine if G2 is included, are within a circle of radius 2\farcm1 centered on the lens system, corresponding to 1~$h^{-1}$ comoving Mpc at the redshift of the lens. Figure~\ref{fig_vhist1608} shows the redshift distribution in the region of the spike in terms of velocity offsets from the mean redshift of the group. The galaxies also have a tight distribution in velocity space, with all eight within $\pm$300~km\ s$^{-1}$\ of the mean redshift. Thus, we conclude that these galaxies represent a group, hereafter called Group 1, associated with the lensing galaxy. For Group 1, we find that the various $\sigma_v$ estimators obtained from the ROSTAT package are all consistent. This result strongly suggests that, despite the small number of redshifts in the group, the velocity distribution is well described by a Gaussian, and the resulting velocity dispersion is robust. In Table~\ref{tab_groupdata} we list the median redshift and the three estimates of the line-of-sight velocity dispersion ($\sigma_{gap}$, $\sigma_{biwt}$, and $\sigma_{gauss}$) with their associated errors (in km\ s$^{-1}$). Based on the number of group members, we use $\sigma_v = \sigma_{gap}$ for Group 1, since $\sigma_{gap}$ is the more appropriate estimator for very small data sets \citep{bee90}. Thus, we obtain $\sigma_v = 150 \pm 60$~km\ s$^{-1}$. \ifsubmode \clearpage \fi \begin{figure} \plotone{f1.eps} \caption{ Histogram showing the distribution of the 97 non-stellar redshifts obtained in the field of B1608+656. The width of the bins is $\Delta z = 0.005$. The most prominent spike, with eight galaxies, is at the redshift of the lensing galaxy ($z = 0.63$). \label{fig_zhist}} \end{figure} \begin{figure} \plotone{f2.eps} \caption{ Spatial distribution of the galaxies in Group 1. The field of view is 10\arcmin\,$\times$\,11\arcmin, with the axes labeled in terms of offsets from the B1608+656 lens system in arcseconds. The dots represent the positions of galaxies with $r \leq 23$, while the open circles mark the galaxies for which redshifts have been obtained. The open boxes mark the galaxies in the group. The large dashed circle has a radius of 1 $h^{-1}$ comoving Mpc at the redshift of the lensing galaxy. \label{fig_group1_zgals}} \end{figure} \begin{figure} \plotone{f3.eps} \caption{ Histogram showing the velocity distribution of the galaxies in and surrounding the redshift spike at $z = 0.63$. The bin widths are 200~km\ s$^{-1}$, approximately twice the uncertainties in determining the redshifts. The curve is a Gaussian with $\sigma = $150~km\ s$^{-1}$. \label{fig_vhist1608}} \end{figure} \ifsubmode \clearpage \fi \subsection{Other Mass Concentrations Along the Line of Sight} There are other spikes in the distribution of measured redshifts shown in Figure~\ref{fig_zhist}.
In addition to the group at the redshift of the lens, there are three other group candidates in the observed distribution satisfying the redshift and spatial concentration criteria, with mean redshifts of $<z> = 0.265$ (Group 2), $<z> = 0.426$ (Group 3), and $<z> = 0.520$ (Group 4). Each group has a substantial number of confirmed members, with sizes of nine, seven, and 14 galaxies, respectively. The properties of these groups are given in Table~\ref{tab_groupdata}. For each group, we find that the various $\sigma_v$ estimators are all consistent. Based on the number of confirmed members in each group, we estimate the group velocity dispersions with $\sigma_{gap}$ for Groups 2 and 3, and $\sigma_{biwt}$ for Group 4. Figures \ref{fig_xy} and \ref{fig_vhist} show the spatial and velocity distributions, respectively, for the three additional groups detected in this field. We explore the effects that these groups may have on the determination of $H_0$ from B1608+656 in the following sections. \ifsubmode \clearpage \begin{figure} \plotone{f4.eps} \caption{ Spatial distributions of the galaxies in the additional three group candidates in the B1608+656 field, represented as in Figure~\ref{fig_group1_zgals}. The open circles represent galaxies for which redshifts have been obtained, while the open boxes represent confirmed group members. The dashed circle in each plot has a radius of 1$h^{-1}$ comoving Mpc at the redshift of the group and is centered at the median position of the confirmed group members. (a) Group 2 at $z \sim 0.27$. (b) Group 3 at $z \sim 0.43$. (c) Group 4 at $z \sim 0.52$. \label{fig_xy}} \end{figure} \else \begin{figure*} \plotone{f4.eps} \caption{ Spatial distributions of the galaxies in the additional three group candidates in the B1608+656 field, represented as in Figure~\ref{fig_group1_zgals}. The open circles represent galaxies for which redshifts have been obtained, while the open boxes represent confirmed group members. The dashed circle in each plot has a radius of 1$h^{-1}$ comoving Mpc at the redshift of the group and is centered at the median position of the confirmed group members. (a) Group 2 at $z \sim 0.27$. (b) Group 3 at $z \sim 0.43$. (c) Group 4 at $z \sim 0.52$. 
\label{fig_xy}} \end{figure*} \fi \begin{center} \ifsubmode \clearpage \begin{deluxetable}{cclrlllrrrr} \else \begin{deluxetable*}{cclrlllrrrr} \fi \tabletypesize{\scriptsize} \tablecolumns{10} \tablewidth{0pc} \tablecaption{Group Properties} \tablehead{ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{$\sigma_{gap}$} & \colhead{$\sigma_{biwt}$} & \colhead{$\sigma_{gauss}$} & \colhead{$\theta_{med}$\tablenotemark{a}} & \colhead{PA$_{med}$\tablenotemark{a}} & \colhead{$\theta_{lw}$\tablenotemark{b}} & \colhead{PA$_{lw}$\tablenotemark{b}} \\ \colhead{Group} & \colhead{$<z>$} & \colhead{$D_{\ell s}/D_s$} & \colhead{$N_{gals}$} & \colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} & \colhead{(arcsec)} & \colhead{($\degr$)} & \colhead{(arcsec)} & \colhead{($\degr$)} } \startdata 1 & 0.6313 & 0.45 & 8\tablenotemark{c} & 150$\pm$60 & $130\pm100$ & $150^{+50}_{-20}$ & 25 & 30 & 8 & $-$5 \\ 2 & 0.2651 & 0.74 & 9 & 320$\pm$100 & $230\pm270$ & $320^{+120}_{-100}$ & 69 & 29 & 18 & 6 \\ 3 & 0.4263 & 0.66 & 7 & 270$\pm$110 & $260\pm120$ & $270\pm110$ & 63 & -166 & 46 & $-$155 \\ 4 & 0.5202 & 0.53 & 14 & 920$\pm$120 & $930\pm140$ & $890^{+240}_{-130}$ & 13 & 26 & 6 & $-$47 \\ 4a\tablenotemark{d} & 0.5163 & 0.53 & 7 & 410$\pm$160 & 100$\pm$320 & 430$\pm$110 & 24 & 47 & 46 & 6 \\ 4b\tablenotemark{d} & 0.5241 & 0.53 & 7 & 360$\pm$120 & 350$\pm$120 & 350$\pm$120 & 44 & $-$150 & 52 & $-$161 \\ \enddata \tablenotetext{a}{Median group position expressed as an offset from the lens. The PA is measured north through east.} \tablenotetext{b}{The luminosity-weighted group position expressed as an offset from the lens. The PA is measured north through east.} \tablenotetext{c}{This number does not include the second galaxy (G2) inside the Einstein ring of the lens system because there is no measured redshift for G2. However, G2 appears to be merging with the primary lens galaxy and thus appears to be a member of the group.} \tablenotetext{d}{Groups 4a and 4b are subsets of Group 4. See \S4.1.} \label{tab_groupdata} \ifsubmode \end{deluxetable} \else \end{deluxetable*} \fi \end{center} \ifsubmode \clearpage \begin{figure} \plotone{f5.eps} \caption{ Histograms showing the velocity distributions of the galaxies in the three other spatially-concentrated redshift spikes. The bin widths are 200~km\,s$^{-1}$, approximately twice the uncertainties in determining the redshifts. \label{fig_vhist}} \end{figure} \clearpage \else \begin{figure*} \plotone{f5.eps} \caption{ Histograms showing the velocity distributions of the galaxies in the three other spatially-concentrated redshift spikes. The bin widths are 200~km\,s$^{-1}$, approximately twice the uncertainties in determining the redshifts. \label{fig_vhist}} \end{figure*} \fi \section{Discussion} \subsection{A Cluster at $z = 0.52$?} Of the three additional candidate groups, the one at $z = 0.520$ stands out. Its velocity dispersion of $\sim$900~km\ s$^{-1}$\ implies that this structure is a cluster of galaxies rather than a group. If it is in fact a real cluster, it will have a significant effect on the measured value of $H_0$ obtained from the B1608+656 lens system (see next section). Therefore, we consider evidence that pertains to the reality of the cluster. There are two main arguments in favor of the redshift spike's representing a real cluster. The first is the concentration of the structure in both velocity space and in projection on the sky. 
Nine of the 14 spectroscopically-confirmed galaxies in the structure are within $\pm$1000~km\ s$^{-1}$\ of the mean redshift, while 13 of 14 lie within a projected distance of 1 $h^{-1}$ comoving Mpc of the median position. The second is that nine of the 14 galaxies in the group lie along a tight sequence on a color-magnitude diagram, at $(r - i) \sim 0.45$ (Figure~\ref{fig_cmri}). The arguments against the reality of the cluster are as follows. First, an examination of deep Advanced Camera for Surveys (ACS) images of this field (Program GO-10158; PI: Fassnacht) shows no obvious overdensity such as might be expected from a cluster. Furthermore, of the 11 galaxies that are members of Group 4 and also lie within the field of view of the ACS imaging, none appears to be a central bright early-type galaxy. In fact, nine out of the 11 group galaxies in the ACS imaging appear to be spirals. This is surprising given the red colors of these galaxies, but we will leave the discussion of this point to the follow-up paper in which we will focus on the properties of the groups and the galaxies within them. The second point arguing against the cluster hypothesis is that the group velocities (Figure~\ref{fig_vhist}c) are not normally distributed as one would expect from a relaxed cluster. With the large velocity dispersion implied by the measured redshifts, it is perhaps not surprising that a random set of 14 galaxies does not produce a clean Gaussian shape. Further spectroscopy of the field, especially pushing deeper than the $r\sim 23$ limit used in the previous observations, may fill in the velocity structure and produce a velocity distribution that is closer to Gaussian. On the other hand, the last slitmask in our observing program had the targets re-prioritized to preferentially find galaxies in the cluster. Only one additional member was found, once again indicating that there is not the large overdensity of galaxies typically found in a cluster. Third, there was no extended X-ray emission detected in a recent 30~ksec Chandra X-ray Observatory observation of this field \citep{chandra1608}. If the cluster is as massive as suggested by its velocity dispersion and is virialized, it should have been detected by the Chandra observations, which had a 3-$\sigma$ upper limit for detection that corresponds to a velocity dispersion of $\sim$500~km\ s$^{-1}$\ \citep{chandra1608}. However, it is possible that the cluster is recently formed or is, in fact, a pair of merging groups. In these situations, it may be severely underluminous at X-ray wavelengths. Finally, a cluster this massive and centered as close to the lens as it appears to be would have a strong effect on the lensing signature. The external shear introduced by such a cluster would be much larger than required by the lens model \citep{1608H0}. Furthermore, it is possible that the surface mass density would be high enough at the position of the lens to form yet another image of the lensed source, i.e., the cluster would have to be treated in the strong lensing regime. \ifsubmode \clearpage \fi \begin{figure} \plotone{f6.eps} \caption{ Color-magnitude diagram for the B1608+656 field. The dashed lines represent the approximate completeness limit of the imaging obtained at the Palomar 60-Inch Telescope. The boxed points belong to the $z=0.52$ cluster candidate. The diagonal solid line represents the limit of the spectroscopy at $r \sim 23$.
\label{fig_cmri}} \end{figure} \ifsubmode \clearpage \fi We feel that the arguments against the reality of the cluster are stronger than those in favor. The structure may instead be a pair of merging groups or some kind of filamentary structure. To explore the two-group hypothesis, we arbitrarily split the group into two parts by redshift. Group 4a is defined as the seven galaxies with redshifts smaller than the median Group 4 redshift, while Group 4b is comprised of the seven galaxies with redshifts larger than the median. Figure~\ref{fig_2group} shows the spatial and velocity distributions of the two groups. The velocity dispersions of the two groups are $\sim 400$~km\ s$^{-1}$\ and $\sim 350$~km\ s$^{-1}$. If the two-group explanation is correct, and these velocity dispersions are close to the true values, then it is not surprising that the X-ray observations of \citet{chandra1608} did not detect diffuse X-ray emission. The spatial distribution of Group 4b is centered slightly to the southwest of the Group 4a centroid. However, the two centroids are completely consistent, given the uncertainties. Another possibility is that the Group 4 galaxies lie along a filament. We note that much of the following discussion treats each group as a collection of individual halos, each associated with a group galaxy. Therefore, whether the galaxy under discussion is in a cluster, a group, or a filament is irrelevant. \ifsubmode \clearpage \begin{figure} \plotone{f7.eps} \caption{ Spatial (left) and velocity (right) distributions of galaxies in Group 4. The galaxies have been split into two smaller groups in velocity space. The spatial distribution of Group 4a (redshifts smaller than the median Group 4 redshift) are marked by boxes. The galaxies in Group 4b are marked with triangles. \label{fig_2group}} \end{figure} \clearpage \else \begin{figure*} \plotone{f7.eps} \caption{ Spatial (left) and velocity (right) distributions of galaxies in Group 4. The galaxies have been split into two smaller groups in velocity space. The spatial distribution of Group 4a (redshifts smaller than the median Group 4 redshift) are marked by boxes. The galaxies in Group 4b are marked with triangles. \label{fig_2group}} \end{figure*} \fi \subsection{Effect on Gravitational Lensing\label{sec_lensing}} We now consider the effect the groups associated with B1608+656 may have on the overall lensing gravitational potential, with a particular emphasis on the effect on $H_0$. While the angular separation of the lensed images in a gravitational lens system provides an accurate measurement of the projected lensing mass, it does not require that all of the mass be associated with the primary lensing galaxy. In fact, some of the mass can be contributed by other structures along the line of sight, such as the groups discussed above. The contribution of external mass is quantified through the convergence, $\kappa_{ext}$, that it causes at the location of the lensing galaxy. The convergence is just the scaled mass surface density: $$ \kappa_{ext} = \frac{\Sigma_{ext}}{\Sigma_c}; \quad \Sigma_c = \frac{c^2}{4 \pi\ G} \frac{D_s}{D_\ell\ D_{\ell s}}, $$ where $\Sigma_c$ represents the critical mass surface density required for multiple images to form. As usual, $D_\ell$, $D_{\ell s}$, and $D_s$ are the angular diameter distances between observer and lens, lens and source, and observer and source, respectively. 
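As a concrete illustration, $\Sigma_c$ for the B1608+656 redshifts can be evaluated in a few lines of Python. The sketch below assumes the flat cosmology adopted in this paper and relies on the astropy package for the angular diameter distances:
\begin{verbatim}
import numpy as np
from astropy import constants as const
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # Omega_M = 0.3, Omega_Lambda = 0.7
z_lens, z_source = 0.630, 1.39

D_l  = cosmo.angular_diameter_distance(z_lens)
D_s  = cosmo.angular_diameter_distance(z_source)
D_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)

# Critical surface density for multiple imaging
Sigma_c = (const.c**2 / (4.0 * np.pi * const.G)) * D_s / (D_l * D_ls)
print(Sigma_c.to(u.Msun / u.pc**2))
\end{verbatim}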
The external convergence will lead to a value of $H_0$ that is too high if the full lensing mass is improperly assigned solely to the primary lensing galaxy, i.e., $$ H_{0,true} = H_{0,meas} (1 - \kappa_{ext}), $$ where $H_{0,meas}$ is the value obtained without properly including the external convergence in the lens model. We will use two methods to estimate the external convergence contributed by each of the groups along the line of sight to the B1608+656 system. Although neither of these methods may be entirely correct, the range of results that they produce should be representative of the true group convergences. The first, and more traditional, method is to assume that the group can be approximated as a smooth mass distribution. The distribution, for simplicity, is usually taken to be that produced by a singular isothermal sphere (SIS). In this case, the convergence contributed by the group, calculated at the location of the lens, is $$ \kappa_{SIS}(\theta_{cent}) = \frac{D_{\ell s}}{D_s} \frac{2 \pi \sigma_v^2}{c^2 \theta_{cent}} = \frac{b_{SIS}}{2 \theta_{cent}}, $$ where $\theta_{cent}$ is the angular offset between the center of the group and the lens system. The ``lens strength'' of an isothermal distribution is defined as $b = 4 \pi \sigma_v^2 D_{\ell s} / (D_s c^2)$, and for a singular isothermal sphere gives the Einstein ring radius in angular units. If $\theta_{cent}$ is measured in arcminutes and $\sigma_v$ is measured in km\ s$^{-1}$, then $$ \kappa_{SIS}(\theta_{cent}) \sim 0.015\ \left ( \frac{D_{\ell s}}{D_s} \right ) \left ( \frac{\sigma_v}{250~{\rm km\ s}^{-1}} \right )^2 \left ( \frac{\theta_{cent}}{{\rm arcmin}} \right )^{-1}. $$ The convergences due to the cluster and group candidates, calculated using the SIS assumption, are given in Table~\ref{tab_groupconv}. In each case the velocity dispersion used was that obtained from the gapper method. With the strong dependence of $\kappa_{ext}$ on $\theta_{cent}$, it becomes imperative to locate the group centroid accurately, which is extremely difficult with fewer than several tens of confirmed group members. For example, the centroid estimates listed in Table~\ref{tab_groupdata} differ substantially depending on whether they were obtained by taking the median position or the luminosity-weighted position. Therefore, we also apply a second method for estimating the group convergence. 
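Before turning to that method, we note that the SIS estimates can be reproduced directly from the scaling relation above. The following Python sketch transcribes, for illustration, the $\sigma_{gap}$ values and median centroid offsets of Groups 1--3 from Table~\ref{tab_groupdata}, and prints the resulting convergences together with the naive first-order correction factor $(1-\kappa)$ that would apply if the summed convergence were simply ignored:
\begin{verbatim}
def kappa_sis(d_ratio, sigma_v, theta_arcmin):
    """SIS convergence at the lens: D_ls/D_s, sigma_v in km/s, theta in arcmin."""
    return 0.015 * d_ratio * (sigma_v / 250.0)**2 / theta_arcmin

# (D_ls/D_s, sigma_gap [km/s], theta_med [arcsec] converted to arcmin)
groups = {
    "Group 1": (0.45, 150.0, 25.0 / 60.0),
    "Group 2": (0.74, 320.0, 69.0 / 60.0),
    "Group 3": (0.66, 270.0, 63.0 / 60.0),
}
kappa_total = 0.0
for name, (d_ratio, sigma, theta) in groups.items():
    k = kappa_sis(d_ratio, sigma, theta)
    kappa_total += k
    print(f"{name}: kappa_SIS = {k:.4f}")   # compare the kappa_SIS,med column

print(f"H0 correction factor 1 - kappa = {1.0 - kappa_total:.3f}")
\end{verbatim}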
\begin{center} \ifsubmode \clearpage \begin{deluxetable}{cllccc} \else \begin{deluxetable*}{cllccc} \fi \tabletypesize{\scriptsize} \tablecolumns{6} \tablewidth{0pc} \tablecaption{Convergences due to Groups} \tablehead{ \colhead{Group} & \colhead{$\kappa_{SIS,med}$\tablenotemark{a}} & \colhead{$\kappa_{SIS,lw}$\tablenotemark{b}} & \colhead{$\kappa_{ind}$\tablenotemark{c}} & \colhead{$\kappa_{trunc}$\tablenotemark{c}} & \colhead{$N_{trunc}$\tablenotemark{d}} } \startdata 1 & 0.0056 & 0.018 & 0.025 & 0.012 & 1 \\ 2 & 0.016 & 0.061 & 0.013--0.026 & 0.0040--0.0082 & 1 \\ 3 & 0.011 & 0.015 & 0.014--0.028 & 0.0064--0.013 & 2 \\ 4 & \nodata & \nodata & 0.026--0.053 & 0.015--0.031 & 3 \\ 4a & 0.054 & 0.028 & \nodata & \nodata & \nodata \\ 4b & 0.023 & 0.019 & \nodata & \nodata & \nodata \\ \enddata \tablenotetext{a}{Convergence calculated with the group represented as a SIS and the group centroid represented by the median galaxy position.} \tablenotetext{b}{Convergence calculated with the group represented as a SIS and the group centroid represented by the luminosity-weighted mean.} \tablenotetext{c}{Range corresponds to a range of stellar velocity dispersions for the fiducial (brightest) galaxy in Groups 2, 3, and 4. The velocity dispersions range from 140--200~km\ s$^{-1}$.} \tablenotetext{d}{Number of galaxies contributing to $\kappa_{trunc}$.} \label{tab_groupconv} \ifsubmode \end{deluxetable} \clearpage \else \end{deluxetable*} \fi \end{center} The alternative method for computing the group convergence is to treat the group as a collection of individual galaxy halos, with no overall group halo. In other words, the mass sheet at the position of the lensing galaxy is composed of the overlapping halos of the other galaxies in the group. The use of this method is motivated by two considerations. First, by ignoring any shared group halo, this method explores what is probably an extreme case, especially when assuming that the galaxy halos may be truncated (see below). This extreme case should provide a fairly robust lower limit to the estimate of the group convergence. Second, these moderate-redshift groups may still be in the early phases of formation and thus the galaxies may not yet have lost a significant fraction of their individual halos to the shared group halo. Simulations performed by \citet{kzgroups} indicate that similar results are obtained whether the groups are treated as a single shared halo or a collection of individual halos. Furthermore, the input cosmological parameters (e.g., $H_0$) are recovered accurately when the group contribution is treated as a collection of galaxy halos. This accuracy is obtained even though they make the simplifying assumption that the group galaxies are circular when, in fact, their simulated galaxy mass distributions were elliptical. We follow the approach of \citet{kzgroups} and assign group galaxy masses (expressed in terms of their lens strengths, $b_i$) based on their optical magnitudes ($m_i$) via the relationship $$ b_i = b_{fid}\ 10^{-0.2(m_i - m_{fid})}, $$ where $b_{fid}$ and $m_{fid}$ are the lens strength and magnitude for a fiducial galaxy within the group. In each case, we take the fiducial galaxy to be the brightest confirmed group member.
If each of the group members is treated as a singular isothermal sphere, as in \citet{kzgroups}, then the total convergence at the position of the lens is just the sum of the individual convergences at that position: $$ \kappa_{ind} = \sum_i \kappa_i = \sum_i \frac{b_i}{2 \theta_i}, $$ where $\theta_i$ is the distance between galaxy $i$ and the lens. The results for each of the four groups are given in Table~\ref{tab_groupconv}, while notes on the individual groups are given below. Another effect to consider is that due to the truncation of the dark matter halos of the group galaxies. In the calculation of $\kappa_{ind}$, all of the galaxy halos are assumed to be larger than the separation between the galaxies and the primary lens system. However, weak lensing studies by \citet{halo_cutoff} have suggested that galaxy halos have a truncation radius of $\sim 200 h^{-1}$~kpc. If this truncation radius is real and is typical, then the convergence due to galaxies located farther from the lens than this will drop off faster than the $(1/\theta)$ assumed in the calculation of $\kappa_{ind}$. To explore the possible size of this effect, we recalculate the external convergence contributed by each group, summing only the contributions from the galaxies that lie within a projected distance of $200 h^{-1}$ comoving kpc from the lens. The results are given as $\kappa_{trunc}$ in Table~\ref{tab_groupconv}. This is an extreme approach, but it should give an approximate lower limit to the convergence caused by each group. \subsubsection{Convergence Due to Group 1} For this group we use F814W magnitudes measured from {\em Hubble Space Telescope} observations of the field. These observations consist of the deep ACS images mentioned above, as well as Wide-Field Planetary Camera 2 images of the same field (Program GO-6555; PI: Schechter). The fiducial galaxy is the primary lensing galaxy (G1), which has an F814W magnitude of 18.2 and a lens strength of $b_{1608} = 0\farcs83$ \citep{1608H0}. The group galaxy most distant from the lens system is not covered by the HST images, so we do not include it in the calculation of $\kappa_{ind}$. However, it is far enough away (2\farcm7) that its contribution to the overall convergence is negligible. \subsubsection{Convergence Due to Group 4} As discussed in \S4.1, we do not believe that Group 4 is a real cluster. We note that assuming the cluster is real yields extremely large convergences at the position of the B1608+656 system. The SIS approximation leads to $\kappa_{SIS} \sim 6.5/\theta$, with $\theta$ measured in arcseconds. For an isothermal sphere, another image of the background object will be produced when $\kappa > 0.5$. This condition is satisfied for the foreground cluster whether the cluster centroid is estimated using the luminosity-weighted position or the straight median position. Using these centroids leads to $\kappa = 1.1$ or $\kappa = 0.51$, respectively, at the position of the lens. We have not seen evidence for an obvious lensed counterpart in deep 5~GHz or 8.5~GHz radio maps, although there are other compact sources in the field \citep[e.g., object 2;][]{1608delays1}. Furthermore, such large convergences, whether from a SIS or some other mass distribution, would almost certainly lead to an image separation in the main lens larger than the 2\farcs1 that is observed. Once again, these arguments indicate that Group 4 is not a real cluster.
Because the evidence against the reality of the cluster is strong, we instead apply the SIS approximation to our arbitrarily selected subgroups, 4a and 4b. For these groups, the convergences (Table \ref{tab_groupconv}) are much more reasonable. The calculation of $\kappa_{ind}$ for Group 4 does not depend on whether the group is a single cluster, a pair of merging groups, or a filament. For this group, the galaxy lens strengths are estimated from the $r$-band magnitudes from the ground-based imaging. The fiducial galaxy, which is the brightest confirmed group member, has $r = 20.0$. We will discuss different estimates of the lens strength of the fiducial galaxy below. We note here, however, that a galaxy at this redshift will have $b \sim 1$ if its stellar velocity dispersion is $\sim$260~km\ s$^{-1}$. \subsubsection{Convergence Due to Groups 2 and 3} The convergences calculated for Groups 2 and 3 using the SIS method are larger than that of Group 1 due to their larger velocity dispersions. For Group 3, the two SIS estimates of the convergences are similar because the group is compact and located approximately 1\arcmin\ from the lensing galaxy. In contrast, because Group 2 is roughly centered on the lens system, small displacements in the centroid can lead to large changes in $\kappa_{SIS}$. For both Group 2 and Group 3, the fiducial galaxies used in the $\kappa_{ind}$ and $\kappa_{trunc}$ methods are once again the brightest galaxies in their respective groups. In each case the fiducial galaxy has $r = 20.0$. Galaxies at the redshifts of Groups 2 and 3 have $b \sim 1$ for stellar velocity dispersions of $\sim$220~km\ s$^{-1}$\ and $\sim$230~km\ s$^{-1}$, respectively. \subsubsection{Total Convergences of Groups 2, 3, and 4} In order to compute the convergences for Groups 2--4, we need to assign a value for $b_{fid}$ for each group. We do this by assuming that the fiducial galaxy in each group has a velocity dispersion of 200~km\ s$^{-1}$. This is slightly smaller than the expected dark matter velocity dispersion for an $L^\ast$ galaxy of $\sigma^\ast_{DM} \sim 225$~km\ s$^{-1}$\ \citep[e.g.,][]{csklambda}. Because the fiducial galaxy in each case is the brightest one in its respective group, this assumption should be reasonable. The resulting lens strengths are 0\farcs85, 0\farcs76, and 0\farcs61 for Groups 2, 3, and 4, respectively. The resulting convergences are given as the upper end of the ranges given for $\kappa_{ind}$ and $\kappa_{trunc}$ in Table~\ref{tab_groupconv}. These are within a factor of a few of the convergences estimated by assuming that the groups were isothermal spheres. One factor that could lessen the total convergence arises from the conversion of the group galaxy luminosities to masses. For example, in the ACS images, several of the confirmed members of the groups appear to be late-type galaxies. In this case the estimated masses are probably too high since spirals have lower mass-to-light ratios than ellipticals \citep[e.g.,][]{bld_dm}. To explore this effect, we assume that the fiducial galaxy for Groups 2, 3, and 4 has a velocity dispersion of 140~km\ s$^{-1}$. The resulting lens strengths are $\sim$50\% as large as those obtained above. The corresponding convergences are given as the lower ends of the ranges listed in the $\kappa_{ind}$ and $\kappa_{trunc}$ columns of Table~\ref{tab_groupconv}. In contrast, it is almost certain that not all of the group members have been identified. An incomplete census of the group will lead to an underestimate of the total convergence.
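The fiducial lens strengths quoted above, and the individual-halo sum $\kappa_{ind}$, can be sketched as follows. The three-member group in the example is hypothetical (the magnitudes and offsets are placeholders, since the full galaxy catalog is not reproduced here):
\begin{verbatim}
import numpy as np

C_KMS = 2.998e5                          # speed of light [km/s]
RAD_TO_ARCSEC = 180.0 / np.pi * 3600.0

def b_sis_arcsec(sigma_v_kms, d_ratio):
    """SIS lens strength b = 4 pi sigma_v^2 D_ls / (D_s c^2), in arcsec."""
    return 4.0 * np.pi * (sigma_v_kms / C_KMS)**2 * d_ratio * RAD_TO_ARCSEC

# Reproduce the fiducial lens strengths for sigma_fid = 200 km/s
for name, d_ratio in [("Group 2", 0.74), ("Group 3", 0.66), ("Group 4", 0.53)]:
    print(name, round(b_sis_arcsec(200.0, d_ratio), 2))   # 0.85, 0.76, 0.61

def kappa_ind(mags, thetas_arcsec, b_fid, m_fid):
    """Individual-halo convergence: sum of b_i / (2 theta_i), with
    b_i = b_fid * 10**(-0.2 * (m_i - m_fid))."""
    b = b_fid * 10.0**(-0.2 * (np.asarray(mags) - m_fid))
    return np.sum(b / (2.0 * np.asarray(thetas_arcsec)))

# Hypothetical three-member group, for illustration only
print(kappa_ind(mags=[20.0, 21.0, 21.5], thetas_arcsec=[30.0, 60.0, 90.0],
                b_fid=0.76, m_fid=20.0))
\end{verbatim}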
We note that the deep ACS imaging of this field reveals at least 10 galaxies within 10\arcsec\ of the lens, corresponding to $\sim 40 - 90~h^{-1}$ comoving kpc at redshifts between 0.3 and 1.0. Only one of these galaxies has a measured redshift ($z = 0.6087$). Overall, a reasonable estimate is that the true convergence from each group falls between the lower end of the $\kappa_{trunc}$ range and the largest of the other estimates of $\kappa_{ext}$. \subsection{Stellar Velocity Dispersion of the Lensing Galaxy} Until now we have ignored another datum that can be used to break the mass-sheet degeneracy, namely the measured stellar velocity dispersion of the lensing galaxy. This measurement provides an estimate of the enclosed mass at a smaller radius than that probed by the lensing signature. Thus, the mass measurement from lensing can be combined with that from stellar dynamics to provide an effective mass density slope in the lensing galaxy \citep[e.g.,][]{ktlsd,tklsd}. The result of adding an external mass sheet that is physically associated with the lensing galaxy is to flatten the overall mass density profile, which leads to a lower value of $H_0$ for given time delays. A measurement of the stellar velocity dispersion will reflect this flattening, i.e., the velocity dispersion will not be as high as would have been predicted from the lensing mass and an assumption of a steeper mass density profile. For a density profile that is close to isothermal (i.e., $\rho \propto r^{-2}$), essentially the same value of $H_0$ should be derived whether the system is modeled as a single galaxy with the effective mass density slope measured from lensing plus dynamics or as a galaxy with a steeper density profile but also including an external mass sheet \citep{leonmasssheet}. For a high-accuracy measurement of $H_0$, it is thus crucial to obtain high-precision measurements of either the external convergence or the stellar velocity dispersion of the lensing galaxy. In the case of B1608+656, the measured velocity dispersion is $247 \pm 35$ km\ s$^{-1}$, indicating that the mass distribution in the lens (including any contribution from the group) is close to isothermal \citep{1608H0}. The uncertainties in the velocity dispersion are, in fact, the largest source of error in the current model. \subsection{Effect of Large-scale Structure} Finally, we consider the effect of the mass in large-scale structures (LSS) along the line of sight to the background source. In an ideal situation, it would be possible to account for all mass along the line of sight and to trace the rays from the background source through the distribution. In practice, even with deep space-based imaging, this is impractical. The uncertainties arising from photometric redshifts, the conversion of light to mass, etc., would far exceed the expected size of the effect. Therefore, it is necessary to examine the effect of LSS in a statistical manner. Analytic calculations have indicated that large-scale structure should affect the time delays for a lens system, and hence $H_0$, by a few percent \citep{seljaklss} or perhaps as much as 10\%, depending on the redshift of the background source \citep{barkanalss}. The effect of LSS can be to either increase or decrease the value of $H_0$ because voids along the line of sight effectively act as areas of negative density when compared to the mean density of the Universe.
Although this effect is a systematic one for any given lens system, it should be random for random lines of sight \citep[e.g.,][]{seljaklss}. Therefore, it should be possible to significantly reduce the uncertainty due to large-scale structures on the global measurement of $H_0$ by averaging the values obtained from many lens systems. We note that lenses may lie along lines of sight that are biased (e.g., have more line-of-sight structure) and therefore that the effect of large-scale structure cannot be eliminated completely by averaging lens-based measurements of $H_0$. We will investigate this question in a future paper (Fassnacht et al., in prep). \subsubsection{The Effect on $H_0$} Finally, we arrive at the total effect on the value of $H_0$ derived from the B1608+656 system. The group physically associated with the lens system, Group 1, provides an external convergence of $\sim 0.01 - 0.03$ (Table~\ref{tab_groupconv}). As discussed above, however, the degeneracy associated with Group 1 is broken by the measurement of the stellar velocity dispersion of the lensing galaxy. Therefore, the effect of Group 1 on the value of $H_0$ derived from this system gets folded into the determination of the radial mass slope in the lensing galaxy, and is thus already included in the published uncertainties on $H_0$ from B1608+656. The other groups along the line of sight also provide convergences of a few percent. These groups are almost certainly typical of the clumpy structure that can be found along any line of sight. None of them provides an extraordinarily large convergence, unless Group 4 is a real cluster. Therefore, one would be inclined to incorporate the effects of these groups into the effects of LSS and conclude that these groups will contribute to the random shift of a few percent in the value of $H_0$ determined from this system. However, as a result of our investigation of lens fields (Fassnacht et al., in prep) we have determined that the B1608+656 system lies along a line of sight that is overdense compared to typical lines of sight. Therefore, the expected effect of properly incorporating the effects of LSS on this system is to {\em reduce} the value of $H_0$ from the previously measured value. If the majority of the extra mass along this line of sight is being contributed by Groups 1--3, the size of the effect could be 5\% or more, given the convergences in Table~\ref{tab_groupconv}. Thus, this effect could be comparable in size to the statistical uncertainties quoted for the $H_0$ measurement from this system, and would reduce the central value to $\sim$70~km\ s$^{-1}$\ Mpc$^{-1}$\ or lower. \section{Summary and Future Work \label{summary} } Our spectroscopic observations of the field containing the lens system B1608+656 have provided evidence for four groups of galaxies along the line of sight to the lens system. These groups should contribute external convergence at the location of the lens and, therefore, will affect the determination of the Hubble Constant obtained with this system. However, quantifying the amount of external convergence is difficult. If each group contains all of its mass in an overall smooth halo, it becomes imperative to measure the halo mass and centroid accurately. These measurements can be highly uncertain when based on $\sim 10$ redshifts. We therefore follow a second approach and treat each group as a collection of individual galaxy halos.
To establish a firm lower limit to the convergence due to each group, we examine the convergence contributed only by galaxies within a projected distance of 200~$h^{-1}$ comoving kpc from the lens. However, our calculations do not include contributions from other galaxies within the truncation radius of the lens because their redshifts are unknown. Each of the groups contributes a convergence of a few percent. The overall effect on $H_0$ is less than suggested by the sum of the convergences because the stellar velocity dispersion of the main lensing galaxy has been measured. This measurement breaks the mass-sheet degeneracy, at least that due to Group 1, by determining the effective radial mass density slope in the lensing galaxy. The effects of the other groups can be folded into the overall uncertainties due to large-scale structure along the line of sight, since none of the groups appears to be extremely massive. However, because the line of sight to B1608+656 appears to be significantly overdense compared to typical lines of sight, the effect of LSS will be to bias the current measurement of $H_0$ high, i.e., the true value of $H_0$ from this system should be lower than the published value. The size of the effect of the LSS could be 5\% or larger. We note that none of the newly discovered groups was obvious from optical images of this system. Therefore, it is important to closely investigate the fields of lens systems in order to examine possible sources of bias in lens-based measurements of $H_0$, especially if the stellar velocity dispersion of the lensing galaxy is unknown. We are actively working to reduce the uncertainties in the determination of $H_0$ from this lens system. The deep ACS imaging is being used as an input to a new lensing code that can properly incorporate Einstein ring structure. This modeling should reduce the mass-slope uncertainties substantially in the region of the lensed images. We will also use the ACS imaging to search for weak lensing signatures of mass concentrations along the line of sight to the lens system. The weak lensing will provide further constraints on the amount of external convergence for B1608+656. Another approach to reducing the uncertainties in the mass slope would be to obtain higher-sensitivity spectroscopy of the lensing galaxy. The spectroscopy would provide a more accurate determination of the stellar velocity dispersion. Further optical spectroscopy of the galaxies in the field and deeper X-ray imaging would also add information on the external mass distribution. Finally, by averaging measurements of $H_0$ obtained from many different lens systems, we should be able to reduce the uncertainties due to large-scale structure and thus obtain a precise global measurement of $H_0$ from lensing. \acknowledgments We thank Leon Koopmans, Tommaso Treu, and Tony Tyson for useful discussions. We thank the anonymous referee for his or her comments. CDF and JPM acknowledge support under HST program \#GO-10158. Support for program \#GO-10158 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations would not have been possible without the expertise and dedication of the staffs of the Palomar and Keck observatories.
We especially thank Paola Amico, Karl Dunscombe, Grant Hill, Jean Mueller, Ron Quick, Kevin Rykoski, Gabrelle Saurage, Chuck Sorenson, Skip Staples, Wayne Wack, Cindy Wilburn, and Greg Wirth. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This work is supported in part by the European Community's Sixth Framework Marie Curie Research Training Network Programme, Contract No.\ MRTN-CT-2004-505183 `ANGLES'. \ifsubmode {\it Facility:} \facility{HST (ACS, WFPC2)}, \facility{Keck:I (LRIS)}, \facility{Keck:II (ESI)}, \facility{PO:1.5m (CCD13)} \fi
\subsection{Epsilon Expansion} The loop expansions for correlation functions computed in section \ref{sec-oneLoopAnswers} are expansions in powers of the dimensionless reaction rate, $g$. The problem is that in $d<2$, these expansions become badly ordered as we approach $L_\mu \to \infty$. However, since we have computed correlation functions in arbitrary dimension, $d$, we can convert the loop expansions into expansions in $\epsilon=2-d$ at a fixed value of $g$. For $\epsilon \ll 1$, eq. (\ref{eq-densityOneLoop2}) and eq. (\ref{eq-2pointOneLoop2}) can be written : \begin{eqnarray} \label{eq-densityOneLoop3}\langle R_\mu\rangle = L_\mu^{\epsilon-2}\frac{1} {\sqrt{g}} \left[1+\frac{g}{4\pi\epsilon} + \ldots\right], \end{eqnarray} and \begin{eqnarray} \label{eq-2pointOneLoop3}\langle R_{\mu_1}R_{\mu_2}\rangle &=& \langle R_{\mu_1}\rangle \langle R_{\mu_2}\rangle \left[1-\frac{g}{2\pi\epsilon} + \ldots\right]. \end{eqnarray} Of course these series are still badly ordered as $\epsilon\to 0$. The idea is to replace certain correlation functions with appropriate renormalised quantities, also expressed as expansions in $\epsilon$, such that the renormalised counterparts of the above expressions are well-ordered in $\epsilon$. The final pay-off comes when we find that these expressions remain well ordered in $\epsilon$ even when we take the limit $L_\mu\to \infty$ because of the presence of a {\em perturbative fixed point}, a structural feature of the theory which we must now explain in order to make sense of this scheme. \subsection{Renormalised Reaction Rate and $\beta$-function} \label{sec-betaFunction} \input ./lambda_R.eps.tex The presence of a perturbative fixed point for the $A+A\to A$ model was originally pointed out by Peliti \cite{peliti}. The corresponding calculations in the presence of a source were done by Droz\cite{droz}. This is sufficient to deal with the problem at hand. Nevertheless we shall paraphrase their arguments here for the sake of completeness. Let us define a renormalised reaction rate, $\lambda_R$, as the amputated 3-point vertex function shown diagrammatically in fig. \ref{fig-lambdaR}. After performing the algebra we find \begin{equation} \lambda_R = \lambda \left[ 1 - \frac{1}{2(2\pi)^\frac{d}{2}} \Gamma\left(\frac{\epsilon}{4}\right) g^{1-\frac{\epsilon}{4}} + \ldots\right]. \end{equation} Now we introduce a dimensionless renormalised reaction rate, $g_R = \lambda_R L_\mu^\epsilon$, as we did in eq. (\ref{eq-dimensionlessg}), which is given by \begin{equation} g_R = g - \frac{1}{2(2\pi)^{1-\frac{\epsilon}{2}}} \Gamma\left(\frac{\epsilon}{4}\right) g^{2-\frac{\epsilon}{4}} + \ldots. \end{equation} For small values of $\epsilon$ this can be written as \begin{equation} \label{eq-gRExpg} g_R = g - g_*^{-1}(\epsilon) g^2 + \ldots, \end{equation} where \begin{eqnarray} \label{eq-gstar} g_*^{-1}(\epsilon) &=& \frac{1}{2(2\pi)^{1-\frac{\epsilon}{2}}} \Gamma\left(\frac{\epsilon}{4}\right)\\ \nonumber &=& \frac{1}{2\pi\epsilon}+o(1) \hspace{0.5cm}\mbox{$\epsilon\ll 1$}. \end{eqnarray} Inverting eq. (\ref{eq-gRExpg}) allows us to convert perturbative expansions in the bare reaction rate, $g$, into expansions in the renormalised reaction rate, $g_R$. We find \begin{equation} \label{eq-gExpgR} g = g_R + g_*^{-1}(\epsilon) g_R^2 + \ldots. \end{equation} The crucial point to all of this analysis is the following observation. 
Although for positive $\epsilon$, $g$ diverges as $L_\mu\to\infty$, rendering perturbative expansions in $g$ useless for capturing the large mass behaviour of the theory, we will find that $g_R$ remains finite as $L_\mu\to\infty$. Furthermore, $g_R$ tends to a value which is of order $\epsilon$. Therefore for small $\epsilon$ we can use eq. (\ref{eq-gExpgR}) to convert expansions in $g$ into expansions in $g_R$ which then have a better chance of remaining non-singular when we take $L_\mu\to \infty$. The process of replacing $g$ with $g_R$ is usually called {\em coupling constant renormalisation} in the literature. The large mass behaviour of the renormalised reaction rate is determined by the $\beta$-function of the theory defined as \begin{equation} \label{defn-beta} \beta(g_R) = \left.\left(L_\mu\pd{g_R}{L_\mu}\right)\right|_\lambda. \end{equation} Using the fact that $L_\mu\pd{ }{L_\mu} g^n = n\epsilon g^n$ together with eq. (\ref{eq-gRExpg}) and eq. (\ref{eq-gExpgR}) we quickly find \begin{equation} \label{eq-beta} \beta(g_R) = \epsilon g_R (1 - g_*^{-1}(\epsilon) g_R + \ldots). \end{equation} Eq. (\ref{defn-beta}) now tells us how $g_R$ changes as we vary $L_\mu$. Solving this differential equation with the initial condition $g_R(L_0) = g_0$ determines how the reaction rate varies with scale. The behaviour is different in $d=2$ and $d<2$. In $d<2$, \begin{equation} L_\mu\pd{g_R}{L_\mu} = \epsilon g_R (1-\frac{g_R}{2\pi\epsilon}), \end{equation} so that \begin{equation} g_R(L_\mu) = \frac{g_0L_\mu^\epsilon}{(1-\frac{g_0}{2\pi\epsilon})L_0^\epsilon+\frac{g_0}{2 \pi \epsilon}L_\mu^\epsilon}. \end{equation} We note that $g_R$ goes to a fixed point value of $2\pi\epsilon$ (plus corrections of $O(\epsilon^2)$) as $L_\mu\to\infty$ irrespective of the initial values of $g_0$ and $L_0$. For simplicity we can take $L_0=(1-\frac{g_0}{2\pi\epsilon})^{-1}$ giving \begin{equation} \label{eq-gdlt2} g_R(L_\mu) = \frac{g_0 L_\mu^\epsilon}{1+\frac{g_0}{2 \pi \epsilon}L_\mu^\epsilon}. \end{equation} This universal behaviour as $L_\mu\to\infty$ is what is meant when we say that the renormalisation group flow has a perturbative fixed point in $d<2$. In $d=2$, $\epsilon=0$ so that \begin{equation} L_\mu\pd{g_R}{L_\mu} = -\frac{g_R^2}{2\pi}, \end{equation} which gives \begin{equation} \label{eq-gdeq2} g_R(L_\mu) = \frac{g_0}{1+\frac{g_0}{2 \pi }\log \left(\frac{L_\mu}{L_0}\right)}. \end{equation} Thus in $d=2$ the reaction rate decays to 0 as $L_\mu\to\infty$ but logarithmically slowly and, unlike in the case $d<2$, retains some memory of the small scale cut-off, $L_0$. \subsection{Average density in $d<2$} \label{sec-avgDensity} Let us now show how all of this technology works by calculating the large mass behaviour of the average density. We define the renormalised density, $\langle R_\mu\rangle_{\rm R}$, by using eq. (\ref{eq-gExpgR}) to replace $g$ with the renormalised reaction rate, $g_R$, in eq. (\ref{eq-densityOneLoop3}). Using eq. (\ref{eq-gstar}), a Taylor expansion shows that the replacement of $g$ with $g_R$ cancels the $\epsilon$-singular term so we are left with \begin{eqnarray} \label{eq-renormalisedR_mu}\langle R_\mu\rangle_{\rm R} = L_\mu^{\epsilon-2}\frac{1}{\sqrt{g_R}} \left[1+ o(g_R^2) \right]. \end{eqnarray} Now as $L_\mu \to \infty$, $g_R \to g_*$, so we can take the limit to obtain \begin{equation} \langle R_\mu\rangle_{\rm R} \sim L_\mu^{\epsilon-2}\frac{1}{\sqrt{2\pi\epsilon}} \left[1+ o(\epsilon^2) \right].
\end{equation} Finally note that as $\mu\to 0$, \begin{displaymath} L_\mu \sim \left(\frac{J \mu}{m_0}\right)^{-\frac{1}{d+2}}, \end{displaymath} allowing us to perform the inverse Laplace Transform required to return to mass space. Using the definition, eq. (\ref{eq-defnR_mu}), of $R_\mu$ we finally find \begin{equation} \label{eq-renormalisedDensity} N_m \stackrel{\sim}{\scriptstyle m\to\infty} -\frac{1}{\Gamma\left(-\frac{d}{d+2}\right)}\, \frac{1}{\sqrt{2\pi\epsilon}} \left[1+ o(\epsilon^2) \right]\, m^{-\frac{2d+2}{d+2}}, \end{equation} giving a scaling exponent which we know to be correct \cite{RM2000}. We recognise this as the K41 exponent. As discussed in section \ref{sec-dimensionalAnalysis} we could have obtained this answer simply from dimensional arguments once we recognised that the reaction rate is renormalised away to infinity and hence cannot play any role in the answer. However this would ignore the possibility of anomalous dimensions. By calculating the one loop corrections to $\langle R_\mu\rangle$ we confirmed the absence of any relevant (for $\langle R_\mu\rangle$) couplings other than the reaction rate itself. We shall find that this is not the case for the higher order correlation functions. \subsection{Higher order moments of the density} A natural object to study to gather more information about the mass distribution function would be moments of the density of the form $M_n(m) = \left<N_m^n\right>$. As explained in appendix \ref{sec-shiftExplanation} (see eq. (\ref{cor}) and the explanation thereafter) these moments exhibit ``extreme'' anomalous scaling characterised by Burgers-like scalings : \begin{equation} \label{eq-extremeAnomalousScaling} \left< N_m^n \right> \sim \left< N_m\right> \stackrel{\sim}{\scriptstyle m\to\infty} m^{-\frac{2d+2}{d+2}}. \end{equation} For the MM however, this anomaly is somewhat trivial from a physical perspective. It arises because large masses become large by absorbing almost all nearby particles. Thus asymptotically, the number of heavy particles on a given lattice site ends up being either zero or one. Taking moments of such a distribution will always give the behaviour described by eq. (\ref{eq-extremeAnomalousScaling}). However, the analysis of appendix \ref{sec-shiftExplanation} which allows one to extract this essentially non-mean field behaviour from an initially weakly coupled theory is not trivial and can be expected to yield interesting results in other contexts. To observe the true multiscale structure of the mass model one should really study multipoint correlation functions. We do this next. \subsection{Higher order multi-point correlation functions in $d<2$} The analysis for the higher order correlation functions is not quite so simple as for the density. By replacing $g$ with $g_R$ in eq. (\ref{eq-2pointOneLoop2}) we get \begin{equation} \langle R_{\mu_1}R_{\mu_2}\rangle_{g\to g_R} = \langle R_{\mu_1}\rangle_{\rm R} \langle R_{\mu_2}\rangle_{\rm R} \left[1-\frac{g_R}{2\pi\epsilon} + o(g_R^2)\right]. \end{equation} We see that we have removed the $\epsilon$-singularities from the $\langle R_\mu\rangle$ factors but the singularity inside the square brackets remains. The correct definition of the renormalised 2-point function must include renormalisation of the amplitude of $C_2$, not just the reaction rate. This process is known as {\em composite operator renormalisation}.
The correct definition of the renormalised 2-point function is therefore \begin{equation} \langle R_{\mu_1}R_{\mu_2}\rangle_{\rm R} = Z_2 \langle R_{\mu_1}R_{\mu_2}\rangle_{g\to g_R}, \end{equation} where the amplitude $Z_2$ is chosen so that $\langle R_{\mu_1}R_{\mu_2}\rangle_{\rm R}$ is nonsingular in $\epsilon$: \begin{equation} Z_2 = 1+\frac{g_R}{2\pi\epsilon} + o(g_R^2). \end{equation} The prefactor, $Z_n$, of the $n^{\rm th}$ order correlation function can be computed in a similar manner to the second order one. For example, in the loop expansion of the $3^{\rm rd}$ order correlation function, there are three diagrams containing singularities which are not removed by coupling constant renormalisation. These are shown in fig. \ref{fig-3pointFn}. For the $n$-point function there are $\frac{1}{2}n(n-1)$ such diagrams. Each of these diagrams contributes $\frac{g_R}{2\pi\epsilon}$ to the one-loop expression for $Z_n$ so that : \begin{equation} Z_n = 1 + \frac{1}{2}n(n-1) \frac{g_R}{2\pi\epsilon} + o(g_R^2). \end{equation} This situation is a bit more complicated than before. To extract the scaling exponent we employ the technology of the renormalisation group (RG) which was not truly necessary to compute the scaling of the density. Our discussion follows closely the presentation of \cite{binney}. The approach is based on the simple observation, already made at the end of section \ref{sec-oneLoopAnswers}, that the $n^{th}$ order correlation function, $C^{(n)}(L_{\mu_1}\ldots L_{\mu_n})= \langle R_{\mu_1}\ldots R_{\mu_n}\rangle$ does not depend on the arbitrary length scale $L_\mu$, known in RG language as the {\em reference scale}. It immediately follows that \begin{eqnarray} \nonumber & &L_\mu\pd{}{L_\mu} \left( Z_n^{-1}(g_R)\, C^{(n)}_{\rm R}(L_{\mu_1}\ldots L_{\mu_n}, g_R, L_\mu) \right) = 0. \end{eqnarray} The $L_\mu$-dependence of the bracketed expression comes from three sources : an explicit dependence of $C^{(n)}_{\rm R}$ on $L_\mu$, an implicit dependence through $g_R(L_\mu)$ and an implicit dependence through $Z_n(g_R(L_\mu))$. We can thus write \begin{eqnarray} \label{eq-CS1} \left[ L_\mu\pd{}{L_\mu} + L_\mu\pd{g_R}{L_\mu}\pd{}{g_R} + L_\mu\pd{Z_n}{L_\mu}\pd{}{Z_n}\right]Z_n^{-1}(g_R)\, C^{(n)}_{\rm R}(L_{\mu_1}\ldots L_{\mu_n}, g_R, L_\mu) = 0, \end{eqnarray} where the partial derivative with respect to $L_\mu$ is now taken at fixed $g_R$ and $Z_n$ whose dependences on $L_\mu$ are catered for by the additional derivatives. This can then be arranged to give the equation : \begin{eqnarray} \label{eq-CS2} \left[ L_\mu\pd{}{L_\mu} + \beta(g_R)\pd{}{g_R} - \gamma_n(g_R)\right] C^{(n)}_{\rm R}(L_{\mu_1}\ldots L_{\mu_n}, g_R, L_\mu) = 0, \end{eqnarray} where \begin{eqnarray} \nonumber \gamma_n(g_R) &=& L_\mu\pd{}{L_\mu} \left(\log Z_n(g_R)\right),\\ &=& \frac{1}{2}n(n-1)\frac{g_R}{2\pi} + o(g_R^2), \end{eqnarray} and $\beta(g_R)$ is given by eq. (\ref{eq-beta}). By itself, this equation just tells us how $C^{(n)}_{\rm R}$ varies with the physically meaningless reference scale, $L_\mu$. However dimensional analysis provides extra information. Since the physical dimension of $C^{(n)}_{\rm R}$ is $L^{-n d}$ it must satisfy an Euler equation \cite{binney} \begin{equation} \label{eq-Euler} \left[\sum_{i=1}^n L_{\mu_i}\pd{}{L_{\mu_i}} + L_{\mu}\pd{}{L_\mu} +n d\right]C^{(n)}_{\rm R}(L_{\mu_1}\ldots L_{\mu_n}, g_R, L_\mu) =0. \end{equation} Suppose we now rescale all lengths by some amount, $\Lambda$, by introducing $\tilde{L}_{\mu_i}=\Lambda L_{\mu_i}$. Eq.
(\ref{eq-Euler}) allows us to convert derivatives with respect to $L_\mu$ into derivatives with respect to $\Lambda$: \begin{equation} L_\mu\pd{}{L_\mu} C^{(n)}_{\rm R}(\tilde{L}_{\mu_1}\ldots \tilde{L}_{\mu_n}, g_R, L_\mu)= -\left( \Lambda\pd{}{\Lambda} + n d\right)C^{(n)}_{\rm R}(\tilde{L}_{\mu_1}\ldots \tilde{L}_{\mu_n}, g_R, L_\mu), \end{equation} so that eq. (\ref{eq-CS2}) can be written as \begin{equation} \label{eq-CS3} \left[ -\Lambda\pd{}{\Lambda} + \beta(g_R)\pd{}{g_R} -n d-\gamma_n(g_R)\right] C^{(n)}_{\rm R}(\tilde{L}_{\mu_1}\ldots \tilde{L}_{\mu_n}, g_R, L_\mu) = 0. \end{equation} This equation is called the Callan--Symanzik (C-S) equation. It tells us something physically useful, namely how the renormalised correlation function changes as we rescale its arguments by $\Lambda$. We wish to solve it in the limit of large $\Lambda$. This can be done using the method of characteristics. For $\Lambda=1$, $C^{(n)}_{\rm R}$ is given by the mean-field answer which is valid for small values of the $\tilde{L}_{\mu_i}$'s, thus providing an initial condition : \begin{equation} C^{(n)}_{\rm R}(\Lambda=1, g_R=g_0) = g_0^{-\frac{n}{2}}(L_{\mu_1}\ldots L_{\mu_n})^{-d}. \end{equation} For $d<2$, $\beta(g_R)=\epsilon g_R(1-\frac{g_R}{2 \pi \epsilon})$ and the characteristic equations are \begin{eqnarray} \dd{\Lambda}{s} &=& -\Lambda,\\ \dd{g_R}{s} &=& \epsilon g_R(1-\frac{g_R}{2 \pi \epsilon}), \\ \dd{C^{(n)}_{\rm R}}{s} & =& \left(n d+\frac{1}{2}n(n-1)\frac{g_R}{2\pi}\right) C^{(n)}_{\rm R}, \end{eqnarray} with the boundary conditions \begin{eqnarray} \nonumber \Lambda(g_0,s_0)& =& 1,\\ \label{eq:boundary}g_R(g_0,s_0)& = &g_0,\\ \nonumber C^{(n)}_{\rm R}(g_0,s_0) &= & g_0^{-\frac{n}{2}}(L_{\mu_1} \ldots L_{\mu_n})^{-d}. \end{eqnarray} If we solve these equations, use the uniqueness of the characteristic curves to express $s_0$ and $g_0$ in terms of $\Lambda$ and $g_R$ and then evaluate the solution at $s=0$, the answer can be found explicitly: \begin{equation} C^{(n)}_{\rm R}(\Lambda, g_R) = \left(\sqrt{\frac{1}{2\pi \epsilon}\left(1-\left(1-\frac{2 \pi \epsilon}{g_R}\right)\Lambda^{-\epsilon}\right)}\right)^n \left(\Lambda L_{\mu_1}\ldots\Lambda L_{\mu_n}\right)^{-d} \left(\frac{\Lambda^{-\epsilon}-\left(1-\frac{2 \pi \epsilon}{g_R}\right)\Lambda^{-\epsilon}}{1-\left(1-\frac{2 \pi \epsilon}{g_R}\right)\Lambda^{-\epsilon}}\right)^{\frac{1}{2}n(n-1)}. \end{equation} Taking $\Lambda\to\infty$ we conclude \begin{equation} C^{(n)}_{\rm R}(\tilde{L}_{\mu_1}\ldots \tilde{L}_{\mu_n}, g_R, L_\mu) \sim \prod_{i=1}^n L_{\mu_i}^{\frac{1}{2}\epsilon(n-1)}\sqrt{\frac{1}{2 \pi \epsilon}}\, \tilde{L}_i^{-d-\frac{1}{2}\epsilon(n-1)}, \end{equation} independent of the value of $g_R$. This independence is the consequence of the presence of a fixed point of the $\beta$-function. All values of $g_R$ flow to the fixed-point value, $g^*=2\pi\epsilon$, leaving a universal answer in the limit of large $\Lambda$. It remains to perform the inverse Laplace Transform to find the scaling properties of the original mass-space correlation functions. To do this we note from eq. (\ref{eq-L}) that for large values of the $\tilde{L}_i$, \begin{equation} \tilde{L}_i = \left(\frac{J \tilde{\mu}_i}{D m_0}\right)^{-\frac{1}{d+2}}. \end{equation} It is then easy to perform the $n$ inverse Laplace Transforms with respect to the $\tilde{\mu}_i$ to get \begin{equation} C^{(n)}_{\rm R}(\tilde{m}_1\ldots \tilde{m}_n, g_R, L_\mu) \sim \prod_{i=1}^n \tilde{m}_i^{-\frac{2 d + 2}{d+2}-\frac{\epsilon(n-1)}{2(d+2)}}.
\end{equation} \input ./3pointFn.eps.tex The mass scaling of $C^{(n)}_{\rm R}$ is therefore $m^{-\gamma_n}$ with \begin{equation} \gamma_n = n\frac{2d+2}{d+2} + \frac{n(n-1)\epsilon}{2(d+2)}+o(\epsilon^2). \label{finalanswer} \end{equation} Note that $\gamma_n$ acquires a correction to the value predicted from K41 theory, signalling the breakdown of self-similarity in low dimensions. This is the multiscaling curve against which we compared our numerical results in fig. \ref{fig-numerics}. \subsection{Logarithmic Corrections in $d=2$} \label{sec-RGdeq2} In $d=2$ scale invariance is broken by the presence of logarithmic corrections to the mean field scaling. For completeness, let us calculate the powers of the logarithms acquired by the $C^{(n)}_R$'s. In $d=2$, $\beta(g_R)=-\frac{g_R^2}{2\pi}$ and the C-S equation, eq. (\ref{eq-CS3}), reads : \begin{equation} \label{eq-CS2D} \left[ -\Lambda\pd{}{\Lambda} - \frac{g_R^2}{2\pi}\pd{}{g_R} -2n-\frac{1}{2}n(n-1)\frac{g_R}{2\pi}\right] C^{(n)}_{\rm R}(\tilde{L}_{\mu_1}\ldots \tilde{L}_{\mu_n}, g_R, L_\mu) = 0. \end{equation} The initial condition is again given by the mean-field answer which in $d=2$ is \begin{equation} C^{(n)}_{\rm R}(\Lambda=1, g_R=g_0) = g_0^{-\frac{n}{2}}(L_{\mu_1}\ldots L_{\mu_n})^{-2}. \end{equation} The characteristic equations are \begin{eqnarray} \dd{\Lambda}{s} &=& -\Lambda, \\ \dd{g_R}{s} & =& -\frac{g_R^2}{2\pi},\\ \dd{C^{(n)}_{\rm R}}{s} & = &\left(2n+\frac{1}{2}n(n-1)\frac{g_R}{2\pi}\right) C^{(n)}_{\rm R}, \end{eqnarray} with the boundary conditions as in eq. (\ref{eq:boundary}) with $d$ replaced by $2$. These can again be solved explicitly at $s=0$ to give: \begin{equation} C_n^{\rm R}(\Lambda,g_R) = \prod_{i=1}^{n} \sqrt{\frac{1}{g_R}+\frac{1}{2\pi}\log{\Lambda}}\, (\Lambda L_{\mu_i})^{-2} \left(\frac{2\pi}{g}\right)^{\frac{1}{2}(n-1)} \left( \frac{2\pi}{g}+\log{\Lambda}\right)^{-\frac{1}{2}(n-1)}, \end{equation} or, in terms of the rescaled lengths, $\tilde{L}_i$ : \begin{equation} C_n^{\rm R}(\tilde{L}_1\ldots \tilde{L}_n,g_R,L_\mu) = \prod_{i=1}^{n} \sqrt{\frac{1}{g_R}+\frac{1}{2\pi}\log{\frac{\tilde{L}_i}{L_{\mu_i}}}}\, \tilde{L}_i^{-2} \left(\frac{2\pi}{g}\right)^{\frac{1}{2}(n-1)} \left( \frac{2\pi}{g}+\log{\frac{\tilde{L}_i}{L_{\mu_i}}}\right)^{-\frac{1}{2}(n-1)}. \end{equation} The large mass limit corresponds to $\tilde{L}_i/L_{\mu_i} \to \infty$ in which case \begin{equation} C_n^{\rm R}(\tilde{L}_1\ldots \tilde{L}_n,g_R,L_\mu) \sim \prod_{i=1}^{n} \sqrt{\frac{1}{2\pi}\log{\frac{\tilde{L}_i}{L_{\mu_i}}}}\, \tilde{L}_i^{-2} \left(\frac{2\pi}{g}\right)^{\frac{1}{2}(n-1)} \left(\log{\frac{\tilde{L}_i}{L_{\mu_i}}}\right)^{-\frac{1}{2}(n-1)}. \end{equation} To recover the asymptotic behaviour in mass space, it is again necessary to take inverse Laplace transforms with respect to the $\tilde{\mu}_i$ as $\tilde{\mu}_i\to 0$. Recalling the definition, eq. (\ref{eq-L}), of $L_\mu$ we can write \begin{equation} C_n^{\rm R}(\tilde{m}_1\ldots \tilde{m}_n,g_R)\sim \int \prod_{i=1}^{n} e^{-\tilde{m}_i\tilde{\mu}_i}d\tilde{\mu}_i\, \left(\sqrt{-\frac{1}{8\pi}\log{m_i\tilde{\mu}_i}}\, \sqrt{\frac{J\tilde{\mu}_i}{D m_0}}\right)^n \left(\frac{2\pi}{g}\right)^{\frac{1}{2}n(n-1)} \left(-\frac{1}{4}\log{m_i\tilde{\mu}_i}\right)^{-\frac{1}{2}n(n-1)}.
\end{equation} By introducing scaling variables $x_i=\tilde{m}_i \tilde{\mu}_i$ and keeping leading order terms in $\log{\tilde{m}_i/m_i}$, the asymptotic behaviour of this integral as the $\tilde{m}_i\to\infty$ is shown to be \begin{equation} C_n^{\rm R}(\tilde{m}_1\ldots \tilde{m}_n,g_R) \sim C(g_R) \prod_{i=1}^n \left(\sqrt{\frac{J}{D}\,\log{\frac{\tilde{m}_i}{m_i}}}\,\tilde{m}_i^{-\frac{3}{2}}\right)\, \left(\log{\frac{\tilde{m}_i}{m_i}}\right)^{-\frac{1}{2}(n-1)}. \end{equation} The density, $n=1$, picks up a square root of a logarithm coming from the renormalisation of the reaction rate. However, the higher order correlation functions pick up {\em additional} logarithmic corrections which come from the anomalous dimension of the two point function. Note that in $d=2$ the asymptotic behaviour retains some memory of the low mass cut-offs, $m_i$. Furthermore, the prefactor, denoted above by $C(g_R)$, remains dependent on the value of $g_R$, unlike in $d<2$. \subsection{Dimensional Considerations for the Mass Spectrum} Before doing detailed calculations, we begin by asking what we can learn about possible stationary states of the mass model from simple dimensional considerations. The first quantity of interest in characterising the long time behaviour of the model is the stationary mass spectrum, $\langle N_m \rangle$. $\langle N_m \rangle$ is the number of particles of mass $m$ per unit volume in the stationary state. Specifically, we would like to know how $\langle N_m \rangle$ scales with $m$ for large values of $m$. Since we are hoping for universal behaviour in the limit of large $m$, we assume that the mass spectrum does not depend on the position of the source, $m_0$. This assumption must be verified at a later stage. The remaining dimensional parameters upon which $N_m$ could, in principle, depend are $J$, $D$, $\lambda$ and, of course, $m$. We shall perform most of our computations in units where $D=1$ but for the purposes of dimensional analysis we shall keep $D$. The dimension of $N_m$ is $[N_m] = {\rm M}^{-1} {\rm L}^{-d}$. The dimensions of the other parameters are: $[J] = {\rm M} {\rm L}^{-d} {\rm T}^{-1}$, $[D] = {\rm L}^{2} {\rm T}^{-1}$ and $[\lambda] = {\rm L}^{d} {\rm T}^{-1}$. It is immediately evident that there are too many dimensional parameters in the model to uniquely determine the mass spectrum. One can readily verify that for any scaling exponent, $x$, and dimensionless constant, $c_1$, the formula \begin{equation} \langle N_m \rangle = c_1\, J^{x-1} D^\frac{(3-2x)d}{d-2} \lambda^\frac{(d+2)x-2d-2}{d-2}m^{-x}, \label{eq-generalDimAnalysis} \end{equation} is a dimensionally correct expression for $\langle N_m \rangle$ which scales as $m^{-x}$. This is different to the dimensional argument used by Kolmogorov in his theory of hydrodynamic turbulence. For that system, there is a unique dimensionally correct combination of parameters giving the energy spectrum. Eq. (\ref{eq-generalDimAnalysis}) allows us to pick out the scaling exponent, $x$, for the reaction and diffusion limited regimes: \begin{eqnarray} \label{eq-KZExponent}x^{\rm KZ} &=&\frac{3}{2},\\ \label{eq-K41Exponent}x^{\rm K41}&=&\frac{2d+2}{d+2}. \end{eqnarray} The above two exponents correspond to a different balance of physical processes in order to realise a stationary state. We briefly discuss each to explain the choice of nomenclature.
\begin{itemize} \item We shall call $x^{\rm KZ}$ the Kolmogorov--Zakharov (K--Z) exponent since it is the analogue for aggregating particles of the K--Z spectrum of wave turbulence \cite{zakharovBook} in the sense that it is obtained as the stationary solution of a mean field kinetic equation. This spectrum describes a reaction limited regime where diffusion plays no role. \item We shall call $x^{\rm K41}$ the Kolmogorov 41 (K41) exponent since it is a closer analogue of the $5/3$ spectrum of hydrodynamic turbulence originally proposed by Kolmogorov in his 1941 papers on self-similarity in turbulence. This is because in the Navier-Stokes equations there is no dimensional parameter like the reaction rate controlling the strength of the nonlinear interactions. This exponent describes a diffusion limited regime where the reaction rate, $\lambda$, plays no role, reactions being effectively instantaneous. \end{itemize} The case $x=1$ corresponds to one in which $\langle N_m \rangle$ does not depend on the mass flux $J$. However, this is not of physical interest for this problem. On the other hand, each of the regimes characterised by $x^{\rm KZ}$ and $x^{\rm K41}$ carries a mass flux and is relevant. \subsection{Self-Similarity Conjectures and Multipoint Correlation Functions} We are interested in more than just the average mass density $\langle N_m \rangle$. To characterise correlations in the mass model we must also consider multipoint structure functions. Let $C_{n}(m_1,\ldots,m_n) (\Delta V)^n \prod_i dm_i$ be the probability of having particles of masses $m_i$ in the intervals $[m_i, m_i+dm_i]$ in a volume $\Delta V$ for $i=1\ldots n$. $C_1(m)$ is the average mass density $\langle N_m \rangle$. We ask how $C_n(m_1,\ldots,m_n)$ varies with mass when $m_1,\ldots,m_n \gg m_0$. In particular, what is the value of the homogeneity exponent $\gamma_n$ defined through $C_n(\Gamma m_1,\ldots,\Gamma m_n) =\Gamma^{-\gamma_n} C_n(m_1,\ldots,m_n)$? As for the density, dimensional analysis alone is insufficient. The formula \begin{eqnarray} \nonumber C_{n}(m_1,\ldots,m_n) &=& c_n\,J^{\gamma_n-n} D^\frac{(3n-2\gamma_n)d}{d-2} \lambda^\frac{(d+2)\gamma_n-(2d+2)n}{d-2}\\ \label{eq-CnDimAnalysis} & &\ \ \ \ (m_1\ldots m_n)^{-\frac{\gamma_n}{n}}, \end{eqnarray} is dimensionally consistent for any exponent $\gamma_n$ where $c_n$ is a dimensionless constant. We are again assuming that the large mass behaviour of the $C_n$'s is independent of $m_0$. The simplest way to obtain a theoretical prediction for the mass dependence of the $C_n$'s is to use a self-similarity conjecture similar to Kolmogorov's 1941 conjecture about the statistics of velocity increments in hydrodynamic turbulence. Assume that $C_n$ depends only on the masses $m_i$, the mass flux $J$, and either the reaction rate, $\lambda$, or the diffusion coefficient, $D$. Depending on which assumption we make, dimensional analysis allows us to determine the mass dimension of $C_n$. For the reaction limited case we obtain \begin{equation} \gamma_n^{\rm KZ} = \frac{3}{2} n, \end{equation} and for the diffusion limited case, \begin{equation} \gamma_n^{\rm K41} = \frac{2d+2}{d+2} n. \label{kolmogorovanswer} \end{equation} Note that in both cases, the dependence of $\gamma_{n}$ on the index $n$ is linear, reflecting the assumed self-similarity of the statistics of the local mass distribution.
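To make these conjectures concrete, the following minimal sketch (our own; simple arithmetic, with the one-loop result of eq. (\ref{finalanswer}) included for comparison) evaluates the first few exponents in $d=1$:
\begin{verbatim}
# Sketch: exponent predictions in d=1 for n = 1..4.
d = 1
eps = 2 - d
for n in range(1, 5):
    g_kz  = 1.5 * n                            # reaction-limited (K-Z)
    g_k41 = (2 * d + 2) / (d + 2) * n          # diffusion-limited (K41)
    g_rg  = g_k41 + n * (n - 1) * eps / (2 * (d + 2))  # one-loop RG
    print(n, g_kz, round(g_k41, 3), round(g_rg, 3))
\end{verbatim}
Note that at $n=2$ the one-loop value in $d=1$ is $8/3+1/3=3$, in agreement with the exact result $\gamma_2=3$ discussed elsewhere in this paper.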
When $n=1$, $\gamma_{1}^{\rm K41} =(2 d+2)/(d+2)$ coincides with the result of an exact computation of $\gamma_1$ for $d<2$ \cite{RM2000}, so we expect that the K41 conjecture is the appropriate theory in $d<2$. In $d>2$ it is known that $\gamma_1=3/2$, hence $\gamma_{1}^{\rm KZ} = 3/2$ is the correct scaling for the density. Therefore the KZ conjecture is appropriate in higher dimensions. For $d>2$, the statistics of the MM should be accurately described by mean field theory (KZ) and the self-similarity conjecture should hold. In this paper we will concern ourselves with $d\leq 2$. The K41 self-similarity conjecture assumes that $C_n$ does not depend on $\lambda$, $m_0$, the lattice spacing, and the box size $\Delta V dm_1\ldots dm_n$. The lack of dependence on the lattice spacing is expected due to the renormalizability of the effective field theory describing the MM below two dimensions. We will however find an anomalous dependence of correlation functions on a length scale depending on the other parameters and the box size which leads to a violation of self-similarity. \subsection{Motivation} \label{subsec-FT-motivation} In order to check the validity of the Kolmogorov conjecture for the mass model for $n>1$, we need to go beyond dimensional analysis. We shall do this by constructing an effective field theory that provides us with a continuum description of the model. We are helped by the fact that the effective field theory which describes the mass model is closely related to a much simpler and well studied theory which describes the $A+A\to A$ reaction-diffusion model. We shall then use standard techniques of statistical field theory to extract information about the mass model from this theory. As is well known, this model has critical dimension 2. In dimensions greater than 2, a mean field description, characterised by K--Z scaling, works. In $d\leq 2$, this mean field description breaks down and correlations between particles become important. We shall show how to take into account these correlations and we demonstrate how they lead to the breakdown of self-similarity in $d\leq 2$ by calculating the scaling behaviour of the correlation functions, $C_n$, for large masses. Unlike for the case of hydrodynamic turbulence, such an analysis is possible for the mass model. This is because in $d\leq 2$, the large mass statistics of the model are governed by a perturbative fixed point of the renormalization group flow in the space of coupling constants of the model. The fixed-point coupling is of order $\epsilon=2-d$, which allows one to compute the relevant scaling exponents in the form of an $\epsilon$-expansion. \subsection{Effective Action for the Mass Model} \label{subsec-FT-effectiveAction} Using Doi's formalism, it is possible to construct an effective field theory of the mass model. The steps in the procedure are as follows: \begin{enumerate} \item Write a {\em master equation} for the time evolution of ${\mathbf P}(\{N_t(\V{x}_i,m)\})$, the probability of finding the system in a given configuration, $\{N_t(\V{x}_i,m)\}$. The master equation is linear and first order in time. \item Introduce creation and annihilation operators, $a_{i,m}$ and $a_{i,m}^\dagger$, which create and destroy particles of mass $m$ at site $\V{x}_i$. Then convert the master equation into a Schroedinger equation : \begin{displaymath} \dd{ }{t}\, \left|\psi(t)\right> = -H[a_{i,m},a_{i,m}^\dagger]\, \left|\psi(t)\right>, \end{displaymath} using Doi's formalism (second quantisation) \cite{doi1976a,doi1976b}.
\item Use the Feynman trick to derive a functional integral measure which converts the second quantised Schroedinger equation into a continuous field theory. \end{enumerate} These steps are well described in the context of reaction-diffusion models in \cite{cardyhttp,lee1994}. For the model of interest here, the procedure is similar to the reaction-diffusion case with the algebra made slightly more complicated by the necessity to keep track of a mass index for each particle. Explicit formulae for the master equation and Hamiltonian operator of the MM are given in appendix \ref{app-MMHamiltonian}. After going over to a path integral formulation of the Schroedinger equation we can express the average density and other correlation functions as path integrals. Further detail on this procedure can be found in appendix \ref{sec-shiftExplanation}. For example the average density, $N_t(m) = \langle \phi_{\V{x},m}(\tau)\rangle$, is given, in the notation of appendix \ref{sec-shiftExplanation}, by \begin{equation} \label{eq-TMFieldTheory} N_t(m) = \int {\mathcal D}\phi {\mathcal D}\phi^* \phi_{\V{x},m}(\tau)\, e^{-S_{\rm MM}[\phi,\phi^*, D,J,t,\lambda]}, \end{equation} where \begin{eqnarray} \nonumber S_{\rm MM}[\phi, \phi^*, D,J,t,\lambda] &=& \int_0^t d\tau \int d^d\V{x}\,dm\ \left\{\phi^*\partial_t\phi \right. \\ & & \hspace{2.0cm} \left.+ H[\phi, \phi^*]\right\}, \label{eq-SMM} \end{eqnarray} and \begin{eqnarray*} H[\phi, \phi^*] &=& D\,\nabla_{\V{x}}\phi_m^*\cdot\nabla_{\V{x}}\phi_m + \frac{J}{m}\delta(m-m_0)\phi_m^*\\ &-&\lambda\int dm_1dm_2 \left\{\delta(m_2-m-m_1)\right.\\ & &\left.\left[\phi^*_{m_2} - 2\phi^*_{m} - \phi^*_{m}\phi^*_{m_1}\right] \phi_{m}\phi_{m_1}\right\}. \end{eqnarray*} \subsection{Dimensional Analysis of the Effective Action} As for the case of stochastic aggregation without source \cite{oleg2001}, it is helpful to nondimensionalise the fields $\phi$ and $\phi^*$ and express the action in eq. (\ref{eq-SMM}) solely in terms of dimensionless quantities. Introduce dimensionless fields, $\bar{\phi}$ and $\bar{\phi}^*$, in eq. (\ref{eq-TMFieldTheory}) by the following rescalings: \begin{eqnarray*} \tau &\to& t\, \tau,\\ \V{x} &\to& \sqrt{D t}\, \V{x},\\ m &\to& \lambda J t^2\, m,\\ \phi&\to& \frac{1}{\lambda^2 J t^3}\, \bar{\phi},\\ \phi^*&\to& \bar{\phi}^*, \end{eqnarray*} to obtain \begin{equation} \label{eq-TMdimensionless} N_t(m) = \int {\mathcal D}\phi {\mathcal D}\phi^* \phi_{\V{x},m}(\tau)\, e^{-\frac{1}{g}S_{\rm MM}[\bar{\phi},\bar{\phi}^*, 1,1,1,1]}, \end{equation} where the dimensionless interaction coefficient is \begin{equation} \label{eq-g} g = D^\frac{d}{2}\, t^\frac{d-2}{2}\, \lambda. \end{equation} The fact that $g\to 0$ as $t\to\infty$ for dimensions greater than 2 and $g\to\infty$ as $t\to\infty$ for dimensions less than 2 expresses the well known fact that the critical dimension of the mass model is 2. We shall have much more to say about this later. 
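As a quick illustration of this behaviour (a sketch of our own, with illustrative unit values $D=\lambda=1$), one can tabulate $g$ from eq. (\ref{eq-g}) as a function of time in different dimensions:
\begin{verbatim}
# Sketch: time dependence of g = D**(d/2) * t**((d-2)/2) * lam, eq. (eq-g),
# with illustrative values D = lam = 1.
D = lam = 1.0
for d in (1, 2, 3):
    for t in (1.0, 1e2, 1e4):
        print(d, t, D ** (d / 2) * t ** ((d - 2) / 2) * lam)
# d=1: g grows like sqrt(t), so fluctuations dominate at late times;
# d=2: g is marginal at this level; d=3: g decays, so mean field becomes exact.
\end{verbatim}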
\subsection{The Stochastic Smoluchowski Equation} \label{subsec-FT-SSE} It is possible to establish an exact map between the field theory in eq.~(\ref{eq-TMFieldTheory}) and the following stochastic integro-differential equation, \cite{lee1994, oleg2001}: \begin{eqnarray} && \left(\frac{\partial}{\partial t} -D \nabla^2 \right) \phi(m) = \lambda \int_{0}^{m} dm' \phi(m') \phi(m-m') \nonumber\\ &&- 2\lambda \phi(m) N +\frac{J}{m_{0}}\delta (m-m_{0}) +i\sqrt{2\lambda}\phi(m) \eta(\vec{x},t), \label{sse} \end{eqnarray} where $N=\int_{0}^{\infty} dm'\phi(m')$, $i=\sqrt{-1}$, and $\eta(\vec{x},t)$ is white noise in space and time: \begin{equation} \langle \eta(\vec{x},t) \eta(\vec{x}',t')\rangle =\delta(t-t')\delta^d (\vec{x}-\vec{x}') . \end{equation} The technical details of this mapping can be found in appendix \ref{sec-shiftExplanation}. Without the noise term, one recognises eq. (\ref{sse}) as the mean field (Smoluchowski) equation of the model. Thus, all fluctuation effects are encoded in the imaginary multiplicative noise term. \subsection{Correspondence with $A+A \to A$ Model} \label{subsec-FT-2A2A} Eq. (\ref{sse}) simplifies after taking the Laplace transform with respect to the mass variable \cite{oleg2001}. Let \begin{equation} R_{\mu} (\vec{x},t)=\int_{0}^{\infty} \!\!\! dm \phi(\vec{x},m,t)-\int_{0}^{\infty} \!\!\! dm \phi(\vec{x},m,t) e^{-\mu m}. \label{eq-defnR_mu} \end{equation} Then, \begin{equation} \left(\!\frac{\partial}{\partial t}\! -\! D \nabla^2\! \right)\! R_{\mu}(\vec{x} ,t) = -\lambda R_{\mu}^2+\frac{j_\mu}{m_{0}}+ 2 i \sqrt{\lambda}R_{\mu}(\vec{x},t)\eta(\vec{x},t), \label{sre} \end{equation} where $j_\mu=J(1-e^{-\mu m_{0}})$. In terms of the field $R_{\mu}(\vec{x},t)$, eq. (\ref{sre}) becomes a stochastic version of the rate equation for the $A+A \rightarrow A$ reaction in the presence of a source. Hence, the computation of the average mass distribution in the mass model reduces to solving a one-species particle system with a $\mu$-dependent source and then computing the inverse Laplace transform with respect to $\mu$. For example, to compute the average density, $\langle N_m(t) \rangle$, for the mass model, we first calculate $\langle R_\mu\rangle$, the average of the solution of eq. (\ref{sre}) with respect to the noise, $\eta(\vec{x},t)$. We then take the inverse Laplace transform with respect to $\mu$ and obtain the density from eq. (\ref{eq-defnR_mu}). By applying the Martin--Siggia--Rose (MSR) procedure \cite{MSR} to eq. (\ref{sre}), we can write $\langle R_\mu\rangle$ as a functional integral : \begin{equation} \langle R_\mu\rangle = \int {\mathcal D}R_\mu {\mathcal D}\widetilde{R}_\mu\ R_\mu\, e^{-S_{\rm RD}[R_\mu,\widetilde{R}_\mu]}, \end{equation} where the effective action for the reaction-diffusion system described by eq. (\ref{sre}) is \begin{eqnarray} \nonumber S_{\rm RD}[R_\mu,\widetilde{R}_\mu] &=& \int d\V{x}dt \left[ \vphantom{ \frac{j}{m_0}} \widetilde{R}_\mu(\partial_t-D\Delta)R_\mu + \lambda \widetilde{R}_\mu R_\mu^2 \right.\\ & & \left. + \lambda \widetilde{R}_\mu^{2}R_\mu^2 - \frac{j}{m_0}\widetilde{R}_\mu \right]. \label{eq-Srd} \end{eqnarray} In order to compute higher order correlation functions $C_{n}(m,t)$, we need to know correlation functions of the form $\langle R_{\mu_{1}}(\vec{x},t) R_{\mu_{2}}(\vec{x},t) \ldots R_{\mu_{n}}(\vec{x},t)\rangle$. These are non-trivial, as the stochastic fields $R_{\mu}(\vec{x},t)$ are correlated for different values of $\mu$ via the common noise term in eq. (\ref{sre}).
To clarify what is meant by this, we apply the MSR procedure to two copies of eq. (\ref{sre}) describing the evolution of $R_{\mu_1}$ and $R_{\mu_2}$ respectively to obtain a functional integral representation for $\langle R_{\mu_1} R_{\mu_2}\rangle$. This gives : \begin{eqnarray} \nonumber \langle R_{\mu_1} R_{\mu_2} \rangle &=& \int {\mathcal D}R_{\mu_1} {\mathcal D} \widetilde{R}_{\mu_1}{\mathcal D}R_{\mu_2} {\mathcal D} \widetilde{R}_{\mu_2}\ R_{\mu_1} R_{\mu_2} \\ \nonumber & &\times e^{-S_{\rm RD}[R_{\mu_1},\widetilde{R}_{\mu_1}]}\, e^{-S_{\rm RD}[R_{\mu_2},\widetilde{R}_{\mu_2}]}\\ \label{eq-2pointAction}& &\times e^{-\int d\V{x}dt\ 2\lambda \widetilde{R}_{\mu_1} \widetilde{R}_{\mu_2} R_{\mu_1} R_{\mu_2}}. \end{eqnarray} The point to note here is that the path integral measure for correlations of $R_\mu$ for {\em different} values of $\mu$ does not factorise owing to the presence of the last term in eq. (\ref{eq-2pointAction}). Thus, to compute $n$-point correlation functions in the MM, one needs to analyse a system of $n$ stochastic rate equations of $A+A\rightarrow A$ theory coupled via common noise terms of the form shown in eq. (\ref{eq-2pointAction}). Some detailed explanation of the physical interpretation of higher order correlation functions in terms of the probability of multi-particle configurations in the particle system is provided in appendix \ref{app-SSE}. \subsection{Feynman Rules} \label{subsec-FT-FeynmanRules} \input ./feynmanRules.eps.tex Solving eq. (\ref{sre}) perturbatively in $\lambda$ and $j$, and then averaging over noise, one can derive the set of Feynman rules for the computation of correlation functions. Alternatively they can be written down directly from the action, eq. (\ref{eq-Srd}). See \cite{cardyhttp} for the details of the procedure. However care must be taken to include the ``extra'' vertex which arises when computing correlations between fields with different $\mu$ indices. The Feynman rules are summarised in fig. \ref{fig1} with time increasing from right to left. The slightly more complicated prefactor for the quartic vertex takes into account the aforementioned ``extra'' vertex. The propagator is just the regular diffusive Green's function which, in $d$ spatial dimensions, is \begin{equation} G_0(\V{x}_2-\V{x}_1, t_2-t_1) = \left(4\pi(t_2-t_1)\right)^{-\frac{d}{2}}\ e^{-\frac{\left|\V{x}_2-\V{x}_1\right|^2}{4(t_2-t_1)}}, \end{equation} or \begin{equation} \label{eq-diffGreenFn} \hat{G}_0(\V{k},t) = (2\pi)^{-\frac{d}{2}}\ e^{-k^2 t}, \end{equation} in the momentum-time representation usually used in computations. The one-point function, $\langle R_\mu\rangle$, is then given by the sum of all diagrams constructed from the building blocks shown in fig. \ref{fig1} with a single outgoing line. Likewise, the $n$-point correlation function $\langle R_{\mu_{1}}(\vec{x}_{1},t_{1})$ $R_{\mu_{2}}(\vec{x}_{2},t_{2})$ $\ldots R_{\mu_{n}}(\vec{x}_{n},t_{n})\rangle$ is given by the sum of the contributions of all diagrams which have $n$-outgoing lines. In section \ref{sec-pertExpansion} we shall turn to actual computations.
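As a numerical sanity check on the propagator (a self-contained sketch of our own in $d=1$, with $D=1$ and an arbitrary grid), the real-space heat kernel and the momentum-time form of eq. (\ref{eq-diffGreenFn}) can be verified to be a Fourier pair, up to the $(2\pi)^{-d/2}$ normalisation convention:
\begin{verbatim}
import numpy as np

# Sketch: verify that G0(x,t) = (4*pi*t)**(-1/2) * exp(-x**2/(4*t)) in d=1
# has continuum Fourier transform exp(-k**2 * t), i.e. the momentum-time
# propagator up to the (2*pi)**(-d/2) convention factor.
t, N, L = 0.3, 4096, 80.0
dx = L / N
x = (np.arange(N) - N // 2) * dx                 # grid containing x = 0
G = (4 * np.pi * t) ** -0.5 * np.exp(-x ** 2 / (4 * t))
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
Ghat = np.fft.fft(np.fft.ifftshift(G)) * dx      # Riemann sum for the FT
print(np.max(np.abs(Ghat - np.exp(-k ** 2 * t))))  # ~1e-13: a Fourier pair
\end{verbatim}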
\section{Introduction} \label{sec-introduction} \input intro.tex \section{Definition of the Model} \label{sec-model} \input model.tex \section{Dimensional Analysis and Self-Similarity Conjectures} \label{sec-dimensionalAnalysis} \input dimensions.tex \section{Field Theoretic Description of the Mass Model} \label{sec-fieldTheory} \input fieldTheoreticFormulation.tex \section{Numerical Simulations and Multiscaling} \label{sec-numerics} \input numerics.tex \section{Perturbative Expansion for Correlation Functions} \label{sec-pertExpansion} \input perturbationTheory.tex \section{Renormalisation Group Analysis for $d<2$} \label{sec-RG} \input RG.tex \section{Renormalised Smoluchowski Equation} \label{sec-RSE} \input RSE.tex \section{Conclusions} \label{sec-conclude} \input conclusion.tex \begin{acknowledgments} {CC acknowledges the support of Marie--Curie grant HPMF-CT-2002-02004. RR acknowledges the support of NSF grant DMR-0207106.} \end{acknowledgments} \subsection{Numerical Simulations of the Mass Model in d=1} \label{sec-numericalResults} \begin{figure} \includegraphics[width=12.0cm]{numerics.eps} \caption{\label{fig-numerics} $\gamma_n$ as a function of $n$ in one dimension. The straight line shows the Kolmogorov answer [eq. (\ref{kolmogorovanswer})]. The dotted line shows eq. (\ref{finalanswer}) with $\epsilon=1$ and terms of order $\epsilon^2$ and higher set to zero. The values $\gamma_0$, $\gamma_1$ and $\gamma_2$ are exact. $\gamma_3$ and $\gamma_4$ were obtained by Monte Carlo simulations performed on a lattice of size $10^5$ and averaged over $2\times 10^7$ Monte Carlo time steps with $J=4 D$. } \end{figure} We first look at the results of Monte Carlo simulations of the MM which confirm that there is indeed some interesting behaviour which requires explanation. In particular, numerical simulations show a breakdown of self-similarity in the mass model in one dimension and multiscaling of the correlation functions, $C_n$. The results are shown in fig. \ref{fig-numerics}. \subsection{Constant Flux Relation - Analytic Confirmation of Multiscaling} We know that the K41 hypothesis works for $n=1$. From fig. \ref{fig-numerics}, it is clear that the Kolmogorov scaling breaks down for $n>1$. It is also possible to analytically confirm that $\gamma_n\neq \gamma^{K41}_n$ in $d<2$ by computing $\gamma_2$. From the definition of $\gamma_2$, it follows that \begin{eqnarray} \Phi_{2}(m_{1}, m_{2})=\bigg( \frac{1}{m_{1}m_{2}}\bigg)^{\gamma_2/2} \phi\bigg( \frac{m_{1}}{m_{2}}\bigg),\label{sc2} \end{eqnarray} where $\phi$ is an unknown scaling function which satisfies $\phi (x)= \phi (1/x)$ due to a symmetry. Our aim is to compute $\gamma_{2}$ without using the $\epsilon$-expansion which we shall use in section \ref{sec-RG} to compute $\gamma_{n}$ for general $n$. As we are interested in $\Phi_{2}(m_{1}, m_{2})$ for $m_{1}, m_{2}>0$, \begin{eqnarray} \Phi_{2}(m_{1}, m_{2})=\int_{\sigma-i\infty}^{\sigma+i\infty} \int_{\sigma-i\infty}^{\sigma+i\infty} d\mu_{1} d\mu_{2} \langle R_{\mu_{1}} R_{\mu_{2}} \rangle\, e^{m_1\mu_1}\,e^{m_2\mu_2}, \end{eqnarray} where $R_{\mu}$ solves eq. (\ref{sre}). Due to eq. (\ref{sc2}), \begin{eqnarray} \langle R_{\mu_{1}} R_{\mu_{2}} \rangle=\bigg( \frac{1}{\mu_{1}\mu_{2}} \bigg)^{1-\gamma_2/2}\psi\bigg( \frac{\mu_{1}}{\mu_{2}} \bigg), \label{sc2l} \end{eqnarray} where $\psi$ is an unknown scaling function. To find the large $m_{1}, m_{2}$ asymptotics of $\Phi$, we need to know the small $\mu_{1}, \mu_{2}$ asymptotics of $\langle R_{\mu_{1}} R_{\mu_{2}} \rangle$.
Averaging eq. (\ref{sre}) with respect to noise and setting $\partial_{t} \langle R_{\mu} \rangle=0$ in the large time limit, we find that $\langle R_{\mu} R_{\mu} \rangle =\frac{j_\mu}{\lambda m_{0}} \approx \frac{J\mu}{\lambda}$ for $\mu \ll m_{0}^{-1}$. Comparing this result with eq. (\ref{sc2l}) we find that $\gamma_2=3$. Note that $\gamma_2$ does not depend on the dimension, $d$, of the lattice. Therefore, it is correctly predicted by mean field theory. The non-renormalization of $\gamma_2$ by diffusive fluctuations can be explained by mass conservation or, more precisely, by the constancy of the average flux of mass in mass space; see \cite{CRZ1} for more details. Here, we simply wish to point out that the exact answers for $\gamma_1$ and $\gamma_2$ establish multiscaling non-perturbatively: the points $(0,0)$, $(1,\gamma_1)$ and $(2, \gamma_2)$ do not lie on the same straight line. Due to its close connection with mass conservation, the law $\gamma_2=3$ is a counterpart of the $4/5$ law of Navier-Stokes turbulence. Recall that the $4/5$ law states that the third order longitudinal structure function of the velocity field scales in the inertial range as the first power of the separation. It is interesting to notice that Kolmogorov theory respects the $4/5$ law in Navier-Stokes turbulence, but violates $\gamma_2=3$ in the MM. \subsection{Mean Field Analysis} \input ./meanField.eps.tex The mean field theory associated with the field theory described by the effective action, eq. (\ref{eq-SMM}), can be thought of in several complementary ways. Let us suppose that the reaction rate, $\lambda$, is the smallest parameter in the problem. This means that the dimensionless interaction coefficient, $g$, given by eq. (\ref{eq-g}) is small. In this case, the path integral in eq. (\ref{eq-TMdimensionless}) can be computed in the limit $g\to 0$ using the saddle point method. In this limit, $\phi$ satisfies the Euler-Lagrange equation (expressed in dimensional variables) : \begin{eqnarray} && \left(\frac{\partial}{\partial t} -D \nabla^2 \right) \phi(m) = \lambda \int_{0}^{m} dm' \phi(m') \phi(m-m') \nonumber\\ &&- 2\lambda \phi(m) N +\frac{J}{m_{0}}\delta (m-m_{0}), \label{eq-EL} \end{eqnarray} which we recognise as the mean field equation derived for classical aggregation problems by Smoluchowski. Now, if $g\to 0$ then it follows that the noise term disappears from the non-dimensionalised version of the stochastic Smoluchowski equation, eq. (\ref{sse}), leaving us with a deterministic equation for the density, which is again the classical Smoluchowski equation. For readers interested in the analogy between stochastic aggregation and wave turbulence, the mean field Smoluchowski equation is the analogue of the kinetic equation. In \cite{CRZ1} we studied in great detail the stationary state of eq. (\ref{eq-EL}) and showed that the spectrum \begin{equation} \label{eq-KZ} N_m = \sqrt{\frac{J}{4\pi\lambda}}\, m^{-\frac{3}{2}}, \end{equation} is the exact stationary solution as $t\to\infty$. This solution carries a constant flux, $J$, of mass from small masses to large. This is the Kolmogorov--Zakharov spectrum of the mass model which we identified from dimensional considerations in section \ref{sec-dimensionalAnalysis} as corresponding to a reaction limited regime. When do we expect the mean field answers to be correct? The K--Z solution is established in the limit of large times. Since the mean field results become exact in the limit $g\to 0$, eq.
(\ref{eq-g}) implies that the spectrum (\ref{eq-KZ}) should be correct for $d>2$. For $d\leq 2$ the mean field approximation quickly breaks down and we must take into account the effect of fluctuations. This will be the main objective of the rest of this article. Let us now identify clearly the terms in the diagrammatic expansion which give the mean field answers so that we can see how to use our formalism to compute the fluctuations about the mean field. Since the mean field kinetic equation corresponds to the deterministic limit of the stochastic Smoluchowski equation, the corresponding field theory has no loops. Therefore we expect the mean field answers for the average density to correspond to the sum of all tree diagrams with a single outgoing line. Let us now analyse these. Let $R_{mf}$, denoted by a thick line with a cross, be the contribution to $R$ from all tree level diagrams. The equation satisfied by $R_{mf}$ is shown in diagrammatic form in fig. \ref{fig2}A. In equation form, it reads \begin{equation} \frac{d R_{mf}}{dt} = \frac{j_\mu}{m_0} - \lambda R_{mf}^2. \end{equation} This is easily solved to give \begin{equation} R_{mf}(t)= \sqrt{\frac{j_\mu}{m_0 \lambda}} \tanh\left( \sqrt{\frac{j_\mu \lambda}{m_0}} t \right) \stackrel{t\rightarrow \infty}{\longrightarrow} \sqrt{\frac{j_\mu}{m_0 \lambda}}. \end{equation} Performing the inverse Laplace transform in the limit $\mu\to 0$ we find that as $m \to \infty$, \begin{displaymath} N_m \sim \sqrt{\frac{J}{4\pi\lambda}}\, m^{-\frac{3}{2}}, \end{displaymath} and recover the K--Z spectrum as we should. Both the constant and the exponent agree with those obtained by the Zakharov transformation of the mean field kinetic equation, confirming that our approach makes sense. It is convenient to define $G^{\rm mf}_\mu(x_2 t_2;x_1 t_1)$ as the propagator that includes all the tree level diagrams. The equation obeyed by it is shown in fig. \ref{fig2}B. The solution is \begin{eqnarray} G^{\rm mf}_\mu({\bf 2}; {\bf 1})&=& G_{0}({\bf 2}; {\bf 1}) \left[\frac {\cosh \sqrt{\frac{j \lambda}{m_0}} t_1 } {\cosh \sqrt{\frac{j \lambda}{m_0}} t_2 } \right]^2,\\ &\stackrel{t_{1,2}\rightarrow \infty}{\longrightarrow} & G_{0}({\bf 2}; {\bf 1}) e^{-\Omega (t_2-t_1)}, \end{eqnarray} where $G_{0}$ is the Green's function of the linear diffusion equation, eq. (\ref{eq-diffGreenFn}), and \begin{equation} \Omega_\mu = 2\sqrt{\frac{j_\mu\lambda}{m_{0}}}, \end{equation} is the inverse of the mean field response time. \subsection{Loop Expansion} In order to take into account fluctuations about the mean field answer we need to compute diagrams with loops. Ordering the terms in the perturbation series according to the number of loops is known as a {\em loop expansion}. Using the mean field density and response functions computed above simplifies the task of computing the sum of all diagrams with a given number of loops. We now demonstrate by power counting that the loop expansion of the mean mass distribution corresponds to a weak coupling expansion with respect to $\lambda$. The quantity $\langle R(\vec{x},t) \rangle $ is given by the sum of all diagrams with one outgoing line built out of blocks shown in fig. \ref{fig1}. Consider such a diagram containing $L$ loops, $V$ vertices and $N$ $R_{mf}$-lines. The corresponding Feynman integral contains (in the mixed momentum-time representation) $d L$ momentum integrals and $V$ time integrals. Hence, the integration over all times and momenta produces the factor $\Omega^{-V+ d L/2}\sim \lambda^{-\frac{V}{2}+\frac{dL}{4}}$.
The $N$ $R_{mf}$-lines produce the factor $R_{mf}^{N}\sim \lambda^{-N/2}$. A factor $\lambda^{V}$ comes from $V$ vertices of the graph. Hence the corresponding Feynman integral is proportional to $\lambda^{-\frac{N}{2} +\frac{V}{2}+ \frac{dL}{4}}$. Note also that the number of triangular vertices in the graph is equal to $N-1$ and the number of quartic vertices is equal to the number of loops $L$. Thus the total number of vertices is given by $V=L+N-1$. Therefore, any $L$-loop graph contributing to the average mass distribution is proportional to $\lambda^{-\frac{1}{2}+\frac{L}{2}(1+\frac{d}{2})}$. We conclude that the loop expansion corresponds to the perturbative expansion of $R$ around the mean field value with the parameter $\lambda^{\frac{2+d}{4}}$. \subsection{Breakdown of Loop Expansion} \label{sec-breakdownOfLoopExp} The conditions under which the loop corrections to the mean field answer can be neglected are most simply derived using dimensional analysis. The scale of diffusive fluctuations is given by the only constant of dimension length which can be constructed out of $\mu$ and $J$: $L_{D}= (\mu J)^{-1/(d+2)}$. The dimensionless expansion parameter in the loop expansion above is $g_{0}(\mu)=\lambda L_{D}^{\epsilon}$, where $\epsilon=2-d$. The large mass behaviour of $N_m$ is determined by the small-$\mu$ behaviour of $R_{\mu}$. In $d< 2$, $g_{0}$ goes to infinity in the limit $\mu \rightarrow 0$ and the loop expansion breaks down. Thus a re-summation of the loop expansion is needed in order to extract the large-$m$ behaviour of $N_m$ in low dimensions. \subsection{Calculation of One Loop Corrections to Mean Field Theory} \label{sec-oneLoopAnswers} \input ./oneLoopAnswers.eps.tex Let us now compute $\langle R_\mu\rangle$ and $\langle R_{\mu_1} R_{\mu_2} \rangle$ to one loop order. The diagrams are shown in fig. \ref{fig-oneLoop}. The corresponding algebraic expressions are evaluated by dimensional regularisation in dimension $d$, no longer necessarily an integer. \begin{eqnarray} \nonumber \langle R_\mu\rangle &=& R^{\rm mf}_\mu + R^{(1)}_\mu + \ldots\\ \label{eq-densityOneLoop}&=& \sqrt{\frac{j_\mu}{\lambda m_0}} \left[1+\frac{\lambda^2 \Omega_\mu^\frac{d-4}{2}}{(4\pi)^\frac{d}{2}} \Gamma\left(\frac{\epsilon}{2}\right) \sqrt{\frac{j_\mu}{\lambda m_0}} + \ldots\right], \end{eqnarray} where $\ldots$ represent terms of higher order in $\lambda$ which necessarily have more loops. In this formula we have introduced the quantity $\epsilon$, defined as \begin{equation} \epsilon = 2-d, \end{equation} to measure the deviation of the dimension of the system from the critical dimension. The four diagrams for $\langle R_{\mu_1} R_{\mu_2} \rangle$, shown in fig. \ref{fig-oneLoop}, give the following respective contributions : \begin{eqnarray} \nonumber \langle R_{\mu_1}R_{\mu_2}\rangle &=& R^{\rm mf}_{\mu_1} R^{\rm mf}_{\mu_2} - \frac{2 \lambda}{(8\pi)^\frac{d}{2}} \frac{R^{\rm mf}_{\mu_1}R^{\rm mf}_{\mu_2}}{(\Omega_{\mu_1}\!\!+\! \Omega_{\mu_2})^\frac{\epsilon}{2}}\Gamma\left(\frac{\epsilon}{2}\right) \\ &&\label{eq-2pointOneLoop} + R^{(1)}_{\mu_1}R^{\rm mf}_{\mu_2} + R^{\rm mf}_{\mu_1} R^{(1)}_{\mu_2}\ldots,\\ \nonumber &=& \langle R_{\mu_1}\rangle \langle R_{\mu_2}\rangle \left[1-\frac{2 \lambda}{(8\pi)^\frac{d}{2}} \frac{1} {(\Omega_{\mu_1}\!\!+\!\Omega_{\mu_2})^\frac{\epsilon}{2}} \Gamma\left(\frac{\epsilon}{2}\right) + \ldots\right], \end{eqnarray} an expression which is correct to one loop order.
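Before proceeding, it is useful to record the small-$\epsilon$ behaviour of the one-loop coefficient just obtained (a short check of our own): since $\Gamma(\epsilon/2) = 2/\epsilon + O(1)$ and $(\Omega_{\mu_1}+\Omega_{\mu_2})^{-\epsilon/2} = 1 + O(\epsilon)$, \begin{displaymath} \frac{2 \lambda}{(8\pi)^\frac{d}{2}} \frac{\Gamma\left(\frac{\epsilon}{2}\right)}{(\Omega_{\mu_1}+\Omega_{\mu_2})^\frac{\epsilon}{2}} \longrightarrow \frac{2\lambda}{8\pi}\cdot\frac{2}{\epsilon} = \frac{\lambda}{2\pi\epsilon}, \end{displaymath} which matches the $g/(2\pi\epsilon)$ singularity quoted in eq. (\ref{eq-2pointOneLoop3}).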
Note that the second diagram in the expression for $\langle R_{\mu_1} R_{\mu_2} \rangle$ describes the correlation between $R_\mu$ fields for different values of $\mu$ and prevents the factorisation of the 2-point function into a product of 1-point functions. We shall need these expressions again when we use RG to resum the loop expansion. For a given mass scale, $m$, there is a corresponding $\mu$ scale, $1/m$, and a corresponding length scale, $L_\mu$, defined as \begin{equation} \label{eq-L} L_\mu = \left(\frac{j_\mu}{D m_0}\right)^{-\frac{1}{d+2}}. \end{equation} At this point, let us also define the dimensionless reaction rate, $g$, as \begin{equation} \label{eq-dimensionlessg} g = \lambda L_\mu^\epsilon. \end{equation} In what follows, it shall be convenient to express eq. (\ref{eq-densityOneLoop}) and eq. (\ref{eq-2pointOneLoop}) in terms of $L_\mu$ and $g$. This gives \begin{eqnarray} \label{eq-densityOneLoop2}\langle R_\mu\rangle = L_\mu^{\epsilon-2}\frac{1} {\sqrt{g}} \left[1+\frac{1}{4(2\pi)^{1-\frac{\epsilon}{2}}} \Gamma\left(\frac{\epsilon}{2}\right)\, g^{1-\frac{\epsilon}{4}} + \ldots\right], \end{eqnarray} and \begin{eqnarray} \label{eq-2pointOneLoop2}\langle R_{\mu_1}R_{\mu_2}\rangle &=& \langle R_{\mu_1}\rangle \langle R_{\mu_2}\rangle \left[1-\frac{g^{1-\frac{\epsilon}{4}}}{(4\pi)^{1-\frac{\epsilon}{2}}} \Gamma\left(\frac{\epsilon}{2}\right)\right.\\ \nonumber && \times \left. \left(\left(\frac{L_{\mu_1}}{L_\mu}\right)^{\frac{\epsilon-4}{2}}\!\!\!\!\!+\left(\frac{L_{\mu_2}}{L_\mu}\right)^{\frac{\epsilon-4}{2}}\right)^{-\frac{\epsilon}{2}}+\!\!\! \ldots\right]. \end{eqnarray} Note that the $\mu$ dependence of this expression is illusory since $g$ also depends on $\mu$. To study the behaviour of the corrections to the mean field answers which we have just calculated, we need to study the large $m$ behaviour of the Laplace transforms with respect to the $\mu$'s of the expressions in eq. (\ref{eq-densityOneLoop}) and eq. (\ref{eq-2pointOneLoop}). A simple calculation shows that the second terms inside the square brackets in these expressions diverge as the $\mu$'s are taken to zero when $\epsilon>0$, signifying a breakdown of the loop expansion. This is as expected from the power counting argument of section \ref{sec-breakdownOfLoopExp}.
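To see this breakdown quantitatively (our own illustration, in arbitrary units $\lambda=J=D=1$ and using the small-$\mu$ form $j_\mu \approx J\mu m_0$), one can tabulate the dimensionless reaction rate of eq. (\ref{eq-dimensionlessg}) as $\mu\to 0$:
\begin{verbatim}
# Sketch: divergence of g = lam * L_mu**eps as mu -> 0 for d < 2, with
# L_mu = (J*mu/D)**(-1/(d+2)) from eq. (eq-L) in the small-mu limit.
lam = J = D = 1.0   # illustrative units
for d in (1.0, 1.5):
    eps = 2.0 - d
    for mu in (1e-2, 1e-4, 1e-6):
        L_mu = (J * mu / D) ** (-1.0 / (d + 2))
        print(d, mu, lam * L_mu ** eps)
# g grows without bound as mu -> 0, so the loop expansion must be resummed.
\end{verbatim}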
\section{Introduction} \label{sec:introduction} The field of extrasolar planet research has recently made a leap forward with the direct detection of extrasolar giant planets (EGPs). Using the Spitzer Space Telescope, \citet{Charbonneau05} and \citet{Deming05} have detected infrared photons from two transiting planets, TrES-1b and HD209458b, respectively. \citet{Chauvin04,Chauvin05} have reported the infrared imaging of an EGP orbiting the nearby young brown dwarf 2M1207 with VLT/NACO, whereas \citet{Neuhauser05} have collected evidence for an EGP companion to the T-Tauri star GQ Lup using VLT/NACO as well. Although there are claims that the direct detection of terrestrial planets could be performed from the ground with -- yet to come -- extremely large telescopes \citep{Angel03,Chelli05}, it is widely believed that success will be more likely in space. Direct detection is the key to spectroscopy of planetary atmospheres and discovery of biomarkers, namely indirect evidence of life developed at the planetary scale \citep[e.g.][]{DesMarais02}. Both NASA and ESA have space mission studies well underway to achieve this task. Darwin, the European mission to be launched in 2015, will be a thermal infrared nulling interferometer with three 3.5-m free-flying telescopes \citep{Karlsson04}. Terrestrial Planet Finder, the American counterpart, will feature two missions: an $8\!\times\!3.5$~m monolithic visible telescope equipped with a coronagraph (TPF-C) to be launched in 2015, and an analog to Darwin (TPF-I) to be launched in the 2015--2019 range \citep{Coulter04}. The direct detection of the photons emitted by a terrestrial planet is made very challenging by the angular proximity of the parent star, and by the very high contrast (i.e. luminosity ratio) between the planet and its star: about $10^6$ in the thermal infrared and about $10^{10}$ in the visible. Both wavelength ranges have their scientific merits and technical difficulties, and both of them are thought to be necessary for an unambiguous detection of habitability and signs of life \citep[e.g.][]{DesMarais02}. In this paper, we deal with the visible range only. In the visible, planet detection faces two fundamental noise sources: (i) quantum noise of the diffracted star light, and (ii) speckle noise due to the scattering of the star light by optical defects. \citet{Labeyrie95} proposed a technique based on \emph{dark speckles} to overcome speckle noise: random fluctuations of the atmosphere cause the speckles to interfere destructively and disappear at certain locations in the image, thus creating localized dark spots suitable for planet detection. The statistical analysis of a large number of images then reveals the planet as a spot persistently brighter than the background. \citet{Malbet95} proposed to use a deformable mirror (DM) instead of the atmosphere to make speckles interfere destructively in a targeted region of the image called \emph{search area} or \emph{dark hole} (DH or $\mathcal{H}$). Following the tracks of these authors, this paper discusses methods to reduce the speckle noise below the planet level by using a DM and an ideal coronagraph. However, unlike \citet{Malbet95}, we propose non-iterative algorithms, in order to limit the number of long exposures needed for terrestrial planet detection. We will refer to these methods as \emph{speckle nulling} techniques, as \cite{Trauger04} call them.
Technical aspects of this work are inspired by the High Contrast Imaging Testbed \cite[HCIT;][]{Trauger04}, a speckle-nulling experiment hosted at the Jet Propulsion Laboratory, specifically designed to test TPF-C related technology. After reviewing the process of speckle formation to establish our notations (\S\ref{sec:speckle_formation}), we derive two speckle nulling methods in the case of small aberrations (\S\ref{sec:speckle_nulling}). The speckle nulling phase is preceded by the measurement of the electric field in the image plane (\S\ref{sub:measurement}). The performance of both methods is then evaluated with one- and two-dimensional simulations (\S\ref{sec:simulations}), first with white speckle noise (\S\ref{sub:sim_white}), then with non-white speckle noise (\S\ref{sub:sim_real}). Various effects and instrumental noises are considered in \S\ref{sec:discussion}. Finally, we conclude and discuss some future work (\S\ref{sec:conclusion}). \section{Speckle formation} \label{sec:speckle_formation} This paper is written in the framework of Fourier optics considering a single wavelength, knowing that a more sophisticated theory (scalar or vectorial) in polychromatic light will eventually be needed. Fourier transforms (FTs) are signaled by a hat. Let us consider a simple telescope with an entrance pupil $\mathcal{P}$. In the pupil plane, we use the reduced coordinates $(u,v) = (x/\lambda,y/\lambda)$, where $(x,y)$ are distances in meters and $\lambda$ is the wavelength. We define the pupil function by \begin{equation} \label{eq:P} P(u,v) \equiv \left \{ \begin{array}{l} 1 \mbox{ if } (u,v) \in \mathcal{P}, \\ 0 \mbox{ otherwise.} \end{array} \right. \end{equation} Even in space, i.e. when not observing through a turbulent medium like the atmosphere, the optical train of the telescope is affected by phase and amplitude aberrations. Phase aberrations are wavefront corrugations that typically originate in mirror roughness caused by imperfect polishing, while amplitude aberrations are typically the result of a heterogeneous transmission or reflectivity. Moreover, Fresnel propagation turns phase aberrations into amplitude aberrations, and the reverse \citep[e.g.][]{Guyon05b}. Regardless of where they originate physically, all phase and amplitude aberrations can be represented by a complex aberration function $\phi$ in a re-imaged pupil plane, so that the aberrated pupil function is now $P e^{i \phi}$. The electric field associated with an incident plane wave of amplitude unity is then \begin{equation} \label{eq:E_pu} E(u,v) = P(u,v)\,e^{i \phi(u,v)}. \end{equation} Exoplanet detection requires that we work in a regime where aberrations are reduced to a small fraction of the wavelength. Once in this regime, we can replace $e^{i \phi}$ by its first order expansion $1 + i \phi$ (we will discuss in \S\ref{sub:discuss_linearity} the validity of this approximation). The electric field in the image plane being the FT of (\ref{eq:E_pu}), we get \begin{equation} \label{eq:E_im} \widehat{E}(\alpha,\beta) = \widehat{P}(\alpha,\beta) + i \, \widehat{P\phi}(\alpha,\beta), \end{equation} where $(\alpha,\beta)$ are angular coordinates in the image plane. The physical picture is as follows. The first term ($\widehat{P}$) is the direct image of the star. The second term ($\widehat{P\phi}$) is the field of speckles surrounding the central star image, where each speckle is generated by the equivalent of first-order scattering from one of the sinusoidal components of the complex aberration $\phi$.
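To make this picture concrete, consider for illustration a single small sinusoidal phase ripple, $\phi(u,v) = a \cos[2\pi(\alpha_0 u + \beta_0 v)]$ with $a \ll 1$. Its contribution to (\ref{eq:E_im}) is \begin{equation} i\,\widehat{P\phi}(\alpha,\beta) = \frac{ia}{2} \left[ \widehat{P}(\alpha-\alpha_0,\beta-\beta_0) + \widehat{P}(\alpha+\alpha_0,\beta+\beta_0) \right], \end{equation} i.e. a pair of faint copies of the stellar image at $\pm(\alpha_0,\beta_0)$, each with a peak intensity $(a/2)^2$ relative to the direct image.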
Each speckle is essentially a ghost of the central PSF. In the remainder of this paper, we focus on means to measure and correct the speckles in a coronagraphic image. Following \cite{Malbet95} we will leave out the unaberrated PSF term by assuming that it was canceled out by a coronagraph of some sort (see \citet{Quirrenbach05} for a review on coronagraphs). Thus we clearly separate the gain in contrast that can be obtained by reducing the diffracted light with the coronagraph on one hand, and by fighting the scattered light with the speckle nulling technique on the other hand. \section{Speckle nulling theory} \label{sec:speckle_nulling} The purpose of speckle nulling is to reduce the speckle noise in a central region of the image plane. This region, the dark hole, then becomes dark enough to enable the detection of companions much fainter than the original speckles. Speckle nulling is achieved by way of a servo system that has a deformable mirror as actuator. Because our sensing method requires DM actuation and is better understood with some knowledge of the command control theory, we first model the deformable mirror (\S\ref{sub:deformable_mirror}), then present two algorithms for the command control (\S\ref{sub:field_nulling} \& \ref{sub:energy_min}), and conclude with the sensing method (\S\ref{sub:measurement}). \subsection{Deformable mirror} \label{sub:deformable_mirror} The deformable mirror (DM) in \cite{Trauger03} consists of a continuous facesheet supported by $N\!\times\!N$ actuators arranged in a square pattern of constant spacing. This DM format is well adapted to either square or circular pupils, the only pupil shapes that we consider in this paper\footnote{Two square DMs can be assembled to accommodate an elliptical pupil such as the one envisioned for TPF-C.}. We assume that the DM is physically located in a plane that is conjugate to the entrance pupil. However, what we call DM in the following is the projection of this real DM in the entrance pupil plane. The projected spacing between actuators is denoted by $d$. We assume that the optical magnification is such that the DM projected size is matched to the entrance pupil, i.e. $Nd = D$, where $D$ is either the pupil side length or its diameter. The DM surface deformation in response to the actuation of actuator ${(k,l) \in \{0 \ldots N\!-\!1 \}^2}$ is described by an \emph{influence function}, denoted by $f_{kl}$. The total phase change introduced by the DM (DM phase function) is \begin{equation} \label{eq:psi} \psi(u,v) \equiv \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl}\,f_{kl}(u,v), \end{equation} where $a_{kl}$ are actuator strokes (measured in radians). Note that contrary to the complex aberration function $\phi$, the DM phase function is purely real. With an ideal coronagraph and a DM, the image-plane electric field formerly given by (\ref{eq:E_im}) becomes \begin{equation} \label{eq:E_im2} \widehat{E}'(\alpha,\beta) = i\,\widehat{P\phi}(\alpha,\beta) + i\,\widehat{P\psi}(\alpha,\beta). \end{equation} In the next two sections, we explore two approaches for speckle nulling. In \S\ref{sub:field_nulling}, we begin naively by trying to cancel $\widehat{E}'$. Because there is a maximum spatial frequency that the DM can correct for, the DH necessarily has a limited extension. Any energy at higher spatial frequencies will be aliased into the DH and limit its depth. Therefore, the DM cannot be driven to cancel $\widehat{E}'$, unless $\widehat{P\phi}$ is equal to zero outside the DH (i.e.
unless there are already no speckles outside the DH). With this in mind, we start over in \S\ref{sub:energy_min} with the idea that speckle nulling is better approached by minimizing the field energy. \subsection{Speckle field nulling} \label{sub:field_nulling} The speckle field nulling approach consists in trying to null out $\widehat{E}'$ in the DH region ($\mathcal{H}$), meaning we seek a solution to the equation \begin{equation} \label{eq:field1} \forall (\alpha,\beta) \in \mathcal{H}, \quad \widehat{P\phi}(\alpha,\beta) + \widehat{P\psi}(\alpha,\beta) = 0, \end{equation} although, as we shall show, this equation has no exact solution unless $\widehat{P\phi}$ happens to be a band-limited function within the controllable band of the DM. By replacing $\psi$ with its expression (\ref{eq:psi}), we obtain \begin{equation} \label{eq:field2} \forall (\alpha,\beta) \in \mathcal{H}, \quad \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl}\,\widehat{Pf_{kl}}(\alpha,\beta) = - \widehat{P\phi}(\alpha,\beta). \end{equation} We recognize in (\ref{eq:field2}) a linear system in the $a_{kl}$ that could be solved using various techniques such as singular value decomposition \citep[SVD;][\S2.6]{Press02}. Although general, this solution does not provide much insight into the problem of speckle nulling. For this reason, let us examine now a different solution, less general but with more explanatory power. We will comment on the use of SVD at the end of this section. We consider a square pupil. In this case, all DM actuators receive light and the pupil function has no limiting effect on the DM phase function, i.e. $P\psi = \psi$. Moreover, we assume all influence functions to be identical in shape, and write $f_{kl}(u,v) = f(u-k\frac{d}{\lambda},v-l\frac{d}{\lambda})$. Under these hypotheses, \begin{equation} \label{eq:psi2} P\psi(u,v) = f(u,v) \ast \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl} \, \delta \! \left( u - k \frac{d}{\lambda}, v - l \frac{d}{\lambda} \right), \end{equation} where $\delta$ is Dirac's bidimensional distribution, and $\ast$ denotes the convolution. Substituting the FT of (\ref{eq:psi2}) for $\widehat{P\psi}$ in (\ref{eq:field1}) yields \begin{equation} \label{eq:field3} \forall (\alpha,\beta) \in \mathcal{H}, \quad \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl}\,e^{-i \frac{2\pi d}{\lambda} (k \alpha + l \beta)} = - \frac{\widehat{P\phi}(\alpha,\beta)}{\hat{f}(\alpha,\beta)}. \end{equation} We recognize in the left-hand side of (\ref{eq:field3}) a truncated Fourier series. If we choose the $a_{kl}$ to be the first $N^2$ Fourier coefficients of $-\widehat{P\phi}/\hat{f}$, i.e. \begin{equation} \label{eq:coef1} a_{kl} = \frac{2d^2}{\lambda^2} \int\!\!\!\int_{{[-\frac{\lambda}{2d}, \frac{\lambda}{2d}]}^2} -\frac{\widehat{P\phi}(\alpha,\beta)}{\hat{f}(\alpha,\beta)} \: e^{i \frac{2\pi d}{\lambda} (k \alpha + l \beta)} \, \mathrm{d}\alpha \, \mathrm{d}\beta, \end{equation} then according to Fourier theory, we minimize the mean-square error between both sides of the equation \cite[see e.g.][\S1.5]{Hsu67}. This error cannot be reduced to zero unless the Fourier coefficients of $-\widehat{P\phi}/\hat{f}$ happen to vanish for $k,l < 0$ and $k,l > N-1$. At this point, we have reached the important conclusion that \emph{perfect speckle cancellation cannot be achieved with a finite-size DM unless the wavefront aberrations are band-limited}.
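As an illustrative aside, the Fourier solution is computationally trivial in one dimension. The following minimal sketch, written here purely for illustration (it is not part of any existing speckle nulling software), computes the strokes from samples of the speckle field, anticipating the discrete form derived in \S\ref{sec:simulations}; the variable names, the flat influence-function FT, and the phase-only aberrations are assumptions made for the example.
\begin{verbatim}
import numpy as np

def fourier_strokes(Pphi_hat, f_hat):
    # First N Fourier coefficients of -Pphi_hat/f_hat, via an
    # inverse FFT (which includes the 1/N normalization).
    a = np.fft.ifft(-Pphi_hat / f_hat)
    # A real stroke pattern is all the DM can produce; the imaginary
    # part is nonzero whenever amplitude errors make -Pphi_hat/f_hat
    # non-Hermitian (see the discussion that follows).
    return a.real

# Toy example: N = 64 actuators, phase aberrations only,
# lambda/1000 rms, flat influence-function FT.
N = 64
rng = np.random.default_rng(0)
phi = rng.normal(scale=2 * np.pi / 1000, size=N)  # radians
a = fourier_strokes(np.fft.fft(phi), np.ones(N))
assert np.allclose(a, -phi)  # strokes cancel the sampled wavefront
\end{verbatim}
With a non-flat $\hat{f}$, the last assertion no longer holds, in line with the remark below that the strokes are not simply the negative of the sampled wavefront.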
Moreover, we can assert that the maximum DH extension is the square domain $\mathcal{H} \equiv {[-\frac{\lambda}{2d}, \frac{\lambda}{2d}]}^2 = {[-\frac{N}{2}\frac{\lambda}{D}, \frac{N}{2}\frac{\lambda}{D}]}^2$. Solution (\ref{eq:coef1}) is physically acceptable only if the Fourier coefficients are real numbers, which means mathematically that $\widehat{P\phi}/\hat{f}$ should be Hermitian\footnote{A function $f$ is said to be Hermitian if $ \forall (x,y), \: f(x,y) = f^\ast(-x,-y)$. The FT of a real function is Hermitian and vice versa.}. If there are phase aberrations only, $P\phi$ is real, $\widehat{P\phi}/\hat{f}$ is Hermitian, and the $a_{kl}$ are real. This is no longer true if there are amplitude aberrations as well, reflecting the fact that the DM alone cannot correct both phase and amplitude aberrations in $\mathcal{H}$. However, by considering the Hermitian function that is equal to $\widehat{P\phi}/\hat{f}$ in one half of the DH, say $\mathcal{H}^+ \equiv [0,\frac{\lambda}{2d}] \times [-\frac{\lambda}{2d},\frac{\lambda}{2d}]$, we obtain the real coefficients \begin{equation} \label{eq:coef2} a_{kl} = \frac{4d^2}{\lambda^2} \int\!\!\!\int_{\mathcal{H}^+} -\frac{\widehat{P\phi}(\alpha,\beta)}{\hat{f}(\alpha,\beta)} \: \cos \! \left[ \frac{2\pi d}{\lambda} (k \alpha + l \beta) \right] \mathrm{d}\alpha \, \mathrm{d}\beta, \end{equation} that correct both amplitude and phase aberrations in $\mathcal{H}^+$. As we have $\frac{\lambda}{2d} = \frac{N}{2}\frac{\lambda}{D}$, the DH has a size of $N\!\times\!N$ resolution elements (resels) with phase aberrations only, and of $\frac{N}{2}\!\times\! N$ resels with phase and amplitude aberrations. Therefore, \emph{a DM can correct both amplitude and phase aberrations in the image plane, albeit in a region that is either the left, right, upper, or lower half of the phase-corrected region.} As \citet{Malbet95} pointed out, let us remind the reader that $\frac{\lambda}{2d}$ is equal to the Nyquist frequency for a sampling interval $\frac{d}{\lambda}$. Therefore, we find that the maximum extension for the DH corresponds to the range where the sampling theorem applies to the wavefront at the DM actuator scale. Indeed, taking the inverse FT of (\ref{eq:field3}) leads to the wavefront reconstruction formula \begin{equation} \label{eq:field4} P\phi(u,v) = - \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} a_{kl} \, f \! \left( u - k \frac{d}{\lambda}, v - l \frac{d}{\lambda} \right). \end{equation} Again, this reconstruction cannot be perfect unless the spectrum of $P\phi$ is contained in $\mathcal{H}$. Note that because $\hat{f}$ is generally not a flat function (as would be the case if influence functions were for instance 2D sinc functions), actuator strokes are not equal to the negative of wavefront values sampled at the actuator locations. Our Fourier solution was derived by assuming that (a) all influence functions are identical in shape, and (b) that the pupil has a square shape. Hypothesis (a) appears to be reasonable at least for the DM in use on the HCIT (Joseph Green, personal communication), but this remains to be precisely measured. If hypothesis (b) is relaxed then (i) some actuators do not receive any light and play no role, so there are effectively fewer terms in the summation in (\ref{eq:field3}), and (ii) the fact that influence functions on the pupil boundary are only partly illuminated is ignored. Now that we have two methods to solve (\ref{eq:field1}), Fourier expansion and SVD, let us compare their solutions.
We deal here with functions belonging to the Hilbert space of square integrable functions $f\!: \mathcal{H} \rightarrow \mathbb{C}$. This space has $<f,g> \equiv \int\!\!\!\int_\mathcal{H} f \, g^\ast$ for dot product, and $||f|| \equiv \sqrt{\int\!\!\!\int_\mathcal{H} |f|^2}$ for norm. As mentioned earlier, Fourier expansion minimizes the mean-square error between both sides of (\ref{eq:field3}), i.e. $||(\widehat{P\phi}+\widehat{P\psi})/\hat{f}||^2$. By contrast, SVD has the built-in property of minimizing the norm of the residuals of (\ref{eq:field2}), i.e. $||\widehat{P\phi}+\widehat{P\psi}||$. In other words, SVD minimizes $||\widehat{E'}||^2$, the speckle field energy, which seems more satisfactory from a physical point of view. To find out what is best, we have performed one-dimensional numerical simulations. It turns out that SVD yields dark holes 50\,\% deeper (median value) than Fourier expansion. In addition, SVD does not require all influence functions to have the same shape. However, considering four detector pixels per resel in two dimensions (critical sampling), SVD would require us to manipulate matrices as large as $N^2\!\times\!4N^2$ (or even $N^2\!\times\!8N^2$ when real and imaginary parts are separated). Such matrices would occupy 537~MB of memory space for $64\!\times\!64$ actuators and single-precision floating-point numbers. By contrast, Fourier expansion would be straightforwardly obtained with FFTs of $2N\!\times\!2N$ arrays at critical sampling, but again at the cost of a 50\,\% shallower dark hole and a strong hypothesis on the influence functions. In the next section, we seek to find a computationally less intensive solution that still minimizes the speckle energy in the dark hole, but does not require any hypothesis on the influence functions. \subsection{Speckle energy minimization} \label{sub:energy_min} Let us start with the idea that the best solution is defined as the one \emph{minimizing the total energy of the speckle field in the DH}. For the sake of simplicity, we assume once again a square pupil, but not necessarily a common shape for the influence functions. The total energy in the speckle field reads \begin{equation} \label{eq:energy1} \mathcal{E} \equiv \int\!\!\!\int_\mathcal{H} |\widehat{P\phi}(\alpha,\beta) + \widehat{\psi}(\alpha,\beta)|^2 \, \mathrm{d}\alpha \, \mathrm{d}\beta \; = \; <\widehat{P\phi} + \widehat{\psi}, \widehat{P\phi} + \widehat{\psi}>, \end{equation} using the same notation as in \S\ref{sub:field_nulling}. Given that $\partial \widehat{\psi}/\partial a_{kl} = \hat{f}_{kl}$, the energy is minimized when \begin{equation} \label{eq:energy2} \forall (k,l) \in {\{0 \ldots N\!-\!1\}}^2, \quad \frac{\partial \mathcal{E}}{\partial a_{kl}} = 0 \quad \Longleftrightarrow \quad \Re \left( <\widehat{P\phi} + \widehat{\psi}, \hat{f}_{kl}> \right) = 0, \end{equation} where $\Re$ stands for the real part. Note that this is less demanding than (\ref{eq:field1}), as (\ref{eq:field1}) implies (\ref{eq:energy2}) but the reverse is not true. Using the definition (\ref{eq:psi}) for $\psi$ and realizing that $<\hat{f}_{nm},\hat{f}_{kl}>$ is a real number\footnote{This property stems from the Hermitian character of $\hat{f}_{kl}$ together with the symmetry of $\mathcal{H}$.}, we get finally \begin{equation} \label{eq:energy3} \forall (k,l) \in {\{0 \ldots N\!-\!1\}}^2, \quad \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} a_{nm} <\hat{f}_{nm},\hat{f}_{kl}> \; = - \Re \left( <\widehat{P\phi},\hat{f}_{kl}> \right). 
\end{equation} As in (\ref{eq:field2}), we find a system that is linear in the actuator strokes. By replacing double indices with single ones, e.g. $(k,l)$ becomes $s = k \, N + l$, (\ref{eq:energy3}) can be solved in matrix format by inverting an $N^2\!\times\!N^2$ real matrix. This is already an improvement with respect to the $N^2\!\times\!4N^2$ complex matrix required by SVD in the previous section. It appears that the same solution can be obtained with a much less demanding ${N\!\times\!N}$ matrix inversion, provided two-dimensional influence functions can be written as the tensor product of two one-dimensional functions (separation of variables), i.e. $f_{kl}(u,v) = g_k(u) \, g_l(v)$. This would be the case for box functions or bidimensional Gaussians, and is good at the 5\,\% level for the DM in use on the HCIT. This property also holds in the image plane since the FT of the previous equation yields $\hat{f}_{kl}(\alpha,\beta) = \hat{g}_k(\alpha) \, \hat{g}_l(\beta)$. By separating variables, (\ref{eq:energy3}) becomes \begin{equation} \label{eq:energy4} \forall (k,l) \in {\{0 \ldots N\!-\!1\}}^2, \quad \sum_{n=0}^{N-1} <\hat{g}_n,\hat{g}_k> \sum_{m=0}^{N-1} a_{nm} <\hat{g}_m,\hat{g}_l> \; = - \Re \left( <\widehat{P\phi},\hat{f}_{kl}> \right). \end{equation} As the left-hand side happens to be the product of three $N\!\times\!N$ matrices, (\ref{eq:energy4}) can be rewritten as an equality between square matrices. \begin{equation} \label{eq:energy5} G \, A \, G = \Phi, \quad \mbox{where} \quad \left \{ \begin{array}{l} G_{kl} = \; <\hat{g}_k,\hat{g}_l> \\ A_{kl} = a_{kl} \\ \Phi_{kl} = - \Re \left( <\widehat{P\phi},\hat{f}_{kl}> \right). \end{array} \right. \end{equation} For square-box and actual HCIT influence functions, numerical calculations show that $G$ is diagonally dominant\footnote{A matrix $A=[a_{ij}]$ is said to be diagonally dominant if $\forall i, \: |a_{ii}| > \sum_{j \neq i} |a_{ij}|$.} and therefore invertible by regular Gaussian elimination. The solution to (\ref{eq:energy5}) is then \begin{equation} \label{eq:energy6} A = G^{-1} \, \Phi \, G^{-1}. \end{equation} Note that $G^{-1}$ can be precomputed and stored, so that computing the strokes effectively requires only two matrix multiplications. As shown in appendix~\ref{app:global}, an equivalent result can be obtained by working with pupil plane quantities. As for the field nulling approach, correcting amplitude errors as well implies restricting the dark hole to either $\mathcal{H}^+ = [0,\frac{N}{2}\frac{\lambda}{D}] \times [-\frac{N}{2}\frac{\lambda}{D},\frac{N}{2}\frac{\lambda}{D}]$ or $\mathcal{H}^- = [-\frac{N}{2}\frac{\lambda}{D},0] \times [-\frac{N}{2}\frac{\lambda}{D},\frac{N}{2}\frac{\lambda}{D}]$. To account for amplitude errors and keep the formalism we have presented so far, it is sufficient to replace $\widehat{P\phi}$ by a function equal to $\widehat{P\phi}(\alpha,\beta)$ in either $\mathcal{H}^+$ or $\mathcal{H}^-$ (depending on the half where one wishes to create the dark hole), and equal to $\widehat{P\phi}^\ast(-\alpha,-\beta)$ in the other half (Hermitian symmetry). Because its FT is Hermitian, the new aberration function in the pupil plane is real, and thus the algorithm processes amplitude and phase errors at the same time as if there were phase errors only. Let us derive the residual total energy in the DH after the correction has been applied.
Starting from definition (\ref{eq:energy1}) and rewriting condition (\ref{eq:energy2}) as ${\Re ( <\widehat{P\phi} + \widehat{\psi}, \widehat{\psi}> ) = 0}$, we find \begin{equation} \mathcal{E}_\mathrm{min} = \; <\widehat{P\phi},\widehat{P\phi}> - <\widehat{\psi},\widehat{\psi}>. \end{equation} The former term is the initial speckle energy in the DH, while the latter is the speckle energy decrease gained with the DM. Mathematically, $\sqrt{\mathcal{E}_\mathrm{min}}$ measures the distance (according to the norm we have defined) between the speckle field and its approximation with the DM inside the DH. Because there is no exact solution to ($\ref{eq:field1}$), the residual energy cannot be made equal to zero in $\mathcal{H}^+$ or $\mathcal{H}^-$. However, the energy approach offers an additional degree of freedom: by reducing concentrically the domain over which the energy is minimized, the speckle energy can be further decreased (see \S\ref{sec:simulations}). \subsection{Speckle field measurement} \label{sub:measurement} So far, our speckle nulling theory has presupposed the knowledge of the speckle field $\widehat{P\phi}$, or equivalently of the phase and amplitude aberrations across the pupil, embodied in the complex phase function $P\phi$. In this section, we show how the speckle field can be measured directly in the image plane. As the detector measures an intensity, a single image yields only the modulus of the speckle field. The phase of the speckle field can be retrieved by perturbing the phase function $P\phi$ in a controlled way, and by recording the corresponding images, a process analogous to \emph{phase diversity} \citep[e.g.][]{Lofdahl94}. In our system, the DM provides the natural means for creating this controlled perturbation. As we will see, exactly three images obtained with well-chosen DM settings provide enough information to measure $\widehat{P\phi}$. Let us call image 0 the original image recorded with a setting $\psi_0$, whereas images 1 and 2 are recorded with settings $\psi_0+\delta\psi_1$ and $\psi_0+\delta\psi_2$. To be general, we consider in the field of view the presence of an exoplanet and an exozodiacal cloud (exozodi for short), in addition to the star itself. The electric fields of these objects are incoherent with that of the star, so their intensities should be added to the star's intensity. Because they are much fainter than the star, the speckles they produce are negligible with respect to the star speckles, and their intensities can be considered as independent of $\phi$ and $\psi$. The total intensity of every image pixel $(\alpha,\beta)$ then takes the successive values \begin{equation} \label{eq:I_system1} \left \{ \begin{array}{l} I_0 = |\widehat{P\phi} + \widehat{\psi}_0|^2 + I_\mathrm{p} + I_\mathrm{z} \\ I_1 = |\widehat{P\phi} + \widehat{\psi}_0 + \widehat{\delta\psi_1}|^2 + I_\mathrm{p} + I_\mathrm{z} \\ I_2 = |\widehat{P\phi} + \widehat{\psi}_0 + \widehat{\delta\psi_2}|^2 + I_\mathrm{p} + I_\mathrm{z}, \\ \end{array} \right. \end{equation} where $I_\mathrm{p}$ and $I_\mathrm{z}$ are the exoplanet and exozodi intensities, respectively.
System~(\ref{eq:I_system1}) can be reduced to the linear system \begin{equation} \label{eq:I_system2} \left \{ \begin{array}{l} {(\widehat{\delta\psi_1})}^\ast \, (\widehat{P\phi}+\widehat{\psi}_0) + \widehat{\delta\psi_1} \, {(\widehat{P\phi}+\widehat{\psi}_0)}^\ast = I_1 - I_0 - |\widehat{\delta\psi_1}|^2 \\ {(\widehat{\delta\psi_2})}^\ast \, (\widehat{P\phi}+\widehat{\psi}_0) + \widehat{\delta\psi_2} \, {(\widehat{P\phi}+\widehat{\psi}_0)}^\ast = I_2 - I_0 - |\widehat{\delta\psi_2}|^2, \\ \end{array} \right. \end{equation} where the exponent $\ast$ denotes the complex conjugate. Notice how the exoplanet and exozodi intensities have disappeared from the equations, demonstrating that faint objects do not affect the measurement process of stellar speckles. However, note that because of quantum noise, the planet detection can still be problematic if the exozodi is much brighter than the planet. Now, system (\ref{eq:I_system2}) admits a unique solution if its determinant, \begin{equation} \label{eq:delta} \Delta \equiv {(\widehat{\delta\psi_1})}^\ast \, \widehat{\delta\psi_2} - \widehat{\delta\psi_1} \, {(\widehat{\delta\psi_2})}^\ast, \end{equation} is not zero, that is to say if \begin{equation} \label{eq:I_condition} |\widehat{\delta\psi_1}|\,|\widehat{\delta\psi_2}|\,\sin \! \left[ \arg(\widehat{\delta\psi_2}) - \arg(\widehat{\delta\psi_1}) \right] \neq 0. \end{equation} Condition (\ref{eq:I_condition}) tells us that the DM setting changes, $\delta\psi_1$ and $\delta\psi_2$, should modify the speckles differently in any given pixel, otherwise not enough information is secured to measure unambiguously $\widehat{P\phi}$ in this pixel. For this method to work in practice, the magnitude of the speckle modification should be greater than the photon noise level. We have not yet found a rigorous derivation of the optimum values for the amplitude $|\widehat{\delta\psi_1}|$ and $|\widehat{\delta\psi_2}|$, but a heuristic argument suggests to us that the optimum perturbations may be such that $I_1 \approx I_0$ and $I_2 \approx I_0$. That is to say, the DM-induced speckle intensity pattern, taken by itself, should be approximately the same as the original speckle intensity pattern. Thus at each pixel we choose $|\widehat{\delta\psi_1}| \approx |\widehat{\delta\psi_2}| \approx \sqrt{I_0}$, with the caveat that neither should be zero to keep (\ref{eq:I_condition}) valid. The phase of $\widehat{\delta\psi_1}$ does not matter, but the phase difference between $\widehat{\delta\psi_1}$ and $\widehat{\delta\psi_2}$ should be made as close to $\frac{\pi}{2}$ as possible to keep $\Delta$ away from zero. Practically, this can be realized as follows: \begin{enumerate} \item Compute $\delta\psi_1$ stroke changes from (\ref{eq:coef2}) or (\ref{eq:energy6}) by replacing $\widehat{P\phi}$ by $\sqrt{I_0}\,e^{i\theta}$, where $\theta$ is a random phase; \item Compute $\delta\psi_2$ stroke changes from (\ref{eq:coef2}) or (\ref{eq:energy6}) by replacing $\widehat{P\phi}$ by $\widehat{\delta\psi_1}\,e^{i\frac{\pi}{2}}$. \end{enumerate} Now that we have made sure that $\Delta \neq 0$, we finally derive \begin{equation} \label{eq:I_solution} \widehat{P\phi} = \frac{\widehat{\delta\psi_2} \, (I_1 - I_0 - |\widehat{\delta\psi_1}|^2) - \widehat{\delta\psi_1} \, (I_2 - I_0 - |\widehat{\delta\psi_2}|^2)}{\Delta} - \widehat{\psi}_0.
\end{equation} Equation (\ref{eq:I_solution}) shows that the initially unknown speckle field ($\widehat{P\phi}$) can be experimentally measured in just three exposures taken under identical circumstances but with different shapes imposed on the DM. \section{Speckle nulling simulations} \label{sec:simulations} \subsection{White speckle noise} \label{sub:sim_white} In this section, we perform one- and two-dimensional simulations for the theoretical case of white speckle noise caused by phase aberrations only. The DM has 64 actuators and top-hat influence functions. Smoother influence functions have been tested and do not lead to qualitatively different results. A simulation with actual HCIT influence functions will be presented in the next section. The simulated portion of the pupil plane is made twice as big as the pupil by zero padding, so that every element of resolution in the image plane is sampled by two detector pixels. This corresponds to the realistic case of a photon-starved exoplanet detection where read-out noise must be minimized. \subsubsection{One-dimensional simulations} Figure~\ref{fig:f1} shows a complete one-dimensional simulation including speckle field measurement (\S\ref{sub:measurement}) and speckle nulling with field nulling (\S\ref{sub:field_nulling}) and energy minimization (\S\ref{sub:energy_min}). The standard deviation of the phase aberrations is set to $\lambda/1000$. Intensities are scaled with respect to the maximum of the star PSF in the absence of a coronagraph. Ideal conditions are assumed: no photon noise, noiseless detector, and perfect precision in the control of DM actuators. Under these conditions, the speckle field is perfectly estimated, and the mean intensity in the DH is $5.8 \times 10^{-11}$ with field nulling and $1.4 \times 10^{-11}$ with energy minimization, i.e. about 1500 and 6500 times lower than the mean intensity outside the DH, respectively. Repeated simulations with different noise sequences show that energy minimization always performs better than field nulling, by a factor of a few. Field nulling solved with SVD yields the same numerical solution as energy minimization (they differ by the last digit only), in agreement with the idea that they both minimize speckle energy. \subsubsection{Dark hole depth estimate in one dimension} In the one-dimensional case, it is easy to predict roughly the shape and the depth of the DH. The function $\widehat{P\phi} + \widehat{\psi}$ is band-limited since the pupil has a finite size. As the pupil linear dimension is $D/\lambda$, the maximum spatial frequency of $\widehat{P\phi} + \widehat{\psi}$ is $D/2\lambda$. Let us apply the sampling theorem at the Nyquist sampling frequency $D/\lambda$, and write \begin{equation} \label{eq:1d1} (\widehat{P\phi} + \widehat{\psi})(\alpha) = \sum_{n=-\infty}^{+\infty} \left[ \widehat{P\phi}_n + \widehat{\psi}_n \right] \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} - n \right), \end{equation} where the subscript $n$ denotes the function value for $\alpha = n \frac{\lambda}{D}$. Setting $\alpha = n \frac{\lambda}{D}$ and $d = \frac{D}{N}$ leads to \begin{equation} \label{eq:1d2} \widehat{P\phi}_n + \widehat{\psi}_n = \widehat{P\phi}_n + \hat{f}_n \sum_{k=0}^{N-1} a_k e^{-i \frac{2\pi k n }{N}}.
\end{equation} The field nulling equation (\ref{eq:field1}) here takes the discrete form \begin{equation} \label{eq:1d3} \forall n \in \{ 0 \ldots N\!-\!1\}, \quad \widehat{P\phi}_n + \widehat{\psi}_n = 0 \quad \Longleftrightarrow \quad a_k = \frac{1}{N} \sum_{n=0}^{N-1} \left( -\frac{\widehat{P\phi}_n}{\hat{f}_n} \right) e^{i \frac{2\pi k n }{N}}, \end{equation} i.e. actuator strokes are computed by means of an inverse FFT. Let us now turn to the residual speckle field \begin{equation} \label{eq:1d4} (\widehat{P\phi} + \widehat{\psi})(\alpha) = \sum_{n=-\infty}^{-1} \widehat{P\phi}_n \: \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} - n \right) + \sum_{n=N}^{+\infty} \widehat{P\phi}_n \: \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} - n \right). \end{equation} Because the sinc function decreases rapidly with $\alpha$, the terms flanking the DH ($n=-1$ and $n=N$) should by themselves give the order of magnitude of the residual speckle field in the DH. In case of phase aberrations only and white noise, we have $|\widehat{P\phi}_{-1}|^2 \approx |\widehat{P\phi}_N|^2 \approx \overline{I_0}$, where $\overline{I_0}$ is the mean intensity in the image plane prior to the DH creation. Therefore, a crude estimate of the intensity profile in the DH should be \begin{equation} \label{eq:1d5} I_\mathrm{DH}(\alpha) \approx \overline{I_0} {\left[ \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} +1 \right) + \mbox{sinc} \! \left( \frac{\alpha D}{\lambda} - N \right) \right]}^2. \end{equation} We have superimposed this approximation as a thick line in Fig.~\ref{fig:f1}. In this case the match is remarkable, but more simulations show that it is generally good within a factor of 10 only. Nevertheless, it demonstrates that the DH depth depends critically on the residual speckle field at its edges, hence on the decreasing rate of the complex aberration spectrum with spatial frequency. In that respect, a white spectrum is certainly the worst case. Equation (\ref{eq:1d5}) further indicates that the DH depth depends on the number of actuators: as $N$ is increased, the DH widens and gets deeper. With 8, 16, 32, and 64 actuators, (\ref{eq:1d5}) predicts $\overline{I_0}/\overline{I_\mathrm{DH}}$ to reach about 100, 300, 1000, and 4500. \subsubsection{Dark hole depth vs. search area} As mentioned in \S\ref{sub:energy_min}, speckle nulling by energy minimization can be performed in a region narrower than the maximum DH. Figure~\ref{fig:f2} illustrates this point: by reducing the search area from 64 to 44 resels (31\,\% reduction), the DH floor was decreased from $1.4 \times 10^{-11}$ to $2.7 \times 10^{-15}$, i.e. a gain of about 5200 in contrast (further reducing the search area does not bring any significant gain). By giving up search space, one frees the degrees of freedom corresponding to the highest spatial frequency components on the DM pattern. These can be used to improve the DH depth at lower spatial frequency because of the PSF angular extension (this is essentially the same reason why high spatial frequency speckles limit the DH depth). As the search space is reduced, the leverage of these highest spatial frequency components decreases (PSF wings falling off). The energy minimization algorithm compensates by putting more energy at high frequency (see lower panel in Fig.~\ref{fig:f2}), which produces increasingly oscillatory DM patterns (see top panel in Fig.~\ref{fig:f2}) and increasingly brighter spots in the image (around $\pm 32 \frac{\lambda}{D}$ and $\pm 96 \frac{\lambda}{D}$ in the bottom panel of Fig.~\ref{fig:f2}).
Thus the trade-off range might be limited in practice by the maximum actuator stroke (currently $0.6\,\mu$m on the HCIT), and/or by the detector's dynamic range. In two dimensions, the trade-off limits are well illustrated by the following example: considering a $64\!\times\!64$ DM and a random wavefront, we find that the DH floor can be decreased from $2.4 \times 10^{-12}$ to $1.4 \times 10^{-13}$ (a factor of 17) if the search area is reduced from $64\!\times\!64$ to $60\!\times\!60$ resels (12\,\% reduction in area). This implies a maximum actuator stroke of 10\,nm and a detector dynamic range of $10^6$. A further reduction to $58\!\times\!58$ resels does not yield a lower DH floor ($2.1 \times 10^{-13}$), and would imply a maximum actuator stroke of $10\,\mu$m and a detector dynamic range of $10^{10}$. In this case, the leverage of the additionally freed high-spatial frequency components is so weak that the algorithm starts diverging. \subsubsection{Two-dimensional simulations with phase and amplitude aberrations} In Figs.~\ref{fig:f3}--\ref{fig:f4}, we show an example of two-dimensional speckle nulling with phase and amplitude aberrations for a square pupil. To reflect the fact that phase aberrations dominate amplitude aberrations in real experiments \cite[see][]{Trauger04}, the rms amplitude of amplitude aberrations is made ten times smaller than that of phase aberrations (the choice of a factor ten is arbitrary). The DH is split into two regions: in the right one ($\mathcal{H}^+$), amplitude and phase aberrations are corrected, whereas in the left one ($\mathcal{H}^-$), phase aberrations are corrected and amplitude aberrations are made worse by a factor of four in intensity. \subsection{Realistic speckle noise} \label{sub:sim_real} \subsubsection{Power spectral density of phase aberrations} With the $3.5\!\times\!8$-m TPF-C primary mirror in mind, we have studied the phase aberration map of an actual 8-m mirror: the primary mirror of Antu, the first 8.2-m unit telescope of ESO's Very Large Telescope (VLT). This phase map\footnote{It can be found by courtesy of ESO at http://www.eso.org/projects/vlt/unit-tel/m1unit.html.} was obtained with the active optics system on, and is characteristic of zonal errors (aberrations which cannot be fitted by low-order Zernike-type polynomials). It can be seen in Fig.~\ref{fig:f5} that the azimuthally averaged power spectral density (PSD) of such a map is well represented by \begin{equation} \label{eq:psd} \mbox{PSD}(\rho) = \frac{\mbox{PSD}_0}{1+{(\rho/\rho_c)}^x}, \end{equation} where $\rho = \sqrt{\alpha^2+\beta^2}$. Values for PSD$_0$, $\rho_c$ and $x$ are listed in Table~\ref{tab:t1}. For comparison, the same treatment has been applied to the Hubble Space Telescope (HST) zonal error map from \citet{Krist95}. We conclude from this study that \emph{a realistic phase aberration PSD for an 8-m mirror decreases as the third power of the spatial frequency}. The standard deviation of the VLT phase map is 20.9~nm (18.5~nm for HST). The square root of the power of phase aberrations in the 0.5--4~m$^{-1}$ spatial frequency range (4--32~$\lambda/D$ for an 8-m mirror) is 19.4~nm, i.e. about $\lambda/25$ at 500~nm, clearly not in the validity domain of our linear approximation. \subsubsection{One-dimensional simulation} Figure~\ref{fig:f6} shows a simulation performed in the same conditions as Fig.~\ref{fig:f1}, but with a VLT-like PSD. The PSD is scaled so that the standard deviation of phase aberrations is equal to $\lambda/1000$.
The average DH floor is now $5.3 \times 10^{-12}$, six orders of magnitude below the intensity peak in the original image! In agreement with \S\ref{sub:sim_white}, we find that \emph{the DH's depth depends critically on the magnitude of the speckle field at the edge of the DH, hence on the decrease of the phase aberration PSD with spatial frequency}. \subsubsection{Two-dimensional simulation} For the two-dimensional simulation in Figs.~\ref{fig:f7}--\ref{fig:f8}, we have kept the original VLT phase map and circular pupil, but scaled the standard deviation of phase aberrations to $\lambda/1000$. In addition, we have used the actual HCIT influence functions from \citet{Trauger03}. The average DH floor is then $5.9 \times 10^{-12}$ with field nulling (case shown), and $7.1 \times 10^{-11}$ with energy minimization. The poorer performance of the second method reflects the cost of the variable separation hypothesis, only accurate to within 5\,\% for the HCIT. Note that the DH retains its square shape with a circular pupil, as the DH shape is fixed by the actuator grid geometry on the DM (a square grid of constant spacing in our case). \section{Discussion} \label{sec:discussion} \subsection{Quantum and read-out noise} \label{sub:discuss_noise} In \S\ref{sec:simulations}, we presented noise-free simulations. To give an idea of the effect of quantum and read-out noises, let us consider a sun-like star at 10~pc observed by a $3.5\!\times\!8$~m space telescope with a 5\,\% overall efficiency. In a 100~nm bandwidth centered at 600~nm, the telescope collects about $2 \times 10^{12}$ photo-electrons in one-hour exposures. Considering the quantum noise, a 1~e$^-$ read-out noise, and ignoring chromatic effects, simulations of sequences of four one-hour exposures show that the average DH floor in Fig.~\ref{fig:f1} would jump from $1.4 \times 10^{-11}$ to $2.7 \times 10^{-10}$, whereas the average DH floor in Fig.~\ref{fig:f6} would jump from $5.2 \times 10^{-12}$ to $3.2 \times 10^{-11}$. \subsection{Validity of the linear approximation} \label{sub:discuss_linearity} In practice, our speckle nulling process will work as stated provided Eq.~(\ref{eq:E_im}) holds, that is to say if ${|P\phi+\psi| \gg \frac{1}{2}|P\phi^2|}$. If $c$ is the improvement in contrast with respect to the speckle floor and $\sigma_\phi$ the standard deviation of wavefront aberrations in radians, this condition translates into ${\sigma_\phi/\sqrt{c} \gg \sigma_\phi^2/\sqrt{2}}$, or ${\sigma_\phi \ll \sqrt{2/c}}$. In terms of optical path difference, the standard deviation should then be much less than $\lambda/(\pi \sqrt{2c}) = \lambda/140$ for $c = 10^3$. This is why we considered $\lambda/1000$ rms wavefronts in our simulations. As the wavefront will probably not be of this quality at the start, the speckle nulling method presented here is intended to be used in the course of observations, after a first phase where the bulk of the aberrations have been taken out. When the linear approximation breaks down, three images with different DM settings still provide enough information about the aberrations, so that a DH could be created thanks to a global non-linear analysis of these images \citep{Borde04}. \cite{Malbet95} also explored non-linear solutions, but with many more iterations ($\approx 20$).
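For orientation, the bound $\lambda/(\pi\sqrt{2c})$ is easily tabulated for other contrast goals; the following minimal sketch is simple arithmetic based on the formula above, not a new simulation.
\begin{verbatim}
import numpy as np
for c in (1e3, 1e4, 1e5):
    # wavefront rms must satisfy sigma << lambda / (pi * sqrt(2 c))
    print(f"c = {c:.0e}:  sigma << lambda/{np.pi * np.sqrt(2 * c):.0f}")
# c = 1e+03:  sigma << lambda/140
# c = 1e+04:  sigma << lambda/444
# c = 1e+05:  sigma << lambda/1405
\end{verbatim}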
\subsection{Real coronagraphs} \label{sub:discuss_coronagraphs} Dwelling on the validity of Eq.~(\ref{eq:E_im}), real coronagraphs would not only remove the direct image of the star ($\widehat{P}$), they would also modify the speckle field ($\widehat{P\phi}$) and the DM phase function ($\widehat{P\psi}$). This can be easily incorporated in the theory. A more delicate point is that real coronagraphs are not translation-invariant systems. As a consequence, effective influence functions as seen from behind the coronagraph will vary over the pupil. For image-plane coronagraphs with band-limited sinc masks \cite[][\S4]{Kuchner02}, we estimate this variation to be of the order of 10\,\%, assuming $\epsilon = 0.1$ and 64 actuators. Only energy minimization, not field nulling (unless solved with SVD), can accommodate this effect. \subsection{Actuator stroke precision} \label{sub:discuss_actuators} What about the precision at which actuators should be controlled? As a consequence of the linearity of (\ref{eq:energy3}), the DH depth depends quadratically on the precision of the actuator strokes. We deduce -- and this is confirmed by simulations -- that a four orders of magnitude deep DH can only be obtained if the strokes are controlled at a 1\,\% precision, i.e. 6\,pm rms with $\lambda/1000$ aberrations at 600\,nm. This precision corresponds to the current resolution of the actuator drivers on the HCIT. \subsection{Instrumental stability} \label{sub:discuss_stability} Regarding instrumental stability, we assumed that the instrument would remain perfectly stable during the four-step process. However, despite the foreseen thermal and mechanical controls of the spacecraft, very slow drifts during the few hours of single exposures should be expected. Therefore we intend to study in a subsequent paper how to incorporate a model of the drifts in our method. The exact parameters of this model would be derived from a learning phase after the launch of the spacecraft. \subsection{Chromaticity} \label{sub:discuss_chromaticity} We have not considered the effect of chromaticity. Let us point out that phase aberrations due to mirror surface errors scale with wavelength, so the correction derived from one wavelength would apply to all wavelengths. This is unfortunately not the case for amplitude aberrations. Although these are weaker than phase aberrations, a degradation of the correction should be expected in polychromatic light. Moreover, polychromatic wavefront sensing will require a revised theory as speckles will move out radially in proportion to the wavelength. \section{Conclusion and future work} \label{sec:conclusion} In this paper, we presented two techniques to optimally null out speckles in the central field of an image behind an ideal coronagraph in space. The measurement phase necessitates only three images, the fourth image being fully corrected. Depending on the number of actuators and the desired search area, the gain in contrast can reach several orders of magnitude. These techniques are intended to work in a low aberration regime, such as in the course of observations after an initial correction phase. They are primarily meant to be used in space but could be implemented in a second-stage AO system on ground-based telescopes.
Out of these two methods, the speckle energy minimization approach seems to be the more powerful and flexible: (i) it offers the possibility to trade off some search area against an improved contrast, and (ii) it can accommodate influence function variations over the pupil (necessary with real coronagraphs). If influence functions feature the required symmetry (variable separation), it is computationally very efficient, but is otherwise still better than SVD. Since the principles underlying these speckle nulling techniques are general, it should be possible to use them in conjunction with most coronagraph designs, including those with band-limited masks \citep{Kuchner02}, pupil-mapping \citep{Guyon05a,Vanderbei05}, and shaped pupils \citep{Kasdin03}. It is our intent to complete our work by integrating in our simulations models of these coronagraphs, and to carry out experiments with the HCIT. In addition, we will seek to incorporate in the measurement theory a linear model for the evolution of aberrations, and we will work toward a theory accommodating the spectral bandwidth needed for the detection and spectroscopy of terrestrial planets. \acknowledgments We wish to thank the anonymous referee for his insightful comments that helped to improve greatly the content of this paper. We acknowledge many helpful discussions with Chris Burrows, John Trauger, Joe Green, and Stuart Shaklan. This work was performed in part under contract 1256791 with the Jet Propulsion Laboratory (JPL), funded by NASA through the Michelson Fellowship Program, and in part under contract 1260535 from JPL. JPL is managed for NASA by the California Institute of Technology. This research has made use of NASA's Astrophysics Data System.
\section{The Challenges of Core-collapse Supernovae} \label{sec:challenges} In taking stock of `long-term' efforts to understand core-collapse supernovae, we reflect upon the fact that supernovae have been challenging us for centuries. Their very existence helped overturn worldviews. Their explosion mechanisms and remnants involve all four fundamental forces, and many (if not most) branches of physics. A cornucopia of electromagnetic radiation observables continues to provide intriguing puzzles, and yields some clues regarding the violent proceedings of a massive star's death. But direct observational penetration of the secrets of neutron star birth and initiation of the explosive ejection of the stellar envelope demands extraordinary efforts aimed at the detection of gravitational waves and neutrinos, the only messengers carrying direct information from the extreme conditions of a newly-collapsed stellar core. And the associated theoretical penetration---required for both the prediction and interpretation of expected gravitational wave and neutrino signals---comprises algorithmic and computational issues that will challenge computational physicists and tax state-of-the-art supercomputers for years to come. In western civilization, supernovae played a role in changing prevailing notions of the universe in at least two eras. Remarkably, of the handful of supernovae in our Milky Way Galaxy recorded by humanity, two were observed by Tycho (1572) and Kepler (1604). Tycho's detailed observations established that the `new and never previously seen star' of 1572---and also another transient celestial phenomenon, a comet of 1577---were beyond the moon's orbit, `new phenomena in the ethereal world,' contributing to the overthrow of the Aristotelian worldview that included immutable heavens. In modern times, supernovae figured in the debate over whether the spiral nebulae were separate galaxies, each an `island universe' comparable to our Milky Way. It was recognized that the `novae' or `new stars' seen in these nebulae would have to be much more luminous than typical novae occuring in our galaxy. The phrases ``giant novae,'' novae of ``impossibly great absolute magnitudes,'' ``exceptional novae,'' and the German term ``Hauptnovae'' or ``chief novae'' were used during the 1920s.\cite{osterbrock01} In a review article, Zwicky explained that it was deduced that `supernovae' were about a thousand times as luminous as `common novae,' and claimed that ``Baade and I first introduced the term `supernovae' in seminars and in a lecture course on astrophysics at the California Institute of Technology in 1931.''\cite{zwicky40} Supernovae are classified by astronomers into two broad classes based on their optical spectra.\cite{filippenko97} These classes are `Type I,' which have no hydrogen features, and `Type II,' which have obvious hydrogen features. These types have further subcategories, depending on the presence or absence of silicon and helium features in Type I, and the presence or absence of narrow hydrogen features in the case of Type II. In particular, supernovae of Type Ia exhibit strong silicon lines, those of Type Ib have helium lines, and those of Type Ic do not have either of these. Astronomers have also identified a number of distinct characteristics in supernova light curves (total luminosity as a function of time). There are two basic physical mechanisms for supernovae, but these do not line up cleanly with the observational categories of Type I and Type II. 
Type Ia supernovae are caused by a thermonuclear runaway that consumes an entire white dwarf, thought to be induced by accretion of matter from a companion star. Supernovae of Type Ib, Ic, and II are produced by a totally different mechanism: the catastropic collapse of the core of a massive star. The observational distinctions of presence or absence of hydrogen or helium turn out to be unrelated to the mechanism; they depend on whether the outer hydrogen and helium layers of the star---which have nothing to do with the collapsing core---have been lost to winds or accretion onto a binary companion during stellar evolution. Of the two physical mechanisms, core-collapse supernovae are the focus of the present discussion. We now consider the core-collapse supernova process in more detail. Shortly after the discovery of the neutron in the early 1930s, Baade and Zwicky declared, ``With all reserve we advance the view that supernovae represent the transitions from ordinary stars to {\em neutron stars,} which in their final stages consist of extremely closely packed neutrons.''\cite{baade34} This turned out to be true, at least for some `core-collapse' supernovae (those of Type Ib/Ic/II); a black hole is another possible outcome. The dominant fleshing-out of the core collapse process in the last two decades\footnote{See for example Ref. \cite{burrows90} for some information on earlier views of the mechanism.} has been the delayed neutrino-driven explosion mechanism.\cite{wilson85,bethe85} A core-collapse supernova results from the evolution of a massive star. For most of their existence, stars burn hydrogen into helium. In stars at least eight times as massive as the Sun ($8\ M_\odot$), temperatures and densities become sufficiently high to burn through carbon to oxygen, neon, and magnesium; in stars of at least $\sim 10\ M_\odot$, burning continues through silicon to iron group elements. The iron group nuclei are the most tightly bound, and here burning in the core ceases. The iron core---supported by electron degeneracy pressure instead of gas thermal pressure, because of cooling by neutrino emission from carbon burning onwards---eventually becomes unstable. Its inner portion undergoes homologous collapse (velocity proportional to radius), and the outer portion collapses supersonically. Electron capture on nuclei is one instability leading to collapse, and this process continues throughout collapse, producing neutrinos. These neutrinos escape freely until densities in the collapsing core become so high that even neutrinos are trapped. Collapse is halted soon after the matter exceeds nuclear density; at this point (``bounce''), a shock wave forms at the boundary between the homologous and supersonically collapsing regions. The shock begins to move out, but after the shock passes some distance beyond the surface of the newly-born neutron star, it stalls as energy is lost to neutrino emission and endothermic dissociation of heavy nuclei falling through the shock. It is natural to consider neutrino heating as a mechanism for shock revival, because neutrinos dominate the energetics of the post-bounce evolution. Initially, the nascent neutron star is a hot thermal bath of dense nuclear matter, electron/positron pairs, photons, and neutrinos, containing most of the gravitational potential energy released during core collapse. 
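A standard order-of-magnitude estimate of this energy reservoir (quoted here for orientation, and not tied to any particular simulation) is the Newtonian binding energy of the compact remnant, \begin{equation} E \sim \frac{3}{5} \frac{G M^{2}}{R} \approx 3 \times 10^{53} \mbox{ erg} \end{equation} for $M \approx 1.4\ M_\odot$ and $R \approx 10$ km, a few hundred times the $\sim 10^{51}$ erg kinetic energy of a typical explosion.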
Neutrinos, having the weakest interactions, are the most efficient means of cooling; they diffuse outward on a time scale of seconds, and eventually escape with about 99\% of the released gravitational energy. Because neutrinos dominate the energetics of the system, a detailed understanding of their evolution will be integral to definitive accounts of the supernova process. If we want to understand the origin of the explosion with energy $\sim 10^{51}$ erg, we cannot afford to lose (or gain) more than this amount during the period covered by the simulation. This requires careful accounting of the neutrinos' much larger contribution to the system's energy budget. (For further discussion, and a review of work recognizing the importance of this point, see Ref. \cite{cardall05b}). What sort of computation is needed to follow the neutrinos' evolution? Deep inside the newly-born neutron star, the neutrinos and the fluid are tightly coupled (nearly in equilibrium); but as neutrinos are transported from inside the neutron star, they go from a nearly isotropic diffusive regime to strongly forward-peaked free-streaming. Heating behind the shock occurs precisely in this transition region, and modeling this process accurately requires tracking both the energy and angle dependence of the neutrino distribution functions at every point in space. A full treatment of this six-dimensional neutrino radiation hydrodynamics problem is a major challenge, too costly for contemporary computational resources. While much has been learned over the years through simulation of model systems of reduced dimensionality, there is as yet no robust confirmation of the delayed neutrino-driven scenario described above (see Sec. \ref{sec:history}). Recent detections of a handful of unusually energetic Type Ib/c supernovae (often called `hypernovae') in connection with gamma-ray bursts pose additional challenges to theory and observation. Prominent examples of this supernova/gamma-ray burst connection include SN1998bw / GRB980425,\cite{galama98,iwamoto98} SN2002lt / GRB021211,\cite{dellaValle03} SN2003dh / GRB030329,\cite{hjorth03,stanek03} and SN2003lw / GRB031203;\cite{thomsen04,cobb04,malesani04,galYam04} there are probably many others (see, for example, Ref. \cite{zeh04}). Like many gamma-ray bursts without direct evidence for a supernova connection, GRB030329 has evidence of a jet; GRB980425 and GRB031203 do not, and are also underluminous gamma-ray bursts (though their associated supernovae are still unusually energetic Type Ib/c events).\cite{ghirlanda04,soderberg04} Determining the relative rates of jet-like hypernovae, non--jet-like hypernovae, and `normal' supernovae---and identifying the possible associated variety of mechanisms---are important challenges. In summary, the details of how the stalled shock is revived sufficiently to continue plowing through the outer layers of the progenitor star are unclear. In normal supernovae, it may well be that some combination of neutrino heating of material behind the shock, convection, and instability of the spherical accretion shock leads to the explosion (see Sec. \ref{sec:history}). It is tempting to think that rotation (for example, Refs. \cite{fryer00,thompson04}) and magnetic fields (for example, Ref. \cite{wheeler04}) in more massive progenitors may play a more significant role in the rare jet-like hypernovae, perhaps giving birth to `magnetars,' the class of neutron stars with unusually large magnetic fields.
This temptation appears to be sweetened by observational support.\cite{gaensler05,figer05} (Observations of two nearby supernova remnants may suggest that rotation and magnetic fields also operate in normal supernovae, perhaps subdominantly.\cite{burrows04b}) From the above discussion, several key aspects of physics that a core-collapse simulation must address can be identified; these are discussed in sections that follow, after a discussion of the history of approximate treatments of neutrino radiation transport and an overview of our new code. \section{History of Neutrino Radiation Hydrodynamics} \label{sec:history} While in general terms supernovae have been challenging us for centuries, the challenge of their simulation via computer modeling has `only' been with us for a few decades---a `short term' in comparison with centuries, but still a `long term' in comparison with the time scales of individual academic careers. Here we sketch the last two decades' progress on one critical aspect of core-collapse supernova simulations: the high dimensionality (three space and three momentum space dimensions---not to mention time dependence) of neutrino radiation hydrodynamics (see Table 1). The development of this aspect of the simulations is intertwined with important advances in the field, but of course does not represent every insight relevant to the explosion mechanism obtained via simulation or otherwise. \begin{sidewaystable} \tbl{Selected neutrino radiation hydrodynamics milestones in stellar collapse simulations studying the long-term fate of the shock. \label{history}} {\begin{tabular}{@{}lcccccc} \hline Group & Year & Explosion & Total & Fluid space & $\nu$ space & $\nu$ momentum \\ & & & dimensions & dimensions & dimensions & space dimensions \\ \hline Lawrence Livermore\cite{bowers82,wilson85,bethe85} & 1982 & Yes$^*$ & 2 & 1 (PN) & 1 & 1 (${ O}(v/c)$) \\ \hline Lawrence Livermore\cite{mayle85,wilson88}& 1985 & Yes$^*$ & ``2.25'' & ``1.5'' NS (PN) & 1 & 1 (${ O}(v/c)$) \\ \hline Florida Atlantic\cite{bruenn85,bruenn87,bruenn91,bruenn93} & 1987 & No & 2 & 1 (GR) & 1 & 1 (${ O}(v/c)$) \\ \hline Lawrence Livermore\cite{mayle90,mayle91,wilson93} & 1989 & Yes$^*$ & ``2.25'' & ``1.5'' NS+HR (GR) & 1 & 1 (GR) \\ \hline Lawrence Livermore\cite{miller93} & 1992 & Yes$^*$ & 2 & 2 HR (N) & 2 & 0 (N) \\ \hline Los Alamos\cite{herant94} & 1993 & Yes$^*$ & ``1.75'' & 2 (N) & ``1.5'' thick/thin & 0 (PN)\\ Arizona\cite{burrows95} & 1994 & Yes$^*$ & ``1.75'' & 2 (N) & ``1.5'' ray-by-ray & 0 (N) \\ \hline Florida Atlantic\cite{bruenn94,bruenn95}& 1994 & No & ``2.25'' & ``1.5'' NS (GR) & 1 & 1 (${ O}(v/c)$) \\ \hline Oak Ridge\cite{mezzacappa98a,mezzacappa98b} & 1996 & No & ``2.5'' & 2 (N) & 1 & 1 (${ O}(v/c)$) \\ \hline Max Planck\cite{rampp00,rampp02,kitaura03,janka04b} & 2000 & No, Yes$^*$ (ONeMg)& 3 & 1 (N) & 1 & 2 (${ O}(v/c)$) \\ Oak Ridge\cite{mezzacappa93b,mezzacappa93c,mezzacappa99,liebendoerfer00,mezzacappa01} & 2000 & No & 3 & 1 (N) & 1 & 2 (${ O}(v/c)$) \\ Arizona\cite{burrows00,thompson03} & 2002 & No & 3 & 1 (N) & 1 & 2 (${ O}(v/c)$) \\ \hline Oak Ridge\cite{mezzacappa93b,mezzacappa93c,mezzacappa99,liebendoerfer01a,liebendoerfer02,liebendoerfer04} & 2000 & No & 3 & 1 (GR)& 1 & 2 (GR)\\ \hline Los Alamos\cite{herant94,fryer02} & 2002 & Yes$^*$ & ``2.5'' & 3 (N) & ``2'' thick/thin & 0 (PN) \\ \hline Max Planck\cite{rampp02,buras03,janka02,janka04a,janka04b} & 2003 & No, Yes$^*$ (180$^{\mathrm{o}}$)& ``3.75'' & 2 (PN)& ``1.5'' ray-by-ray & 2 (${ O}(v/c)$, PN) \\ \hline
\end{tabular}\\[2pt]} \begin{tabnote} The ``Yes'' entries in the ``Explosion'' column are all marked with an asterisk as a reminder that questions about the simulations---described in the main text---have prevented a consensus about the explosion mechanism. ``Total dimensions'' is the average of ``Fluid space dimensions'' and ``$\nu$ space dimensions,'' added to ``$\nu$ momentum dimensions.'' The abbreviation ``N'' stands for `Newtonian,' while ``PN''---for `Post-Newtonian'---stands for some attempt at inclusion of general relativistic effects, and ``GR'' denotes full relativity. A space dimensionality in quotes---like ``1.5''---denotes an attempt at modeling higher dimensional effects within the context of a lower dimensional simulation. For the fluid, this is a mixing-length prescription in the neutron star (``NS'') or the heating region (``HR'') behind the stalled shock. For neutrino transport, it indicates one of two approaches: multidimensional diffusion in regions with strong radiation/fluid coupling, matched with a spherically symmetric `light bulb' approximation in weakly coupled regions (``thick/thin''); or the (mostly) independent application of a spherically symmetric formalism/algorithm to separate spatial angle bins (``ray-by-ray''). \end{tabnote} \end{sidewaystable} We pick up the story in 1982, when simulations showing the stalled shock reenergized by neutrino heating on a time scale of hundreds of milliseconds were first performed.\cite{wilson85} This was initially achieved in a simulation with a total of 2 dimensions (spherical symmetry, and energy-dependent neutrino transport). But these simulations required significant rezoning, possibly attended by nontrivial numerical error;\cite{wilson85} and further, with the introduction of full general relativity and a correction in an outer boundary condition,\cite{mayle90} it became clear that these models would not explode without a mock-up of a doubly-diffusive fluid instability in the newly-born neutron star that serves to boost neutrino luminosities\cite{mayle90,mayle91,wilson93,miller93}---a simulation of effective total dimensionality ``2.5'' (see Table 1). That the necessary conditions exist for this particular instability to operate has been disputed;\cite{bruenn95,bruenn96,bruenn04} and though related phenomena may operate,\cite{bruenn04} more recent simulations with energy-dependent neutrino transport and true two-dimensional fluid dynamics indicate that fluid motions are either suppressed by neutrino transport\cite{mezzacappa98a} or have little effect on neutrino luminosities and supernova dynamics.\cite{buras03,janka04a} Recognizing that the profiles obtained in spherically symmetric simulations implied convective instabilities, and that observations of supernova 1987A also pointed to asphericities, several groups explored fluid motions in two spatial dimensions in the supernova environment in the 1990s. In two spatial dimensions, the computational limitations of that era required approximations that simplified the neutrino transport. 
One class of simplifications allowed for neutrino transport in ``1.5'' or 2 spatial dimensions, but with neutrino energy and angle dependence integrated out, reducing a five-dimensional problem to ``1.75'' or 2 effective total dimensions (see Table 1).\cite{miller93,herant94,burrows95} These simulations exhibited explosions, and elucidated an undeniably important physical effect: a negative entropy gradient behind the stalled shock results in convection that increases the efficiency of heating by neutrinos. However, in the scheme of Table 1, the inability to track the neutrino energy dependence in these simulations could be viewed as a minor step backwards in effective total dimensionality. The energy dependence of neutrino interactions has the important effect of enhancing core deleptonization, which makes explosions more difficult;\cite{bruenn85,bruenn89a,bruenn89b} this raised the question of whether the exploding models of the early- and mid-1990s were too optimistic. This concern about the lack of neutrino energy dependence received some support from a simulation in the late 1990s involving a different simplification of neutrino transport: the imposition of energy-dependent neutrino distributions from spherically symmetric simulations onto fluid dynamics in two spatial dimensions.\cite{mezzacappa98b} Unlike the simulations discussed above, these did not explode, casting doubt upon claims that convection-aided neutrino heating constituted a robust explosion mechanism. The nagging qualitative difference between spatially multidimensional simulations with different neutrino transport approximations motivated interest in the possible importance of even more complete neutrino transport: Might the retention of both the energy {\em and} angle dependence of the neutrino distributions improve the chances of explosion, as preliminary ``snapshot'' studies suggested?\cite{messer98,burrows00} Of necessity, the first such simulations were performed in spherical symmetry, which nevertheless represented an advance to a total dimensionality of 3 (see Table 1). Results from three different groups are in accord: Spherically symmetric models of iron core collapse do not explode, even with solid neutrino transport\cite{rampp00,mezzacappa01,thompson03} and general relativity.\cite{liebendoerfer01a,liebendoerfer04} Recently, however, it has been shown that the more modest oxygen/neon/magnesium cores of the lightest stars to undergo core collapse (8-10 M$_\odot$) may explode in spherical symmetry.\cite{kitaura03,janka04b} The current state of the art in neutrino transport in supernova simulations determining the long-term fate of the shock has been achieved by a group centered at the Max Planck Institute for Astrophysics in Garching, who deployed their spherically symmetric energy- and angle-dependent neutrino transport capability\cite{rampp02} along separate radial rays, with partial coupling between rays.\cite{janka02} Initial results---from axisymmetric simulations with a restricted angular domain---were negative with regard to explosions (in spite of the salutary effects of convection, and also rotation),\cite{buras03} apparently supporting the results of Ref. \cite{mezzacappa98b}.
An explosion was seen in one simulation\cite{janka03} in which certain terms in the neutrino transport equation corresponding to Doppler shifts and angular aberration due to fluid motion were dropped; this simulation also yielded a neutron star mass and nucleosynthetic consequences in better agreement with observations than the ``successful'' explosion simulations of the 1990s,\cite{herant94,burrows95} arguably because of more accurate neutrino transport in the case of both observables. The continuing lesson is that getting the details of the neutrino transport right makes a difference. In addition to accurate neutrino transport, low-mode ($\ell = 1,2$) instabilities that can develop only in simulations allowing the full range of polar angles may make a subtle but decisive difference, as in an explosion recently reported by the Garching group.\cite{janka04a,janka04b} This achievement was presaged by earlier studies of the supernova context, which featured a demonstration of the tendency for convective cells to merge to the lowest order allowed by the spatial domain\cite{herant92} and a newly-recognized spherical accretion shock instability\cite{blondin03} (discovered independently in a different context in Ref. \cite{foglizzo02}). These global asymmetries may be sufficient to account for observed asphericities that have often been attributed to rotation and/or magnetic fields. Surely every `Yes' entry in the explosion column of Table 1 has been hailed in its time as `the answer' (at least by some!), and as a community we cannot help hoping once again that these recent developments mark the turning of a corner; but important work remains to verify whether this is the case. Several groups are committed to further efforts. For example, the Terascale Supernova Initiative (TSI, which includes authors of Ref. \cite{blondin03}) comprises efforts aimed at `ray-by-ray' simulations\cite{hix01} like those of the Garching group, as well as full spatially multidimensional neutrino transport, both with energy dependence only\cite{myra04} and with energy {\em and} angle dependence (Sec. \ref{sec:transport}, and Ref. \cite{cardall04}). Delineation of the possible roles of rotation and magnetic fields is also being pursued by TSI. At least one other group is pursuing full spatially multidimensional neutrino transport.\cite{livne04,walder04} \section{{\em GenASiS:} Philosophy and Basic Features} \label{sec:genasis} As discussed in Sec. \ref{sec:challenges}, a core-collapse supernova involves a six-dimensional radiation hydrodynamics problem, making it a major computational challenge. Even three-dimensional pure hydrodynamics problems (with only space dimensions, no momentum space) have only become relatively common in the last few years, with manageable workflows on today's terascale machines ($\sim 10^{12}$ bytes of memory and flop/s). To begin to get a feel for the requirements of {\em radiation} hydrodynamics, consider just a five-dimensional problem, in which axisymmetry in the space dimensions is assumed. For example, supposing the numbers of spatial zones in spherical coordinates $(r,\theta)$ to be $(256,128)$, and the numbers of momentum bins in energy and angle variables $(\epsilon,\vartheta,\varphi)$ to be $(64,32,16)$, of order $10^{10}$ bytes are required just to store one copy of one neutrino distribution function.
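
To make this estimate concrete, here is a minimal sketch (in Fortran 90, anticipating the implementation language discussed below; the program and variable names are illustrative, not taken from any actual code) that multiplies out the zone and bin counts just quoted, assuming 8-byte reals:
\begin{verbatim}
program distribution_memory
  ! Storage for one copy of one neutrino distribution function
  ! f(r, theta; epsilon, vartheta, varphi), at 8 bytes per entry.
  implicit none
  integer, parameter :: nR = 256, nTheta = 128        ! spatial zones
  integer, parameter :: nE = 64, nVt = 32, nVp = 16   ! momentum bins
  real(8) :: bytes
  bytes = 8d0 * nR * nTheta * nE * nVt * nVp
  print *, 'bytes per copy:', bytes   ! ~8.6e9, i.e. of order 10^10
end program distribution_memory
\end{verbatim}
A simulation needs several such arrays in practice, and a third spatial dimension multiplies the total by the number of zones in the new dimension, which is the origin of the petascale estimate below.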
While this gives rise to a taxing (but not necessarily insurmountable) workflow on terascale machines, it is apparent that the addition of the third spatial dimension---necessary for full exploration of the interacting effects of convection, rotation, and magnetic fields---will require petascale systems. But petascale systems will eventually be available (five to seven years is the current expectation); and given the long development time scales of sophisticated software, we believe it wise to develop our code with the full six-dimensional capability, even if it is only deployed in five dimensions in the near term. The computational demands of radiation hydrodynamics can be ameliorated by ``adaptive mesh refinement'' (AMR). The basic idea of AMR is to employ high resolution only where needed in order to conserve memory and computational effort. Our current expectation is to allow for refinement only in the space dimensions. This will help with the management of two difficulties: the large dynamic range in length scales associated with the density increase of six orders of magnitude that occurs during core collapse, and adequate resolution of particular features of the flow (the shock, for instance). With Eulerian codes in multiple space dimensions, these tasks require high resolution; and particularly for radiation hydrodynamics, the savings achievable by reducing the number of zones is considerable, since an entire three-dimensional momentum space is carried by each spatial zone. Of the two basic types of AMR on structured grids---the block-structured and zone-by-zone varieties---we have chosen the zone-by-zone approach for use with neutrino radiation hydrodynamics. Block-structured AMR\cite{berger89} involves the deployment of subgrids of a certain reasonable minimum size (e.g. eight or sixteen zones per side) at various levels of refinement. A basic solver routine is applied independently to each subgrid. Extra spatial zones (referred to as `guard zones') are required on the edges of each subgrid, which carry information from neighboring subgrids; these become the boundary conditions applied by the solver routine. The strategy is designed for explicit solution algorithms, in which the functions describing time evolution need only be evaluated at the previous time step. However, the rapid time scales of neutrino interactions with the fluid require an {\em implicit} solution algorithm, in which the functions describing the evolution of the neutrino radiation field are evaluated at the {\em current} (that is, new) time step. Because of this mismatch with the intended purposes of block-structured AMR (implicit vs. explicit evolution), and the fact that popular block-structured AMR community packages did not seem readily amenable to handling momentum space variables in a natural way, we decided upon another flavor of AMR: the zone-by-zone refinement approach.\cite{khokhlov98} In this method, individual zones are refined (typically by bisection) and coarsened as needed. This provides more flexibility than the block-structured approach; the fine-grained control allows for maximum savings in the number of spatial zones deployed. A drawback for many users is that a single-grid explicit solver cannot be used ``as is.'' Instead, new solution algorithms must be developed that address the entire hierarchical data structure (Ref. \cite{khokhlov98} and our Sec. \ref{sec:hydro} are examples for explicit hydrodynamics); but we are required to develop such `global solvers' for gravity (Sec.
\ref{sec:gravity}) and implicit neutrino transport (Sec. \ref{sec:transport}) anyway. And as a bonus, the need to carry memory-wasting `guard zones' is obviated. In implementing the zone-by-zone refinement approach to the representation of spacetime, we have tried to follow object-oriented design principles to the extent allowed by Fortran 90/95. Figure \ref{zone} outlines the basic data structures we use to model the ideal of a continuous spacelike slice with a discretized approximation. (The hierarchy of structures, and our operations on them with well-controlled interfaces, are instances of the object-oriented principles of {\em inheritance} and {\em encapsulation}.) A region of a spacelike slice is represented by an object of {\tt zoneArrayType}. Each such object contains an array of objects of {\tt zoneType}, along with information about the coordinates of the zones and pointers to neighboring zone arrays. Each zone, an object of {\tt zoneType}, contains various forms of stress-energy, each of which is a separate object. Figure \ref{zone} shows a perfect fluid and a radiation field; in the code we have an electromagnetic field as well. Each zone has a pointer to another object of {\tt zoneArrayType}, whose allocation constitutes refinement of that zone; this structure can be extended to arbitrary depth. A simple two-dimensional pure hydrodynamics test problem computed with our adaptive mesh code is shown in Fig. \ref{initialFinal2D}. \begin{figure} \includegraphics[width=3.5in,angle=270]{zone.eps} \caption{Data structures used in an adaptive mesh for radiation hydrodynamics.} \label{zone} \end{figure} \begin{figure} \includegraphics[width=4.7in]{initialFinal2D.eps} \caption{Density in a two-dimensional generalization of the shock tube. Red and blue indicate high and low density respectively. Left: Initial state. Right: Evolved state.} \label{initialFinal2D} \end{figure} The word ``General'' that goes into the name of our code, {\em GenASiS}---for {\em Gen}eral {\em A}strophysical {\em Si}mulation {\em S}ystem---may give an initial impression of a messianic quest to create an impossibly all-purpose code for solving all conceivable problems in astrophysics and cosmology; but the code's `generality' is, of course, considerably more modest: It refers to the use of Fortran 90's facility for function overloading. (This is an instance of the object-oriented principle of {\em polymorphism}.) This allows a generic function name to have several different implementations, providing for extensibility of the physics: Different equations of state, hydrodynamic flux methods, coordinate systems, gravity theories, and so forth can be employed by adding new implementations of generic function names, without having to go back and change basic parts of the code to implement new physics. \section{Magnetohydrodynamics} \label{sec:hydro} We employ a conservative formulation of the equations of magnetohydrodynamics. The Newtonian case will be described here. Conservation of baryons is described by the equation \begin{equation} {\partial n\over\partial t} + \mbox{\boldmath$\nabla$}\cdot(n\mbox{\boldmath$v$})=0,\label{baryon} \end{equation} where $n$ is the baryon number density, and $\mbox{\boldmath$v$}$ is the fluid velocity.
The equation \begin{equation} {\partial\over\partial t}(mn\,\mbox{\boldmath$v$}) + \mbox{\boldmath$\nabla$}\cdot\left[mn\,\mbox{\boldmath$v$}\mbox{\boldmath$v$}+\left(p+{B^2\over2}\right)\,\mbox{\boldmath$1$}-\mbox{\boldmath$B$}\mbox{\boldmath$B$}\right]= -mn\,\mbox{\boldmath$\nabla$}\Phi \end{equation} describes conservation of momentum. Here $m$ is the average baryon mass, $p$ is the pressure, $\mbox{\boldmath$B$}$ is the magnetic field, $\Phi$ is the gravitational potential (discussed further in Sec. \ref{sec:gravity}), and $\mbox{\boldmath$1$}$ is the unit tensor. Units of the magnetic field are chosen such that the vacuum magnetic permeability is unity. One way to express conservation of energy is \begin{eqnarray} {\partial\over\partial t}\left[e+{mn\over2}\left(v^2+\Phi\right)+{B^2\over2}\right] +&&\nonumber\\ \mbox{\boldmath$\nabla$}\cdot\left[(e+p+B^2)\mbox{\boldmath$v$}+{mn\over2}\left(v^2+\Phi\right)\mbox{\boldmath$v$}-\mbox{\boldmath$B$}(\mbox{\boldmath$v$}\cdot\mbox{\boldmath$B$})\right]&=&-{m n\over 2}\left(\mbox{\boldmath$\nabla$}\cdot\mbox{\boldmath$\Psi$}+\mbox{\boldmath$v$}\cdot\mbox{\boldmath$\nabla$}\Phi\right)-\nonumber\\ & &n\left({\partial m\over\partial t}-\mbox{\boldmath$v$}\cdot\mbox{\boldmath$\nabla$} m\right),\label{energy} \end{eqnarray} where $e$ is the internal energy density, and $\mbox{\boldmath$\Psi$}$ is a kind of ``gravitational vector potential'' (also discussed in Sec. \ref{sec:gravity}). We have opted to employ the baryon number density, instead of the mass density, in explicit deference to the fact that mass is not conserved in the presence of nuclear reactions. The energy input from nuclear reactions then appears naturally in the derivation of Eq. (\ref{energy}), in the last two terms of the right-hand side. We hope to add nuclear reaction networks to our code in the future. This formulation is called ``conservative'' because volume integrals of the divergences in Eqs. (\ref{baryon})-(\ref{energy}) are related to surface integrals through the divergence theorem: \begin{equation} \int_V dV\,(\mbox{\boldmath$\nabla$}\cdot \mbox{\boldmath$F$}) = \oint_{\partial V} \mbox{\boldmath$F$}\cdot d\mbox{\boldmath$A$}. \end{equation} The physical meaning of a conservative equation is that (modulo source terms) the time rate of change of a conserved quantity in a volume is equal to a flux $\mbox{\boldmath$F$}$ through the volume's enclosing surface. This meaning is built into the finite-difference representation of Eqs. (\ref{baryon})-(\ref{energy}); divergences are represented in discrete correspondence to their mathematical definition, using zone volumes $V$ and face areas $A$: \begin{equation} \mbox{\boldmath$\nabla$}\cdot \mbox{\boldmath$F$}\rightarrow{1\over V_{\leftrightarrow}}\sum_q\left[\left(A_q\,F^q\right)_{{q\rightarrow}} - \left(A_q\,F^q\right)_{{\leftarrow q}}\right].\label{discreteDivergence} \end{equation} Here $q$ runs over the three space dimensions. A double-headed arrow ($\leftrightarrow$) indicates evaluation at a zone center. Left-arrows (${\leftarrow q}$) and right-arrows (${q\rightarrow}$) denote evaluation at zone inner and outer faces respectively, in the $q$ direction; dimensions other than $q$ are evaluated at the zone centers. In this way the divergence theorem is replicated in every zone, ensuring global conservation to machine precision. Use of generalized zone volumes and areas in Eq. (\ref{discreteDivergence}) enables the use of curvilinear coordinates (``fictitious forces'' arising from curvilinear coordinates must also be included in the momentum equation).
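
As an illustration of Eq. (\ref{discreteDivergence}), here is a minimal one-dimensional sketch (with illustrative names, not a routine from our code) of the conservative divergence evaluated from generalized face areas, face fluxes, and zone volumes:
\begin{verbatim}
! Discrete divergence of a flux F in one dimension, after Eq.
! (discreteDivergence): divF(i) = [A(i+1)F(i+1) - A(i)F(i)] / V(i).
! Multiplying by V(i) and summing over i telescopes to the boundary
! fluxes (the discrete divergence theorem), hence conservation.
subroutine discrete_divergence(nZones, faceArea, faceFlux, zoneVolume, divF)
  implicit none
  integer, intent(in)  :: nZones
  real(8), intent(in)  :: faceArea(nZones+1)   ! generalized face areas
  real(8), intent(in)  :: faceFlux(nZones+1)   ! fluxes on zone faces
  real(8), intent(in)  :: zoneVolume(nZones)   ! generalized zone volumes
  real(8), intent(out) :: divF(nZones)
  integer :: i
  do i = 1, nZones
    divF(i) = (faceArea(i+1)*faceFlux(i+1) - faceArea(i)*faceFlux(i)) &
              / zoneVolume(i)
  end do
end subroutine discrete_divergence
\end{verbatim}
In curvilinear coordinates the same routine applies unchanged; only the tabulated areas and volumes differ (with the fictitious forces noted above handled separately in the momentum equation).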
The evolution of the magnetic field is described by Faraday's law: \begin{equation} {\partial \mbox{\boldmath$B$}\over\partial t} = -\mbox{\boldmath$\nabla$}\times\mbox{\boldmath$E$}, \label{faraday} \end{equation} supplemented by the constraint \begin{equation} \mbox{\boldmath$\nabla$}\cdot\mbox{\boldmath$B$} = 0. \label{divB} \end{equation} In Eq. (\ref{faraday}), we take the electric field to be $\mbox{\boldmath$E$} = -\mbox{\boldmath$v$}\times\mbox{\boldmath$B$}$, in accordance with the usual astrophysical assumption of a perfectly conducting medium. While Eq. (\ref{faraday}) for the evolution of the magnetic field does not require conservation of the magnetic field, Eq. (\ref{divB}) requires the magnetic field to be divergence-free at all times. In the presence of discontinuous flow, numerical solutions to Eqs. (\ref{baryon})-(\ref{faraday}) can produce severe unphysical artifacts if this requirement is not met,\cite{brackbill80} but it can be automatically enforced by the method of constrained transport.\cite{evans88} Integrating Eq. (\ref{faraday}) over a zone's enclosing surface, the left-hand side becomes the volume integral of $\mbox{\boldmath$\nabla$}\cdot\mbox{\boldmath$B$}$, via the divergence theorem. The right-hand side is a sum over area integrals over each zone face. With Stokes' theorem, \begin{equation} \int_A (\mbox{\boldmath$\nabla$}\times \mbox{\boldmath$E$})\cdot d\mbox{\boldmath$A$} = \oint_{\partial A} \mbox{\boldmath$E$}\cdot d\mbox{\boldmath$l$}, \label{stokes} \end{equation} each of these surface integrals becomes a line integral around the zone face boundary. Summed over all faces, two line integrals in opposite directions cancel on every zone edge, enforcing $\mbox{\boldmath$\nabla$}\cdot\mbox{\boldmath$B$} = 0$ in the zone as desired. The method of constrained transport, then, is to evaluate $\mbox{\boldmath$\nabla$}\times\mbox{\boldmath$E$}$ on zone edges in discrete correspondence to the mathematical definition of the curl, using zone face areas $A$ and edge lengths $L$: \begin{equation} (\mbox{\boldmath$\nabla$}\times\mbox{\boldmath$E$})_q \rightarrow{1\over A_q}\sum_{r\ne q}\left[\left(L_s\,E^s\right)_{{r\rightarrow}} - \left(L_s\,E^s\right)_{{\leftarrow r}}\right].\label{discreteCurl} \end{equation} Here $r$ runs over the two space dimensions orthogonal to a particular direction $q$, and $s$ indicates the direction perpendicular to both $q$ and $r$. Left-arrows (${\leftarrow r}$) and right-arrows (${r\rightarrow}$) denote evaluation along face inner and outer edges respectively, in the $r$ direction. This ensures divergence-free evolution of the discrete representation of the area-averaged magnetic field, with components located on the appropriate zone faces, for all times to machine precision, provided the initial magnetic field satisfies Eq. (\ref{divB}). Accurate computation of fluxes at zone faces and electric fields at zone edges is a key feature. So-called ``central schemes'' have been noted recently by astrophysicists for their ability to capture shocks with an accuracy comparable to Riemann solvers, but with much greater simplicity.\cite{lucas04} In particular, we employ so-called ``HLL'' versions of these schemes for both fluid conservation laws\cite{delZanna02} and the magnetic induction equation.\cite{delZanna03} We achieve second order in space by linear interpolation within zones (as usual, a slope limiter---deployed where necessary in order to maintain discontinuities---reduces the treatment to first order).
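
To make the constrained-transport bookkeeping concrete, here is a minimal sketch for a uniform two-dimensional Cartesian grid (the names are illustrative, not from our code, which uses the generalized face areas and edge lengths of Eq. (\ref{discreteCurl}) together with the HLL electric fields just mentioned):
\begin{verbatim}
! Constrained-transport update of face-centered B from edge-centered
! E_z on a uniform 2D Cartesian grid.  bx(i,j) lives on the x-face
! between zones (i-1,j) and (i,j); by(i,j) on the corresponding
! y-face; ez(i,j) on the zone corner.
subroutine ct_update(nx, ny, dx, dy, dt, ez, bx, by)
  implicit none
  integer, intent(in)    :: nx, ny
  real(8), intent(in)    :: dx, dy, dt
  real(8), intent(in)    :: ez(nx+1, ny+1)
  real(8), intent(inout) :: bx(nx+1, ny), by(nx, ny+1)
  integer :: i, j
  do j = 1, ny
    do i = 1, nx + 1
      bx(i,j) = bx(i,j) - dt*(ez(i,j+1) - ez(i,j))/dy  ! dB_x/dt = -dE_z/dy
    end do
  end do
  do j = 1, ny + 1
    do i = 1, nx
      by(i,j) = by(i,j) + dt*(ez(i+1,j) - ez(i,j))/dx  ! dB_y/dt = +dE_z/dx
    end do
  end do
end subroutine ct_update
\end{verbatim}
A short calculation confirms the divergence-free claim: the update changes the discrete divergence $[b_x(i{+}1,j)-b_x(i,j)]/\Delta x + [b_y(i,j{+}1)-b_y(i,j)]/\Delta y$ of every zone by corner contributions that cancel in pairs, so an initially divergence-free field stays divergence-free to machine precision.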
While we have taken Khokhlov's zone-by-zone refinement approach\cite{khokhlov98} as our basic paradigm, we have made a novel extension to evolve the magnetic field, and use a different time-stepping scheme. As in Khokhlov's work, fluxes are computed at each zone interface only once, with the results used to update zones on both sides of the interface. At coarse/fine interfaces, fluxes are computed only on the faces of the refined zones. We have developed a similar approach for the induction equation: the electromotive forces on the zone edges are computed only once, and are used to update all zones sharing that edge. We use a second-order Runge-Kutta time stepping algorithm, made possible by the semi-discrete formulation of the central scheme. In doing so we evolve all levels of the mesh synchronously, unlike Khokhlov's approach of evolving refined levels with greater frequency. In addition to making it possible to take advantage of the semi-discrete formulation for time evolution, synchronous evolution avoids problems we encountered in self-gravitating systems with `asynchronous' evolution \`a la Khokhlov. Parallelization is achieved by giving each processor its share of spatial zones. Partitioning is accomplished by walking through all levels of the mesh in a recursive manner similar to a Morton space-filling curve; the result is a mapping of the multidimensional mesh to a one-dimensional ``string'' of zones, which, when cut into pieces of uniform length, leaves each processor with roughly the same number of zones at each level of refinement. An equation of state determines $p$, the one quantity in Eqs. (\ref{baryon})-(\ref{energy}) not yet specified. So far, the code has two overloaded options. One is the familiar polytropic equation of state: \begin{eqnarray} p&=&\kappa\, n^\Gamma, \\ e&=&(\Gamma-1)^{-1} p, \end{eqnarray} where $\Gamma$ is a specified parameter, and $\kappa$ is updated in response to changes in $e$ determined from Eq. (\ref{energy}). We have also implemented a ``realistic'' equation of state\cite{lattimer91} suitable for problems involving nuclear matter. This equation of state takes as input the temperature, baryon number density, and electron fraction $Y_e$, defined by \begin{equation} Y_e = {n_{e^-} - n_{e^+}\over n}, \end{equation} where $n_{e^-}$ and $n_{e^+}$ are the number densities of electrons and positrons respectively. When using this ``realistic'' equation of state, an advection equation for $Y_e$ must be added to the above list of conservation laws. We have been working with a number of hydrodynamic and magnetohydrodynamic test problems, one of which is shown here. The rotor problem---which consists of a rapidly rotating dense fluid, initially cylindrical, threaded by an initially uniform magnetic field---was devised to test the onset and propagation of strong torsional Alfv\'en waves into the ambient fluid.\cite{balsara99} We have computed a version of the rotor problem with initial data identical to a so-called `second rotor problem,'\cite{toth00} and display the results in Fig. \ref{rotorContours}. \begin{figure} \begin{center} $\begin{array}{@{\hspace{-0.25in}}c@{\hspace{-0.75in}}c} \includegraphics[width=3in]{fastRotorDensity40Lines} & \includegraphics[width=3in]{fastRotorPressure40Lines} \end{array}$ \end{center} \caption{Density (left panel) and thermal pressure (right panel) at $t=0.295$ for the second rotor problem given in Ref. \protect\cite{toth00}.
40 contours were used to produce the plots, with $0.512 \le mn \le 9.622$ and $0.010 \le p \le 0.776$. A $200\times 200$ grid was used to produce the results.} \label{rotorContours} \end{figure} \section{Newtonian Gravity} \label{sec:gravity} The Poisson equation for the Newtonian gravitational potential $\Phi$ is \begin{equation} \nabla^2\Phi = 4\pi G\, m n,\label{poisson} \end{equation} where $G$ is the gravitational constant. Our finite-differenced approach to Eq. (\ref{poisson}) is based on the fact that the Laplacian is a divergence of a gradient: \begin{equation} \mbox{\boldmath$\nabla$}\cdot\mbox{\boldmath$\nabla$}\Phi = 4\pi G\, m n.\label{poisson2} \end{equation} We discretize this equation in a manner similar to Eq. (\ref{discreteDivergence}). There results a linear system for the values of $\Phi$ at the center of every leaf zone. In the matrix representation of this linear system, each row corresponds to the discrete version of Eq. (\ref{poisson2}) centered on a given ``leaf zone'' (a zone without refined children). On a single-level grid, the resulting matrix would have three, five, and seven bands in one, two, and three dimensions respectively. With adaptive mesh refinement, the matrix structure becomes more diffuse, because several refined zones may contribute to the discrete representation of $\mbox{\boldmath$\nabla$}\Phi$ at a zone face featuring a coarse/fine interface. Each processor fills in the portion of the matrix corresponding to its share of zones, and we rely on the PETSc library ({\tt http://www-unix.mcs.anl.gov/petsc/petsc-2/}) to perform the distributed sparse matrix inversion. We now discuss the contribution of gravity to the fluid energy evolution in Eq. (\ref{energy}). Without the gravitational potential energy included in the ``total energy'' in the time derivative, Eq. (\ref{energy}) appears in the more familiar form \begin{eqnarray} {\partial\over\partial t}\left[e+{1\over2}mn\,v^2\right] +\mbox{\boldmath$\nabla$}\cdot\left[(e+p+{1\over2}mn\,v^2)\mbox{\boldmath$v$}\right]&=&-m n \,\mbox{\boldmath$v$}\cdot\mbox{\boldmath$\nabla$}\Phi-\nonumber\\ & &n\left({\partial m\over\partial t}-\mbox{\boldmath$v$}\cdot\mbox{\boldmath$\nabla$} m\right).\label{energy2} \end{eqnarray} Including the gravitational potential energy density $(m n/2)\Phi$ in the time derivative, and making use of baryon conservation, we have \begin{eqnarray} {\partial\over\partial t}\left[e+{mn\over2}\left(v^2+\Phi\right)\right] +&&\nonumber\\ \mbox{\boldmath$\nabla$}\cdot\left[(e+p)\mbox{\boldmath$v$}+{mn\over2}\left(v^2+\Phi\right)\mbox{\boldmath$v$}\right]&=&{m n\over 2}\left({\partial\Phi\over\partial t}-\mbox{\boldmath$v$}\cdot\mbox{\boldmath$\nabla$}\Phi\right)-\nonumber\\ & &n\left({\partial m\over\partial t}-\mbox{\boldmath$v$}\cdot\mbox{\boldmath$\nabla$} m\right).\label{energy3} \end{eqnarray} Using the formal solution for $\Phi$, \begin{equation} \Phi(\mbox{\boldmath$x$},t)= - G\int {m(\mbox{\boldmath$x$}',t) n(\mbox{\boldmath$x$}',t)\,d^3 x'\over |\mbox{\boldmath$x$}-\mbox{\boldmath$x$}'|}, \end{equation} together with baryon conservation (and neglecting time derivatives of $m$), one can show that \begin{equation} {\partial\Phi\over\partial t} = -\mbox{\boldmath$\nabla$}\cdot\mbox{\boldmath$\Psi$}, \end{equation} where $\mbox{\boldmath$\Psi$}$ satisfies a vector Poisson equation: \begin{equation} \nabla^2\mbox{\boldmath$\Psi$} = 4\pi G\, m n\, \mbox{\boldmath$v$}.
\end{equation} This may be solved in a manner similar to that used to solve for $\Phi$ (though the vector Poisson equation contains some additional terms in curvilinear coordinates). Conservation of total energy, including gravitational, is not local: the gravitational source terms (the first two terms on the right-hand side of Eq. (\ref{energy})) vanish only upon integration over all space. \section{Neutrino Radiation Transport} \label{sec:transport} Here we briefly describe our approach to the greatest computational challenge in supernova simulations: neutrino radiation transport. Neutrino distributions must be tracked in order to compute the transfer of lepton number and energy between the neutrinos and the fluid. There are three major challenges. One challenge is constructing a discretization that allows both energy and lepton number to be conserved to high precision. The two other challenges are associated with the limits of computational resources: the solution of a very large nonlinear system of equations, and neutrino interaction kernels of high dimensionality. Energy conservation is an obvious measure of quality control, and care with the transport formalism and differencing can help achieve it. The importance of energy conservation is brought into focus by this question: How should we interpret the prediction of a $\sim 10^{51}$ erg explosion in a model where the total energy varies during the course of the simulation by $\sim 10^{51}$ erg or more? To achieve the required precision (say, global energy changes of less than $\sim 10^{50}$ erg), conservative formulations are a useful starting point, and relativistic treatments avoid quantitatively non-negligible conflicts at $O(v^2/c^2)$ between the number and energy transport equations.\cite{liebendoerfer01b,liebendoerfer04,cardall03,cardall05b} Finite-differencing that simultaneously satisfies energy and lepton number conservation has been implemented in spherical symmetry,\cite{liebendoerfer04} and should be pursued in multiple spatial dimensions as well. Our algorithm for solving the large nonlinear system has been described elsewhere.\cite{dazevedo05,cardall04} The large system of equations requiring inversion (as opposed to explicit updates) results from the disparity between hydrodynamic and particle interaction time scales, which motivates implicit time evolution. The nonlinear solve is achieved with the Newton-Raphson method. A fixed-point method employing a preconditioner that splits the space and momentum space couplings is used for the linear solve required within each Newton-Raphson iteration.\cite{dazevedo05} An advantage of this linear solver method is that the dense blocks representing couplings in momentum space---which cannot all be stored at once---need only be constructed a few at a time, used in all steps required in a given fixed-point iteration, and discarded. In contrast, other linear solver algorithms seem to require dense blocks to be discarded and rebuilt multiple times in each iteration. We have successfully tested this solver on a two-dimensional problem in spherical coordinates with a static background and a simple emission/absorption interaction. Because neutrino interactions are expensive to compute on-the-fly, we have implemented interpolation tables. Neutrino interactions depend on the neutrino momentum components and the state of the fluid with which the neutrinos interact.
A grid of neutrino energies and angles is fixed, but the fluid density $n$, temperature $T$, and electron fraction $Y_e$ vary throughout the simulation. Hence we employ tables that may be interpolated in $n$, $T$, and $Y_e$. Particularly for neutrino scattering and pair interactions---which depend on neutrino states before and after the collision---the interaction kernels are of high dimensionality, requiring a globally distributed table. On each processor a local table is constructed, which contains a copy of each $n$, $T$, and $Y_e$ vertex required by the zones for which that processor is responsible. As $n$, $T$, and $Y_e$ in a processor's zones evolve, the relevant vertices are pulled from the global table as needed. For each zone we construct a ``cube'' of pointers to the eight vertices surrounding the zone's values of $n$, $T$, and $Y_e$. This cube is then used for the necessary interpolations. \section{Outlook} We have made a promising start on {\em GenASiS}, a new code being developed to study the explosion mechanism of core-collapse supernovae. Our plan is to include all the relevant physics---including magnetohydrodynamics, gravity, and energy- and angle-dependent neutrino transport---in a code with adaptive mesh refinement in two and three spatial dimensions. Parallelization and implementation on the adaptive mesh are not yet complete, and the physics components have not yet been fully integrated; but steady progress and the successful completion of test problems give us confidence that we are well on our way towards a tool that will provide important insights into the supernova explosion mechanism. \section*{Acknowledgments} We gratefully acknowledge S.~W. Bruenn's contribution of subroutines for the computation of neutrino interaction kernels. E.~J. Lentz collaborates on the development of a comprehensive neutrino radiation transport discretization scheme, still too immature to report here. We thank R.~D. Budiardja and M.~W. Guidry for discussions on the Poisson solver, and R.~D. Budiardja for helping with that solver's interface to the PETSc library. This work was supported by Scientific Discovery Through Advanced Computing (SciDAC), a program of the Office of Science of the U.S. Department of Energy (DoE); and by Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the DoE under contract DE-AC05-00OR22725. \def{\em Astron. Astrophys.}{{\em Astron. Astrophys.}} \def{\em Am. Astron. Soc. Meet. Abs.}{{\em Am. Astron. Soc. Meet. Abs.}} \def{\em Astrophys. J.}{{\em Astrophys. J.}} \def{\em Astrophys. J. Lett.}{{\em Astrophys. J. Lett.}} \def{\em Astrophys. J. Suppl. Ser.}{{\em Astrophys. J. Suppl. Ser.}} \def{\em Annu. Rev. Astron. Astrophys.}{{\em Annu. Rev. Astron. Astrophys.}} \def{\em Bull. Am. Astron. Soc.}{{\em Bull. Am. Astron. Soc.}} \def{\em J. Comput. Appl. Math.}{{\em J. Comput. Appl. Math.}} \def{\em J. Comput. Phys.}{{\em J. Comput. Phys.}} \def{\em Nature}{{\em Nature}} \def{\em Nucl. Phys. A}{{\em Nucl. Phys. A}} \def{\em Phys. Rep.}{{\em Phys. Rep.}} \def{\em Phys. Rev. D}{{\em Phys. Rev. D}} \def{\em Phys. Rev. Lett.}{{\em Phys. Rev. Lett.}} \def{\em Phys. Rev.}{{\em Phys. Rev.}} \def{\em Rev. Mod. Phys.}{{\em Rev. Mod. Phys.}}
\section{Introduction} In 1970 I was in England, where my wife and I stayed for five months with my parents in Essex. It was largely holiday, as we were on our way back to Australia after two years in Boston, where I had been introduced to the six-vertex models and the Bethe ansatz by Elliott Lieb. However, I did visit Cyril Domb's group at King's College, London, and it was there that I first interacted with Tony Guttmann, who was also visiting the department: he was an invaluable aid to navigating the labyrinthine corridors and staircases that linked the department's quarters in Surrey Street with the main part of the College. Tony's natural enthusiasm for statistical mechanics must have been infectious, for it was at this time that I realised that the transfer matrices of the six-vertex model commuted - a vital first step in the subsequent solution of the eight-vertex model. This led to the solution of a number of other two-dimensional lattice models. One that has proved particularly challenging is the chiral Potts model. Here I wish to discuss some of the insights that led to the recent derivation of its order parameters. The chiral Potts model is a two-dimensional classical lattice model in statistical mechanics, where spins live on sites of a lattice and each spin takes $N$ values $0,1, \ldots, N-1$, and adjacent spins interact with Boltzmann weight functions $W, \overline{W}$. We consider only the case when the model is ``solvable'', by which we mean that $W, \overline{W}$ satisfy the star-triangle (``Yang-Baxter'') relations \cite{BPAuY88}. The free energy of the infinite lattice was first obtained in 1988 by using the invariance properties of the free energy and its derivatives.\cite{RJB88} Then in 1990 the functional transfer matrix relations of Bazhanov and Stroganov \cite{BazStrog90} were used to calculate the free energy more explicitly as a double integral.\cite{BBP90, RJB90, RJB91} The model has a critical temperature, below which the system exhibits ferromagnetic order. The next step was to calculate the order parameters ${\cal M}_1, \ldots , {\cal M}_{N-1}$ (defined below). These depend on a constant $k$ which decreases from one to zero as the temperature increases from zero to criticality. In 1989 Albertini {\it et al} \cite{AMPT89} made the elegant conjecture, based on the available series expansions, that \begin{equation} \label{conj} {\cal M}_r \; = \; k^{r(N-r)/N^2} \; \; , \; \; 0 \leq r \leq N \;\; . \end{equation} It might have been expected that a proof of such a simple formula would not have been long in coming, but in fact it proved to be a remarkably difficult problem. Order parameters (spontaneous magnetizations) are notoriously more difficult to calculate than free energies. For the Ising model (to which the chiral Potts model reduces when $N=2$), the free energy was calculated by Onsager in 1944 \cite{Onsager44}, but it was five years later, at a conference in Florence, that he announced his result for the spontaneous magnetization, and not till 1952 that the first published proof was given by Yang\cite{Yang52,Onsager71}. Similarly, the free energy of the eight-vertex model was calculated in 1971.\cite{Baxter71} The spontaneous magnetization and polarization were conjectured in 1973 and 1974, respectively\cite{BarberBax73, BaxKelland74}, but it was not till 1982 that a proof of the first of these conjectures was published\cite{book82}. A proof of the second had to wait until 1993\cite{JMN93}! By then three separate methods had been used.
The Onsager-Yang calculation was based on the particular free-fermion/spinor/pfaffian/Clifford algebra structure of the Ising model\cite{MPW63}. As far as the author is aware, this has never been extended to the other models: it would be very significant if it could be. The eight-vertex and subsequent hard-hexagon calculations were made using the corner transfer matrix method, which had been discovered in 1976\cite{Baxter76}. This worked readily for the magnetization (a single-site correlation), but not for the polarization (a single-edge correlation). This problem was remedied by the ``broken rapidity line'' technique discovered by Jimbo {\it et al} \cite{JMN93}. For all the two-dimensional solvable models, the Boltzmann weight functions $W, \overline{W}$ depend on parameters $p$ and $q$. These parameters are known as {\em rapidities} and are associated with lines (the dotted lines of Figure \ref{sqlattice}) that run through the midpoints of the edges of the lattice. In general these are complex numbers, or sets of related complex numbers. In all of the models we have mentioned, with the notable exception of the $N > 2$ chiral Potts model, these parameters can be chosen so that $W, \overline{W}$ depend only on the {\em rapidity difference} (spectral parameter) $p - q$. This property seems to be an essential element in the corner transfer matrix method: the star-triangle relation ensures that the corner transfer matrices factor, but the difference property is then needed to show that the factors commute with one another and are exponentials in the rapidities. The difference property is {\em not} possessed by the $N > 2$ chiral Potts model and one is unable to proceed. At first the author thought this would prove to be merely a technical complication and embarked on a low-temperature numerical calculation\cite{Baxter93} in the hope this would reveal the kind of simplifications that happen with the other models. This hope was not realised. I then looked at the technique of Jimbo {\it et al} and in 1998 applied it to the chiral Potts model. One could write down functional relations satisfied by the generalized order parameter ratio function $G_{pq}(r)$, and for $N=2$ these were sufficient (together with an assumed but very plausible analyticity property) to solve the problem. However, for $N > 2$ there was still a difficulty. Then $p$, $q$ are points on an algebraic curve of genus $> 1$ and there is no obvious uniformizing substitution. The functional relations themselves do not define $G_{pq}(r)$: one needs some additional analyticity information, and that seems hard to come by. The calculation of the free energy of the chiral Potts model \cite{RJB90, RJB91, RJB03} proceeds in two stages. First one considers a related ``$\tau_2 (t_q)$'' model.\cite{RJB04} This is intimately connected with the superintegrable case of the chiral Potts model.\cite{RJB89} It is much simpler than the chiral Potts model in that its Boltzmann weights depend on the horizontal rapidity $q$ only via a single parameter $t_q$, and are linear in $t_q$. Its row-to-row transfer matrix is the product of two chiral Potts transfer matrices, one with horizontal rapidity $q$, the other with a related rapidity $r = V R q$ defined by eqn. (\ref{autos}) of section 2. For a finite lattice, the partition function $Z$ of the $\tau_2 (t_q)$ model is therefore a polynomial in $t_q$.
The free energy is the logarithm of $Z^{1/M}$, where $M$ is the number of sites of the lattice, evaluated in the thermodynamic limit when the lattice becomes infinitely big. This limiting function of course may have singularities in the complex $t_q$ plane. {\it A priori}, one might expect it to have $N$ branch cuts, each running through one of the $N$ roots of unity. However, one can argue that in fact it only has one such cut. As a result the free energy (i.e. the maximum eigenvalue of the transfer matrix) can be calculated by a Wiener-Hopf factorization. The second stage is to factor this free energy to obtain that of the chiral Potts model. It was not until 2004 that I realised that: (1) If one takes $p$, $q$ to be related by eqn. (\ref{spcase}) below, then $G_{pq}(r)$ can be expressed in terms of partition functions that involve $p, q$ only via the Boltzmann weights of the $\tau_2 (t_{p'})$ model, with $p' = R^{-1} p$. (2) It is {\em not} necessary to obtain $G_{pq}(r)$ for arbitrary $p$ and $q$. To verify the conjecture (\ref{conj}) it is sufficient to obtain it under the restriction (\ref{spcase}). I indicate the working in the following sections: a fuller account is given in Ref. \cite{RJB05b}. The calculation of $G_{pq}(r)$ for general $p$, $q$ remains an unsolved problem: still interesting, but not necessary for the derivation of the order parameters ${\cal M}_r$. \section{Chiral Potts model} We use the notation of \cite{BPAuY88, BBP90, RJB98}. Let $k, k'$ be two real variables in the range $(0,1)$, satisfying \begin{equation} k^2 + {k'}^2 = 1 \;\; . \end{equation} Consider four parameters $x_p, y_p, \mu_p, t_p$ satisfying the relations \begin{equation} \label{xymu} k x_p^N = 1-k'/\mu_p^N \;\; , \;\; k y_p^N = 1-k'\mu_p^N \;\; , \;\; t_p = x_p y_p \;\; . \end{equation} Let $p$ denote the set $\{x_p, y_p, \mu_p, t_p \}$. Similarly, let $q$ denote the set $\{x_q, y_q, \mu_q, t_q \}$. We call $p$ and $q$ ``rapidity'' variables. Each has one free parameter and is a point on an algebraic curve. Define Boltzmann weight functions $W_{pq}(n), \overline{W}_{pq}(n)$ by \addtocounter{equation}{1} \setcounter{storeeqn}{\value{equation}} \setcounter{equation}{0} \renewcommand{\theequation}{\arabic{storeeqn}\alph{equation}} \begin{eqnarray} \label{WWba} W_{pq}(n) & = & (\mu_p/\mu_q)^n \prod_{j=1}^n \frac{y_q - \omega^j x_p} {y_p - \omega^j x_q} \;\; , \\ \label{WWbb} \overline{W}_{pq}(n) & = & (\mu_p \mu_q)^n \prod_{j=1}^n \frac{\omega x_p - \omega^j x_q} {y_q - \omega^j y_p} \;\; , \end{eqnarray} where \begin{displaymath} \omega \; = \; {\rm e}^{2\pi {\rm i}/N} \;\; . \end{displaymath} They satisfy the periodicity conditions \begin{displaymath} W_{pq}(n + N) = W_{pq}(n) \;\; , \;\; \overline{W}_{pq}(n + N) = \overline{W}_{pq}(n) \;\; .
\end{displaymath} \setcounter{equation}{\value{storeeqn}} \renewcommand{\theequation}{\arabic{equation}} \setlength{\unitlength}{1pt} \begin{figure}[hbt] \begin{picture}(420,260) (0,0) \multiput(30,15)(5,0){73}{.} \multiput(30,75)(5,0){32}{\bf .} \multiput(31,75)(5,0){32}{\bf .} \multiput(202,75)(5,0){35}{\bf .} \multiput(203,75)(5,0){35}{\bf .} \multiput(30,135)(5,0){73}{.} \multiput(30,195)(5,0){73}{.} \put (190,72) {\line(0,1) {8}} \put (200,72) {\line(0,1) {8}} \thicklines \put (69,72) {\large $< $} \put (70,72) {\large $< $} \put (71,72) {\large $< $} \put (308,12) {\large $< $} \put (309,12) {\large $< $} \put (310,12) {\large $< $} \put (308,72) {\large $< $} \put (309,72) {\large $< $} \put (310,72) {\large $< $} \put (308,132) {\large $< $} \put (309,132) {\large $< $} \put (310,132) {\large $< $} \put (308,192) {\large $< $} \put (309,192) {\large $< $} \put (310,192) {\large $< $} \put (42,230) {\large $\wedge$} \put (42,229) {\large $\wedge$} \put (42,228) {\large $\wedge$} \put (102,230) {\large $\wedge$} \put (102,229) {\large $\wedge$} \put (102,228) {\large $\wedge$} \put (162,230) {\large $\wedge$} \put (162,229) {\large $\wedge$} \put (162,228) {\large $\wedge$} \put (222,230) {\large $\wedge$} \put (222,229) {\large $\wedge$} \put (222,228) {\large $\wedge$} \put (282,230) {\large $\wedge$} \put (282,229) {\large $\wedge$} \put (282,228) {\large $\wedge$} \put (342,230) {\large $\wedge$} \put (342,229) {\large $\wedge$} \put (342,228) {\large $\wedge$} \thinlines \put (176,102) {{\Large \it a}} \put (320,60) {{\Large \it q}} \put (83,60) {{\Large \it p}} \put (380,-2) {{\Large \it h}} \put (380,118) {{\Large \it h}} \put (380,178) {{\Large \it h}} \put (195,105) {\circle{7}} \put (16,45) {\line(1,-1) {60}} \put (16,165) {\line(1,-1) {180}} \put (76,225) {\line(1,-1) {117}} \put (198,103) {\line(1,-1) {117}} \put (196,225) {\line(1,-1) {180}} \put (316,225) {\line(1,-1) {60}} \put (16,165) {\line(1,1) {60}} \put (16,45) {\line(1,1) {180}} \put (76,-15) {\line(1,1) {117}} \put (198,107) {\line(1,1) {118}} \put (196,-15) {\line(1,1) {180}} \put (316,-15) {\line(1,1) {60}} \put (75,105) {\circle*{7}} \put (315,105) {\circle*{7}} \put (75,-15) {\circle*{7}} \put (195,-15) {\circle*{7}} \put (315,-15) {\circle*{7}} \put (15,45) {\circle*{7}} \put (135,45) {\circle*{7}} \put (255,45) {\circle*{7}} \put (375,45) {\circle*{7}} \put (15,165) {\circle*{7}} \put (135,165) {\circle*{7}} \put (255,165) {\circle*{7}} \put (375,165) {\circle*{7}} \put (75,225) {\circle*{7}} \put (195,225) {\circle*{7}} \put (315,225) {\circle*{7}} \put (42,-40) {{\Large \it v}} \put (102,-40) {{\Large \it v}} \put (162,-40) {{\Large \it v}} \put (222,-40) {{\Large \it v}} \put (282,-40) {{\Large \it v}} \put (342,-40) {{\Large \it v}} \multiput(45,-25)(0,5){52}{.} \multiput(105,-25)(0,5){52}{.} \multiput(165,-25)(0,5){52}{.} \multiput(225,-25)(0,5){52}{.} \multiput(285,-25)(0,5){52}{.} \multiput(345,-25)(0,5){52}{.} \end{picture} \vspace{1.5cm} \caption{\footnotesize The square lattice (solid lines, drawn diagonally), and the associated rapidity lines (broken or dotted).} \label{sqlattice} \end{figure} Now consider the square lattice $\cal L$, drawn diagonally as in Figure \ref{sqlattice}, with a total of $M$ sites. On each site $i$ place a spin $\sigma_i$, which can take any one of the $N$ values $0, 1, \ldots, N-1$. The solid lines in Figure \ref{sqlattice} are the edges of $\cal L$. 
Through each such edge there pass two dotted or broken lines - a vertical line denoted $v$ and a horizontal line denoted $h$ (or $p$ or $q$). These $v, h, p, q$ are rapidity variables, as defined above. We refer to each dotted line as a ``rapidity line''. With each SW - NE edge $(i,j)$ (with $i$ below $j$) associate an edge weight $W_{vh}(\sigma_i - \sigma_j)$. Similarly, with each SE - NW edge $(j,k)$ ($j$ below $k$), associate an edge weight $\overline{W}_{vh}(\sigma_j - \sigma_k)$. (Replace $h$ by $p$ or $q$ for the broken left and right half-lines.) Then the partition function is \begin{equation} \label{defZ} Z \; = \; \sum_{\sigma} \, \prod W_{vh}(\sigma_i - \sigma_j) \prod \overline{W}_{vh}(\sigma_j - \sigma_k) \;\; , \end{equation} the products being over all edges of each type, and the sum over all $N^M$ values of the $M$ spins. We expect the partition function per site \begin{displaymath} \kappa \; = \; Z^{1/M} \end{displaymath} to tend to a unique limit as the lattice becomes large in both directions. Let $a$ be a spin on a site near the centre of the lattice, as in the figure, and $r$ be any integer. Then the thermodynamic average of $\omega^{r a}$ is \begin{equation} \label{avfa} \tilde{F}_{pq}(r) \; = \; \langle \omega^{r a} \rangle \; = \; Z^{-1} \, \sum_{\sigma} \, \omega^{r a} \prod W_{vh}(\sigma_i - \sigma_j) \prod \overline{W}_{vh}(\sigma_j - \sigma_k) \;\; . \end{equation} We expect this to also tend to a limit as the lattice becomes large. We could allow each vertical rapidity line $\alpha$ to have a different rapidity $v_{\alpha}$, and each horizontal rapidity line $\beta$ a different rapidity $h_{\beta}$. If an edge of $\cal L$ lies on lines with rapidities $v_{\alpha}$, $h_{\beta}$, then the Boltzmann weight function of that edge is to be taken as $W_{vh}(n)$ or $\overline{W}_{vh}(n)$, with $v = v_{\alpha}$ and $h = h_{\beta}$. The weight functions $W_{pq}(n)$, $\overline{W}_{pq}(n)$ satisfy the star-triangle relation.\cite{BPAuY88} For this reason we are free to move the rapidity lines around in the plane, in particular to interchange two vertical or two horizontal rapidity lines.\cite{RJB78} So long as no rapidity line crosses the site with spin $a$ while making such rearrangements, the average $\langle \omega^{r a} \rangle$ is {\em unchanged} by the rearrangement.\footnote{Subject to boundary conditions: here we are primarily interested in the infinite lattice, where we expect the boundary conditions to have no effect on the rearrangements we consider.} All of the $v, h$ rapidity lines shown in Figure \ref{sqlattice} are ``full'', in the sense that they extend without break from one boundary to another. We can move any such line away from the central site to infinity, where we do not expect it to contribute to $\langle \omega^{r a} \rangle$. Hence in the infinite lattice limit $\tilde{F}_{pq}(r) = \langle \omega^{r a} \rangle$ must be {\em independent} of {\em all} the full-line $v$ and $h$ rapidities. The horizontal rapidity line immediately below $a$ has different rapidity variables $p$, $q$ on the left and the right of the break below $a$. This means that we cannot use the star-triangle relation to move it away from $a$. It follows that $\tilde{F}_{pq}(r)$ will in general depend on $p$ and $q$, as well as on the ``universal'' constants $k$ or $k'$. We are particularly interested in the case when $q = p$.
Then the $p,q$ line is not broken; it can be removed to infinity, so \begin{equation} \label{defMr} {\cal M}_r \; = \; \tilde{F}_{pp}(r) \; = \; \langle \omega^{r a} \rangle \; = \; {\rm independent \; \; of }\; \; p \;\; . \end{equation} These are the desired order parameters of the chiral Potts model, studied by Albertini {\it et al}. By using this ``broken rapidity line'' approach, I was finally able to verify their conjecture (\ref{conj}) in 2005\cite{RJB05a,RJB05b}. Here I shall present some of the observations that enabled me to do this. \subsection*{Automorphisms} There are various automorphisms that change $x_p, y_p, \mu_p, t_p$ while leaving the relations (\ref{xymu}) still satisfied. Four that we shall use are $R, S, M, V$, defined by: \begin{eqnarray} \label{autos} \{x_{Rp}, y_{Rp}, \mu_{Rp}, t_{Rp} \} & = & \{ y_p,\omega x_p, 1/\mu_p, \omega t_p \} \;\; , \nonumber \\ \{x_{Sp}, y_{Sp}, \mu_{Sp}, t_{Sp} \} & = & \{ 1/y_p, 1/x_p, \omega^{-1/2} y_p /(x_p \mu_p), 1/t_p \} \;\; , \\ \{x_{Mp}, y_{Mp}, \mu_{Mp}, t_{Mp} \} & = & \{ x_p, y_p, \omega \mu_p, t_p \} \;\; , \nonumber \\ \{x_{Vp}, y_{Vp}, \mu_{Vp}, t_{Vp} \} & = & \{ x_p, \omega y_p, \mu_p, \omega t_p \} \;\; . \nonumber \end{eqnarray} \subsection*{The central sheet $\cal D$ and its neighbours.} We shall find it natural, at least for the special case discussed below, to regard $t_p$ as the independent variable, and $x_p, y_p, \mu_p$ to be defined in terms of it by (\ref{xymu}). They are not single-valued functions of $t_p$: to make them single-valued we must introduce $N$ branch cuts $B_0, B_1, \ldots, B_{N-1}$ in the complex $t_p$-plane as indicated in Figure \ref{brcuts}. They are about the points $1, \omega, \ldots, \omega^{N-1}$, respectively. \setlength{\unitlength}{1pt} \begin{figure}[hbt] \begin{picture}(420,260) (0,0) \put (50,125) {\line(1,0) {350}} \put (225,0) {\line(0,1) {250}} \put (325,125) {\circle*{9}} \put (175,208) {\circle*{9}} \put (175,42) {\circle*{9}} \put (315,100) {\Large 1} \put (185,214) {\Large $\omega$} \put (184,32) {\Large $\omega^2$} \put (358,100) {{\Large {$B_0$}}} \put (135,219) {{\Large {$B_1$}}} \put (134,22) {{\Large {$B_2$}}} \put (305,10) {{\Large {$t_p$-plane}}} \thicklines \put (295,124) {\line(1,0) {60}} \put (295,125) {\line(1,0) {60}} \put (295,126) {\line(1,0) {60}} \put (160,16) {\line(3,5) {30}} \put (160,17) {\line(3,5) {30}} \put (160,18) {\line(3,5) {30}} \put (160,234) {\line(3,-5) {30}} \put (160,233) {\line(3,-5) {30}} \put (160,232) {\line(3,-5) {30}} \thinlines \ \end{picture} \vspace{1.5cm} \caption{The cut $t_p$-plane for $N=3$.} \label{brcuts} \end{figure} Since the Boltzmann weights are rational functions of $x_p, y_p$, we expect $G_{pq}(r)$, considered as a function of $t_p$ or $t_q$, to also have these $N$ branch cuts. Given $t_p$ in the cut plane of Figure \ref{brcuts}, choose $\mu_p^N$ to be outside the unit circle. Then $x_p$ must lie in one of $N$ disjoint regions centred on the points $1, \omega, \ldots , \omega^{N-1}$. Choose it to be in the region centred on $1$. We then say that $p$ lies in the domain $\cal D$. When this is so (and $t_p$ is not close to a branch cut), then in the limit $k' \rightarrow 0$, $\mu_p^N = O(1/k')$ and $x_p \rightarrow 1$. The domain $\cal D$ has $N$ neighbours ${\cal D}_0, \ldots, {\cal D}_{N-1}$, corresponding to $t_p$ crossing the $N$ branch cuts $B_0, \ldots, B_{N-1}$, respectively.
The automorphism that takes $\cal D$ to ${\cal D}_i$, while leaving $t_p$ unchanged, is \begin{equation} \label{defAi} A_i \; = \; V^{i-1} R V^{N-i} \;\; . \end{equation} The mappings $A_i$ are involutions: $A_i^2 = 1$. \section{Functional relations} We define the ratio function \begin{equation} \label{defGpq} G_{pq}(r) \; = \; \tilde{F}_{pq}(r) /\tilde{F}_{pq}(r-1) \;\; . \end{equation} The functions $\tilde{F}_{pq}(r)$, $G_{pq}(r)$ satisfy two reflection symmetry relations. Also, although we cannot move the break in the $(p,q)$ rapidity line away from the spin $a$, we can rotate its parts about $a$ and then cross them over. As we show in \cite{RJB98} and \cite{RJB05b}, this leads to functional relations for $G_{pq}(r)$: \begin{eqnarray} \label{functrl} G_{Rp,Rq}(r) & = & 1/G_{pq}(N-r+1) \;\; , \nonumber \\ G_{p,q}(r) & = & 1/G_{RSq,RSp}(N-r+1) \;\; , \nonumber \\ G_{pq}(r) & = & G_{Rq, R^{-1} p}(r) \;\; , \\ G_{pq}(r) & = & \frac{x_q \mu_q - \omega^r x_p \mu_p } {y_p \mu_q - \omega^{\, r-1} y_q \mu_p} \; G_{R^{-1}q, R p}(r) \;\; , \nonumber \\ G_{Mp,q}(r) & = & G_{p,M^{-1} q}(r) = G_{pq}(r+1) \;\; , \nonumber \\ \prod_{r=1}^N G_{pq}(r) & = & 1 \;\; . \nonumber \end{eqnarray} Also, {from} (\ref{defMr}), \begin{equation} \label{calcMr} {\cal M}_r \; = \; G_{pp}(1) \cdots G_{pp}(r) \;\; . \end{equation} For the case when $N=2$ we regain the Ising model. As is shown in \cite{RJB98}, there is then a uniformizing substitution such that $x_p, y_p, \mu_p, t_p$ are all single-valued meromorphic functions of a variable $u_p$, and $W_{pq}(n), \overline{W}_{pq}(n)$ and hence $G_{pq}(r)$ depend on $u_p$, $u_q$ only via their difference $u_q - u_p$. In fact all quantities are Jacobi elliptic functions of $u_p, u_q$ with modulus $k$. One can argue (based on low-temperature series expansions) that $G_{pq}(r)$ is analytic and non-zero in a particular vertical strip in the complex $u_q - u_p$ plane. The relations (\ref{functrl}) then define $G_{pq}(r)$. They can be solved by Fourier transforms and one readily obtains the famous Onsager result \begin{equation} {\cal M}_1 \; = \; (1-{k'}^2)^{1/8} \;\; . \end{equation} For $N > 2$ the problem is much more difficult. There then appears to be no uniformizing substitution and $G_{pq}(r)$ lives on a many-sheeted Riemann surface obtainable from $\cal D$ by repeated crossings of the branch cuts. One can argue from the physical cases (when the Boltzmann weights are real and positive) that $G_{pq}(r)$ should be analytic and non-zero when $p, q$ both lie in $\cal D$, but the relations (\ref{functrl}) only relate these sheets to a small sub-set of all possible sheets. There seems to be a basic lack of information. \section{Solvable special case: $q = V p$} The author spent much time mulling over this problem, then towards the end of 2004 he realised that the case \begin{equation} \label{spcase} q \; = \; Vp \end{equation} may be much simpler to handle, and still be sufficient to obtain the order parameters ${\cal M}_r$. The reason it is simpler is that one can rotate the left-half line $p$ anti-clockwise below $a$ until it lies immediately below the half-line $q$, as in Fig. 5 of \cite{RJB05b}. One has to reverse the direction of the arrow, which means the rapidity is not $p$ but $p' = R^{-1}p$.
\setlength{\unitlength}{1pt} \begin{figure}[hbt] \begin{picture}(420,260) (0,0) \multiput(15,75)(5,0){74}{\bf .} \multiput(16,75)(5,0){74}{\bf .} \multiput(15,135)(5,0){74}{\bf .} \multiput(16,135)(5,0){74}{\bf .} \thicklines \put (308,72) {\large $< $} \put (309,72) {\large $< $} \put (310,72) {\large $< $} \put (308,132) {\large $< $} \put (309,132) {\large $< $} \put (310,132) {\large $< $} \put (42,170) {\large $\wedge$} \put (42,169) {\large $\wedge$} \put (42,168) {\large $\wedge$} \put (102,170) {\large $\wedge$} \put (102,169) {\large $\wedge$} \put (102,168) {\large $\wedge$} \put (162,170) {\large $\wedge$} \put (162,169) {\large $\wedge$} \put (162,168) {\large $\wedge$} \put (222,170) {\large $\wedge$} \put (222,169) {\large $\wedge$} \put (222,168) {\large $\wedge$} \put (282,170) {\large $\wedge$} \put (282,169) {\large $\wedge$} \put (282,168) {\large $\wedge$} \put (342,170) {\large $\wedge$} \put (342,169) {\large $\wedge$} \put (342,168) {\large $\wedge$} \thinlines \put (-5,166) {{\Large \it a}} \put (-5,36) {{\Large \it a}} \put (360,83) {{\Large $p' = R^{-1} p$}} \put (393,132) {{\Large $q$ }} \put (121,31) {{\Large \it b}} \put (241,31) {{\Large \it c}} \put (118,161) {{\Large \it e}} \put (238,161) {{\Large \it d}} \put (191,90) {{\Large \it g}} \put (195,105) {\circle*{7}} \put (18,163) {\line(1,-1) {115}} \put (138,163) {\line(1,-1) {115}} \put (258,163) {\line(1,-1) {115}} \put (18,47) {\line(1,1) {115}} \put (138,47) {\line(1,1) {115}} \put (258,47) {\line(1,1) {115}} \put (75,105) {\circle*{7}} \put (315,105) {\circle*{7}} \put (15,45) {\circle{7}} \put (135,45) {\circle{7}} \put (255,45) {\circle{7}} \put (375,45) {\circle{7}} \put (15,165) {\circle{7}} \put (135,165) {\circle{7}} \put (255,165) {\circle{7}} \put (375,165) {\circle{7}} \put (42,10) {{\Large \it v}} \put (102,10) {{\Large \it v}} \put (162,10) {{\Large \it v}} \put (222,10) {{\Large \it v}} \put (282,10) {{\Large \it v}} \put (342,10) {{\Large \it v}} \multiput(45,35)(0,5){28}{.} \multiput(105,35)(0,5){28}{.} \multiput(165,35)(0,5){28}{.} \multiput(225,35)(0,5){28}{.} \multiput(285,35)(0,5){28}{.} \multiput(345,35)(0,5){28}{.} \end{picture} \vspace{1.5cm} \caption{\footnotesize The lattice after rotating the half-line $p$ to a position immediately below $q$.} \label{dblerow} \end{figure} The result is that $p$ enters the sums in (\ref{defZ}), (\ref{avfa}) only via the weights of the edges shown in Figure \ref{dblerow}. The left-hand spins are the same - the spin $a$. The right-hand spins are set to the boundary value of zero. Further, we can sum over the spins between lines $p'$ and $q$. For instance, summing over the spin $g$ gives a contribution \begin{displaymath} U(b,c,d,e) \; = \; \sum_g W_{v p'} (b-g) \overline{W}_{v p'} (c-g) W_{v q} (g-d) \overline{W}_{v q} (g-e) \;\; . \end{displaymath} If $a, \sigma_1, \ldots , \sigma_L$ are the spins on the lowest row of Figure \ref{dblerow}, and $a, \sigma'_1, \ldots , \sigma'_L$ are those in the upper, then the combined weight of the edges shown in Figure \ref{dblerow} is \begin{equation} \label{rowprod} \prod_{i=1}^L U(\sigma_{i-1},\sigma_i, \sigma'_i,\sigma'_{i-1}) \;\; . \end{equation} Now $q = VRp'$, which from (\ref{autos}) means that \begin{equation} x_q = y_{p'} \;\; , \;\; y_q = \omega^2 x_{p'} \;\; , \;\; \mu_q = 1/\mu_{p'} \;\; . \end{equation} This is the equation (3.13) of \cite{BBP90}, the $q,r$ therein being our $p', q$ and $k, \ell$ having the values $0, 2$. 
From (3.17) therein, $U(b,c,d,e)$ vanishes unless $0 \leq {\rm mod}(b-e,N) \leq 1$ and $0 \leq {\rm mod}(c-d,N) \leq 1$. It follows that the spins in the upper row are either equal to the corresponding spins in the lower row, or just one less than them. From (2.29) and (3.39) of \cite{BBP90}, it follows that to within ``gauge factors'' (i.e. factors that cancel out of eqn. \ref{rowprod}) $U(b,c,d,e)$ depends on $p$ very simply: it is {\em linear} in $t_p$. In fact, these Boltzmann weights $U(b,c,d,e)$ are those of the $\tau_2(t_{p'})$ model\cite{BBP90,RJB90,RJB91} mentioned earlier. Just as this model plays a central role in the calculation of the chiral Potts free energy, so it naturally enters this calculation of the order parameters. In the low-temperature limit, when $k' \rightarrow 0$, $\mu_p, \mu_q \sim O({k'}^{-1/N})$, $x_p, x_q \rightarrow 1$, we can verify that the dominant contribution to the sums in (\ref{defZ}), (\ref{avfa}) comes from the case when $\sigma_1, \ldots, \sigma_{L}, \sigma'_1, \ldots, \sigma'_{L} $ are all zero. Also, to within factors that cancel out of (\ref{rowprod}) and (\ref{avfa}), \begin{equation} U(b,c,c,b) = 1 - \omega t_{p'} = 1 - t_p \;\; . \end{equation} It follows that the RHS of (\ref{avfa}), and therefore of (\ref{defGpq}), is a ratio of two polynomials in $t_p$, each of degree $L$, and each equal to $(1-t_p)^L$ in the limit $k' \rightarrow 0$. By continuity (keeping $L$ finite), for small values of $k'$ their $L$ zeros must be close to one. Provided this remains true (which we believe it does) when we take the limit $L \rightarrow \infty$, we expect $G_{p,Vp}(r)$ to be an analytic and non-zero function of $t_p$, except in some region near $t_p = 1$. As $k'$ becomes small, this region must shrink down to the point $t_p = 1$. Similarly, if we rotate the half line $p$ in Figure \ref{sqlattice} clockwise above $a$, we can move it to be immediately above $q$, with $p$ replaced by $Rp$, as in Fig. 6 of \cite{RJB05b}. The $p', q$ of Figure \ref{dblerow} herein are now replaced by $q, Rp$. This corresponds to equation (3.13) of \cite{BBP90} with the $q,r$ therein replaced by $q, Rp$. From (\ref{spcase}) it follows that $k, \ell$ in \cite{BBP90} now have the values $-1, N+1$. The combined star weights $U$ are now those of the $\tau_{N}(t_p)$ model. They are polynomials in $t_p$ of degree $N-1$, except for terms which contribute a factor $x_p^{\epsilon(r)}$ to the contribution of (\ref{rowprod}) to $G_{p,Vp}(r)$, where \begin{equation} \epsilon(r) \; = \; 1 - N \delta_{r,0} \;\; , \end{equation} the $\delta$ function being interpreted modulo $N$, so $\epsilon(0) = \epsilon(N) = 1-N$. When $k' \rightarrow 0$ these polynomials are $(1-\omega t_p) (1-\omega^2 t_p) \cdots (1-\omega^{N-1} t_p)$. In the large-$L$ limit, with $k'$ not too large, we therefore expect $x_p^{-\epsilon(r)} G_{p,Vp}(r)$ to have singularities near $t_p = \omega, \ldots, \omega^{N-1}$, but {\em not} near $t_p = 1$. If we define \begin{equation} \label{greln} g(p;r) \; = \; G_{p,Vp}(r) \;\; , \end{equation} then this implies that the function $ x_p^{-\epsilon(r)} g(p;r)$ does {\em not} have $B_0$ as a branch cut. This is in agreement with the fourth and sixth functional relations in (\ref{functrl}). If we set $q = Vp$ therein we obtain \begin{equation} \label{frln4} x_p^{-\epsilon(r)} g(p;r) \; = \; y_p^{-\epsilon(r)} g(V^{-1} R p;r) \;\; , \end{equation} using $V^{-1}R = R^{-1} V$. Here we have used the fourth relation for $r \neq 0$ and the sixth to then determine the behaviour for $r=0$.
(For $r=0$ the fourth relation merely gives $0 = 0$.) {From} (\ref{defAi}) the automorphism $V^{-1}R$ is the automorphism $A_0$ that takes $p$ across the branch cut $B_0$, returning $t_p$ to its original value, while interchanging $x_p$ with $y_p$. Thus (\ref{frln4}) states that $ x_p^{-\epsilon(r)} g(p;r)$ is the same on both sides of the cut, i.e. it does not have the cut $B_0$. These are the key analyticity properties that we need to calculate $g(p;r)$ and ${\cal M}_r$. We do this in \cite{RJB05b,RJB05a}, but this meeting is in honour of Tony Guttmann, an expert in series expansion methods, so it seems appropriate to here describe the series expansion checks I made (for $N=3$) when I first began to suspect these properties. \section{Consequences of this analyticity} The above observations imply that $g(p;r)$, considered as a function of $t_p$, does {\em not} have the branch cuts of Figure \ref{brcuts}, except for the branch cut on the positive real axis. This means that $g(p;r)$ is unchanged by allowing $t_p$ to cross any of the branch cuts $B_1, \ldots ,B_{N-1}$ and then returning it to its original value, i.e. it satisfies the $N-1$ symmetry relations: \begin{equation} \label{autosA} g(p;r) \; = \; g(A_i \, p;r) \; \; \; {\rm for } \; \; i = 1, \ldots ,N-1 \;\; , \end{equation} $A_i$ being the automorphism (\ref{defAi}). For $N = 3$, this can be checked using the series expansions obtained in \cite{RJB98b}. We use the hyperelliptic parametrisation introduced in \cite{RJB90b,RJB93a,RJB93b}. We define parameters $x, z_p, w_p$ related to one another and to $t_p$ by \begin{equation} \label{defx} (k'/k)^2 = 27 x \prod_{ n =1}^{\infty} \left( \frac{1-x^{3n}}{1-x^n} \right)^{12} \;\; . \end{equation} \begin{equation} \label{eq4.5} w = \prod_{n=1}^{\infty} \frac{(1-x^{2n-1} z/w) (1-x^{2n-1} w/z) (1-x^{6n-5} zw) (1-x^{6n-1} z^{-1} w^{-1})} {(1-x^{2n-2} z/w) (1-x^{2n} w/z) (1-x^{6n-2} zw) (1-x^{6n-4} z^{-1} w^{-1})} \end{equation} (writing $z_p, w_p$ here simply as $z, w$), and \begin{equation} \label{eq27} t_p = \omega \frac{f(\omega z_p)}{f(\omega^2 z_p)} = \frac{f(-\omega /w_p)}{f(-\omega^2/w_p)} = \omega^2 \frac{f(-\omega w_p/z_p)}{f(-\omega^2 w_p/z_p)} \;\; , \end{equation} where $f(z)$ is the function \begin{equation} f(z) \; = \; \prod_{n=1}^{\infty} (1-x^{n-1}z ) (1-x^n/z) \;\; . \end{equation} Note that $x$, like $k'$, is a constant (not a rapidity variable) and is small at low temperatures. We develop expansions in powers of $x$. For $p$ in $\cal D$, the parameters $z_p, w_p$ are of order unity, so to leading order $w_p = z_p +1$, $x_p = 1$, $y_p = (\omega - \omega^2 z_p)/(1- \omega^2 z_p)$. The automorphisms $R,S, V$ transform $z_p, w_p$ to \begin{displaymath} z_{R p} = x z_p \;\; , \;\; z_{Sp} = 1/(x z_p) \;\; , \;\; z_{V p} = -1/w_p \end{displaymath} \begin{equation} w_{R p} = z_p/w_p \;\; , \;\; w_{Sp} = 1/(x w_p) \;\; , \;\; w_{V p} = z_p/w_p \;\; , \end{equation} so from (\ref{defAi}), if $p_i = A_i p$ then \begin{eqnarray} z_{p_0} = -1/(x w_p) , & z_{p_1} = -x w_p/z_p , & z_{p_2} = z_p \nonumber \\ w_{p_0} = -1/(x z_p) , & w_{p_1} = w_p , & w_{p_2} = x z_p/w_p \;\; . \end{eqnarray} If we write $ g(p;r)$ more explicitly as $g(z_p,w_p;r)$, then the relations (\ref{autosA}) become \addtocounter{equation}{1} \setcounter{storeeqn}{\value{equation}} \setcounter{equation}{0} \renewcommand{\theequation}{\arabic{storeeqn}\alph{equation}} \begin{eqnarray} \label{eq1} g(z_p,w_p;r) & = & g(-x w_p/z_p,w_p;r) \\ \label{eq2} g(z_p,w_p;r) & = & g(z_p,x z_p/w_p;r) \;\; .
\end{eqnarray} \setcounter{equation}{\value{storeeqn}} \renewcommand{\theequation}{\arabic{equation}} Using (\ref{defZ}), (\ref{avfa}), we can write (\ref{defGpq}) as \begin{equation} G_{pq}(r) \; = \; \sum_{j=0}^2 \omega^{jr } F_{pq}(j) \left/ \sum_{j=0}^2 \omega^{j(r-1) } F_{pq}(j) \right. \;\; , \end{equation} where $F_{pq}(j)$ is the probability that spin $a$ has value $j$. We use the series expansions (39) - (52) of \cite{RJB98b} for $F_{pq}(1)/F_{pq}(0)$ and $F_{pq}(2)/F_{pq}(0)$ in terms of \begin{equation} \alpha = z_q/z_p \;\; , \;\; \beta = w_q/w_p \;\; . \end{equation} Since $q = Vp$, $ z_{q} = -1/w_p$, $ w_{q} = z_p/w_p$ and we find from (39) of \cite{RJB98b} that $u = -\omega \, w_p/z_p$. (Choosing the cube root for $u$ to ensure that $F_{pq}(i)/F_{pq}(0)$ is real when $y_p = y_q = 0$, which is when $z_p = \omega^2$, $w_p = -\omega$: we then regain the physically interesting $q = p$ case of eqn. \ref{defMr}.) For $p, q$ in $\cal D$, the parameters $z_p,w_p,z_q,w_q, \alpha, \beta$ are all of order unity, so we can then use the expansion (48) of \cite{RJB98b} to obtain \begin{displaymath} F_{pq}(1)/F_{pq}(0) = \omega^2 \psi_1(z_p) \; = \; \omega^2 \psi_2(-w_p) \;\; , \end{displaymath} \begin{equation} \label{F12} F_{pq}(2)/F_{pq}(0) = \omega \psi_2(z_p) \; = \; \omega \psi_1(-w_p) \;\; , \end{equation} where \begin{displaymath} \psi_1(z) = - (z+1) x + (z+1)^3x^2/z - (z^3+6 z^2+ 16 z +16 +4 z^{-1}+z^{-2}) x^3 \end{displaymath} \begin{displaymath} + (z^4+11 z^3+ 41 z^2+85 z +81+25 z^{-1} + 7 z^{-2}+z^{-3})x^4 + O(x^5) \;\; , \end{displaymath} and \begin{displaymath} \psi_2(z) = z x - (2 z+1 +z^{-1}) x^2 - (z^2- 8 z -2 - 3 z^{-1}-z^{-2}) x^3 \end{displaymath} \begin{displaymath} - ( 2 z^3 - 5z^2+31 z+6 +14 z^{-1} + 5 z^{-2}+z^{-3}) x^4 + O(x^5) \;\; . \end{displaymath} The automorphism (\ref{eq1}) interchanges $\cal D$ with ${\cal D}_1$. To leading order in $x$, the mid-point is when $z_p = \i \, x^{1/2}, w_p = 1$. This is on the boundary of the domain $\cal D$, in which the series (48) of \cite{RJB98b} was obtained, so the series is not necessarily convergent at this point. Nevertheless, if we take $z_p = O(x^{1/2})$ in the above two series, we find the terms originally of order $x^j$ become of order not larger than $x^{(j+1)/2}$. Extrapolating, this suggests that the series do still converge at the midpoint, so we can use them to check whether the symmetry is satisfied. The first check occurs at order $x^{3/2}$, where both series contain a term \begin{displaymath} \pm \, (x z_p - x^2 w_p/z_p) \end{displaymath} (using the fact that to leading order $w_p = 1$ at the midpoint). This is indeed symmetric under $z_p \rightarrow -x w_p/z_p$. If we subtract this term from the series (using the expansion of $w_p$ in terms of $z_p$), we can then check the behaviour at order $x^2$, and similarly then at order $x^{5/2}$. All three checks are satisfied by both series. The perceptive reader will remark that (\ref{F12}) allows us to work with $w_p$ instead of $z_p$. Since $w_p$ is unchanged by $A_1$, the symmetry appears obvious. Indeed it is, but only because a quite remarkable event occurred in deriving these series, namely the $z$ series contains no powers of $z+1$ as denominators, and the $w$ series contains no powers of $w-1$. If one expands $w$ in terms of $z$ (or $z$ in terms of $w$), then one does find such terms. It is their absence from (\ref{F12}) that makes the series obviously convergent near $w = 1$ or $z = -1$.
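As a quick explicit check of the invariance of that first term: under $A_1$ we have $z_p \rightarrow -x w_p/z_p$ with $w_p$ unchanged, so \begin{displaymath} x z_p - x^2 w_p/z_p \; \rightarrow \; - x^2 w_p/z_p \, - \, \frac{x^2 w_p}{-x w_p/z_p} \; = \; - x^2 w_p/z_p + x z_p \;\; , \end{displaymath} and the order-$x^{3/2}$ term quoted above is reproduced exactly, not merely to leading order.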
I have presented the argument in terms of $z_p$ to make it clear that one does indeed have three non-trivial checks on the symmetry to the available order of the series expansion. Similarly, (\ref{eq2}) interchanges $\cal D$ with ${\cal D}_2$, with mid-point $z_p = -1, w_p = \i \, x^{1/2}$. If one now works with $w_p$ as the variable, one can verify to the same three orders the symmetry $w_p \rightarrow x z_p/w_p$. So our series provide no less than six checks on the symmetries (\ref{eq1}), (\ref{eq2}). When I first observed this, I could see the resemblance to the properties of the free energy of the $\tau_2(t_q)$ model. One such property is that $\tau_2(t_q) \tau_2(\omega t_q) \cdots \tau_2(\omega^{N-1}t_q)$ is a rational function of $x_q^N$, so I looked at the series for \begin{eqnarray} \label{defL} {\cal L}(p;r) & = & \prod_{j\, = 0}^{N-1} g(V^j \, p;r) \nonumber \\ & = & g(z_p,w_p;r) \, g(-1/w_p,z_p/w_p;r) \, g(-w_p/z_p,-1/z_p;r) \;\; . \end{eqnarray} Choosing an arbitrary value for $z_p$ and working to 30 digits of accuracy, I soon found that the series (known to order $x^4$) fitted with the simple formulae \begin{equation} \label{Lconj} {\cal L}(p;0) = 1/x_p^2 \;\; , \;\; {\cal L}(p;1) = k^{1/3} x_p \;\; , \;\; {\cal L}(p;2) = k^{-1/3} x_p \;\; . \end{equation} All this strongly suggested that I was on the right track. It did not take long to justify my observations for general $N$. For instance, if $g(p;r)$ only has the branch cut $B_0$, and $x_p^{-\epsilon(r)} g(p;r)$ does not have that cut, then $x_p^{-\epsilon(r)} {\cal L}(p;r)$ does not have the cut $B_0$. But this function is unchanged by $p \rightarrow Vp$, which rotates the $t_p$ plane through an angle $2 \pi/N$. Hence it cannot have any of the cuts $B_0, B_1, \ldots, B_{N-1}$. We do not expect any other singularities (e.g. poles) for $p$ in $\cal D$, so the function is analytic in the entire $t_p$ plane. It is bounded (the Boltzmann weights $W, \overline{W}$ remain finite and non-zero as $y_p \rightarrow \infty$, the ratio $\mu_p/y_p$ remaining finite), so from Liouville's theorem it is a constant (independent of $p$ but dependent on $r$). We can relate these constants to the desired order parameters ${\cal M}_r$ in two ways, and then use these relations to calculate the ${\cal M}_r$. When $y_p = y_q = 0$ and $x_p = k^{1/N}$, our special case $q = Vp$ intersects with the physically interesting case $q = p$, so from (\ref{defMr}), \begin{equation} x_p^{-\epsilon(r)} {\cal L}(p;r) \; = \; k^{-\epsilon(r)/N} \, ({\cal M}_r/ {\cal M}_{r-1})^N \;\; . \end{equation} When $y_p = y_q = \infty$ ($\mu_p/y_p$ remaining finite) and $x_p = k^{-1/N}$ we find not $q = p$ but $q = M^{-1}p$, which is related to $q = p$ by the fifth of the functional relations (\ref{functrl}), giving \begin{equation} x_p^{-\epsilon(r)} {\cal L}(p;r) \; = \; k^{\epsilon(r)/N} \, ({\cal M}_{r+1}/ {\cal M}_{r})^N \;\; . \end{equation} The left-hand sides of these last two equations, being constants, are the same in both equations. We can therefore equate the two right-hand sides, for $r = 1, \ldots, N-1$. Using the fact that ${\cal M}_0 = {\cal M}_N = 1$, we can solve for ${\cal M}_1, \ldots, {\cal M}_{N-1}$ to obtain \begin{equation} {\cal M}_r \; = \; k^{r(N-r)/N^2} \; \; {\rm for \; \;} r = 0, \ldots , N \;\; , \end{equation} which verifies the conjecture (\ref{conj}) of Albertini {\it et al} \cite{AMPT89}. For $N=3$ these results do of course agree with my original conjectures (\ref{Lconj}).
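To spell out the elimination step used above: for $r = 1, \ldots, N-1$ we have $\epsilon(r) = 1$, so equating the two right-hand sides gives \begin{displaymath} \frac{{\cal M}_{r+1} \, {\cal M}_{r-1}}{{\cal M}_r^{\, 2}} \; = \; k^{-2/N^2} \;\; . \end{displaymath} This is a linear second-order recursion for $\log {\cal M}_r$; since the second difference of $r(N-r)$ with respect to $r$ is $-2$, the solution satisfying ${\cal M}_0 = {\cal M}_N = 1$ is $\log {\cal M}_r = [r(N-r)/N^2] \log k$, which is the result just quoted.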
In \cite{RJB05b} I also show that one can calculate $G_{p,Vp}(r) = g(p;r)$ by a Wiener-Hopf factorization, giving \begin{equation} \label{gS} g(p;r) \; = \; k^{(N+1-2r)/N^2} \, {\cal S}_p^{\, \epsilon (r)} \end{equation} for $r = 1, \ldots , N$, where \begin{equation} \label{defS} \log {\cal S}_p \; = \; - \frac{2}{N^2} \log k + \frac {1}{2 N \pi } \, \int_0^{2 \pi} \frac{k' {\rm e}^{\i\theta}}{1-k' {\rm e}^{\i\theta}} \, \log [\Delta(\theta) - t_p] \, {\rm d}\theta \;\; , \end{equation} and \begin{equation} \Delta( \theta ) \; = \; [(1-2k' \cos \theta + {k'}^2 )/k^2]^{1/N} \;\; . \end{equation} (This function ${\cal S}_p$ should not be confused with the automorphism $S$ defined in (\ref{autos}).) As is implied by the above equations, ${\cal S}_p$ satisfies the product relation \begin{equation} {\cal S}_p {\cal S}_{Vp} \cdots {\cal S}_{V^{N-1} p} \; = \; k^{-1/N} x_p \;\; . \end{equation} Also, if one sets $q = Vp$ in the second of the relations (\ref{functrl}), uses the identity $R S = M V R S V$ and the fifth relation, one obtains $ g(p;r) g(RSVp;N-r) = 1$, from which we can deduce the symmetry \begin{equation} {\cal S}_p \, {\cal S}_{RSVp} = k^{-2/N^2} \;\; . \end{equation} For $N=3$ the automorphism $p \rightarrow RSVp$ takes $z_p, w_p$ to $-w_p, -z_p$, so this relation can then be written \begin{equation} {\cal S}(z_p,w_p) {\cal S}(-w_p,-z_p) \; = \; k^{-2/9} \;\; . \end{equation} \section{Another interesting case: $q = V^2 p$} We now have the solution for $G_{pq}(r)$ for $q=p$ and for $q= Vp$. This suggests looking at one more case: $q= V^2 p$, where $y_q = \omega^2 y_p$. Similarly to section 5, we set $g_2(p;r) = G_{pq}(r)$ and \begin{displaymath} L_2(p;r) = \prod_{j=0}^{N-1} g_2(V^j p;r) \;\; . \end{displaymath} For $N = 3$ we have used the series expansions of \cite{RJB98b} to obtain for this case \begin{equation} F_{pq}(1) = \omega \phi(w_p) \;\; , \;\; F_{pq}(2) = \omega^2 \phi(1/w_p) \;\; , \end{equation} where \begin{displaymath} \phi(w) = (w - 1)x - (2 w^2 - 2 w + 1)x^2/w + (2 w^3 + 6 w^2 - 6 w + 1) x^3/w \end{displaymath} \begin{equation} - (2 w^4 + 8 w^3 + 24 w^2 - 22 w + 5) x^4/w + O(x^5) \;\; . \end{equation} As in the previous case, the coefficients are Laurent polynomials in $w$. There is no sign of any singularity near $w_p=1$, $t_p = \omega$, so this suggests that $G_{pq}(r)$, considered as a function of $t_p$, does not have the branch cut $B_1$. Indeed, this is a consequence of the third functional relation (\ref{functrl}). Setting $q = V^2 p$ therein, we obtain \begin{displaymath} g_2(p;r) \; = \; g_2(A_1 p;r) \;\; , \end{displaymath} which tells us that $g_2(p;r)$ is unchanged by taking $t_p$ across the branch cut $B_1$ and returning it to its original value. This means that the cut $B_1$ is unnecessary. However, $g_2(p;r)$ does appear to have the other two cuts $B_0$ and $B_2$. To the available four terms in the series expansion we found \begin{displaymath} L_2(p;1) \; = \; x_p^2 \;\; , \end{displaymath} and \begin{equation} L_2(p;0) \; = \; k^{-1/3} x_p^{-1} h(z_p,w_p)^3 \;\; , \;\; L_2(p;2) \; = \; k^{1/3} x_p^{-1} h(z_p,w_p)^{-3} \;\; , \end{equation} where \begin{displaymath} h(z,w) = 1 + (x^2 - 6 x^3 + 35 x^4) (w/z^2 + zw - z/w^2 + 3) \end{displaymath} \begin{equation} + \; x^4 (w^2/z^4 + z^2/w^4 + z^2 w^2 - 3) +O(x^5) \;\; . \end{equation} The result for $ L_2(p;1)$ looks encouraging, and indeed to the four available terms in the series expansion we also find \begin{equation} \label{g2p1} g_2(p;1) = k^{2/9} \, {\cal S}_p \, {\cal S}_{Vp} \;\; .
\end{equation} The results for $ L_2(p;0)$ and $ L_2(p;2)$ are not so encouraging and I have failed to find any obvious result for these or for $g_2(p;0)$, $g_2(p;2)$. In \cite{RJB05b} I conjecture that for general $N$ the functions $G_{p,V^i p}(r)$ have a simple form as a product of $\cal S$ functions provided $i=0, \ldots, N-1$ and $r = 1, \ldots, N-i$. For other values of $i, r$ they remain a puzzle. (Except when $i=1$ and $r = N$: this case can be deduced from the sixth relation of eqn. \ref{functrl}.) If (\ref{g2p1}) is correct, then we have some information on the function $L_{pq}(r)$ of eqn. 56 of \cite{RJB98}. From this and the first equation of (\ref{functrl}), \begin{equation} L_{pq}(r) = G_{pq}(r) G_{Rq,Rp}(r) = G_{pq}(r)/G_{qp}(N-r+1) \;\; . \end{equation} Setting $q= Vp$ and using (\ref{greln}), we obtain \begin{equation} L_{pq}(r) = g(p;r)/g_2(Vp;N-r+1) \;\; . \end{equation} Taking $r=0$, it follows from (\ref{gS}) and (\ref{g2p1}) that \begin{equation} L_{pq}(0) = k^{-4/9} /({\cal S}_p^2 \, {\cal S}_{Vp} \, {\cal S}_{V^2 p}) = k^{-1/9}/(x_p{\cal S}_p) \;\; . \end{equation} The function $L_{pq}$, for arbitrary $p, q$, was introduced in \cite{RJB98} partly because its square is a rational function of $x_p, y_p$, $\mu_p$, $x_q, y_q$, $\mu_q$ when $N=2$, so the hope was that it might be similarly simple for all $N$. We see that this cannot be so: ${\cal S}_p$ is {\em not} such a function. \section{Summary} I have outlined the recent derivation of the order parameters of the solvable chiral Potts model, a derivation that verifies a long-standing and elegant conjecture.\cite{AMPT89} As with all the calculations on solvable models satisfying the star-triangle relations, the trick is to generalize the model to a point where one has a function, here $G_{pq}(r)$, to calculate, rather than a constant, so that one can obtain relations and properties that define this function. On the other hand, this is an example where it pays {\em not} to over-generalize: we can handle the particular function $G_{p,Vp}(r)$, and this is sufficient for the purpose of obtaining the order parameters. The general $G_{pq}(r)$ continues to defy calculation. Series expansion methods can provide a valuable check on such derivations, which are of their nature believable but hard to make fully mathematically rigorous. One usually tries to present the argument in as logical a manner as possible, but this is usually {\em not} the manner in which it was originally developed. Here I have indicated the points in the calculation at which I found the available checks both reassuring and encouraging.
\section{Introduction} The {\textbf{\textsc{ZeroIn}}} project is aimed at building a novel computational framework to enable the characterization and classification of buggy or vulnerable code changes at the very origin of source code, as early as the time at which they are committed to software repositories. We achieve this by novel use of machine learning for analyzing, classifying, visualizing, and modeling massive logs of version control in combination with key characterization of developers’ historical traits. The intrinsically temporal nature of software coder and repository interactions creates transitive and complex dependencies that could potentially reveal insights into the vulnerabilities \cite{10.5555/580808}. The research in this project is aimed at tapping this potential to broadly benefit modern software development platforms and increase software security globally. In order to apply modern machine learning techniques to this difficult problem, large datasets are necessary for training, testing, and validating the techniques. Most importantly, ground truth data at scale is needed not only to meet the accuracy measures but also to be relevant to the large sizes of modern software repositories and global developer team sizes \cite{alali2008s}. Here, we present the gathering and characterization of data on software development spanning institutional characteristics (coder team strengths, etc.), coder features (experience, expertise, etc.), and software repository characteristics (commit volumes, frequencies, bugs, etc.). To our knowledge, this is one of the first attempts at characterizing the metadata about modern software repositories at a large scale and across multiple dimensions including the software bases, commits, and coder features. \subsection{Closed-form Distributions from Datasets} The large datasets are distilled into closed-form distributions defined by probability density functions: multiple candidate distributions are fitted, the distributions and their parameters are ranked by goodness of fit, and the top candidates that provide the best fit are selected. The cumulative distribution functions of these probability density functions are necessary to derive the inverse cumulative distribution functions, which are in turn necessary for accurately sampling the distributions in the generation of synthetic ground truth driven by the real data. To illustrate, the probability density function of the Log-Normal distribution is given by \begin{equation*} \begin{array}{lcl} f(x-\theta) & = & \frac{1}{(x-\theta)\sigma\sqrt{2\pi}}\exp\left(\frac{-(\ln(x-\theta)-\mu)^2}{2\sigma^2}\right), \end{array} \end{equation*} where $\theta$ is the shift parameter, and $\sigma$ and $\mu$ are such that $\ln(x-\theta)$ follows the Normal distribution $N(\mu,\sigma)$ with mean $\mu$ and standard deviation $\sigma$. The cumulative density function of $f(x-\theta)$ is given by \begin{equation*} \begin{array}{lcl} F(x-\theta) & = & \int_\theta^x f(t-\theta)dt \\ & = & \frac{1}{2} \left[1+\erf\left(\frac{\ln(x-\theta)-\mu}{\sigma\sqrt{2}}\right)\right], \end{array} \end{equation*} where the error function is defined as \begin{equation*} \begin{array}{lcl} \erf(z) & = & \frac{2}{\sqrt{\pi}} \int_0^z \exp{(-t^2)}dt.
\end{array} \end{equation*} The inverse, $F^{-1}(\cdot)$, of the cumulative distribution function $F(\cdot)$ is given by \begin{equation*} (x-\theta) = F^{-1}(u) = \exp\left(\mu+\sigma\sqrt{2}\,\erf^{-1}(2u-1)\right), \end{equation*} where $u=F(x-\theta)$, and $\erf^{-1}$ is defined by the pair of relations $q=\erf(p)$ and $p=\erf^{-1}(q)$. For the other distributions used in later sections (such as Exponential and Negative Binomial), a similar set of derivations provides the inverse CDF equations from their corresponding PDF equations. The inverse CDF $F^{-1}(x)$ is directly useful in regenerating any probability density function $f(x)$ using random sampling techniques (see, for example, chapter 12 on reversible distribution sampling \cite{perumalla2013}). We exploit this key feature of the relation between the empirically determined fit for $f(x)$ (and, by implication, $F(x)$) and its inverse CDF $F^{-1}$ to develop concise representations of all large datasets used in our study. \subsection{Machine Learning Model} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/GNN_Tripartite.png} \caption{Our tripartite graph model of the software development network formulated to suit machine learning. The columns represent coders, commits, and repositories, and the vectors attached to the nodes represent the feature vectors corresponding to them} \label{fig:GNN-tripartite} \end{figure} Figure~\ref{fig:GNN-tripartite} depicts our tripartite graph model of the software development process. The first column of nodes denotes coders, each with its own feature vector. The second column of nodes (along with their own feature vectors) represents the commits performed by the coders, and the last column of nodes (with their feature vectors) represents the repositories to which the commits are made. The datasets are used to ultimately generate the distributions needed to populate this tripartite graph and the associated feature vectors for use in training, testing, and validating our machine learning algorithms. In the later inferencing stage, the same graph model will be used to represent the production data, on which the trained learner is exercised to watch every commit and signal a prognosis of vulnerability \cite{hu2020open}. \section{GitHub Dataset with 452 Million Commits} To capture and characterize the volumes of commits to software repositories, we undertook a search for publicly available metadata on large software repository storehouses. We found the \texttt{data.world} dataset of 452 million commits to the popular GitHub software storehouse to be suitable. The dataset is described at the following website: \url{https://data.world/vmarkovtsev/452-m-commits-on-github}. The direct URL to the statistics file containing the metadata in comma-separated value (CSV) format is \url{https://query.data.world/s/7euzfiycvbfxikevuc2cg2p4pbuish}. This dataset contains metadata from 16 million repositories on GitHub. The primary data in this dataset is organized into four main columns: repository name, number of commits, number of contributors, and average length of the commit. Using Python’s SciPy library, we performed regression using multiple candidate distributions. Among the different distributions evaluated, the Exponential and Log-Normal distributions fared the best in terms of goodness of fit. The best fitting distributions are shown in Table~\ref{tbl:Fit}.
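To make the fit-and-regenerate loop concrete, the following sketch (Python, \texttt{scipy.stats}) fits two of the candidate distributions and then draws synthetic samples through the fitted inverse CDF. The file and column names are placeholders, and the Kolmogorov-Smirnov statistic is used here as one possible goodness-of-fit score; this is a sketch of the procedure, not the verbatim pipeline.
\begin{verbatim}
# Sketch: fit candidate distributions to commit counts, rank them by
# goodness of fit, then regenerate synthetic samples via the inverse CDF.
import numpy as np
import pandas as pd
from scipy import stats

data = pd.read_csv("commits.csv")["commits"].to_numpy()  # placeholder names

candidates = {"expon": stats.expon, "lognorm": stats.lognorm}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(data)                     # MLE: shape(s), loc, scale
    ks = stats.kstest(data, name, args=params)  # Kolmogorov-Smirnov score
    fits[name] = (params, ks.statistic)

best_name, (best_params, _) = min(fits.items(), key=lambda kv: kv[1][1])

# Inverse-CDF sampling: u ~ Uniform(0,1), x = F^{-1}(u).
u = np.random.default_rng(0).uniform(size=100_000)
synthetic = candidates[best_name].ppf(u, *best_params)
\end{verbatim}
The \texttt{ppf} method is SciPy's name for the inverse CDF $F^{-1}$, so the last two lines implement exactly the sampling route derived above.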
The distribution provides \textit{loc} and \textit{scale} parameters: the \textit{loc} parameter shifts the distribution by the appropriate amount and the \textit{scale} parameter stretches the distribution as required. These values are indicated in the table as (\textit{loc}, \textit{scale}) values against the distribution, preceded by a shape parameter where the distribution has one. Similar parameters are used to define the other distributions as well. \begin{table} \centering \caption{Distributions of commits by number of repositories} {\small \begin{tabular}{|C{0.25\textwidth}|R{0.20\textwidth}|C{0.20\textwidth}|C{0.25\textwidth}|} \hline \textbf{Commits} & \textbf {Repositories} & \textbf {Best Fit} & \textbf {Second-Best Fit}\\ \hline\hline \textless 20 & 13,156,036 & Exponential (-0.83,0.83) & Log-Normal (5.67,-0.832,0) \\ \hline 20 - 100 & 2,235,831 & Exponential (-1.07,1.07) & Log-Normal (1.01,-1.17,0.76) \\ \hline 100 - 1000 & 554,079 & Log-Normal (1.30,-0.83,0.41) & Weibull Min (0.81,-0.81,0.71) \\ \hline 1000 - 4000 & 28,549 & Exponential (-1.07,1.07) & Weibull Min (0.93,-1.07,1.11)\\ \hline 4000 - 10,000 & 4,766 & Exponential (-1.26,1.26) & Gamma (1.17,-1.26,1.07) \\ \hline 10,000 - 100,000 & 2,221 & Log-Normal (1.30,-0.81,0.40) & Inv. Gaussian (2.13,-0.851,0.40) \\ \hline \textgreater 100,000 & 128 & Exponential (-0.94,0.94) & Log-Normal (1.33,-0.96,0.48) \\ \hline \end{tabular} } \label{tbl:Fit} \end{table} Table~\ref{tbl:Fit} gives the parameters of the best fitting and second-best fitting distributions over the different ranges of numbers of commits. \subsection{Distributions of Commits to Repositories} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/452M_Commitsbw20and100.pdf} \caption{Histogram of number of commits between 20 and 100} \label{fig:452M_hist_bw 20 & 100} \end{figure} \begin{figure} \centering \includegraphics [width=\myfigwidth] {figs/452M_Commitsbw100and1000.pdf} \caption{Histogram of number of commits between 100 and 1K} \label{fig:452M_hist_bw 100 & 1000} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/452M_Commitsbw1000and4000.pdf} \caption{Histogram of number of commits between 1K and 4K} \label{fig:452M_hist_bw 1000 & 4000} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/452M_Commitsbw4000and10000.pdf} \caption{Histogram of number of commits between 4K and 10K} \label{fig:452M_hist_bw 4000 & 10000} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/452M_Commitsbw10000and100k.pdf} \caption{Histogram of number of commits between 10K and 100K} \label{fig:452M_hist_bw 10000 & 100k} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/452M_CommitsGreaterthan100k.pdf} \caption{Histogram of number of commits greater than 100K} \label{fig:452M_hist_greater than 100k} \end{figure} Figure~\ref{fig:452M_hist_bw 20 & 100} shows the histogram of number of commits between 20 and 100 (very low activity repositories). In this figure (and all other related figures), we overlay the Probability Density Functions (PDFs) of the best fitting distributions on the histogram. Figure~\ref{fig:452M_hist_bw 100 & 1000} shows the histogram of number of commits between 100 and 1000 (low activity repositories). Figure~\ref{fig:452M_hist_bw 1000 & 4000} shows the histogram of number of commits between 1000 and 4000 (medium activity repositories). Figure~\ref{fig:452M_hist_bw 4000 & 10000} shows the histogram of number of commits between 4000 and 10000 (medium activity repositories).
Figure~\ref{fig:452M_hist_bw 10000 & 100k} shows the histogram of number of commits between 10000 and 100K (high activity repositories). Figure~\ref{fig:452M_hist_greater than 100k} shows the histogram of number of commits greater than 100K (very high activity repositories). \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/452M_Commits1.png} \caption{Histogram of all the commits to all repositories} \label{fig:All commits} \end{figure} Figure~\ref{fig:All commits} shows the histogram of all the commits. \subsection{Distributions of Contributors to Repositories} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/CommitsVsContri.png} \caption{Number of Commits vs Number of Contributors} \label{fig:452M_CommitsVsContri} \end{figure} Figure~\ref{fig:452M_CommitsVsContri} shows the relationship between the number of commits and the number of contributors across all the repositories. Figure~\ref{fig:All contributors} shows the histogram of all the contributors. \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/452M_Contributors1.png} \caption{Histogram of the numbers of contributors to the repositories} \label{fig:All contributors} \end{figure} \section{StackOverFlow Developer Dataset} Stack Overflow conducts an annual survey of developers across the globe. We used the 2019, 2020 and 2021 datasets to find the best-fit distributions for coders' years of professional experience. The datasets can be found through this link: \href{https://insights.stackoverflow.com/survey}{StackOverflow Dataset}. \subsection{Overview and Preprocessing of the Data} \subsubsection{Preprocessing 2021 Dataset} In the preprocessing stage, we dropped all rows that contain N/A values or coders with less than 1 year or more than 50 years of experience. To separate the professionals from all coders, we used the column named ‘Main Branch’ and restricted the analysis to those participants who marked ‘I am a developer by profession’. \subsubsection{Preprocessing 2020 Dataset} We started our preprocessing of the data by converting ``Less than 1 year'' values to 0, ``More than 50 years'' to 51, and dropping ``NA'' values from the column ``YearsCodePro''. We also considered only ``Employed full-time'' entries for the ``Employment'' column. Further, values in the range of 30-90 were considered for the column ``WorkWeekHrs''. \subsubsection{Preprocessing for Comparison of Datasets} In order to be able to make comparisons across the three years, we normalize the comparison by using the same preprocessing method as used for the 2021 dataset. In other words, the preprocessing steps described earlier for the 2021 dataset are applied to the 2019 and 2020 datasets also, and the resulting data is used for direct comparison across the years.
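The sketch below shows this normalized preprocessing in Python/pandas. The file names are placeholders, and the column names (\texttt{MainBranch}, \texttt{YearsCodePro}) are assumptions based on the public survey schema rather than the verbatim pipeline.
\begin{verbatim}
# Sketch of the normalized preprocessing applied to each survey year.
import pandas as pd

def preprocess(path):
    df = pd.read_csv(path)
    # Keep professional developers only.
    df = df[df["MainBranch"] == "I am a developer by profession"]
    years = (df["YearsCodePro"]
             .replace({"Less than 1 year": 0, "More than 50 years": 51})
             .dropna()
             .astype(float))
    # Drop coders with less than 1 or more than 50 years of experience.
    return years[(years >= 1) & (years <= 50)]

experience = {y: preprocess(f"survey_{y}.csv") for y in (2019, 2020, 2021)}
\end{verbatim}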
\subsection{Professional Coder Distributions in 2021} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/SF2021_2and20.pdf} \caption{Histogram of professional coders between 2 and 20 years of experience} \label{fig:Coders_2and20} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/SF2021_morethan20.pdf} \caption{Histogram of professional coders with more than 20 years of experience} \label{fig:Coders_morethan20} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/SF2021_Professional.pdf} \caption{Histogram of all the professional coders} \label{fig:Professionals} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/SF2021_nonProfessional.pdf} \caption{Histogram of non-professional coders} \label{fig:Non-Professionals} \end{figure} Figure~\ref{fig:Coders_2and20} shows the histogram of professional coders with between 2 and 20 years of experience. Figure~\ref{fig:Coders_morethan20} shows the histogram of professional coders with more than 20 years of experience. Figure~\ref{fig:Professionals} shows the histogram of all the professional coders with more than 1 year of experience. Figure~\ref{fig:Non-Professionals} shows the histogram of the non-professional coders. Table~\ref{tbl:stackoverflow-2021-coder-distributions} captures all the distributions from these figures into best-fit and second-best-fit distributions for all the groups. \begin{table}[h!] \caption{Distributions of coders in the StackOverflow 2021 dataset} \label{tbl:stackoverflow-2021-coder-distributions} \begin{tabular}{|C{0.3\textwidth}|C{0.3\textwidth}|C{0.3\textwidth}|}\hline \textbf{Group} & \textbf{Best fit}& \textbf{Second best fit} \\\hline\hline Professional coders between 2 and 20 years & Log-Normal (0.89, -1.29, 0.92) & Inverse Gaussian (0.77, -1.39, 1.8)\\\hline Professional coders with more than 20 years & Inverse Gaussian (0.71, -1.42, 2.00) & Log-Normal (0.84, -1.32, 0.97)\\\hline All professional coders & Log-Normal (0.91, -1.13, 0.78) & Inverse Gaussian (0.89, -1.20, 1.36) \\\hline Non-professional coders & Exponential (-1.04,1.04) & Beta (dropped: visibly poor fit) \\\hline \end{tabular} \end{table} \subsection{Professional Coder Distributions in 2020} Here, we start by determining the frequency histograms of the coders, which provide a visual indication of the potential distributions to attempt in fitting the data. This is described in Section~\ref{sec:SO2020-freqhist}. For the rest of the analyses in this section, the \texttt{fitdistrplus} package in the R statistical analysis software is used. We compute skewness-kurtosis measures to uncover specific candidate distributions and further narrow down the potential distributions. This is achieved using the \texttt{descdist} function of the R package. The results from this step are described in Section~\ref{sec:SO2020-skewness}. The dataset is then tested against fitted distributions (namely, Normal, Poisson, and Negative Binomial) using the \texttt{fitdist} function, which provides the log-likelihood, Akaike information criterion (AIC), and Bayesian information criterion (BIC) values. This step is described in Section~\ref{sec:SO2020-fitofdist}. Following this, we generated the P-P plots for theoretical probabilities against the distribution of the empirical data (this is achieved using the \texttt{ppcomp} function).
The CDF plots for the empirical cumulative distribution are generated and plotted against fitted distribution functions (this is achieved using the \texttt{cdfcomp} function). These plots are presented in Section~\ref{sec:SO2020-closenessoffit}. The density plots for the histogram are fitted against density functions (this is performed using the \texttt{denscomp} function). This is described in Section~\ref{sec:SO2020-densityoverlay}. Finally, considering all the statistical analyses, we select the best fitting distribution for each professional coding year range. This is described in Section~\ref{sec:SO2020-bestfits}. \subsubsection{Frequency Histograms of Professional Coders} \label{sec:SO2020-freqhist} Figure~\ref{fig:hist_freq_0_20}, Figure~\ref{fig:hist_freq_21_30}, and Figure~\ref{fig:hist_freq_31_40} show the frequency histogram of professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:freq_hist_0_50} captures the same for the broader range of 0-50 years working experience. \begin{figure} \centering \includegraphics[width=12 cm, height= 7 cm]{figs/hist_freq_0_50_first_pass.pdf} \caption{Frequency histogram of professional coding years (0-50)} \label{fig:freq_hist_0_50} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.5 cm]{figs/cullen_frey_0_50_first_pass_updated.pdf} \caption{Skewness-kurtosis plot of professional coding years (0-50)} \label{fig:cullen_frey_0_50} \end{figure} \subsubsection{Determining the Potential Matching Distributions} \label{sec:SO2020-skewness} For every group representing the coding experience range of the coders, the skewness-kurtosis plot of professional coders in that experience range is computed. This plot helps to visually determine which discrete distribution the data set follows more closely. Figure~\ref{fig:cullen_frey_0_20}, Figure~\ref{fig:cullen_frey_21_30}, and Figure~\ref{fig:cullen_frey_31_40} show the skewness-kurtosis plots of professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:cullen_frey_0_50} captures the same for the broader range from 0 to 50 years of experience. Table~\ref{tbl:2020-StackOverflow-Summary} shows the count of coders and the best-fit distribution (with parameters) for each professional coding year range. \begin{table} \centering \begin{tabular}{|C{0.25\textwidth}|C{0.25\textwidth}|C{0.40\textwidth}|}\hline Coding Year Range & Count of Coders & Best Fit \\\hline\hline 0--50 & 33,734 & Neg. Binomial (1.59,8.33)\\\hline 0--20 & 31,202 & Neg. Binomial (2.2,6.8)\\\hline 21--30 & 1,957 & Normal (24.66,2.81)\\\hline 31--40 & 524 & Normal (35.02,2.82)\\\hline \end{tabular} \caption{Best-fit distributions for coding year ranges} \label{tbl:2020-StackOverflow-Summary} \end{table} \subsubsection{Fit of Distributions} \label{sec:SO2020-fitofdist} Figure~\ref{fig:normal_0_20} (left), Figure~\ref{fig:normal_21_30} (left), and Figure~\ref{fig:normal_31_40} (left) show density plots comparing the empirical distribution (data set) with the fitted Normal distribution for professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:normal_0_50} (left) captures the same for the broader range of 0-50 years working experience.
Figure~\ref{fig:poisson_0_20} (left), Figure~\ref{fig:poisson_21_30} (left), and Figure~\ref{fig:poisson_31_40} (left) show density plots comparing the empirical distribution (data set) with the fitted Poisson distribution for professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:poisson_0_50} (left) captures the same for the broader range of 0-50 years working experience. Figure~\ref{fig:n. binomial_0_20} (left), Figure~\ref{fig:n. binomial_21_30} (left), and Figure~\ref{fig:n. binomial_31_40} (left) show density plots comparing the empirical distribution (data set) with the fitted Negative Binomial distribution for professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:n. binomial_0_50} (left) captures the same for the broader range of 0-50 years working experience. Figure~\ref{fig:normal_0_20} (right), Figure~\ref{fig:normal_21_30} (right), and Figure~\ref{fig:normal_31_40} (right) show CDF plots comparing the empirical distribution (data set) with the fitted Normal distribution for professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:normal_0_50} (right) captures the same for the broader range of 0-50 years working experience. Figure~\ref{fig:poisson_0_20} (right), Figure~\ref{fig:poisson_21_30} (right), and Figure~\ref{fig:poisson_31_40} (right) show CDF plots comparing the empirical distribution (data set) with the fitted Poisson distribution for professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:poisson_0_50} (right) captures the same for the broader range of 0-50 years working experience. Figure~\ref{fig:n. binomial_0_20} (right), Figure~\ref{fig:n. binomial_21_30} (right), and Figure~\ref{fig:n. binomial_31_40} (right) show CDF plots comparing the empirical distribution (data set) with the fitted Negative Binomial distribution for professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:n. binomial_0_50} (right) captures the same for the broader range of 0-50 years working experience. \begin{figure} \centering \includegraphics[width=10 cm, height= 7 cm]{figs/fitdist_normal_0_50_first_pass_updated.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Normal distribution for professional coding years 0-50} \label{fig:normal_0_50} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 8 cm]{figs/fitdist_poi_0_50_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Poisson distribution for professional coding years 0-50} \label{fig:poisson_0_50} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 8.2 cm]{figs/fitdist_nbinom_0_50_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Negative Binomial distribution for professional coding years 0-50} \label{fig:n. binomial_0_50} \end{figure} \subsubsection{Closeness of Fit of Cumulative Distribution Functions} \label{sec:SO2020-closenessoffit} The fit of each distribution can be evaluated by comparing its cumulative distribution function curve against the perfect linear match representing the actual data on a plot of distribution probability versus empirical probability (P-P plot).
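Alongside these graphical comparisons, the log-likelihood/AIC/BIC comparison reported in Section~\ref{sec:SO2020-bestfits} can be reproduced numerically. The sketch below is a Python stand-in for the \texttt{fitdistrplus} workflow, with the Negative Binomial (size, mu) values taken from the fitted 0-50 range and \texttt{experience} from the preprocessing sketch shown earlier.
\begin{verbatim}
# Sketch: log-likelihood, AIC and BIC for the three candidate models.
import numpy as np
from scipy import stats

x = experience[2020].to_numpy().astype(int)
n = x.size

ll_norm = stats.norm.logpdf(x, loc=x.mean(), scale=x.std()).sum()
ll_pois = stats.poisson.logpmf(x, mu=x.mean()).sum()

# Negative Binomial in (size, mu) form; scipy uses (n, p) with
# p = size / (size + mu).
size, mu = 1.59, 8.33   # fitted values for the 0-50 range
ll_nb = stats.nbinom.logpmf(x, size, size / (size + mu)).sum()

for name, ll, k in [("Normal", ll_norm, 2), ("Poisson", ll_pois, 1),
                    ("Neg. Binomial", ll_nb, 2)]:
    aic = 2 * k - 2 * ll
    bic = k * np.log(n) - 2 * ll
    print(f"{name}: loglik={ll:.0f} AIC={aic:.0f} BIC={bic:.0f}")
\end{verbatim}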
Figure~\ref{fig:P_P_0_20}, Figure~\ref{fig:P_P_21_30}, and Figure~\ref{fig:P_P_31_40} show the P-P plots for the Normal, Poisson and Negative Binomial distributions with respect to the empirical distribution (data set) of professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:P_P_0_50} captures the same for the broader range of 0-50 years working experience. Figure~\ref{fig:CDF_0_20}, Figure~\ref{fig:CDF_21_30}, and Figure~\ref{fig:CDF_31_40} show the CDF plots for the Normal, Poisson and Negative Binomial distributions with respect to the empirical distribution (data set) of professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:CDF_0_50} captures the same for the broader range of 0-50 years working experience. \begin{figure} \centering \includegraphics[width=12 cm, height= 7.5 cm]{figs/P_P_plot_first_pass.pdf} \caption{P-P plot of Normal, Poisson and Negative Binomial distribution along with empirical distribution (data set) for professional coding years 0-50} \label{fig:P_P_0_50} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.5 cm]{figs/CDF_0_50_first_pass.pdf} \caption{CDF plot of Normal, Poisson and Negative Binomial distribution along with empirical distribution (data set) for professional coding years 0-50} \label{fig:CDF_0_50} \end{figure} \subsubsection{Density Overlay for Different Distributions} \label{sec:SO2020-densityoverlay} Figure~\ref{fig:den_overlay_def_bin_0_20}, Figure~\ref{fig:den_overlay_def_bin_21_30}, and Figure~\ref{fig:den_overlay_def_bin_31_40} show the histogram (default bin size) against fitted Normal, Poisson and Negative Binomial density functions of professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Figure~\ref{fig:den_overlay_def_bin} captures the same for the broader range of 0-50 years working experience. \begin{figure} \centering \includegraphics[width=12 cm, height= 8 cm]{figs/hist_theo_den_0_50_first_pass.pdf} \caption{Histogram and distribution densities for professional coding years 0-50} \label{fig:den_overlay_def_bin} \end{figure} \subsubsection{Best Fit Distributions} \label{sec:SO2020-bestfits} Table~\ref{tab:0_20}, Table~\ref{tab:21_30}, and Table~\ref{tab:31_40} exhibit summaries of fitting different distributions, considering parameters, log-likelihood, Akaike's Information Criterion (AIC), and Bayesian Information Criterion (BIC), for professional coders with 0-20, 21-30, and 31-40 years of working experience, respectively. Table~\ref{tab:0_50} exhibits the same for the broader range of 0-50 years working experience. \begin{table} \centering \caption{Fits of distributions for professional coding years 0-50} \begin{tabular}{|c|p{5.5em}|C{0.14\textwidth}|c|c|c|}\hline \multicolumn{2}{|c|}{\textbf{Distribution}} & \textbf{Log-likelihood} & \textbf{AIC} & \textbf{BIC} & \textbf{Best Fit} \\\hline\hline Normal & mean = 8.33\newline{}sd = 7.45 & -115,628 & 231,260 & 231,277 & \\\hline Poisson & lambda=\newline{}8.33 & -159,718 & 319,438 & 319,447 & \\\hline N.
Binomial & size = 1.59\newline{}mu = 8.33 & -105,885 & 211,774 & 211,790 & \checkmark{} \\\hline \end{tabular} \label{tab:0_50} \end{table} \begin{table} \centering \caption{Fits of distributions for professional coding years 0-20} \begin{tabular}{|c|p{5.5em}|C{0.14\textwidth}|c|c|c|}\hline \multicolumn{2}{|c|}{\textbf{Distribution}} & \textbf{Log-likelihood} & \textbf{AIC} & \textbf{BIC} & \textbf{Best Fit} \\\hline\hline Normal & mean = 6.80\newline{}sd = 5.13 & -95,336 & 190,676 & 190,693 & \\\hline Poisson & lambda=\newline{}6.80 & -113,532 & 227,066 & 227,074 & \\\hline N. Binomial & size = 2.20\newline{}mu = 6.80 & -90,475 & 180,955 & 180,972 & \checkmark{} \\\hline \end{tabular} \label{tab:0_20} \end{table} \begin{table} \centering \caption{Fits of distributions for professional coding years 21-30} \begin{tabular}{|c|p{5.5em}|C{0.14\textwidth}|c|c|c|}\hline \multicolumn{2}{|c|}{\textbf{Distribution}} & \textbf{Log-likelihood} & \textbf{AIC} & \textbf{BIC} & \textbf{Best Fit} \\\hline\hline Normal & mean = 24.66\newline{}sd = 2.81 & -4,801 & 9,607 & 9,618 & \checkmark{} \\\hline Poisson & lambda=\newline{}24.66 & -5,244 & 10,490 & 10,496 & \\\hline N. Binomial & size = 1.08e+08\newline{}mu = 24.66 & -5,244 & 10,492 & 10,503 & \\\hline \end{tabular} \label{tab:21_30} \end{table} \begin{table} \centering \caption{Fits of distributions for professional coding years 31-40} \begin{tabular}{|c|p{5.5em}|C{0.14\textwidth}|c|c|c|}\hline \multicolumn{2}{|c|}{\textbf{Distribution}} & \textbf{Log-likelihood} & \textbf{AIC} & \textbf{BIC} & \textbf{Best Fit} \\\hline\hline Normal & mean = 35.02\newline{}sd = 2.82 & -1,286 & 2,576 & 2,585 & \checkmark{} \\\hline Poisson & lambda=\newline{}35.02 & -1,472 & 2,947 & 2,951 & \\\hline N. Binomial & size = 1.39e+08\newline{}mu = 35.02 & -1,472 & 2,949 & 2,957 & \\\hline \end{tabular} \label{tab:31_40} \end{table} \begin{figure} \centering \includegraphics[width=12 cm, height= 8 cm]{figs/freq_hist_0_20_first_pass.pdf} \caption{Frequency histogram of professional coding years (0-20)} \label{fig:hist_freq_0_20} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7 cm]{figs/cullen_frey_0_20_first_pass.pdf} \caption{Skewness-kurtosis plot of professional coding years (0-20)} \label{fig:cullen_frey_0_20} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.5 cm]{figs/fit_of_dist_normal_0_20_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Normal distribution for professional coding years 0-20} \label{fig:normal_0_20} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7 cm]{figs/fit_of_dist_poisson_0_20_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Poisson distribution for professional coding years 0-20} \label{fig:poisson_0_20} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 8 cm]{figs/fit_of_dist_negative_binomial_0_20_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Negative Binomial distribution for professional coding years 0-20} \label{fig:n. 
binomial_0_20} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.5 cm]{figs/p_p_plot_0_20_first_pass.pdf} \caption{P-P plot of Normal, Poisson and Negative Binomial distribution along with empirical distribution (data set) for professional coding years 0-20} \label{fig:P_P_0_20} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.5 cm]{figs/CDF_0_20_first_pass.pdf} \caption{CDF plot of Normal, Poisson and Negative Binomial distribution along with empirical distribution (data set) for professional coding years 0-20} \label{fig:CDF_0_20} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7 cm]{figs/density_overlay_0_20_first_pass.pdf} \caption{Histogram and distribution densities for professional coding years 0-20} \label{fig:den_overlay_def_bin_0_20} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 8 cm]{figs/hist_freq_21_30_first_pass.pdf} \caption{Frequency histogram of professional coding years (21-30)} \label{fig:hist_freq_21_30} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7 cm]{figs/cullen_frey_21_30_first_pass.pdf} \caption{Skewness-kurtosis plot of professional coding years (21-30)} \label{fig:cullen_frey_21_30} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.8 cm]{figs/fit_dist_norm_21_30_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Normal distribution for professional coding years 21-30} \label{fig:normal_21_30} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.8 cm]{figs/fit_dist_poi_21_30_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Poisson distribution for professional coding years 21-30} \label{fig:poisson_21_30} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.8 cm]{figs/fit_dist_n_binom_21_30_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Negative Binomial distribution for professional coding years 21-30} \label{fig:n. 
binomial_21_30} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.6 cm]{figs/p_p_21_30_first_pass.pdf} \caption{P-P plot of Normal, Poisson and Negative Binomial distribution along with empirical distribution (data set) for professional coding years 21-30} \label{fig:P_P_21_30} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.5 cm]{figs/cdf_21_30_first_pass.pdf} \caption{CDF plot of Normal, Poisson and Negative Binomial distribution along with empirical distribution (data set) for professional coding years 21-30} \label{fig:CDF_21_30} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7 cm]{figs/Den_overlay_21_30_first_pass_updated.pdf} \caption{Histogram and distribution densities for professional coding years 21-30} \label{fig:den_overlay_def_bin_21_30} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 8 cm]{figs/hist_freq_31_40_first_pass.pdf} \caption{Frequency histogram of professional coding years (31-40)} \label{fig:hist_freq_31_40} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 8 cm]{figs/cullen_frey_31_40_first_pass.pdf} \caption{Skewness-kurtosis plot of professional coding years (31-40)} \label{fig:cullen_frey_31_40} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.8 cm]{figs/fit_dist_norm_31_40_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Normal distribution for professional coding years 31-40} \label{fig:normal_31_40} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.8 cm]{figs/fit_dist_poi_31_40_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Poisson distribution for professional coding years 31-40} \label{fig:poisson_31_40} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.8 cm]{figs/fit_dist_n_binom_31_40_first_pass.pdf} \caption{Density (left) and CDF (right) of empirical (data set) and Negative Binomial distribution for professional coding years 31-40} \label{fig:n. 
binomial_31_40} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.6 cm]{figs/p_p_plot_31_40_first_pass.pdf} \caption{P-P plot of Normal, Poisson and Negative Binomial distribution along with empirical distribution (data set) for professional coding years 31-40} \label{fig:P_P_31_40} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7.5 cm]{figs/cdf_31_40_first_pass.pdf} \caption{CDF plot of Normal, Poisson and Negative Binomial distribution along with empirical distribution (data set) for professional coding years 31-40} \label{fig:CDF_31_40} \end{figure} \begin{figure} \centering \includegraphics[width=12 cm, height= 7 cm]{figs/den_overlay_31_40_first_pass_updated.pdf} \caption{Histogram and distribution densities for professional coding years 31-40} \label{fig:den_overlay_def_bin_31_40} \end{figure} \subsection{Comparison of Changes across Years} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/SF_Allyears_Upto20.pdf} \caption{Histogram of professional coders in 2019, 2020 and 2021 with up to 20 years of experience} \label{fig:AllYears_Upto20} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/SF_Allyears_Bw20and50.pdf} \caption{Histogram of professional coders in 2019, 2020 and 2021 with between 20 and 50 years of experience} \label{fig:AllYears_Bw20and50} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/SF_Allyears_Professionals.pdf} \caption{Histogram of all professional coders in 2019, 2020 and 2021} \label{fig:AllYears_AllProfessionals} \end{figure} \begin{figure} \centering \includegraphics[width=\myfigwidth]{figs/SF_Allyears_NonProfessional.pdf} \caption{Histogram of all non-professional coders in 2019, 2020 and 2021} \label{fig:AllYears_AllNonProfessionals} \end{figure} Figure~\ref{fig:AllYears_Upto20} shows the histogram of professional coders in 2019, 2020, and 2021 with 2 to 20 years of experience. Figure~\ref{fig:AllYears_Bw20and50} shows the histogram of professional coders in 2019, 2020, and 2021 with 20 to 50 years of experience. Figure~\ref{fig:AllYears_AllProfessionals} shows the histogram of all professional coders in 2019, 2020, and 2021 with more than 1 year of experience. Figure~\ref{fig:AllYears_AllNonProfessionals} shows the histogram of all non-professional coders in 2019, 2020, and 2021. \section{TravisTorrent Dataset} \subsection{Dataset Overview} The TravisTorrent dataset contains metadata from \texttt{git} repositories for 1283 Java and Ruby projects hosted on GitHub up to January 27, 2017. It consists of an \texttt{SQLite} table containing the following columns: \begin{itemize} \item \textbf{project}: project name on GitHub, in the form 'owner/project' \item \textbf{sha}: the commit id \item \textbf{message}: the commit message \item \textbf{date}: the commit date \item \textbf{author name}: name of the commit author \item \textbf{author email}: email of the commit author \end{itemize} After some minor preprocessing, the dataset contains 2,249,243 rows. This dataset has been analyzed to provide insights into commit volumes on a per-repository and per-developer basis. It also provides unique information on the temporal nature of the commits, which enables us to analyze measures such as commit rates over various time intervals. \subsection{Project-Level Metadata Information} \subsubsection{Commit Counts of Projects} Figure~\ref{fig:project_commit_trend_line} shows the project-level commit counts (sorted from high to low) with a trend line.
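The trend lines in this and the following subsections are presumably ordinary least-squares fits of the commit count against the project (or author) rank; the fitting procedure is not stated explicitly, so the NumPy sketch below is an assumption, and the commit counts in it are placeholders rather than values from the dataset.
\begin{verbatim}
# Hypothetical sketch: fitting a linear trend line to rank-ordered
# per-project commit counts with ordinary least squares.
import numpy as np

# Placeholder counts, sorted from high to low (not dataset values).
counts = np.array([52000, 31000, 18000, 9500, 7200, 4100, 2500, 1300])
ranks = np.arange(1, len(counts) + 1)

# Degree-1 polynomial fit returns (slope, intercept).
slope, intercept = np.polyfit(ranks, counts, deg=1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
\end{verbatim}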
The slope and intercept of the trend line are -5.49 and 5278.26, respectively. \begin{figure} \centering \includegraphics[width=10 cm, height= 7 cm]{figs/project_commit_all_trend_line.pdf} \caption{Project commits with trend line} \label{fig:project_commit_trend_line} \end{figure} \subsubsection{Granular View of Commits for Top 100 Projects} Figure~\ref{fig:project_commit_trend_line_top_20}, Figure~\ref{fig:project_commit_trend_line_top_21_40}, Figure~\ref{fig:project_commit_trend_line_top_41_70}, and Figure~\ref{fig:project_commit_trend_line_top_71_100} provide a granular view of the commits for the top 1-20, 21-40, 41-70, and 71-100 projects with trend lines. The slopes of the trend lines are -1958.95, -264.86, -86.11, and -48.71, respectively. The intercepts of the trend lines are 48426.97, 18869.19, 11984.57, and 9336.82, respectively. \begin{figure} \centering \includegraphics[width=10 cm, height= 6.7 cm]{figs/project_commits_with_trendline_top_20.pdf} \caption{Project commits with trend line for top 20 projects} \label{fig:project_commit_trend_line_top_20} \end{figure} \begin{figure} \centering \includegraphics[width=10 cm, height= 8 cm]{figs/project_commits_with_trendline_21_40.pdf} \caption{Project commits with trend line for top 21-40 projects} \label{fig:project_commit_trend_line_top_21_40} \end{figure} \begin{figure} \centering \includegraphics[width=10 cm, height= 8 cm]{figs/project_commits_with_trendline_41_70.pdf} \caption{Project commits with trend line for top 41-70 projects} \label{fig:project_commit_trend_line_top_41_70} \end{figure} \begin{figure} \centering \includegraphics[width=10 cm, height= 8 cm]{figs/project_commits_with_trendline_71_100.pdf} \caption{Project commits with trend line for top 71-100 projects} \label{fig:project_commit_trend_line_top_71_100} \end{figure} \subsection{Project-Level Commit Rates} Figure~\ref{fig:commit_rate_project} shows the project-level commit rate for 1281 projects. Although the total number of projects is 1283, two of the projects had all their commits on a single date; these projects were excluded when calculating commit rates. \begin{figure} \centering \includegraphics[width=10 cm, height= 6 cm]{figs/commit_rate_project_wise.pdf} \caption{Project-level commit rate for 1281 projects} \label{fig:commit_rate_project} \end{figure} Figure~\ref{fig:commit_rate_project_rearranged} shows the rearranged (high to low) project-level commit rates for the 1281 projects. As we can see from Figure~\ref{fig:commit_rate_project_rearranged}, most of the projects had a very low commit rate ($\leq 2$). \begin{figure} \centering \includegraphics[width=10 cm, height= 7 cm]{figs/commit_rate_project_wise_reorganized.pdf} \caption{Project-level commit rate (high-low)} \label{fig:commit_rate_project_rearranged} \end{figure} \subsection{Author-Level Commit Metadata Information} \subsubsection{Commit Counts of Authors} Figure~\ref{fig:commit_author_trend_line} shows the author-level commit counts (sorted from high to low) with a trend line. The slope and intercept of the trend line are -0.00427 and 156.67, respectively.
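The project- and author-level commit rates in this section are presumably commits per day between an entity's first and last commit, which would be consistent with excluding projects and authors whose commits fall on a single date. A minimal pandas sketch under that assumption follows; the SQLite file and table names are hypothetical, while the columns are those listed in the dataset overview.
\begin{verbatim}
# Hedged sketch: per-project commit rate = commits per day between the
# first and last commit. File and table names are assumptions.
import sqlite3
import pandas as pd

con = sqlite3.connect("travistorrent.sqlite")   # hypothetical file name
df = pd.read_sql_query("SELECT project, sha, date FROM commits", con)
df["date"] = pd.to_datetime(df["date"])

grouped = df.groupby("project")
span_days = grouped["date"].agg(lambda d: (d.max() - d.min()).days)
n_commits = grouped["sha"].count()

# Projects whose commits all fall on one date have a zero span and are
# excluded, matching the two projects dropped in the text.
rate = (n_commits / span_days)[span_days > 0].sort_values(ascending=False)
print(rate.head())
\end{verbatim}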
\begin{figure} \centering \includegraphics[width=10 cm, height= 6.7 cm]{figs/author_commit_all_trend_line.pdf} \caption{Commit authors with trend line} \label{fig:commit_author_trend_line} \end{figure} \subsubsection{Granular View of Commits for Top 100 Authors} Figure~\ref{fig:commit_author_trend_line_top_20}, Figure~\ref{fig:commit_author_trend_line_top_21_50}, Figure~\ref{fig:commit_author_trend_line_top_51_80}, and Figure~\ref{fig:commit_author_trend_line_top_81_100} provide a granular view of the commits for the top 1-20, 21-50, 51-80, and 81-100 authors with trend lines. The slopes of the trend lines are -271.74, -54.21, -28.22, and -14.06, respectively. The intercepts of the trend lines are 10278.89, 6655.92, 5294.05, and 4222.15, respectively. \begin{figure} \centering \includegraphics[width=10 cm, height= 6.6 cm]{figs/author_commit_1_20_trend_line.pdf} \caption{Commit authors with trend line for top 20 authors} \label{fig:commit_author_trend_line_top_20} \end{figure} \begin{figure} \centering \includegraphics[width=10 cm, height= 8 cm]{figs/author_commit_21_50_trend_line.pdf} \caption{Commit authors with trend line for top 21-50 authors} \label{fig:commit_author_trend_line_top_21_50} \end{figure} \begin{figure} \centering \includegraphics[width=10 cm, height= 8 cm]{figs/author_commit_51_80_trend_line.pdf} \caption{Commit authors with trend line for top 51-80 authors} \label{fig:commit_author_trend_line_top_51_80} \end{figure} \begin{figure} \centering \includegraphics[width=10 cm, height= 8 cm]{figs/author_commit_81_100_trend_line.pdf} \caption{Commit authors with trend line for top 81-100 authors} \label{fig:commit_author_trend_line_top_81_100} \end{figure} \subsection{Author-Level Commit Rates} Figure~\ref{fig:commit_rate_author} shows the author-level commit rates for over 28k authors. About 50 percent of the authors were excluded because they either committed only once or committed on a single date only. \begin{figure} \centering \includegraphics[width=10 cm, height= 7.3 cm]{figs/commit_rate_author_wise.pdf} \caption{Author-level commit rates} \label{fig:commit_rate_author} \end{figure} Figure~\ref{fig:commit_rate_author_rearranged} shows the rearranged (high to low) author-level commit rates for over 28k authors. As we can see from Figure~\ref{fig:commit_rate_author_rearranged}, most of the authors had a very low commit rate ($\leq 2$). \begin{figure} \centering \includegraphics[width=10 cm, height= 7.3 cm]{figs/commit_rate_author_wise_reorganized.pdf} \caption{Author-level commit rates (high-low)} \label{fig:commit_rate_author_rearranged} \end{figure} \subsection{Weekly Commit Trends} Figure~\ref{fig:weekly_commit_trend} shows the day-wise commit counts in a typical week over the entire time period (1998-2017) of the data set. \begin{figure} \centering \includegraphics[width=10 cm, height= 7.3 cm]{figs/weekly_commit_trend_updated.pdf} \caption{Weekly commit trend for total number of commits} \label{fig:weekly_commit_trend} \end{figure} \subsection{Overall Commit Timelines} Figure~\ref{fig:TravisTorrent timeseries} shows the time series of commit counts for the entire time frame (1998-2017) of the data set.
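The day-of-week and timeline aggregations behind these figures can be reproduced directly from the commit dates; the sketch below uses the same assumed file and table names as above, and the monthly bin width for the timeline is a guess, since the paper does not state the binning granularity.
\begin{verbatim}
# Hedged sketch: day-of-week counts (weekly trend) and a binned commit
# timeline. File/table names and the monthly bin width are assumptions.
import sqlite3
import pandas as pd

con = sqlite3.connect("travistorrent.sqlite")   # hypothetical file name
df = pd.read_sql_query("SELECT date FROM commits", con)
df["date"] = pd.to_datetime(df["date"])

weekly = df["date"].dt.day_name().value_counts()      # commits per weekday
timeline = df.set_index("date").resample("M").size()  # commits per month
print(weekly)
print(timeline.idxmax())  # the peak, reported around 2014 in the text
\end{verbatim}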
\begin{figure} \centering \includegraphics[width=10 cm, height= 7 cm]{figs/TravisTorrent_timeseries_updated.pdf} \caption{Time series of commit counts for the entire time frame (1998-2017)} \label{fig:TravisTorrent timeseries} \end{figure} Figure~\ref{fig:TravisTorrent timeseries overlay} shows the project start dates overlaid on the time series of commit counts for the entire time frame (1998-2017) of the data set. As we can see from Figure~\ref{fig:TravisTorrent timeseries overlay}, most of the projects started just before 2014, causing the peak in commit counts in that year. \begin{figure} \centering \includegraphics[width=10 cm, height= 7.3 cm]{figs/TravisTorrent_timeseries_overlay_updated.pdf} \caption{Time series of commit counts for the entire time frame (1998-2017) with project start date overlay} \label{fig:TravisTorrent timeseries overlay} \end{figure} \section{Additional Datasets} \subsection{Bug Datasets} This is a collection of datasets used in the literature \cite{acoss2019,deeptriage2019,dupbug2014,redis2017,bug_fix2019} for analyzing software bugs, portions of which are available on GitHub (\url{https://github.com/logpai/bugrepo}). Table~\ref{tab:bug_dataset} shows the names of the datasets, the number of metadata fields, and the number of projects covered in each bug data set. This collection of datasets is potentially useful in our project as training data for bug patterns in our machine learning methods. \begin{table} \centering \caption{Collection of Bug Datasets and their Parameters} \begin{tabular}{|l|c|c|}\hline \textbf{Dataset} & \textbf{Metadata Fields} & \textbf{Projects Covered} \\\hline\hline Bug triage & 8 & 3 \\ Bugrepo & 11 & 5 \\ SEOSS & 17 & 33 \\ Rediscovery & 14 & 3 \\ 10 years bug fixing activity & 53 & 55 \\ Generating duplicate bug & 13 & 4 \\\hline \end{tabular}% \label{tab:bug_dataset}% \end{table}% \subsection{Android Application Development Dataset} The ``Dataset of Commit History of Real-World Android Applications'', also named AndroidTimeMachine, is the first self-contained, publicly available dataset weaving together spread-out data sources about a large number of Android apps \cite{geiger2018graph}. Covering 8,431 real-world, open-source Android applications, this dataset contains: \begin{enumerate} \item metadata about the apps’ \texttt{git} projects, with their full commit history, and \item metadata from the Google Play store, with app ratings and permissions. \end{enumerate} The following is a characterization of the file distribution from this dataset, available at this \href{https://github.com/AndroidTimeMachine/neo4j_open_source_android_apps/tree/master/data}{link}. \begin{figure} \centering \includegraphics[width=0.5\myfigwidth]{figs/AndroidData_AdditionOfFiles.pdf} \includegraphics[width=0.5\myfigwidth]{figs/AndroidData_DeletionOfFiles.pdf} \caption{Histograms of additions (left) and deletions (right) of files} \label{fig:Android_Files} \end{figure} Figure~\ref{fig:Android_Files} shows the histograms of the number of files added to and deleted from different repositories. Although our initial analyses were based on the more readily accessible Excel spreadsheet-formatted information from this dataset, the dataset also includes a larger amount of data inside a container image. Examining this additional data can potentially provide more training data for our machine learning approaches. \afterpage{\clearpage} \bibliographystyle{plain}
\section{Open Problems and Challenges} Despite the performance improvements and interesting salient features of attention models, various challenges are associated with their practical deployment in computer vision applications. The essential impediments include high computational costs, the significant amounts of training data required, model efficiency, and the cost-benefit trade-off of the performance improvement. In addition, there are also challenges in visualizing and interpreting attention blocks. This section provides an overview of these challenges and limitations, mentions some recent efforts to address them, and highlights the open research questions. \vspace{1mm} \noindent \textbf{Generalization:} Generalizing attention models is a challenging task. Many of the proposed models are specific to the application at hand and only work well in their proposed settings, whereas some models (\latinphrase{e.g.}\xspace channel and spatial attention) have performed better in classification. Since attention models are primarily designed for high-level tasks, they fail when applied directly to low-level vision tasks. Moreover, data quality has a notable influence on the generalization and robustness of attention models. Thus, generalizing pre-trained attention models to broader low-level vision tasks remains a significant open step. \vspace{1mm} \noindent \textbf{Efficiency:} The efficiency of vision models is vital for many real-time computer vision applications. Unfortunately, current models focus more on performance than efficiency. Recently, self-attention has been successfully applied in transformers and shown to achieve better performance, however, at the cost of huge computational complexity: the base ViT~\cite{dosovitskiy2020image} requires 18 billion FLOPs to process an image, compared to CNN models~\cite{han2020ghostnet,anwar2019real} that achieve similar performance with 600 million FLOPs. Although a few attempts, such as Efficient Channel Attention, have been made to make attention models more efficient, they remain complex to train; hence, efficient models are required for deployment on real-time devices. \vspace{1mm} \noindent \textbf{Multi-Modal Data:} Attention has mainly been applied to single-domain data and in a single-task setting. An important question is whether an attention model can fuse input data in a meaningful manner and exploit multiple label types (or tasks) in the data. It also remains to be seen whether attention models can leverage the various labels available, such as combining the point clouds and the RGB images of the KITTI dataset, to provide meaningful performance. Similarly, it is an open question whether attention models can predict the relationships between labels, actions, and attributes in a unified manner. \vspace{1mm} \noindent \textbf{Amount of Training Data:} Attention models usually rely on much more training data than simple non-attentional models to learn the important aspects of the input. For example, the self-attention employed in transformers needs to learn properties such as translation invariance by itself, unlike non-attentional CNNs, where these properties are built in through operations such as pooling. The increase in data also means more training time and computational resources. Hence, an open question here is how to address this problem with more efficient attention models.
\vspace{1mm} \noindent \textbf{Performance Comparisons:} Models that employ attention blocks are mostly compared against a baseline without attention, while other attention blocks are ignored. This lack of comparison between different attention models provides little information about the actual performance improvement over other attention mechanisms. Therefore, there is a need for a more in-depth analysis of the number of added parameters versus the performance gain of the different attention models proposed in the literature. \section{Conclusions} In this paper, we reviewed more than 70 articles related to various attention mechanisms used in vision applications. We provided a comprehensive discussion of the attention techniques along with their strengths and limitations, and restructured the existing attention mechanisms proposed in the literature into a hierarchical framework based on how they compute their attention scores. Choosing the attention score calculation to group the reviewed techniques has been effective in determining how the attention-based models are built and which training strategies are employed therein. Although the capability of the developed attention-based techniques in modeling salient features and boosting performance is commendable, various challenges and open questions remain unanswered, especially regarding the use of these techniques for computer vision tasks. We have listed these challenges and have highlighted the research questions that remain open. Despite some recent efforts introduced to cope with some of these limitations, we are still far from having solved the problems related to attention in vision. This survey will help researchers to better focus their efforts on addressing these challenges efficiently and on developing attention mechanisms that are better suited for vision-based applications. \section{Acknowledgments} Professor Ajmal Mian is the recipient of an Australian Research Council Future Fellowship Award (project number FT210100268) funded by the Australian Government. We thank Professor Mubarak Shah for his useful comments that significantly improved the presentation of the survey. \section{Introduction} \IEEEPARstart{A}{ttention} has a natural bond with the human cognitive system. According to cognitive science, the human optic nerve receives massive amounts of data, more than it can process. Thus, the human brain weighs the input and pays attention only to the necessary information. With recent developments in machine learning, more specifically deep learning, and the increasing ability to process large and multiple input data streams, researchers have adopted a similar concept in many domains and formulated various attention mechanisms to improve the performance of deep neural network models in machine translation~\cite{gehring2017convolutional, m_transformers}, visual recognition~\cite{m_nonlocal}, generative models~\cite{zhang2019self}, multi-agent reinforcement learning~\cite{iqbal2019actor}, \latinphrase{etc.}\xspace Over the past decade, deep learning has advanced by leaps and bounds, leading to many deep neural network architectures capable of learning complex relationships in data. Generally, neural networks provide implicit attention to extract meaningful information from the data.
\begin{figure*}[t] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.33\textwidth]{figures/attentions_chart}& \includegraphics[width=0.33\textwidth]{figures/self_attentions_chart}& \includegraphics[width=0.29\textwidth]{figures/atts}\\ Non-self attentions& Self-attention methods& All types of attentions\\ \end{tabular} \end{center} \caption{Visual charts show the increase in the number of attention related papers in the top conferences including CVPR, ICCV, ECCV, NeurIPS, ICML, and ICLR.} \label{fig:attentionincrease} \end{figure*} Explicit attention mechanisms in deep learning were first introduced to tackle the \emph{forgetting} issue in encoder-decoder architectures designed for the machine translation problem~\cite{bahdanau2014neural}. The encoder part of the network focuses on generating a representative input vector, while the decoder generates the output from that representation vector. A bi-directional Recurrent Neural Network (RNN)~\cite{bahdanau2014neural} was employed to solve the \emph{forgetting} issue by generating a context vector from the input sequence and then decoding the output based on the context vector as well as the previous hidden states. The context vector is computed as a weighted sum of the intermediate representations, which makes this method an example of explicit attention. Moreover, a Long Short-Term Memory (LSTM)~\cite{sutskever2014sequence} was employed to generate both the context vector and the output. Both methods compute the context vector considering all the hidden states of the encoder. However,~\cite{luong2015effective} introduced another idea by getting the attention mechanism to focus on only a subset of the hidden states to generate every item in the context vector. This is computationally less expensive than the previous attention methods and exhibits a trade-off between \emph{global} and \emph{local} attention mechanisms. Another attention-based breakthrough was made by Vaswani~\latinphrase{et~al.}\xspace~\cite{m_transformers}, where an entire architecture was created based on the self-attention mechanism. The items in the input sequence are first encoded in parallel into multiple representations called key, query, and value. This architecture, coined the Transformer, helps capture the importance of each item relative to the others in the input sequence more effectively. Recently, many researchers have extended the basic Transformer architecture for specific applications. To pay attention to the significant parts of an image and suppress unnecessary information, advancements in attention-based learning have found their way into multiple computer vision tasks, either employing a different attention map for every image pixel, comparing it with the representations of other pixels~\cite{m_nonlocal, dosovitskiy2020image, zhang2019self}, or generating an attention map to extract the global representation of the whole image~\cite{kosiorek2017hierarchical, jetley2018learn}. However, the design of the attention mechanism is highly dependent on the problem at hand. To enforce the selection of hidden states that correspond to the critical information in the input, attention techniques have been used as plug-in units in vision-based tasks, alleviating the risk of vanishing gradients. To sum up, attention scores are calculated, and hidden states are selected either deterministically or stochastically.
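To make this recurring recipe concrete, the following is a minimal sketch of the soft-attention context vector described above: scores between a decoder state and all encoder hidden states are normalized with a softmax and used to form a weighted sum. Dot-product scoring is assumed here for brevity; additive (Bahdanau-style) scoring is an equally common choice.
\begin{verbatim}
# Minimal sketch of a soft-attention context vector (dot-product
# scoring assumed; not tied to a specific paper's implementation).
import torch
import torch.nn.functional as F

T, d = 10, 64                    # sequence length, hidden size
enc_states = torch.randn(T, d)   # encoder hidden states h_1 .. h_T
dec_state = torch.randn(d)       # current decoder hidden state

scores = enc_states @ dec_state      # (T,) alignment scores
weights = F.softmax(scores, dim=0)   # attention distribution over T
context = weights @ enc_states       # (d,) weighted sum of states
\end{verbatim}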
\input{sections/TaxonomyFigure2} Attention has been at the center of significant research efforts over the past few years, and image attention has been flourishing in many different machine learning and vision applications, for example, classification~\cite{show_attend}, detection~\cite{zhao2019object}, image captioning~\cite{hossain2019comprehensive}, 3D analysis~\cite{Qiu2021Att3Ddetection}, \latinphrase{etc.}\xspace Despite the impressive performance of attention techniques employed in deep learning, there is no literature survey that comprehensively reviews all, especially deep learning based, attention mechanisms in vision in order to categorize them based on their basic underlying architectures and highlight their strengths and weaknesses. Recently, researchers surveyed application-specific attention techniques with emphasis on NLP-based~\cite{hu2019introductory}, transformer-based~\cite{han2020survey, khan2021transformers}, and graph-based approaches~\cite{lee2019attention}. However, no comprehensive study covers the huge and diverse scope of {\em all} deep learning based attention techniques developed for visual inputs. In this article, we review attention techniques specific to vision. Our survey covers the numerous basic building blocks (operations and functions) and complete architectures designed to learn suitable representations while making the models attentive to the relevant and important information in the input images or videos. Our survey broadly classifies the attention mechanisms proposed in the computer vision literature, including soft attention, hard attention, multi-modal attention, arithmetic attention, class attention, and logical attention. We note that some methods belong to more than one category; however, we assign each method to the category with which it has the most dominant association. Following such a categorization helps track the common characteristics of attention mechanisms and offers insights that can potentially help in designing novel attention techniques. Figure \ref{fig:taxonomy} shows the classification of the attention mechanisms. We emphasize that a survey is warranted for attention in vision due to the large number of published papers, as outlined in Figure~\ref{fig:attentionincrease}. It is evident from Figure~\ref{fig:attentionincrease} that the number of articles published in the last year has significantly increased compared to previous years, and we expect to see a similar trend in the coming years. Furthermore, our survey lists articles of significant importance to assist the computer vision and machine learning community in adopting the most suitable attention mechanisms in their models and in avoiding duplicate attention methodologies. It also identifies research gaps, provides the current research context, and presents plausible research directions and future areas of focus. Since transformers have been employed across many vision applications, a few surveys \cite{khan2021transformers,han2020survey} summarize the recent trends of transformers in computer vision. Although transformers offer high accuracy, this comes at the cost of very high computational complexity, which hinders their feasibility for mobile and embedded system applications. Furthermore, transformer-based models require substantially more training data than CNNs and lack efficient hardware designs and generalizability. According to our survey, transformers only cover a single category in self-attention out of the 50 different attention categories surveyed.
Another significant difference is that our survey focuses on attention types rather than on the applications covered in the transformer-based surveys \cite{khan2021transformers,han2020survey}. \section{Attention in Vision} The primary purpose of attention in vision is to imitate the human visual cognitive system and focus on the essential features~\cite{hermann2015teaching} in the input image. We categorize attention methods based on the main function used to generate the attention scores, such as softmax or sigmoid. Table~\ref{tab:overall} provides the summary, applications, strengths, and limitations of each category presented in this survey. \subsection{Soft (Deterministic) Attention} This section reviews soft-attention methods such as channel attention, spatial attention, and self-attention. In channel attention, the scores are calculated channel-wise because each of the feature maps (channels) attends to specific parts of the input. In spatial attention, the main idea is to attend to the critical regions in the image. Attending over regions of interest facilitates object detection, semantic segmentation, and person re-identification. In contrast to channel attention, spatial attention attends to the important parts of the spatial map (bounded by width and height). It can be used independently or as a complementary mechanism to channel attention. On the other hand, self-attention is proposed to encode higher-order interactions and contextual information by extracting the relationships between input sequence tokens. It differs from channel attention in how it generates the attention scores, as it mainly calculates the similarity between two maps (K, Q) of the same input, whereas channel attention generates the scores from a single map. However, self-attention and channel attention both operate on channels. Soft attention methods calculate the attention scores as a weighted sum of all the input entities~\cite{luong2015effective} and mainly use soft functions such as softmax and sigmoid. Since these methods are differentiable, they can be trained through back-propagation techniques. However, they suffer from other issues such as high computational complexity and assigning weights to non-attended objects. \subsubsection{Channel Attention} \label{sec:channel} \vspace{1.5mm} \noindent \textbf{Squeeze \& Excitation Attention}: The Squeeze-and-Excitation (SE) Block~\cite{hu2018squeeze}, shown in Figure~\ref{fig:channels}(a), is a unit designed to perform dynamic channel-wise feature attention. The SE attention takes the output of a convolution block and converts each channel to a single value via global average pooling; this process is called \enquote{squeeze}. The channel dimension is reduced after passing through a fully connected layer and a ReLU that adds non-linearity. The features are then passed through a second fully connected layer, followed by a sigmoid function, to achieve a smooth gating operation. The convolutional block feature maps are weighted based on the output of this side network, called the \enquote{excitation}. The process can be summarized as \begin{equation} f_s = \sigma( FC (ReLU( FC(f_g)) )), \label{eq:SE_att} \end{equation} where $FC$ is the fully connected layer, $f_g$ is the result of global average pooling, and $\sigma$ is the sigmoid operation. The main intuition is to choose the best representation of each channel in order to generate the attention scores.
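A minimal PyTorch sketch of Eq.~\ref{eq:SE_att} follows; the reduction ratio $r$ is an assumed hyperparameter (commonly 16 in the literature), and the code is illustrative rather than the authors' implementation.
\begin{verbatim}
# Sketch of an SE block (the SE equation above): squeeze via global
# average pooling, two FC layers with reduction r, sigmoid gating.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),                     # smooth gating
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * s                          # excitation: reweight channels

print(SEBlock(64)(torch.randn(2, 64, 32, 32)).shape)  # (2, 64, 32, 32)
\end{verbatim}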
\vspace{1.5mm} \noindent \textbf{Efficient Channel Attention (ECA)~\cite{wang2020eca}} is based on the squeeze $\&$ excitation network~\cite{hu2018squeeze} and aims to increase efficiency as well as decrease model complexity by removing the dimensionality reduction. ECA (see Figure~\ref{fig:channels}(g)) achieves local cross-channel interaction by analyzing each channel and its $k$ neighbors, following channel-wise global average pooling but with no dimensionality reduction. ECA accomplishes efficient processing via fast 1D convolutions. The size $k$ represents the number of neighbors that can participate in one channel attention prediction, \latinphrase{i.e.}\xspace the coverage of local cross-channel interaction. \vspace{1.5mm} \noindent \textbf{Split-Attention Networks}: ResNest~\cite{zhang2020resnest}, a variant of ResNet~\cite{resnet}, uses split attention blocks as shown in Figure~\ref{fig:channels}(h). Attention is obtained by summing the inputs from previous modules and applying global pooling, then passing the result through a composite function, \latinphrase{i.e.}\xspace convolutional layer-batch normalization-ReLU activation. The output is again passed through convolutional layers. Afterwards, a softmax is applied to normalize the values, which are then multiplied with the corresponding inputs. Finally, all the features are summed together. This mechanism is similar to the squeeze $\&$ excitation attention~\cite{hu2018squeeze}; ResNest is a special type of squeeze $\&$ excitation that squeezes the channels using average pooling and summing of the split channels. \vspace{1.5mm} \noindent \textbf{Channel Attention in CBAM}: The Convolutional Block Attention Module (CBAM)~\cite{woo2018cbam} employs channel attention and exploits the inter-channel feature relationship, as each feature map channel is considered a feature detector focusing on the \enquote{what} part of the input image. The spatial dimensions of the input feature map are squeezed for computing the channel attention, followed by aggregation using both average-pooling and max-pooling to obtain two descriptors. These descriptors are forwarded to a shared multi-layer perceptron (MLP) with one hidden layer to generate the attention map. Subsequently, the outputs of the MLP are summed element-wise and then passed through a sigmoid function, as shown in Figure~\ref{fig:channels}(b). In summary, the channel attention is computed as \begin{equation} f_{ch} = \sigma( MLP(MaxPool(f)) + MLP(AvgPool(f))), \label{eq:ch-atten} \end{equation} where $\sigma$ denotes the sigmoid function, and $f$ represents the input features. A ReLU activation function is employed in the MLP after the hidden layer. The channel attention in CBAM is the same as the Squeeze and Excitation (SE) attention~\cite{se} if only average pooling is used. \vspace{1.5mm} \noindent \textbf{Second-order Attention Network}: For single image super-resolution, the authors of~\cite{Dai_2019_CVPR} presented a second-order channel attention module, abbreviated as SOCA, to learn feature interdependencies via second-order feature statistics. A covariance matrix ($\Sigma$) is first computed from the feature maps of the previous network layers and normalized to obtain discriminative representations. The symmetric positive semi-definite covariance matrix is decomposed into $\Sigma = U\Lambda U^T$, where $U$ is orthogonal, and $\Lambda$ is the diagonal matrix with non-increasing eigenvalues.
The power of the eigenvalues, $\hat{\Sigma} = U\Lambda^{\alpha} U^T$, helps in achieving the attention mechanism: if $\alpha < 1$, the eigenvalues larger than 1.0 nonlinearly shrink while the others are stretched. The authors chose $\alpha < \frac{1}{2}$ based on previous work~\cite{li2017second}. The subsequent attention mechanism is similar to SE~\cite{hu2018squeeze}, as shown in Figure~\ref{fig:channels}(c), but instead of providing first-order statistics (\latinphrase{i.e.}\xspace global average pooling), the authors furnished second-order statistics (\latinphrase{i.e.}\xspace global covariance pooling). \vspace{1.5mm} \noindent \textbf{High-Order Attention}: To encode global information and contextual representations, Ding~\latinphrase{et~al.}\xspace~\cite{ding2020high} proposed High-order Attention (HA) with adaptive receptive fields and dynamic weights. HA mainly constructs, for each pixel, a feature map that includes the relationships to other pixels. HA addresses the issue of fixed-shape receptive fields, which cause false predictions for objects of similar shapes. Specifically, after calculating the attention maps for each pixel, graph transduction is used to form the final feature map. This feature representation is used to update each pixel position by using the weighted sum of the contextual information. The high-order attention maps are calculated using the Hadamard product \cite{horn1990hadamard, kim2016hadamard}. HA is classified as channel attention because it generates attention scores from channels, as in SE \cite{se}. \vspace{1.5mm} \noindent \textbf{Harmonious Attention}: proposes a joint attention module of soft pixel attention and hard regional attention \cite{harmonious}. The main idea is to tackle the limitation of previous attention modules in person re-identification by learning attention selection and feature representation jointly, hence solving the issue of misalignment calibration caused by constrained attention mechanisms \cite{harmony1, harmony2, harmony3, harmony4}. Specifically, harmonious attention learns two types of soft attention (spatial and channel) in one branch and hard attention in the other. Moreover, it proposes cross-interaction attention harmonizing between these two attention types, as shown in Figure~\ref{fig:channels}(i). \input{sections/table} \vspace{1.5mm} \noindent \textbf{Auto Learning Attention}: Ma~\latinphrase{et~al.}\xspace \cite{NEURIPS2020_103303dd} introduced a novel idea for designing attention automatically. The module, named Higher-Order Group Attention (HOGA), takes the form of a Directed Acyclic Graph (DAG) \cite{pham2018efficient, dag1, dag2, dag3}, where each node represents a group and each edge represents a heterogeneous attention operation. There is a sequential connection between the nodes to represent hybrids of attention operations. Thus, these connections can be represented as K-order attention modules, where K is the number of attention operations. DARTS \cite{liu2018darts} is customized to facilitate the search process efficiently. This auto-learning module can be integrated into legacy architectures and performs better than manual ones. However, the core idea of the attention modules remains the same as in previous architectures, \latinphrase{i.e.}\xspace SE \cite{se}, CBAM \cite{woo2018cbam}, splat \cite{zhang2020resnest}, and mixed \cite{chen2019mixed}.
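As an illustration of the channel-attention recipe shared by several of the methods above, the following is a hedged PyTorch sketch of the CBAM channel attention in Eq.~\ref{eq:ch-atten}; the reduction ratio of the shared MLP is an assumed hyperparameter.
\begin{verbatim}
# Sketch of the CBAM channel attention equation above: a shared MLP is
# applied to max- and average-pooled descriptors, summed, then gated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAMChannelAttention(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.mlp = nn.Sequential(             # shared MLP, one hidden layer
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
\end{verbatim}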
\vspace{1.5mm} \noindent \textbf{Double Attention Networks}: Chen~\latinphrase{et~al.}\xspace~\cite{chen20182} proposed the Double Attention Network (A$^2$-Nets), which attends over the input image in two steps. The first step gathers the required features using bilinear pooling to encode the second-order relationships between entities, and the second step distributes the features over the various locations adaptively. In this architecture, bilinear pooling first captures the second-order statistics of the pooled features, which are mostly lost with other functions such as the average pooling of SE~\cite{se}. The attention scores are then calculated not from the whole image, as in \cite{m_nonlocal}, but from a compact bag, hence enriching the objects with only the required context. The first step, \latinphrase{i.e.}\xspace feature gathering, uses the outer product $\sum_{\forall i} a_i b_i^T$, after which softmax is used for attending to the discriminative features. The second step, \latinphrase{i.e.}\xspace distribution, is based on complementing each location with the required features, where their summation is $1$. The complete design of A$^2$-Nets is shown in Figure~\ref{fig:channels}(d). Experimental comparisons demonstrated that A$^2$-Net improves performance over SE and non-local networks, and is more efficient in terms of memory and time. \vspace{1.5mm} \noindent \textbf{Dual Attention Network}: Fu~\latinphrase{et~al.}\xspace~\cite{fu2019dual} presented a dual attention network for scene segmentation composed of position attention and channel attention working in parallel. The position attention aims to encode contextual features into local ones. The attention process is straightforward: the input features $f_A$ are passed through three convolutional layers to generate three feature maps ($f_B$, $f_C$, and $f_D$), which are reshaped. Matrix multiplication is performed between $f_B$ and the transpose of $f_C$, followed by a softmax, to obtain the spatial attention map. Again, matrix multiplication is performed between the generated $f_D$ features and the spatial attention map. Finally, the output is multiplied by a scalar and summed element-wise with the input features $f_A$, as shown in Figure~\ref{fig:channels}(e). Although channel attention involves similar steps to position attention, it is different because the features are used directly without passing through convolutional layers. The input features $f_A$ are reshaped, transposed, multiplied (\latinphrase{i.e.}\xspace $f_A \times f_A'$), and then passed through a softmax layer to obtain the channel attention map. Moreover, the input features are multiplied with the channel attention map, followed by an element-wise summation, to give the final output, as shown in Figure~\ref{fig:channels}(f).
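The position attention just described reduces to a few matrix products; a hedged sketch follows (the channel reduction used for $f_B$ and $f_C$ and the zero initialization of the learnable scalar are assumptions not stated in the text above).
\begin{verbatim}
# Sketch of the dual attention network's position attention: a spatial
# affinity map softmax(f_B^T f_C) reweights f_D; the result is scaled
# by a learnable scalar (gamma) and added back to the input f_A.
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv_b = nn.Conv2d(channels, channels // 8, 1)  # assumed ratio
        self.conv_c = nn.Conv2d(channels, channels // 8, 1)
        self.conv_d = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))            # learnable scalar

    def forward(self, x):
        b, c, h, w = x.shape
        fb = self.conv_b(x).view(b, -1, h * w)                 # B x C' x N
        fc = self.conv_c(x).view(b, -1, h * w)                 # B x C' x N
        fd = self.conv_d(x).view(b, c, h * w)                  # B x C  x N
        attn = torch.softmax(fb.transpose(1, 2) @ fc, dim=-1)  # B x N x N
        out = (fd @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                            # residual sum
\end{verbatim}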
\begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.2\paperwidth, height=.05\paperheight]{figures/SE_att.png} \\ \small (a) SENet \cite{hu2018squeeze} \tabularnewline \includegraphics[width=.22\paperwidth, height=.05\paperheight]{figures/Channel-Attention.png} \\ \small (b) CBAM \cite{woo2018cbam} \tabularnewline \includegraphics[width=.22\paperwidth, height=.08\paperheight]{figures/soca.png} \\ \small (c) SOCA \cite{Dai_2019_CVPR} \tabularnewline \includegraphics[width=.22\paperwidth, height=.08\paperheight]{figures/double_attention.PNG} \\ \small (d) A$^2-$Net \cite{chen20182} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.27\paperwidth, height=.09\paperheight]{figures/dan_position.PNG} \\ \small (e) DAN Positional \cite{fu2019dual} \tabularnewline \includegraphics[width=.27\paperwidth, height=.09\paperheight]{figures/dan_channel.PNG} \\ \small (f) DAN Channel \cite{fu2019dual} \tabularnewline \includegraphics[width=.25\paperwidth, height=.09\paperheight]{figures/ECA.png} \\ \small (g) ECA-Net \cite{wang2020eca} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.2\paperwidth, height=.16\paperheight]{figures/ResNest_att.PNG} \\ \small (h) RESNest \cite{zhang2020resnest} \tabularnewline \includegraphics[width=.2\paperwidth, height=.14\paperheight]{figures/harmonious.png} \\ \small (i) Harmonious \cite{harmonious} \end{tabular} \caption{Core structures of the channel-based attention methods. Different methods generate the attention scores, including squeeze and excitation \cite{se}, splitting and squeezing \cite{zhang2020resnest}, calculating second-order statistics \cite{fu2019dual}, or efficient squeezing and excitation \cite{wang2020eca}. Images are taken from the original papers and are best viewed in color.} \label{fig:channels} \end{figure*} \vspace{1.5mm} \noindent \textbf{Frequency Channel Attention}: Channel attention requires global average pooling as a pre-processing step. Qin~\latinphrase{et~al.}\xspace~\cite{qin2020fcanet} argued that the global average pooling operation can be replaced with frequency components. The frequency attention views the discrete cosine transform as a weighted sum of the input with its cosine components. As global average pooling is a particular case of frequency-domain feature decomposition, the authors use various frequency components of the 2D discrete cosine transform, including the zero-frequency component, \latinphrase{i.e.}\xspace global average pooling. \subsubsection{Spatial Attention} \label{sec:spatial} Different from channel attention, which mainly generates channel-wise attention scores, spatial attention focuses on generating attention scores from spatial patches of the feature maps rather than from the channels. However, the sequence of operations used to generate the attention is similar. \\ \noindent \textbf{Spatial Attention in CBAM} uses the inter-spatial feature relationships to complement the channel attention~\cite{woo2018cbam}. The spatial attention focuses on an informative part and is computed by applying average pooling and max pooling channel-wise, followed by concatenating both to obtain a single feature descriptor. Furthermore, a convolution layer is applied on the concatenated feature descriptor to generate a spatial attention map that encodes where to emphasize or suppress. The feature map channel information is aggregated via average-pooled and max-pooled features, which are then concatenated and convolved to generate a 2D spatial attention map.
The overall process is shown in Figure~\ref{fig:spatial}(a) and computed as \begin{equation} f_{sp} = \sigma( Conv_{7\times 7}([MaxPool(f); AvgPool(f)])), \label{eq:sp-atten} \end{equation} where $Conv_{7\times 7}$ denotes a convolution operation with a 7 $\times$ 7 kernel size and $\sigma$ represents the sigmoid function. \vspace{1.5mm} \noindent \textbf{Co-attention \& Co-excitation}: Hsieh~\latinphrase{et~al.}\xspace~\cite{NEURIPS2019_92af93f7} proposed co-attention and co-excitation to detect all the instances that belong to the same target in one-shot detection. The main idea is to enrich the extracted feature representation using non-local networks that encode long-range dependencies and second-order interactions \cite{m_nonlocal}. Co-excitation is based on the squeeze-and-excite network \cite{se}, as shown in Figure~\ref{fig:spatial}(c). While squeeze uses global average pooling \cite{lin2013network} to reweight the spatial positions, co-excite serves as a bridge between the features of the query and the target. Encoding highly contextual representations using co-attention and co-excitation improves the one-shot detector performance, achieving state-of-the-art results. \vspace{1.5mm} \noindent \textbf{Spatial Pyramid Attention Network}, abbreviated as SPAN~\cite{hu2020span}, was proposed for localizing multiple types of image manipulations. It is composed of three main blocks, \latinphrase{i.e.}\xspace a feature extraction (head) module, a pyramid spatial attention module, and a decision (tail) module. The head module employs the Wider \& Deeper VGG Network as the backbone, while Bayer and SRM layers extract features from visual artifacts and noise patterns. The spatial relationship of the pixels is captured through five local self-attention blocks applied recursively, and to preserve the details, the input of each self-attention block is added to its output. These features are then fed into the final tail module of 2D convolutional blocks to generate the output mask after employing a sigmoid activation. \vspace{1.5mm} \noindent \textbf{Spatial-Spectral Self-Attention}: Figure~\ref{fig:spatial} shows the architecture of spatial-spectral self-attention, which is composed of two attention modules, namely spatial attention and spectral attention, both utilizing self-attention. \begin{enumerate} \item {Spatial Attention:} To model the non-local region information, Meng~\latinphrase{et~al.}\xspace~\cite{meng2020end} utilize a 3$\times$3 kernel to fuse the input features, indicating the region-based correlation, followed by a convolutional network mapping the fused features into Q $\&$ K. The number of kernels indicates the number of heads and the kernel size denotes the dimension. Moreover, the dimension-specified features from Q $\&$ K build the related attention maps, which then modulate the corresponding dimension in a sequence to achieve the order-independent property. Finally, to finish the spatial correlation modeling, the features are forwarded to a deconvolution layer. \item{Spectral Attention}: First, the spectral channel samples are convolved with one kernel and flattened into a single dimension, set as the feature vector for that channel. The input feature is converted to Q $\&$ K, building attention maps for the spectral axis. The adjacent channels have a higher correlation due to the image patterns at the same location, denoted via a spectral smoothness on the attention maps.
The similarity is indicated by the normalized cosine distance as a spectral embedding, where each similarity score is scaled and summed with the coefficients in the attention maps; these then modulate the \enquote{Value} in self-attention, inducing a spectral smoothness constraint. \end{enumerate} \vspace{1.5mm} \noindent \textbf{Pixel-wise Contextual Attention}: PiCANet~\cite{liu2018picanet} aims to learn accurate saliency detection. PiCANet generates a map at each pixel over the context region and constructs an accompanying contextual feature to enhance the feature representability at the local and global levels. To generate global attention, each pixel needs to \enquote{see} the whole image via ReNet~\cite{visin2015renet}, which uses four recurrent neural networks sweeping horizontally and vertically. The contexts from all directions, obtained using biLSTMs, are blended, propagating the information of each pixel to all other pixels. Next, a convolutional layer transforms the feature maps to different channels, which are further normalized by a softmax function and used to weight the feature maps. The local attention is performed on a local neighborhood, forming a local feature cube where each pixel needs to \enquote{see} every other pixel in the local area, using a few convolutional layers having the same receptive field as the patch. The features are then transformed channel-wise, normalized using softmax, and summed with weights to obtain the final attention. \vspace{1.5mm} \noindent \textbf{Pyramid Feature Attention} extracts features from different levels of VGG~\cite{zhao2019pyramid}. The low-level features extracted from the lower layers of VGG are provided to the spatial attention mechanism~\cite{woo2018cbam}, and the high-level features obtained from the higher layers are supplied to a channel attention mechanism~\cite{woo2018cbam}. The term feature pyramid attention originates from the VGG features obtained at different layers. \vspace{1.5mm} \noindent \textbf{Spatial Attention Pyramid}: For unsupervised domain adaptation, Li~\latinphrase{et~al.}\xspace~\cite{li2020spatial} introduced a spatial attention pyramid that takes features from multiple average pooling layers of various sizes operating on the feature maps. These features are forwarded to spatial attention followed by channel-wise attention. All the features after attention are concatenated to form a single semantic vector. \vspace{1.5mm} \noindent \textbf{Region Attention Network}: RANet~\cite{shen2020ranet} was proposed for semantic segmentation. It consists of novel network components, the Region Construction Block (RCB) and the Region Interaction Block (RIB), for constructing the contextual representations, as illustrated in Figure~\ref{fig:spatial}(b). The RCB analyzes the boundary score and the semantic score maps jointly to compute the attention region score for each image pixel pair. A high attention score indicates that the pixels are from the same object region, dividing the image into various object regions. Subsequently, the RIB takes the region maps and selects the representative pixels in the different regions, where each representative pixel receives the context from other pixels to effectively represent the local content of the object region. Furthermore, capturing the spatial and category relationships between various objects by communicating the representative pixels in the different regions yields the global contextual representation, which augments the pixels and eventually forms the contextual feature map for segmentation.
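Since several of the spatial methods above build on the CBAM-style 2D attention map of Eq.~\ref{eq:sp-atten}, a hedged PyTorch sketch of that map is given below (illustrative, not the authors' code).
\begin{verbatim}
# Sketch of CBAM spatial attention: channel-wise max and mean maps are
# concatenated, convolved with a 7x7 kernel, and gated with a sigmoid.
import torch
import torch.nn as nn

class CBAMSpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        mx, _ = x.max(dim=1, keepdim=True)   # B x 1 x H x W
        avg = x.mean(dim=1, keepdim=True)    # B x 1 x H x W
        attn = torch.sigmoid(self.conv(torch.cat([mx, avg], dim=1)))
        return x * attn                      # emphasize or suppress regions
\end{verbatim}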
\begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/Spatial-attention.png}\\ \small (a) Spatial Attention~\cite{woo2018cbam} \tabularnewline \includegraphics[width=.29\paperwidth]{figures/RANet.PNG} \small \\(b) RANet~\cite{shen2020ranet} \tabularnewline \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.32\paperwidth]{figures/co-excite.png} \small \\(c) Co-excite \cite{NEURIPS2019_92af93f7} \end{tabular} \caption{The structures of the spatial-based attention methods, including RANet~\cite{shen2020ranet} and Co-excite \cite{NEURIPS2019_92af93f7}. These methods focus on attending to the most important parts of the spatial map. The images are taken from~\cite{woo2018cbam,shen2020ranet,NEURIPS2019_92af93f7}.} \label{fig:spatial} \end{figure*} \subsubsection{Self-attention} \label{sec:self} Self-attention, also known as \emph{intra-attention}, is an attention mechanism that encodes the relationships between all the input entities. It is a process that enables input sequences to interact with each other and aggregate attention scores, which illustrate how similar they are. The main idea is to replicate the feature maps into three copies and then measure the similarity between them. Apart from channel-wise and spatial-wise attention, which use the physical feature maps directly, self-attention replicates the feature copies to measure long-range dependencies; self-attention methods nevertheless use channels to calculate the attention scores. Cheng~\latinphrase{et~al.}\xspace extracted the correlations between the words of a single sentence using Long Short-Term Memory (LSTM) \cite{m_self_attention}. An attention vector is produced from each hidden state during the recurrent iteration, which attends to all the responses in the sequence for this position. In \cite{m_self_attention_parikah}, a decomposable solution was proposed to divide the input into sub-problems, which improved the processing efficiency compared to \cite{m_self_attention}. The attention vector is calculated as an alignment factor to the content (bag-of-words). Although these methods introduced the idea of self-attention, they are very expensive in terms of resources and do not consider contextual information. Also, RNN models process the input sequentially; hence, it is difficult to parallelize them or process large-scale data efficiently. \vspace{1.5mm} \noindent \textbf{Transformers}: Vaswani~\latinphrase{et~al.}\xspace~\cite{m_transformers} proposed a new method, called the Transformer, based on the self-attention concept without convolutional or recurrent modules. As shown in Figure~\ref{fig:self_attentions}(f), it is mainly composed of encoder-decoder layers, where the encoder comprises a self-attention module followed by a position-wise feed-forward layer, and the decoder is the same as the encoder except that it has an encoder-decoder attention layer in between. Positional encoding, represented by sine waves, incorporates the position of each token into the input before the first layer. This positional encoding serves as a generalization term to help recognize unseen sequences and encodes relative positions rather than absolute representations. Algorithm~\ref{algorithm:self-attention} shows the detailed steps of calculating self-attention (multi-head attention) using transformers. Although transformers have achieved much progress in text-based models, they lack the ability to encode the full context of a sentence, because each word's attention is calculated only over the left-side sequence.
To address this issue, Bidirectional Encoder Representations from Transformers (BERT) learns the contextual information by encoding both sides of the sentence jointly \cite{m_BERT}. \begin{algorithm}[t] \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input {set of sequences $(x_1, x_2, ..., x_n)$ of an entity $\mathbf{X} \in \mathbf{R}$} \Output{attention scores of $\mathbf{X}$ sequences.} { Initialize weights: Key ($\mathbf{W_K}$), Query ($\mathbf{W_Q}$), Value ($\mathbf{W_V}$) for each input sequence. \\ Derive Key, Query, and Value for each input sequence from its corresponding weights, such that $\mathbf{Q = XW_Q}$, $\mathbf{K = XW_K}$, $\mathbf{V = XW_V}$, respectively.\\ Compute attention scores by calculating the dot product between the query and key.\\ Compute the scaled dot-product attention for these scores and Values $\mathbf{V}$, \[ \mathrm{softmax} \left( \frac{\mathbf{QK^T}}{\sqrt{d_k}}\right)\mathbf{V}.\]\\ Repeat steps 1 to 4 for all the heads. \\ } \caption{The main steps of generating self-attention by transformers (multi-head attention)} \label{algorithm:self-attention} \end{algorithm} \vspace{1.5mm} \noindent \textbf{Standalone self-attention}: As stated above, convolutional features do not consider the global information due to their local-biased receptive fields. Instead of augmenting attentional features to the convolutional ones, Ramachandran~\latinphrase{et~al.}\xspace~\cite{m_standalone} proposed a fully-attentional network that replaces spatial convolutions with self-attentional modules. The convolutional stem (the first few convolutions) is used to capture spatial information. They designed a small attention kernel (\latinphrase{e.g.}\xspace $n \times n$) instead of processing the whole image simultaneously. This design yields a computationally efficient model that enables processing images at their original sizes without downsampling. The computational complexity is reduced to $\mathcal{O}(hwn^2)$, where $h$ and $w$ denote height and width, respectively. A query is extracted from each local patch, while the image itself provides the Keys and Values. Calculating the attention maps follows the same steps as in Algorithm \ref{algorithm:self-attention}. Although standalone self-attention shows competitive results compared to convolutional models, it struggles to encode positional information. \vspace{1.5mm} \noindent \textbf{Clustered Attention}: To address the computational inefficiency of transformers, Vyas~\latinphrase{et~al.}\xspace~\cite{m_clsutered} proposed a clustered attention mechanism that relies on the idea that correlated queries follow the same distribution around Euclidean centers. Based on this idea, they use the K-means algorithm with fixed centers to group similar queries together. Instead of calculating attention for all the queries, it is calculated only for the cluster centers. Therefore, the total complexity is reduced to a linear form $\mathcal{O}(qc)$, where $q$ is the number of queries and $c$ is the number of clusters.
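To ground Algorithm~\ref{algorithm:self-attention}, the following is a minimal PyTorch sketch of single-head scaled dot-product attention and its multi-head extension; the function names and the per-head weight list are illustrative assumptions, not a reference implementation.
\begin{verbatim}
import math
import torch

def scaled_dot_product_attention(X, W_q, W_k, W_v):
    # Steps 1-2: derive Query, Key, Value from the input sequence X.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = K.shape[-1]
    # Step 3: dot-product similarity between queries and keys.
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    # Step 4: softmax-normalized weighted sum of the Values.
    return torch.softmax(scores, dim=-1) @ V

def multi_head_attention(X, heads):
    # Step 5: repeat per head and concatenate; `heads` is a list of
    # (W_q, W_k, W_v) weight triples, one per attention head.
    return torch.cat(
        [scaled_dot_product_attention(X, *h) for h in heads], dim=-1)

# Usage: 10 tokens of dimension 64, two heads of width 32.
X = torch.randn(10, 64)
heads = [tuple(torch.randn(64, 32) for _ in range(3)) for _ in range(2)]
out = multi_head_attention(X, heads)   # shape (10, 64)
\end{verbatim}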
\begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.25\paperwidth]{figures/eff_attention.PNG} \\ \small (a) Efficient Attention~\cite{Efficient_attention} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.16\paperwidth]{figures/slot.PNG} \\ \small (b) Slot Attention~\cite{slot} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.30\paperwidth]{figures/RFA.png} \\ \small (c) RFA~\cite{peng2021random} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.27\paperwidth]{figures/xlinear.png} \\ \small (d) X-Linear~\cite{pan2020x} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.27\paperwidth]{figures/axial.png} \\ \small (e) Axial~\cite{axial_attention} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.18\paperwidth]{figures/transformer.PNG} \\ \small (f) Transformer~\cite{m_transformers} \end{tabular} \caption{The architectures of self-attention methods: Transformers~\cite{m_transformers}, Axial attention~\cite{axial_attention}, X-Linear~\cite{pan2020x}, Efficient Attention~\cite{Efficient_attention}, Slot~\cite{slot}, and RFA~\cite{peng2021random} (pictures taken from the corresponding articles). All of these methods are self-attention variants, which generate the scores by measuring the similarity between two maps of the same input; however, they differ in how they process the input.} \label{fig:self_attentions} \end{figure*} \vspace{1.5mm} \noindent \textbf{Slot Attention}: Locatello~\latinphrase{et~al.}\xspace~\cite{slot} proposed slot attention, an attention mechanism that learns the objects' representations in a scene. In general, it learns to disentangle the image into a series of slots. As shown in Figure~\ref{fig:self_attentions}(b), each slot represents a single object. The slot attention module (SAM) is applied to a learned representation $h \in \mathbb{R}^{W \times H \times D}$, where $H$ is the height, $W$ is the width, and $D$ is the representation size. SAM has two main steps: learning $n$ slots using an iterative attention mechanism and representing individual objects (slots). Inside each iteration, two operations are implemented: 1) slot competition using softmax followed by normalization along the slot dimension using the equation \begin{equation} a = \mathrm{softmax} \bigg( \frac{1}{\sqrt{D}}\, k(h) \cdot q(c)^T\bigg). \end{equation} 2) An aggregation process for the attended representations with a weighted mean \begin{equation} r = \mathrm{WeightedMean} \bigg(a, v(h) \bigg), \end{equation} where $k, q, v$ are learnable functions, as shown in \cite{m_transformers}. Then, a feed-forward layer is used to predict the slot representations $s=fc(r)$. Slot attention is based on Transformer-like attention \cite{m_transformers} on top of CNN feature extractors. Given an image $\mathbb{I}$, the slot attention parses the scene into a set of slots, each one referring to an object $(z, x, m)$, where $z$ is the object feature, $x$ is the input image, and $m$ is the mask. In the decoders, convolutional networks are used to learn slot representations and object masks. The training process is guided by an $\ell_2$ loss \begin{equation} \mathbb{L} = \bigg\lVert \bigg( \sum_{k=1}^K m_k x_k\bigg) - \mathbb{I} \bigg\rVert_2^2. \end{equation} Following the slot-attention module, Li~\latinphrase{et~al.}\xspace developed an explainable classifier based on slot attention \cite{scouter_slot}. This method aims to find the positive supports and the negative ones for a class $l$. In this way, the classifier can also be explained rather than being a complete black-box.
The primary entity of this work is xSlot, a variant of slot attention~\cite{slot}, which is related to a category and gives the confidence for the inclusion of this category in the input image. \vspace{1.5mm} \noindent \textbf{Efficient Attention using Asymmetric Clustering (SMYRF)}: Daras~\latinphrase{et~al.}\xspace~\cite{SMYRF} proposed Locality Sensitive Hashing (LSH) clustering in a novel way to reduce the size of attention maps and, therefore, develop efficient models. They observed that the attention weights are sparse and the attention matrix is low-rank; as a result, the attention values of pre-trained models can be closely approximated. In SMYRF, this is achieved by approximating the attention maps through balanced clustering, produced by asymmetric transformations and an adaptive scheme. SMYRF is a drop-in replacement for the normal dense attention of pre-trained models. Without retraining models after integrating this module, SMYRF showed significant effectiveness in memory, performance, and speed; therefore, the feature maps can be scaled up to include contextual information. In some models, the memory usage is reduced by $50\%$. Although SMYRF improved the memory usage of self-attention models, the improvement over efficient attention models is marginal (see Figure~\ref{fig:self_attentions} (a)). \vspace{1.5mm} \noindent \textbf{Random Feature Attention}: Transformers have a major shortcoming with regard to time and memory complexity, which hinders scaling attention up and thus limits higher-order interactions. Peng~\latinphrase{et~al.}\xspace~\cite{peng2021random} proposed reducing the space and time complexity of transformers from quadratic to linear. They simply approximate the softmax using random feature functions. Random Feature Attention (RFA) uses a variant of the sampled softmax \cite{rawat2019sampled} based on distribution-based Fourier random features \cite{rahimi2007random, yang2014quasi}. Using the kernel trick $\exp(x \cdot y) \approx \phi(x) \cdot \phi(y)$ of \cite{hofmann2008kernel}, the softmax approximation is reduced to a linear form, as shown in Figure~\ref{fig:self_attentions} (c). Moreover, the similarity between RFA connections and recurrent networks helps in developing a gating mechanism to learn recency bias \cite{lstm, cho2014learning, schmidhuber1992learning}. RFA can be easily integrated into backbones to replace the normal softmax, with only a $0.1\%$ increase in the number of parameters. Plugging RFA into a transformer shows results comparable to softmax, while gating RFA outperformed it in language models. RFA executes 2$\times$ faster than a conventional transformer. \vspace{1.5mm} \noindent \textbf{Non-local Networks}: Recent breakthroughs in the field of artificial intelligence are mostly based on the success of Convolutional Neural Networks (CNNs) \cite{m_deep_learning, resnet}. In particular, they can be processed in parallel and provide inductive biases for the extracted features. However, CNNs fail to learn the context of the whole image due to their local-biased receptive fields. Therefore, long-range dependencies are disregarded in CNNs. In \cite{m_nonlocal}, Wang~\latinphrase{et~al.}\xspace proposed non-local networks to alleviate the bias of CNNs towards local information and fuse global information into the network. It augments each pixel of the convolutional features with contextual information, the weighted sum of the whole feature map. In this manner, the correlated patches in an image are encoded in a long-range fashion.
Non-local networks showed significant improvement in long-range interaction tasks such as video classification \cite{m_kinect} as well as low-level image processing \cite{m_non_denoise, non_local_advers}. Non-local networks model attention in the network in a graphical fashion \cite{m_graph_atten}. However, stacking multiple non-local modules in the same stage leads to instability and an ill-posed training process \cite{m_non_local_diffuse}. In~\cite{liu2020learning}, Liu~\latinphrase{et~al.}\xspace used non-local networks to form self-mutual attention between two modalities (RGB and Depth) to learn global contextual information. The idea is straightforward, \latinphrase{i.e.}\xspace to sum the corresponding features before softmax normalization, such that $\mathrm{softmax}(f^r (\mathbb{X}^r)+\alpha^d \bigodot f^d (\mathbb{X}^d))$ for RGB attention, and vice versa. \vspace{1.5mm} \noindent \textbf{Non-Local Sparse Attention (NLSA)}: Mei~\latinphrase{et~al.}\xspace \cite{mei2021image} proposed a sparse non-local network to combine the benefits of non-local modules, which encode long-range dependencies, with the robustness of sparse representations. Deep features are split into different groups (buckets) with high inner correlations. Locality Sensitive Hashing (LSH) \cite{gionis1999similarity} is used to find the features similar to each bucket. Then, the non-local block processes each pixel within its bucket along with the similar ones. NLSA reduces the complexity from quadratic to asymptotically linear and uses the power of sparse representations to focus only on informative regions. \vspace{1.5mm} \noindent \textbf{X-Linear Attention}: Bilinear pooling is a calculation process that computes the outer product between two entities rather than the inner product \cite{bilinear4, bilinear3, bilinear2, bilinear1}; it has shown the ability to encode higher-order interactions and thus encourages more discriminability in the models. Moreover, it yields compact models that retain the required details even though it compresses the representations \cite{bilinear2}. In particular, bilinear applications have shown significant improvements in fine-grained visual recognition \cite{fine3, fine2, fine1} and visual question answering \cite{yu2017multi}. As Figure~\ref{fig:self_attentions}(d) depicts, a low-rank bilinear pooling is performed between queries and keys, and hence, the $2^{nd}$-order interactions between keys and queries are encoded. Through this query-key interaction, spatial-wise and channel-wise attention are aggregated with the values. The channel-wise attention is the same as squeeze-excitation attention \cite{se}. The final output of the X-Linear module is aggregated with the low-rank bilinear pooling of keys and values \cite{pan2020x}. The authors claimed that encoding higher-order interactions requires only repeating the X-Linear module accordingly (\latinphrase{e.g.}\xspace three stacked X-Linear blocks for $4^{th}$-order interactions). Modeling infinity-order interactions is enabled by the use of the Exponential Linear Unit \cite{barron2017continuously}. The X-Linear attention module proposes a novel attention mechanism, different from the transformer \cite{m_transformers}: it is able to encode the relations between input tokens without positional encoding and with only linear complexity, as opposed to the quadratic complexity of transformers. \vspace{1.5mm} \noindent \textbf{Axial-Attention}: Wang~\latinphrase{et~al.}\xspace~\cite{axial_attention} proposed axial attention to encode global information and long-range context into the subject.
Although conventional self-attention methods use fully-connected layers to encode non-local interactions, they are very expensive given their dense connections \cite{m_transformers,bert,detr,image_transfomer}. Axial attention uses self-attention in a non-local way without these constraints. Simply put, it factorizes 2D self-attention into two 1D self-attentions along the two axes (width and height). This way, axial attention shows effectiveness in attending over wide regions. Moreover, unlike \cite{bam, ramachandran2019stand, hu2019local}, axial attention uses positional information to include contextual information in an agnostic way. With axial attention, the computational complexity is reduced to $\mathcal{O}(hwm)$. Axial attention also showed competitive performance not only in comparison to full-attention models \cite{attention_augmented,m_standalone}, but also to convolutional ones \cite{resnet, huang2017densely}. \vspace{1.5mm} \noindent \textbf{Efficient Attention Mechanism}: Conventional attention mechanisms are built on double matrix multiplication that yields quadratic complexity $\mathcal{O}(n^2)$, where $n$ is the size of the matrix. Many methods propose efficient architectures for attention \cite{kitaev2019reformer, Efficient_attention, wu2021centroid, kim2020fastformers}. In \cite{Efficient_attention}, Zhuoran~\latinphrase{et~al.}\xspace used the associativity of matrix multiplication and suggested efficient attention. Formally, instead of using the dot-product of the form $\rho (QK^T)V$, they process it in the efficient order $\rho_q(Q)(\rho_k(K)^T V)$, where $\rho$ denotes a normalization step. The $\mathrm{softmax}$ normalization is thus performed twice, on the queries and keys separately, instead of once at the end. Hence, the complexity is reduced from quadratic $\mathcal{O}(n^2)$ to linear $\mathcal{O}(n)$. Through this simple change, the processing and memory costs are reduced, enabling the integration of attention modules in large-scale tasks. \subsubsection{Arithmetic Attention} \label{sec:arithmetic} This part introduces arithmetic attention methods such as dropout, mirror, reverse, inverse, and reciprocal. We call these methods arithmetic because, although they build on the same core operations as the techniques above, they produce the final attention scores from simple arithmetic equations, such as the reciprocal of the attention, \latinphrase{etc.}\xspace \vspace{1.5mm} \noindent \textbf{Attention-based Dropout Layer (ADL)}: In weakly-supervised object localization, detecting the whole object without location annotation is a challenging task \cite{wsol1, wsol2}. Choe~\latinphrase{et~al.}\xspace \cite{choe2019attention} proposed using a dropout layer to improve the localization accuracy through two steps: spreading the localization over the whole object by hiding its most discriminative part, and attending over the whole area to improve the recognition performance. As Figure~\ref{fig:arithmetic} (a) shows, ADL has two branches: 1) a drop mask that conceals the discriminative part, obtained by setting values larger than a threshold hyperparameter to zero, and 2) an importance map that weights the channel contributions using a sigmoid function; a sketch of this mechanism is given below. Although the proposed idea is simple, experiments showed it is effective (a gain of 15\% over the state-of-the-art).
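As referenced above, the two ADL branches can be sketched in a few lines of PyTorch; the threshold ratio and the branch-selection rate below are assumptions for illustration, not the paper's exact settings.
\begin{verbatim}
import torch

def attention_based_dropout(x, gamma=0.9, drop_rate=0.75):
    # x: (B, C, H, W) feature map from a CNN backbone.
    # Self-attention map: average over the channel dimension.
    attention = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
    # Importance map: sigmoid weighting keeps informative regions.
    importance = torch.sigmoid(attention)
    # Drop mask: zero out the most discriminative part, i.e. the
    # locations whose attention exceeds gamma * max.
    threshold = gamma * attention.amax(dim=(2, 3), keepdim=True)
    drop_mask = (attention < threshold).float()
    # Randomly pick one branch per forward pass during training.
    if torch.rand(()).item() < drop_rate:
        return x * drop_mask
    return x * importance
\end{verbatim}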
\vspace{1.5mm} \noindent \textbf{Mirror Attention}: In a line detection application \cite{jin2020semantic}, Lee~\latinphrase{et~al.}\xspace developed mirrored attention to learn more semantic features. They flip the feature map around the candidate line and then concatenate the feature maps together. In case the line is not aligned, zero padding is applied. \vspace{1.5mm} \noindent \textbf{Reverse Attention}: Huang~\latinphrase{et~al.}\xspace~\cite{BMVC2017_18} proposed using the negative context (\latinphrase{i.e.}\xspace what is not related to the class) during training to learn semantic features. They were motivated by the low discriminability between classes in the high-level semantic representations and the weak response of the latent representations to the correct class. The network is composed of two branches: the first one learns discriminative features using convolutions for the target class, and the second one learns the reverse attention scores that are not associated with the target class. These scores are aggregated together to form the final attention, as shown in Figure~\ref{fig:arithmetic} (b). A deeper look inside the reverse attention shows that it mainly depends on negating the extracted convolutional features followed by a sigmoid, $\mathrm{sigmoid} (-F_{conv})$. However, for the purpose of convergence, this simple equation is changed to $\mathrm{sigmoid}(\frac{1}{\mathrm{ReLU}(F_{conv})+0.125} - 4)$. On semantic segmentation datasets, reverse attention achieved significant improvement over the state-of-the-art. In a similar work, Chen~\latinphrase{et~al.}\xspace~\cite{chen2018reverse} proposed using reverse attention for salient object detection. The main intuition was to erase the network's current predictions and hence learn the missing parts of the objects. However, the calculation of the attention scores differs from \cite{BMVC2017_18}, as they used $1 - \mathrm{sigmoid}(F_{i+1})$, where $F_{i+1}$ denotes the features of the next stage. \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.5\paperwidth]{figures/adl.PNG} \\ \small (a) Attention-Based Dropout~\cite{choe2019attention} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.28\paperwidth]{figures/ran.png} \\ \small (b) Reverse Attention~\cite{BMVC2017_18} \end{tabular} \caption{The arithmetic-based attention methods, \latinphrase{i.e.}\xspace Attention-based Dropout~\cite{choe2019attention} and Reverse Attention \cite{BMVC2017_18}. Images are taken from the original papers. These methods use arithmetic operations, such as reverse, dropout, or reciprocal, to generate the attention scores.} \label{fig:arithmetic} \end{figure*} \subsubsection{Multi-modal attentions} \label{sec:Multi-modal} As the name reveals, multi-modal attention is proposed to handle multi-modal tasks, using different modalities, such as text and image, to generate attention. It should be noted that some attention methods below, such as Perceiver~\cite{jaegle2021perceiver} and Criss-Cross~\cite{huang2019ccnet}, are transformer types~\cite{m_transformers}, but are customized for multi-modal tasks by including text, audio, and image. \vspace{1.5mm} \noindent \textbf{Cross Attention Network}: In \cite{hou2019cross}, a cross attention module (CAN) was proposed to enhance the overall discrimination of few-shot classification \cite{sung2018learning}. Inspired by the human behavior of recognizing novel images, the similarity between seen and unseen parts is identified first.
CAN learns to encode the correlation between the query and the target object. As Figure~\ref{fig:multimodals} (a) shows, the features of the query and the target are extracted independently, and then a correlation layer computes the interaction between them using the cosine distance. Next, 1D convolution is applied to fuse the correlations (GAP is performed first) and attentions, followed by softmax normalization. The output is reshaped to give a single-channel feature map to preserve the spatial representations. Although experiments show that CAN produces state-of-the-art results, it depends on non-learnable functions such as the cosine correlation. Also, the design is suitable for few-shot classification but is not general, because it depends on two streams (query and target). \vspace{1.5mm} \noindent \textbf{Criss-Cross Attention}: Contextual information is very important for scene understanding \cite{context1, context2}. Criss-cross attention~\cite{huang2019ccnet} encodes the context of each pixel in the image along its criss-cross path. By building recurrent modules of criss-cross attention, the whole context is encoded for each pixel. This module is more efficient than the non-local block \cite{m_nonlocal} in memory and time, where the memory is reduced by $11\times$ and the GFLOPs are reduced by $85\%$. Since this survey focuses on the core attention ideas, we show the criss-cross module in Figure~\ref{fig:multimodals} (b). Initially, three $1 \times 1$ convolutions are applied to the feature maps, whereas two of them are multiplied together (the first map with each row of the second) to produce criss-cross attentions for each pixel. Then, softmax is applied to generate the attention scores, which are aggregated with the third convolution outcome. However, the encoded context captures only the information in the criss-cross direction, and not from the whole image. For this reason, the authors repeat the attention module with shared weights to form a recurrent criss-cross module, which includes the whole context. \vspace{1.5mm} \noindent \textbf{Perceiver}: Traditional CNNs have achieved high performance in handling several tasks \cite{resnet, chen2011multi, hassanin2021mitigating}; however, they are designed and trained for a single domain rather than multi-modal tasks \cite{modal1, modal2, modal3}. Inspired by biological systems that understand the environment through various modalities simultaneously, Jaegle~\latinphrase{et~al.}\xspace proposed the Perceiver, which leverages the relations between these modalities iteratively. The main concept behind the Perceiver is to form an attention bottleneck composed of a set of latent units. This avoids the quadratic scaling of traditional transformers and encourages the model to focus on important features through iterative processing. To compensate for the missing spatial context, Fourier features are used to encode positional information \cite{mildenhall2020nerf, kandel2000principles, stanley2007compositional, parmar2018image}. As Figure~\ref{fig:multimodals} (c) shows, the Perceiver is similar to an RNN because of weight sharing. It comprises two main components: a cross-attention module that maps the input image or input vector to a latent vector, and a transformer tower that maps the latent vector to another latent vector of the same size. The architecture reveals that the Perceiver is an attention bottleneck that learns a mapping function from high-dimensional data to low-dimensional data and then passes it to the transformer \cite{m_transformers}.
The cross-attention module attends over the input byte array through multiple attend layers to enrich the context, which might otherwise be limited by such a mapping. This design reduces the quadratic complexity $\mathcal{O}(M^2)$ to $\mathcal{O}(MN)$, where $M$ is the sequence length and $N$ is a hyperparameter that can be chosen smaller than $M$. Additionally, sharing the weights of the iterative attention reduces the parameters to one-tenth and enhances the model's generalization. \vspace{1.5mm} \noindent \textbf{Stacked Cross Attention}: Lee~\latinphrase{et~al.}\xspace~\cite{stacked_cross} proposed a method to attend between an image and a sentence context. Given an image and a sentence, it learns the attention of the words in the sentence for each region in the image and then scores the image regions by comparing each region to the sentence. This way of processing enables stacked cross attention to discover all possible alignments between text and image. Firstly, they compute the image-text cross attention in a few steps as follows: a) compute the cosine similarity for all image-text pairs \cite{karpathy2014deep}, followed by $\ell_2$ normalization \cite{wang2017normface}; b) compute the weighted sum of these pairs' attentions, where the image attention is calculated by softmax \cite{chorowski2015attention}; c) compute the final similarity between these pairs using LogSumExp pooling \cite{he2008discriminative, huang2018learning}. The same steps are repeated to get the text-image cross attention, but the attention in the second step uses a text-based softmax. Although stacked attention enriches the semantics of multi-modal tasks by attending text over image and vice versa, shared semantics might lead to misalignment in case of a lack of similarity. With slight changes to the main concept, several works in various paradigms, such as question answering and image captioning \cite{cross_attention1, cross_attention2, cross_attention4, cross_attention3, cross_attention5}, used the stacked cross attention. \vspace{1.5mm} \noindent \textbf{Boosted Attention}: While top-down attention mechanisms~\cite{lu2017knowing} fail to focus on regions of interest without prior knowledge, visual stimuli methods~\cite{tavakoli2017paying,sugano2016seeing} alone are not sufficient to generate captions for images. For this reason, as shown in Figure~\ref{fig:multimodals} (d), the authors proposed a boosted attention model that combines both in one approach, to focus on top-down signals from the language and attend to the salient regions from the stimuli independently. Firstly, they integrate the stimuli attention with the visual features, $I^{'} = W I \circ \log(W_{sal}I+\epsilon)$, where $I$ denotes the features extracted from the backbone, $W_{sal}$ denotes the weights of the layer that produces the stimuli attention, and $W$ denotes the weights of the layer that outputs the visual features. The boosted attention is achieved using the Hadamard product ($\circ$) in $I^{'}$. Their experiments showed that boosted attention improved the performance significantly; a sketch of this fusion is given below.
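As referenced above, the following is a minimal sketch of the boosted fusion $I^{'} = W I \circ \log(W_{sal}I+\epsilon)$; the $1\times1$ convolutions standing in for the weight layers, and the non-negativity clamp that keeps the logarithm defined, are assumptions for illustration only.
\begin{verbatim}
import torch
import torch.nn as nn

class BoostedAttention(nn.Module):
    def __init__(self, channels: int, eps: float = 1e-6):
        super().__init__()
        self.visual = nn.Conv2d(channels, channels, 1)   # plays W
        self.saliency = nn.Conv2d(channels, 1, 1)        # plays W_sal
        self.eps = eps

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Stimulus-driven saliency map; the ReLU clamp keeps the log
        # defined (an assumption, not stated in the paper).
        saliency = torch.relu(self.saliency(feats))
        # Hadamard product fuses saliency with the visual features.
        return self.visual(feats) * torch.log(saliency + self.eps)
\end{verbatim}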
\begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.5\paperwidth]{figures/cros.png} \\ \small (a) Cross Attention~\cite{hou2019cross} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/criss-cross.PNG} \\ \small (b) Criss-Cross Attention~\cite{huang2019ccnet} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.48\paperwidth]{figures/perceiver.png} \\ \small (c) Perceiver~\cite{jaegle2021perceiver} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/boosted.png} \\ \small (d) Boosted Attention~\cite{chen2018boosted} \end{tabular} \caption{Multi-modal attention methods consisting of the Perceiver~\cite{jaegle2021perceiver}, Criss-Cross attention~\cite{huang2019ccnet}, Boosted attention~\cite{chen2018boosted}, and the Cross-attention module~\cite{hou2019cross}. The mentioned methods employ multiple modalities to generate the attention scores. Images are taken from the original papers.} \label{fig:multimodals} \end{figure*} \subsubsection{Logical Attention} \label{sec:logical} Similar to how human beings pay more attention to the crucial features, some methods have been proposed that use recurrence to encode better relationships. These methods rely on RNNs or other types of sequential networks to calculate the attentions. We call them logical methods because they use architectures similar to logic gates. \vspace{1.5mm} \noindent \textbf{Sequential Attention Models}: Inspired by the primate visual system, Zoran~\latinphrase{et~al.}\xspace~\cite{zoran2020towards} proposed a soft, sequential, spatial, top-down attention method (S3TA) to focus more on attended regions of an image \cite{mott2019towards} (as shown in Figure~\ref{fig:logical} (b)). At each step of the sequential process, the model queries the input and refines the total score based on spatial information in a top-down manner. Specifically, the backbone \cite{resnet, huang2017densely, pham2018efficient} extracts feature channels that are split into keys and values. A Fourier-based encoding adds the spatial information to these two sets so that it is preserved for later use. The main module is a top-down controller, a version of the Long Short-Term Memory (LSTM) \cite{lstm}, whose previous state is decoded into query vectors. The size of each query vector equals the sum of the channels in the keys and the spatial basis. At each spatial location, the similarity between these vectors is calculated through the inner product, and then a softmax produces the attention scores. These attention scores are multiplied by the values, and the summation produces the corresponding answer vector for each query. All of these operations occur within the current LSTM step, whose output is passed to the next step. Note that the input of the attention module is the output of the LSTM state, to focus more on the relevant information, and the attention map comprises only one channel to preserve the spatial information. Empirical evaluations show that attention is crucial for adversarial robustness, because adversarial perturbations drag the object's attention away to degrade the model's performance. Such an attention model proved its ability to resist strong attacks \cite{madry2018towards} and natural noise \cite{hendrycks2019natural}. Although S3TA provides a novel method to empower attention modules using recurrent networks, it is inefficient.
\vspace{1.5mm} \noindent \textbf{Permutation invariant Attention}: Initially, Zaheer~\latinphrase{et~al.}\xspace~\cite{deep_sets} suggested handling the inputs of deep networks as sets rather than ordered lists of elements, for instance, performing pooling over sets of extracted features, \latinphrase{e.g.}\xspace $\rho(\mathrm{pool}(\{\phi(x_1), \phi(x_2), \cdots, \phi(x_n)\}))$, where $\rho$ and $\phi$ are continuous functions and pool can be the $sum$ function. Formally, a function $f$ over sets is permutation-equivariant if $f(\pi x) = \pi f(x)$ for any permutation $\pi$. Hence, Lee~\latinphrase{et~al.}\xspace~\cite{permutation_invariant} proposed an attention-based method that processes sets of data. In \cite{deep_sets}, simple functions ($sum$ or $mean$) are proposed to combine the different branches of the network, but they lose important information due to squashing the data. To address these issues, the set transformer~\cite{permutation_invariant} parameterizes the pooling functions and provides richer representations that can encode higher-order interactions. They introduced three main contributions: a) the Set Attention Block (SAB), which is similar to the Multi-head Attention Block (MAB) layer \cite{m_transformers}, but without positional encoding and dropout; b) the Induced Set Attention Block (ISAB), which reduces the complexity from $\mathcal{O}(n^2)$ to $\mathcal{O}(mn)$, where $m$ is the size of the induced point vectors; and c) Pooling by Multihead Attention (PMA), which applies MAB over a learnable set of seed vectors. \vspace{1.5mm} \noindent \textbf{Show, Attend and Tell}: Xu~\latinphrase{et~al.}\xspace~\cite{xu2015show} introduced two types of attention to attend to specific image regions for generating a sequence of captions aligned with the image using an LSTM~\cite{zaremba2014recurrent}: hard attention and soft attention. Hard attention is applied to the latent variable after assigning a multinoulli distribution to learn the likelihood $\log p(y|a)$, where $a$ is the latent variable. By using the multinoulli distribution and reducing the variance of the estimator, they trained their model by maximizing a variational lower bound, as pointed out in \cite{mnih2014recurrent, ba2015multiple}, provided that the attentions sum to $1$ at every point, \latinphrase{i.e.}\xspace $\sum_i \alpha_{ti} = 1$, where $\alpha$ refers to the attention scores. For soft attention, they used softmax to generate the attention scores, but for $p(s_t|a)$ as in \cite{baldi2014dropout}, where $s_t$ is the extracted feature at this step. The training of soft attention is easily done by standard backpropagation, minimizing the penalized negative log-likelihood $-\log p(y|a)+\sum_i(1 - \sum_t\alpha_{ti})^2$; a sketch of the soft branch is given below. This model set the benchmark for visual captioning at the time, as it paved the way for visual attention to progress. \vspace{1.5mm} \noindent \textbf{Kalman Filtering Attention}: Liu~\latinphrase{et~al.}\xspace identified two main limitations that hinder using attention in fields where there is insufficient learning or history~\cite{kalman}: 1) an object's attention for input queries is covered by past training; and 2) conventional attentions do not encode hierarchical relationships between similar queries. To address these issues, they proposed Kalman filtering attention. Moreover, they proposed KFAtt-freq to capture the homogeneity of the same queries, correcting the bias towards frequent queries.
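To make the soft branch of \enquote{Show, Attend and Tell} concrete, the following is a minimal sketch of additive soft attention over annotation vectors, conditioned on the previous LSTM state; the layer names and sizes are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    def __init__(self, annot_dim, hidden_dim, attn_dim=256):
        super().__init__()
        self.proj_a = nn.Linear(annot_dim, attn_dim)   # annotations a_i
        self.proj_h = nn.Linear(hidden_dim, attn_dim)  # LSTM state h_{t-1}
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, annotations, hidden):
        # annotations: (B, L, annot_dim); hidden: (B, hidden_dim)
        e = self.score(torch.tanh(self.proj_a(annotations)
                                  + self.proj_h(hidden).unsqueeze(1)))
        alpha = torch.softmax(e, dim=1)          # weights sum to 1 over L
        context = (alpha * annotations).sum(1)   # expected context vector
        return context, alpha.squeeze(-1)
\end{verbatim}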
\vspace{1.5mm} \noindent \textbf{Prophet Attention}: In prophet attention~\cite{prophet}, the authors noticed that conventional attention models are biased and cause deviated focus in sequence tasks, especially in image captioning~\cite{captioning1, captioning2} and visual grounding~\cite{grounding1, grounding2}. This deviation happens because attention models use the previous tokens of a sequence to attend to the image, rather than the tokens they are about to output. As shown in Figure~\ref{fig:logical} (a), the model attends to \enquote{yellow and umbrella} instead of \enquote{umbrella and wearing}. In a self-supervision-like manner, they calculate the attention vectors based on the words generated in the future. Then, they guide the training process using these correct attentions, which can be considered a regularization of the whole model. Simply put, this method is based on summing the attentions of the later sequences in the same sentence to eliminate the impact of deviated focus towards the inputs. Overall, prophet attention addresses the bias of sequence models towards history while disregarding the future. \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.40\paperwidth]{figures/prophet.jpg} \\ \small (a) Prophet~\cite{prophet} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.40\paperwidth]{figures/s3ta.png} \\ \small (b) S3TA~\cite{zoran2020towards} \end{tabular} \caption{The core structure of logic-based attention methods such as Prophet attention~\cite{prophet} and S3TA \cite{zoran2020towards}, which are types of attention that use logical networks, such as RNNs, to infer the attention scores. Images are taken from the original papers and are best viewed in color.} \label{fig:logical} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/tell.PNG}% \caption{An example of Guided Attention Inference Networks \cite{li2018tell}.}% \label{fig:class} \end{figure} \subsubsection{Category-Based Attentions} \label{sec:Category} The above methods generate the attention scores from the features regardless of the presence of the class. On the other hand, some methods use class annotations to force the network to attend over specific regions. \vspace{1.5mm} \noindent \textbf{Guided Attention Inference Network}: In \cite{li2018tell}, the authors proposed class-aware attention, namely Guided Attention Inference Networks (GAIN), guided by the labels. Instead of focusing only on the most discriminative parts of the image~\cite{zhou2016learning}, GAIN includes the contextual information in the feature maps. Following~\cite{selvaraju2017grad}, GAIN obtains the attention maps from an inference branch, which are then used for training. As shown in Figure~\ref{fig:class}, through 2D convolutions, global average pooling, and ReLU, the important features $A^c$ are extracted for each class. Following this, the features of each class are obtained as $I - (T(A^c)\bigodot I)$, where $\bigodot$ denotes element-wise multiplication and $T(A^c) = \frac{1}{1+\exp(-w(A^c - \sigma))}$, with $\sigma$ a threshold parameter and $w$ a scaling parameter. Their experiments showed that, without recursive runs, GAIN achieves significant improvement over the state-of-the-art. \vspace{1.5mm} \noindent \textbf{Curriculum Enhanced Supervised Attention Network}: The majority of attention methods are trained in a weakly supervised manner, and hence, the attention scores are still far from the best representations \cite{m_transformers, m_nonlocal}.
In \cite{zhu2020curriculum}, the authors introduced a novel idea to generate a Supervised-Attention Network (SAN). Using convolutional layers, they set the number of output channels of the last layer equal to the number of classes; therefore, performing attention with global average pooling \cite{lin2013network} yields a weight for each category. In a similar study, Fukui~\latinphrase{et~al.}\xspace proposed a network composed of three branches to obtain class-specific attention scores: a feature extractor to learn the discriminative features, an attention branch to compute the attention scores based on a response model, and a perception branch to output the attention scores of each class by using the first two modules. The main objective was to increase the visual explainability \cite{zhou2016learning} of CNNs, and it showed significant improvements in various fields such as fine-grained recognition and image classification. \vspace{1.5mm} \noindent \textbf{Attentional Class Feature Network}: Zhang~\latinphrase{et~al.}\xspace~\cite{zhang2019acfnet} introduced ACFNet, a novel idea to exploit contextual information for improving semantic segmentation. Unlike conventional methods that learn spatial-based global information \cite{chen2017rethinking}, this contextual information is category-based: the class-center concept is first presented and then employed to aggregate all the corresponding pixels to form a specific class representation. In the training phase, ground-truth labels are used to learn the class centers, while coarse segmentation results are used in the test phase. Finally, the class-attention maps are the result of combining the class centers and the coarse segmentation outcomes. The results show significant improvement for semantic segmentation using ACFNet. \subsection{Hard (Stochastic) Attention} Instead of using the weighted average of the hidden states, hard attention selects one of the states as the attention score. Proposing hard attention depends on answering two questions: (1) how to model the problem, and (2) how to train it without vanishing gradients. In this part, hard attention methods are discussed, as well as their training mechanisms. It includes a discussion of Bayesian attention, variational inference, reinforced, and Gaussian attentions. The main idea of Bayesian attention and variational attention is to use latent random variables as attention scores. Reinforced attention replaces softmax with a Bernoulli-sigmoid unit \cite{williams1992simple}, whereas Gaussian attention uses a 2D Gaussian kernel instead. Similarly, self-critic attention~\cite{chen2019self} employs a reinforcement technique to generate the attention scores, whereas Expectation-Maximization attention uses EM to generate the scores. \subsubsection{Statistical-based attention} \label{sec:statistical} \textbf{Bayesian Attention Modules (BAM)}: In contrast to deterministic attention modules, Fan~\latinphrase{et~al.}\xspace~\cite{fan2020bayesian} proposed a stochastic attention method based on Bayesian graph models. Firstly, keys and queries are aligned to form the distribution parameters of the attention weights, which are treated as latent random variables. They trained the whole model by reparameterization, which results from normalizing the weights with Lognormal or Weibull distributions. The Kullback-Leibler (KL) divergence is used as a regularizer to introduce a contextual prior distribution in the form of a function of the keys.
Their experiments illustrate that BAM significantly outperforms the state-of-the-art in various fields such as visual question answering, image captioning, and machine translation. However, this improvement comes at the expense of computational cost and memory usage. Nevertheless, compared to deterministic attention models, it is an efficient alternative in general, showing consistent effectiveness in language-vision tasks. \begin{itemize} \vspace{1.5mm} \item \textbf{Bayesian Attention Belief Networks}: Zhang~\latinphrase{et~al.}\xspace~\cite{zhang2021bayesian} proposed using Bayesian belief modules to generate attention scores, given their ability to model highly structured data along with uncertainty estimations. As shown in Figure~\ref{fig:stochastics} (b), they introduced a simple structure to change any deterministic attention model into a stochastic one through four steps: 1) using Gamma distributions to build the decoder network; 2) using Weibull distributions along with stochastic and deterministic paths for the downward and upward passes, respectively; 3) parameterizing the BABN distributions from the queries and keys of the current network; and 4) using the evidence lower bound to optimize the encoder and decoder. The whole network is differentiable because of the existence of Weibull distributions in the encoder. In terms of accuracy and uncertainty, BABN demonstrated improvement over the state-of-the-art in NLP tasks. \vspace{1.5mm} \item \textbf{Repulsive Attention}: Multi-head attention \cite{m_transformers} is the core of the attention used in transformers. However, MHA may cause attention collapse when the heads extract the same features \cite{an2020repulsive, prakash2019repr, han2016dsd}; consequently, the discriminative power of the feature representations will not be diverse. To address this issue, An~\latinphrase{et~al.}\xspace~\cite{an2020repulsive} adapted MHA to a Bayesian network with an underlying stochastic attention. MHA is considered a special case without parameter sharing, and using particle-optimization sampling to perform Bayesian inference on the attention parameters imposes attention repulsiveness \cite{liu2016stein}. Through this sampling method, each head is considered a sample seeking to approximate the posterior distribution while staying far from the other heads. \end{itemize} \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.36\paperwidth]{figures/self_critic.PNG} \\ \small (a) Self-Critic Attention~\cite{chen2019self}\\ \includegraphics[width=.36\paperwidth]{figures/EMA.png} \\ \small (c) Expectation-Maximization Attention~\cite{li2019expectation} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/babn.PNG}\\ \small (b) Bayesian Attention Belief Networks \cite{zhang2021bayesian} \tabularnewline\\ \includegraphics[width=.3\paperwidth]{figures/gaussian_attention.PNG} \\ \small (d) Gaussian Attention~\cite{gaussian_attention} \tabularnewline \end{tabular} \caption{Illustration of hard attention architectures. Building blocks of EMA~\cite{li2019expectation}, Gaussian~\cite{gaussian_attention}, Self-critic~\cite{chen2019self} and Bayesian~\cite{zhang2021bayesian}. Images are taken from the original papers.} \label{fig:stochastics} \end{figure*} \vspace{1.5mm} \noindent \textbf{Variational Attention}: \label{sec:variational} In a study to improve latent variable alignments, Deng~\latinphrase{et~al.}\xspace~\cite{deng2018latent} proposed a variational attention mechanism.
A latent variable is crucial because it encodes the dependencies between entities, and variational inference methods can represent it in a stochastic manner \cite{salimbeni2019deep, drori2020deep}. On the other hand, soft attention can encode alignments, but it has poor representation because of the nature of softmax. Stochastic methods show better performance when optimized well \cite{lin2003toward, wang2020survey}. The main idea is to propose variational attention while keeping the training tractable. They introduced two types of variational attention: categorical (hard) attention, which uses amortized variational inference based on policy gradients, with soft attention used to reduce the variance; and relaxed (probabilistic soft) attention, which uses a Dirichlet distribution that allows attending over multiple sources. Regarding reparameterization, the Dirichlet distribution is not reparameterizable, and thus the gradients have high variance \cite{jankowiak2018pathwise}. Inspired by \cite{deng2018latent}, Bahuleyan~\latinphrase{et~al.}\xspace developed stochastic attention-based variational inference \cite{bahuleyan2018variational}, but using a normal distribution instead of a Dirichlet distribution. They observed that variational encoder-decoders should not have a direct connection; otherwise, traditional attentions serve as bypass connections. \subsubsection{Reinforcement-based Attention} \label{sec:reinforced} \vspace{1.5mm} \noindent \textbf{Self-Critic Attention}: \label{sec:self-critic} Chen~\latinphrase{et~al.}\xspace \cite{chen2019self} proposed a self-critic attention model that generates attention using an agent and re-evaluates the gain from this attention using the REINFORCE algorithm. They observed that most attention modules are trained in a weakly-supervised manner; therefore, the attention maps are not always discriminative and lack supervisory signals during training \cite{lee2015deeply}. To supervise the generation of the attention maps, they used a reinforcement algorithm to guide the whole process. As shown in Figure~\ref{fig:stochastics} (a), the feature maps are evaluated to predict whether they need self-correction or not. \vspace{1.5mm} \noindent \textbf{Reinforced Self-Attention Network}: Shen~\latinphrase{et~al.}\xspace~\cite{shen2018reinforced} used a reinforced technique to combine soft and hard attention in one method. Soft attention has shown effectiveness in modeling local and global dependencies, which are derived from the dot-product similarity \cite{bahdanau2014neural}. However, soft attention is based on the softmax function, which assigns values to every item, even the non-attended ones, which weakens the whole attention. On the other hand, hard attention \cite{show_attend} attends only to the important regions or tokens and disregards the others. Despite its importance for textual tasks, hard attention is inefficient in terms of time and differentiability \cite{williams1992simple}. Shen~\latinphrase{et~al.}\xspace~\cite{shen2018reinforced} used hard attention to extract rich information and then fed it into soft attention for further processing; simultaneously, soft attention is used to reward hard attention and hence stabilize the training process. Specifically, they used hard attention to encode tokens from the input in parallel while combining it with soft attention \cite{shen2018disan}, without any CNN/RNN modules. In \cite{karianakis2018reinforced}, reinforced attention was proposed to extract better temporal context from video.
Specifically, this attention module uses a Bernoulli-sigmoid unit \cite{williams1992simple}, a stochastic module. Thus, to train the whole system, the REINFORCE algorithm is used to stabilize the gradients \cite{jankowiak2018pathwise}. \subsubsection{Gaussian-based Attention} \label{sec:Gaussian} \vspace{1.5mm} \noindent \textbf{Self-Supervised Gaussian Attention}: Most soft-attention models use softmax to predict the attention of the feature maps \cite{m_transformers, image_transfomer, zhang2020resnest}, which suffers from various drawbacks. In~\cite{niu2020gatcluster}, Niu~\latinphrase{et~al.}\xspace proposed replacing the classical softmax with a Gaussian attention module. As shown in Figure~\ref{fig:stochastics} (d), they build a 2D Gaussian kernel to generate the attention maps instead of softmax, $K = \exp\big(-\frac{1}{\alpha}(u - \mu)^T \Sigma^{-1} (u - \mu)\big)$ for each individual element, where $u = [x, y]^T$ and $\mu = [\mu_x, \mu_y]^T$. The extracted features are passed through a fully connected layer, and then the Gaussian kernel is used to predict the attention scores. Using Gaussian kernels proved effective in discriminating the important features. Since it does not require any further learning steps, such as fully connected layers or convolutions, this significantly reduces the number of parameters. As stochastic training models need careful design because of the SGD mismatch~\cite{stochastic1, stochastic2, stochastic3}, the Gaussian attention model uses a binary classification loss that takes normalized logits to suppress the low scores and discriminate the high ones. This normalization uses a modified version of softmax, where the input is squared and divided by a temperature value (\latinphrase{e.g.}\xspace the batch size). \vspace{1.5mm} \noindent \textbf{Uncertainty-Aware Attention}: Since attention is generated without full supervision (\latinphrase{i.e.}\xspace in a weakly-supervised manner), it lacks full reliability \cite{li2018tell}. To fix this issue, \cite{heo2018uncertainty} proposed the use of input-dependent uncertainty. The model generates varied attention maps according to the input and, therefore, learns higher variance for uncertain inputs. A Gaussian distribution is used to model the attention weights, such that the variance is small in case of high confidence and vice versa \cite{kendall2017uncertainties}. A Bayesian network is employed to build the model, with variational inference as a solution~\cite{zhang1994simple, blei2017variational}. Note that this model is stochastic, and the SGD backpropagation flow cannot work properly due to the randomness \cite{kingma2013auto}; for this reason, they used the reparameterization trick \cite{gal2017concrete, kingma2015variational} to train the model. \subsubsection{Clustering} \label{sec:EM} \textbf{Expectation-Maximization Attention}: Traditional soft attention mechanisms can encode long-range dependencies by comparing each position to all positions, which is computationally very expensive~\cite{m_nonlocal}. In this regard, Li~\latinphrase{et~al.}\xspace~\cite{li2019expectation} proposed using expectation-maximization to build an attention method that iteratively forms a set of bases from which the attention maps are computed \cite{dempster1977maximum}. The main intuition is to use expectation-maximization to select a compact basis set instead of using all the pixels as in \cite{m_nonlocal, li2020spatial} (see Figure~\ref{fig:stochastics} (c)). These bases are regarded as the learned parameters, whereas the latent variables serve as the attention maps.
The output is the weighted sum of the bases, and the attention maps are the weights. The estimation (E) step is defined by $z_{nk}=\frac{\mathbb{K}(x_n, \mu_k)}{\sum_j\mathbb{K}(x_n, \mu_j)}$, where $\mathbb{K}$ denotes a kernel function. The maximization (M) step updates $\mu$ through data likelihood maximization, such that $\mu_k = \frac{\sum_n z_{nk}\, x_n}{\sum_m z_{mk}}$. Finally, the features are multiplied by the attention scores, $\mathbb{X} = \mathbb{Z} \mu$. Since EMA is a stochastic model, training the whole model needs special care. Firstly, the authors average $\mu$ over the mini-batch and update the maximization step accordingly to train it stably. Secondly, they normalize $\mu$ with the $\ell_2$-norm to keep its values in a stable range. EMA has shown the ability to remove noisy representations and to give promising results after the third iteration. Also, it is worth noting that the complexity is reduced to a linear form $\mathcal{O}(NK)$ from a quadratic one $\mathcal{O}(N^2)$. \section{Attention in Vision} The primary purpose of the attention mechanism is to imitate the human brain and focus on the essential features~\cite{hermann2015teaching} of the input image. We categorize attention methods based on the main function used to generate the attention scores, such as softmax or sigmoid. \subsection{Soft (Deterministic) Attention} This section reviews soft-attention methods such as channel attention, spatial attention, and self-attention. In channel attention, the scores are calculated channel-wise because each feature map (channel) attends to specific parts of the input. In spatial attention, the main idea is to attend to the critical regions in the image. Attending over regions of interest facilitates object detection, semantic segmentation, and person re-identification. In contrast to channel attention, spatial attention attends to the important parts in the spatial map (bounded by width and height). It can be used independently or as a complementary mechanism to channel attention. On the other hand, self-attention is proposed to encode higher-order interactions and contextual information by extracting the relationships between input sequence tokens. It differs from channel attention in how it generates the attention scores, as it mainly calculates the similarity between two maps (K, Q) of the same input, whereas channel attention generates the scores from a single map. However, self-attention and channel attention both operate on channels. Soft attention methods calculate the attention scores as the weighted sum of all the input entities~\cite{luong2015effective} and mainly use soft functions such as softmax and sigmoid. Since these methods are differentiable, they can be trained through back-propagation. However, they suffer from other issues such as high computational complexity and assigning weights to non-attended objects. \subsubsection{Channel Attention} \vspace{1.5mm} \noindent \textbf{Squeeze \& Excitation Attention}: The Squeeze-and-Excitation (SE) block~\cite{hu2018squeeze}, shown in Figure~\ref{fig:channels}(a), is a unit designed to perform dynamic channel-wise feature attention. The SE attention takes the output of a convolution block and converts each channel to a single value via global average pooling; this process is called the \enquote{squeeze}. The channel dimension is reduced after passing through a fully connected layer and a ReLU for adding non-linearity.
The features are then passed through a second fully connected layer, followed by a sigmoid function to achieve a smooth gating operation. The convolutional block's feature maps are weighted based on this side network's output, called the \enquote{excitation}. The process can be summarized as \begin{equation} f_s = \sigma( FC (ReLU( FC(f_g)) )), \label{eq:SE_att} \end{equation} where $FC$ is a fully connected layer, $f_g$ is the global average pooling, and $\sigma$ is the sigmoid operation; a code sketch of this block is given at the end of this subsection. The main intuition is to choose the best representation of each channel in order to generate the attention scores. \vspace{1.5mm} \noindent \textbf{Efficient Channel Attention (ECA)~\cite{wang2020eca}} is based on the squeeze \& excitation network~\cite{hu2018squeeze} and aims to increase efficiency as well as decrease model complexity by removing the dimensionality reduction. ECA (see Figure~\ref{fig:channels}(g)) achieves cross-channel interaction locally by analyzing each channel and its $k$ neighbors, following channel-wise global average pooling but with no dimensionality reduction. ECA accomplishes efficient processing via fast 1D convolutions. The size $k$ represents the number of neighbors that can participate in one channel's attention prediction, \latinphrase{i.e.}\xspace the coverage of the local cross-channel interaction. \vspace{1.5mm} \noindent \textbf{Split-Attention Networks}: ResNest~\cite{zhang2020resnest}, a variant of ResNet~\cite{resnet}, uses split attention blocks, as shown in Figure~\ref{fig:channels}(h). Attention is obtained by summing the inputs from previous modules and applying global pooling, then passing the result through a composite function, \latinphrase{i.e.}\xspace convolutional layer-batch normalization-ReLU activation. The output is again passed through convolutional layers. Afterwards, a softmax is applied to normalize the values, which are then multiplied with the corresponding inputs. Finally, all the features are summed together. This mechanism is similar to the squeeze \& excitation attention~\cite{hu2018squeeze}; ResNest is a special type of squeeze \& excitation that squeezes the channels using average pooling and the summation of the split channels. \vspace{1.5mm} \noindent \textbf{Channel Attention in CBAM}: The Convolutional Block Attention Module (CBAM)~\cite{woo2018cbam} employs channel attention and exploits the inter-channel feature relationships, as each feature map channel is considered a feature detector focusing on the \enquote{what} part of the input image. The input feature map's spatial dimensions are squeezed for computing the channel attention, followed by aggregation using both average-pooling and max-pooling to obtain two descriptors. These descriptors are forwarded to a three-layer shared multi-layer perceptron (MLP) to generate the attention map. Subsequently, the outputs of the MLP are summed element-wise and then passed through a sigmoid function, as shown in Figure~\ref{fig:channels}(b). In summary, the channel attention is computed as \begin{equation} f_{ch} = \sigma( MLP(MaxPool(f)) + MLP(AvgPool(f))), \label{eq:ch-atten} \end{equation} where $\sigma$ denotes the sigmoid function, and $f$ represents the input features. A ReLU activation function is employed in the MLP after its hidden layer. The channel attention in CBAM is the same as the Squeeze-and-Excitation (SE) attention~\cite{se} if only average pooling is used.
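As referenced above, the squeeze-and-excitation gating of Eq.~(\ref{eq:SE_att}) can be sketched compactly in PyTorch; the reduction ratio of 16 is a common choice but an assumption here, not a prescribed value. With only the average-pooling branch, this also matches the CBAM channel attention described above.
\begin{verbatim}
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # squeeze ratio
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                # smooth gating
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        f_g = x.mean(dim=(2, 3))               # "squeeze": global pooling
        f_s = self.gate(f_g).view(b, c, 1, 1)  # "excitation": weights
        return x * f_s                         # re-weight each channel
\end{verbatim}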
\renewcommand{\arraystretch}{1.2} \begin{table*} \centering \scalebox{0.90}{ \begin{tabular} {>{\raggedright}m{0.03\textwidth}>{\raggedright}m{0.085\textwidth}>{\raggedright}m{0.04\textwidth}>{\raggedright}m{0.12\textwidth}>{\raggedright}m{0.18\textwidth}>{\raggedright}p{.2\textwidth}>{\raggedright}p{0.25\textwidth}} \hline Type & Category & Section& References & Applications & Strengths & Limitations\tabularnewline \hline \multirow{5}{*}{{\rotatebox[origin=c]{90}{\textbf{Soft (Deterministic)}}}} & Channel & 2.1.1&SE-Net~\cite{se}, ECA-Net~\cite{wang2020eca}, CBAM~\cite{woo2018cbam},~\cite{harmonious}, A2-Net~\cite{ding2020high}, Dual~\cite{fu2019dual} & visual recognition, person re-identification, medical image segmentation, video recognition & \multirow{5}{0.43\columnwidth}{ \begin{itemize} \item Easy to model \item Differentiable gradients \item Non-local operations \item Encoding context \item Improving the performance \end{itemize} } & \multirow{5}{0.35\columnwidth}{ \begin{itemize} \item Expensive in memory usage \item High computation cost \item Biased to softmax radial nature \item Subject to attention collapse due to lack of diversity \end{itemize} } \tabularnewline \cline{2-4} & Spatial &2.1.2 &CBAM \cite{woo2018cbam}, PFA \cite{zhao2019pyramid}, \cite{li2020spatial}, \cite{meng2020end} & visual recognition, domain adaptation, saliency detection & & \tabularnewline \cline{2-4} & Frequency & 2.1.1&FCA-Net~\cite{qin2020fcanet} & visual recognition & & \tabularnewline \cline{2-4} & Spectral &2.1.1 &\cite{meng2020end}, FCA-Net~\cite{qin2020fcanet} & hyper-spectral imaging, visual recognition & & \tabularnewline \cline{2-4} & Self-attention&2.1.3 & Transformers \cite{m_transformers}, Image Transformers \cite{image_transfomer}, \cite{m_self_attention}, \cite{m_standalone} & visual recognition, multi-modal tasks, video processing, low-level vision, video recognition, 3D analysis & &\tabularnewline \hline \multirow{6}{*}{{\rotatebox[origin=c]{90}{\textbf{Hard (Stochastic)}}}}& Reinforced & 2.2.3 &RESA~\cite{shen2018reinforced},~\cite{karianakis2018reinforced} & person re-identification, natural language processing & \multirow{5}{0.35\columnwidth}{ \begin{itemize} \item Encoding context \item Encoding higher-order interactions \item Diverse attention scores \item Higher improvements \end{itemize} } & \multirow{5}{0.4\columnwidth}{ \begin{itemize} \item Expensive in memory usage \item High computation cost \item Non-differentiable \item Gradient vanishing \item Requires tricks for training \end{itemize} } \tabularnewline \cline{2-4} & Variational &2.2.2& \cite{deng2018latent},~\cite{xu2015show} & image captioning, natural language processing & & \tabularnewline \cline{2-4} & Bayesian& 2.2.1& BAM~\cite{fan2020bayesian}, Repulsive~\cite{an2020repulsive} & visual question answering, captioning, image translation & & \tabularnewline \cline{2-4} & Gaussian& 2.2.4& GatCluster~\cite{gaussian_attention}, Uncertainty~\cite{heo2018uncertainty} & image clustering, medical natural language processing && \tabularnewline \cline{2-4} & Self-critic &2.2.5& \cite{chen2019self} & person re-identification & & \tabularnewline \cline{2-4} & Expectation Maximization &2.2.6& EMA~\cite{li2019expectation} & semantic segmentation & & \tabularnewline \hline \rotatebox[origin=c]{90}{\textbf{Category-based}} & & 2.6 & GAIN~\cite{li2018tell}~\cite{zhu2020curriculum} & explainable machine learning, person re-identification, semantic segmentation& \begin{itemize} \item Providing gradient understanding \item Does not
require extra supervision \end{itemize} & \begin{itemize} \item Extra computation \item Used only for supervised classification \end{itemize} \tabularnewline \hline \rotatebox[origin=c]{90}{\textbf{Multi-modal}}& &2.4& CAN~\cite{hou2019cross}, SCAN~\cite{stacked_cross}, Perceiver~\cite{jaegle2021perceiver}, Boosted~\cite{chen2018boosted} & few-shot classification, image-text matching, image captioning & \begin{itemize} \item Benefiting visual-language-based applications \item Providing attentive supervision signals \item Achieving higher accuracy rates \end{itemize} & \begin{itemize} \item Expensive in memory usage \item High computation cost \item Inherit the limitations of soft and hard attention \end{itemize} \tabularnewline \hline \rotatebox[origin=c]{90}{\textbf{Arithmetic}} & & 2.3& Drop-out \cite{baldi2014dropout}, Mirror~\cite{jin2020semantic}, Reverse~\cite{chen2018reverse}, Inverse~\cite{zhang2020robust}, Reciprocal~\cite{xia2019exploring} & weakly-supervised object localization, line detection, semantic segmentation & \begin{itemize} \item Efficient methods \item Simple ideas \item Easy to implement \item Enriching the semantics of the models \end{itemize} & \begin{itemize} \item Limited to certain applications \item Inability to scale up \item Inherit the limitations of soft and hard attention \end{itemize} \tabularnewline \hline \rotatebox[origin=c]{90}{\textbf{Logical}} & &2.5&Recurrent~\cite{liu2018picanet}, Sequential~\cite{zoran2020towards, NEURIPS2020_103303dd}, Permutation invariant~\cite{permutation_invariant} & image recognition, object detection and segmentation, adversarial image classification, image tagging, anomaly detection & \begin{itemize} \item Overcoming the issues of soft attention \item Addressing hard attention disadvantages \end{itemize} & \begin{itemize} \item Complex architectures \item High computation cost \item Iterative processing \end{itemize} \tabularnewline \hline \end{tabular}} \end{table*} \vspace{1.5mm} \noindent \textbf{Second-order Attention Network}: For single-image super-resolution, the authors of~\cite{Dai_2019_CVPR} presented a second-order channel attention module, abbreviated as SOCA, to learn feature interdependencies via second-order feature statistics. A covariance matrix ($\Sigma$) is first computed and normalized using the feature maps from the previous network layers to obtain discriminative representations. The symmetric positive semi-definite covariance matrix is decomposed as $\Sigma = U\Lambda U^T$, where $U$ is orthogonal and $\Lambda$ is the diagonal matrix of non-increasing eigenvalues. Raising the eigenvalues to a power, $\Sigma = U\Lambda^\alpha U^T$, achieves the attention mechanism: if $\alpha < 1$, the eigenvalues larger than 1.0 nonlinearly shrink while the others are stretched. The authors chose $\alpha < \frac{1}{2}$ based on previous work~\cite{li2017second}. The subsequent attention mechanism is similar to SE~\cite{hu2018squeeze}, as shown in Figure~\ref{fig:channels}(c), but instead of first-order statistics (\latinphrase{i.e.}\xspace global average pooling), the authors furnished second-order statistics (\latinphrase{i.e.}\xspace global covariance pooling). \vspace{1.5mm} \noindent \textbf{High-Order Attention}: To encode global information and contextual representations, Ding~\latinphrase{et~al.}\xspace~\cite{ding2020high} proposed High-order Attention (HA) with adaptive receptive fields and dynamic weights.
HA mainly constructs a feature map for each pixel, including its relationships to other pixels. HA addresses the issue of fixed-shape receptive fields, which cause false predictions for objects of similar shapes. Specifically, after calculating the attention maps for each pixel, graph transduction is used to form the final feature map. This feature representation is used to update each pixel position with the weighted sum of contextual information. High-order attention maps are calculated using the Hadamard product \cite{horn1990hadamard, kim2016hadamard}. It is classified as channel attention because it generates attention scores from channels, as in SE \cite{se}. \vspace{1.5mm} \noindent \textbf{Harmonious Attention}: Harmonious attention~\cite{harmonious} proposes a joint attention module of soft pixel attention and hard regional attention. The main idea is to tackle the limitation of previous attention modules in person re-identification by learning attention selection and feature representation jointly, hence solving the misalignment calibration issue caused by constrained attention mechanisms \cite{harmony1, harmony2, harmony3, harmony4}. Specifically, harmonious attention learns two types of soft attention (spatial and channel) in one branch and hard attention in the other. Moreover, it proposes a cross-interaction attention that harmonizes these two attention types, as shown in Figure~\ref{fig:channels}(i). \vspace{1.5mm} \noindent \textbf{Auto Learning Attention}: Ma~\latinphrase{et~al.}\xspace \cite{NEURIPS2020_103303dd} introduced a novel idea for designing attention automatically. The module, named Higher-Order Group Attention (HOGA), takes the form of a Directed Acyclic Graph (DAG) \cite{pham2018efficient, dag1, dag2, dag3}, where each node represents a group and each edge represents a heterogeneous attention operation. The nodes are connected sequentially to represent hybrids of attention operations; thus, these connections can be represented as K-order attention modules, where K is the number of attention operations. DARTS \cite{liu2018darts} is customized to facilitate the search process efficiently. This auto-learning module can be integrated into legacy architectures and performs better than manually designed ones. However, the core idea of the attention modules remains the same as in previous architectures, \latinphrase{i.e.}\xspace SE \cite{se}, CBAM \cite{woo2018cbam}, splat \cite{zhang2020resnest}, and mixed \cite{chen2019mixed}. \vspace{1.5mm} \noindent \textbf{Double Attention Networks}: Chen~\latinphrase{et~al.}\xspace~\cite{chen20182} proposed the Double Attention Network (A2-Nets), which attends over the input image in two steps. The first step gathers the required features using bilinear pooling to encode the second-order relationships between entities, and the second step distributes the features over the various locations adaptively. In this architecture, bilinear pooling first captures the second-order statistics of the pooled features, which are mostly lost with functions such as the average pooling of SE~\cite{se}. The attention scores are then calculated not from the whole image, as in \cite{m_nonlocal}, but from a compact bag, hence enriching the objects with only the required context. The first step, \latinphrase{i.e.}\xspace feature gathering, uses the outer product $\sum_{\forall i} a_i b_i^T$; softmax is then used to attend to the discriminative features.
The second step, \latinphrase{i.e.}\xspace distribution, complements each location with the required features, whose weights sum to $1$. The complete design of A2-Nets is shown in Figure~\ref{fig:channels}(d). Experimental comparisons demonstrated that A2-Net improves performance over SE and non-local networks while being more efficient in terms of memory and time. \vspace{1.5mm} \noindent \textbf{Dual Attention Network}: Fu~\latinphrase{et~al.}\xspace~\cite{fu2019dual} presented a dual attention network for scene segmentation, composed of position attention and channel attention working in parallel. The position attention aims to encode global contextual features into local ones. The attention process is straightforward: the input features $f_A$ are passed through three convolutional layers to generate three feature maps ($f_B$, $f_C$, and $f_D$), which are reshaped. Matrix multiplication is performed between $f_B$ and the transpose of $f_C$, followed by softmax to obtain the spatial attention map. Again, matrix multiplication is performed between the generated $f_D$ features and the spatial attention map. Finally, the output is multiplied with a learnable scalar and summed element-wise with the input features $f_A$, as shown in Figure~\ref{fig:channels}(e). Although channel attention involves similar steps to position attention, it is different in that the features are used directly, without passing through convolutional layers. The input features $f_A$ are reshaped, transposed, multiplied (\latinphrase{i.e.}\xspace $f_A \times f_A'$), and then passed through a softmax layer to obtain the channel attention map. Moreover, the input features are multiplied with the channel attention map, followed by element-wise summation, to give the final output, as shown in Figure~\ref{fig:channels}(f).
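As a concrete illustration of the position attention stream just described, the following PyTorch-style sketch may be helpful; the $C/8$ reduction for $f_B$ and $f_C$ and the variable names are our own assumptions for illustration, not taken verbatim from \cite{fu2019dual}.
\begin{verbatim}
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Sketch of a DAN-style position attention stream (illustrative)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv_b = nn.Conv2d(channels, channels // 8, 1)
        self.conv_c = nn.Conv2d(channels, channels // 8, 1)
        self.conv_d = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable scalar

    def forward(self, f_a: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f_a.shape
        f_b = self.conv_b(f_a).flatten(2).transpose(1, 2)  # B x HW x C/8
        f_c = self.conv_c(f_a).flatten(2)                  # B x C/8 x HW
        attn = torch.softmax(f_b @ f_c, dim=-1)            # B x HW x HW map
        f_d = self.conv_d(f_a).flatten(2)                  # B x C x HW
        out = (f_d @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + f_a                      # scaled residual sum

y = PositionAttention(64)(torch.randn(2, 64, 16, 16))  # same shape as the input
\end{verbatim}
The channel stream follows the same pattern but multiplies the reshaped input with its own transpose directly, without the three convolutions.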
\begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.2\paperwidth, height=.05\paperheight]{figures/SE_att.png} \\ \small (a) SENet \cite{hu2018squeeze} \tabularnewline \includegraphics[width=.22\paperwidth, height=.05\paperheight]{figures/Channel-Attention.png} \\ \small (b) CBAM \cite{woo2018cbam} \tabularnewline \includegraphics[width=.22\paperwidth, height=.08\paperheight]{figures/soca.png} \\ \small (c) SOCA \cite{Dai_2019_CVPR} \tabularnewline \includegraphics[width=.22\paperwidth, height=.08\paperheight]{figures/double_attention.PNG} \\ \small (d) A$^2$-Net \cite{chen20182} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.27\paperwidth, height=.09\paperheight]{figures/dan_position.PNG} \\ \small (e) DAN Positional \cite{fu2019dual} \tabularnewline \includegraphics[width=.27\paperwidth, height=.09\paperheight]{figures/dan_channel.PNG} \\ \small (f) DAN Channel \cite{fu2019dual} \tabularnewline \includegraphics[width=.25\paperwidth, height=.09\paperheight]{figures/ECA.png} \\ \small (g) ECA-Net \cite{wang2020eca} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.2\paperwidth, height=.16\paperheight]{figures/ResNest_att.PNG} \\ \small (h) RESNest \cite{zhang2020resnest} \tabularnewline \includegraphics[width=.2\paperwidth, height=.14\paperheight]{figures/harmonious.png} \\ \small (i) Harmonious \cite{harmonious} \end{tabular} \caption{Core structures of the channel-based attention methods. Different methods generate the attention scores, including squeezing and excitation \cite{se}, splitting and squeezing \cite{zhang2020resnest}, calculating the second order \cite{fu2019dual}, or efficient squeezing and excitation \cite{wang2020eca}. Images are taken from the original papers and are best viewed in color.} \label{fig:channels} \end{figure*} \vspace{1.5mm} \noindent \textbf{Frequency Channel Attention}: Channel attention requires global average pooling as a pre-processing step. Qin~\latinphrase{et~al.}\xspace~\cite{qin2020fcanet} argued that the global average pooling operation can be replaced with frequency components. Frequency attention views the discrete cosine transform as a weighted sum of the input with its cosine parts. As global average pooling is a particular case of frequency-domain feature decomposition, the authors use various frequency components of the 2D discrete cosine transform, including the zero-frequency component, \latinphrase{i.e.}\xspace global average pooling. \subsubsection{Spatial Attention} Different from channel attention, which mainly generates channel-wise attention scores, spatial attention focuses on generating attention scores from spatial patches of the feature maps rather than from the channels. However, the sequence of operations used to generate the attention is similar. \\ \noindent \textbf{Spatial Attention in CBAM} uses the inter-spatial feature relationships to complement the channel attention~\cite{woo2018cbam}. The spatial attention focuses on the informative parts and is computed by applying average pooling and max pooling channel-wise, followed by concatenating both to obtain a single feature descriptor. A convolution layer on the concatenated feature descriptor is then applied to generate a 2D spatial attention map that encodes where to emphasize or suppress. The overall process is shown in Figure~\ref{fig:spatial}(a) and computed as \begin{equation} f_{sp} = \sigma( Conv_{7\times 7}([MaxPool(f); AvgPool(f)])), \label{eq:sp-atten} \end{equation} where $Conv_{7\times 7}$ denotes a convolution operation with a 7 $\times$ 7 kernel size and $\sigma$ represents the sigmoid function.
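A minimal sketch of this spatial gate (Eq.~(\ref{eq:sp-atten})) may look as follows; the padding of 3 is assumed here simply to keep the spatial size unchanged.
\begin{verbatim}
import torch
import torch.nn as nn

class SpatialGate(nn.Module):
    """Sketch of CBAM-style spatial attention (illustrative)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # 7x7 conv, 2 maps in

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        max_map = f.max(dim=1, keepdim=True).values   # channel-wise max pooling
        avg_map = f.mean(dim=1, keepdim=True)         # channel-wise average pooling
        attn = torch.sigmoid(self.conv(torch.cat([max_map, avg_map], dim=1)))
        return f * attn                               # broadcast over channels

y = SpatialGate()(torch.randn(2, 64, 32, 32))
\end{verbatim}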
\vspace{1.5mm} \noindent \textbf{Co-attention \& Co-excitation}: Hsieh~\latinphrase{et~al.}\xspace~\cite{NEURIPS2019_92af93f7} proposed co-attention and co-excitation to detect all the instances that belong to the same target in one-shot detection. The main idea is to enrich the extracted feature representation using non-local networks, which encode long-range dependencies and second-order interactions \cite{m_nonlocal}. Co-excitation is based on the squeeze-and-excitation network \cite{se}, as shown in Figure~\ref{fig:spatial}(c). While squeeze uses global average pooling \cite{lin2013network} to reweight the spatial positions, co-excite serves as a bridge between the features of the query and the target. Encoding high-contextual representations using co-attention and co-excitation improves one-shot detector performance, achieving state-of-the-art results. \vspace{1.5mm} \noindent \textbf{Spatial Pyramid Attention Network}, abbreviated as SPAN~\cite{hu2020span}, was proposed for localizing multiple types of image manipulations. It is composed of three main blocks, \latinphrase{i.e.}\xspace a feature extraction (head) module, a pyramid spatial attention module, and a decision (tail) module. The head module employs the Wider \& Deeper VGG Network as the backbone, while Bayer and SRM layers extract features from visual artifacts and noise patterns. The spatial relationships of the pixels are captured through five local self-attention blocks applied recursively, and, to preserve the details, the input of each self-attention block is added to its output. These features are then fed into the final tail module of 2D convolutional blocks to generate the output mask after employing a sigmoid activation. \vspace{1.5mm} \noindent \textbf{Spatial-Spectral Self-Attention}: Spatial-spectral self-attention is composed of two attention modules, namely spatial attention and spectral attention, both utilizing self-attention. \begin{enumerate} \item {Spatial Attention:} To model the non-local region information, Meng~\latinphrase{et~al.}\xspace~\cite{meng2020end} utilize a 3$\times$3 kernel to fuse the input features, indicating the region-based correlation, followed by a convolutional network mapping the fused features into Q $\&$ K. The number of kernels indicates the number of heads and the kernel size denotes the dimension. Moreover, the dimension-specified features from Q $\&$ K build the related attention maps, which then modulate the corresponding dimension in a sequence to achieve the order-independent property. Finally, to complete the spatial correlation modeling, the features are forwarded to a deconvolution layer. \item{Spectral Attention}: First, the spectral channel samples are convolved with one kernel and flattened into a single dimension, set as the feature vector for that channel. The input feature is converted to Q $\&$ K, building attention maps for the spectral axis. Adjacent channels have a higher correlation because they capture image patterns at the same locations, which is reflected as spectral smoothness in the attention maps. The similarity is measured by the normalized cosine distance as a spectral embedding, where each similarity score is scaled and summed with the coefficients in the attention maps; these then modulate the \enquote{Value} in self-attention, inducing a spectral smoothness constraint. \end{enumerate} \vspace{1.5mm} \noindent \textbf{Pixel-wise Contextual Attention} (PiCANet)~\cite{liu2018picanet} aims to learn accurate saliency detection. PiCANet generates a map at each pixel over the context region and constructs an accompanying contextual feature to enhance the feature representability at the local and global levels. To generate global attention, each pixel needs to \enquote{see} the whole image, achieved via ReNet~\cite{visin2015renet} with four recurrent neural networks sweeping horizontally and vertically. The contexts from all directions, gathered using biLSTMs, are blended to propagate the information of each pixel to all other pixels. Next, a convolutional layer transforms the feature maps to different channels, which are further normalized by a softmax function used to weight the feature maps. The local attention is performed on a local neighborhood, forming a local feature cube where each pixel needs to \enquote{see} every other pixel in the local area, using a few convolutional layers having the same receptive field as the patch. The features are then transformed to channels and normalized using softmax, and a weighted sum produces the final attention. \vspace{1.5mm} \noindent \textbf{Pyramid Feature Attention} extracts features from different levels of VGG~\cite{zhao2019pyramid}.
The low-level features extracted from the lower layers of VGG are provided to a spatial attention mechanism~\cite{woo2018cbam}, and the high-level features obtained from the higher layers are supplied to a channel attention mechanism~\cite{woo2018cbam}. The term feature pyramid attention originates from the VGG features being obtained at different layers. \vspace{1.5mm} \noindent \textbf{Spatial Attention Pyramid}: For unsupervised domain adaptation, Li~\latinphrase{et~al.}\xspace~\cite{li2020spatial} introduced a spatial attention pyramid that takes features from multiple average pooling layers of various sizes operating on the feature maps. These features are forwarded to spatial attention followed by channel-wise attention. All the features after attention are concatenated to form a single semantic vector. \vspace{1.5mm} \noindent \textbf{Region Attention Network} (RANet)~\cite{shen2020ranet} was proposed for semantic segmentation. It consists of novel network components, the Region Construction Block (RCB) and the Region Interaction Block (RIB), for constructing contextual representations, as illustrated in Figure~\ref{fig:spatial}(b). The RCB analyzes the boundary score and the semantic score maps jointly to compute the attention region score for each image pixel pair. A high attention score indicates that the pixels are from the same object region, dividing the image into various object regions. Subsequently, the RIB takes the region maps and selects the representative pixels in different regions, where each representative pixel receives the context from other pixels to effectively represent the object region's local content. Furthermore, capturing the spatial and category relationships between various objects by communicating the representative pixels of the different regions yields a global contextual representation that augments the pixels, eventually forming the contextual feature map for segmentation. \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/Spatial-attention.png}\\ \small (a) Spatial Attention \cite{woo2018cbam} \tabularnewline \includegraphics[width=.29\paperwidth]{figures/RANet.PNG} \\ \small (b) RANet~\cite{shen2020ranet} \tabularnewline \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.32\paperwidth]{figures/co-excite.png} \\ \small (c) Co-excite \cite{NEURIPS2019_92af93f7} \end{tabular} \caption{The core structure of the spatial-based attention methods RANet~\cite{shen2020ranet} and Co-excite \cite{NEURIPS2019_92af93f7}. These methods focus on attending to the most important parts of the spatial map. The images are taken from the original papers and are best viewed in color.} \label{fig:spatial} \end{figure*} \subsubsection{Self-attention} Self-attention, also known as \emph{intra-attention}, is an attention mechanism that encodes the relationships between all the input entities. It is a process that enables input sequences to interact with each other and aggregate attention scores that illustrate how similar they are. The main idea is to replicate the feature maps into three copies and then measure the similarity between them. Apart from channel-wise and spatial-wise attention, which use the physical feature maps directly, self-attention replicates the feature maps to measure long-range dependencies; however, self-attention methods still use channels to calculate the attention scores. Cheng~\latinphrase{et~al.}\xspace extracted the correlations between the words of a single sentence using Long Short-Term Memory (LSTM) \cite{m_self_attention}.
An attention vector is produced from each hidden state during the recurrent iteration, which attends to all the responses in the sequence for this position. In \cite{m_self_attention_parikah}, a decomposable solution was proposed to divide the input into sub-problems, which improved the processing efficiency compared to \cite{m_self_attention}. The attention vector is calculated as an alignment factor to the content (bag-of-words). Although these methods introduced the idea of self-attention, they are very expensive in terms of resources and do not consider contextual information. Also, RNN models process the input sequentially; hence, it is difficult to parallelize them or to process large-scale data efficiently. \vspace{1.5mm} \noindent \textbf{Transformers}: Vaswani~\latinphrase{et~al.}\xspace~\cite{m_transformers} proposed the transformer, a new method based on the self-attention concept without convolution or recurrent modules. As shown in Figure~\ref{fig:self_attentions}(f), it is mainly composed of encoder-decoder layers, where the encoder comprises a self-attention module followed by a position-wise feed-forward layer, and the decoder is the same as the encoder except that it has an encoder-decoder attention layer in between. Positional encoding is represented by sine waves that incorporate the order of the sequence into the input before the linear layers. This positional encoding serves as a generalization term to help recognize unseen sequences and encodes relative positions rather than absolute representations. Algorithm~\ref{algorithm:self-attention} shows the detailed steps of calculating self-attention (multi-head attention) with transformers. Although transformers have achieved much progress in text-based models, they lack the ability to encode the full context of a sentence, because the attention for each word is calculated only over the left-side sequence. To address this issue, Bidirectional Encoder Representations from Transformers (BERT) learns the contextual information by encoding both sides of the sentence jointly \cite{m_BERT}. \begin{algorithm}[t] \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input {set of sequences $(x_1, x_2, ..., x_n)$ of an entity $\mathbf{X} \in \mathbf{R}$} \Output{attention scores of $\mathbf{X}$ sequences.} { Initialize weights: Key ($\mathbf{W_K}$), Query ($\mathbf{W_Q}$), Value ($\mathbf{W_V}$) for each input sequence. \\ Derive Key, Query, Value for each input sequence from its corresponding weights, such that $\mathbf{Q = XW_Q}$, $\mathbf{K = XW_K}$, $\mathbf{V = XW_V}$, respectively.\\ Compute attention scores by calculating the dot product between the query and key.\\ Compute the scaled dot-product attention for these scores and Values $\mathbf{V}$, \[ \mathrm{softmax} \left( \frac{\mathbf{QK^T}}{\sqrt{d_k}}\right)\mathbf{V}.\]\\ Repeat steps 1 to 4 for all the heads. \\ } \caption{The main steps of generating self-attention by transformers (multi-head attention)} \label{algorithm:self-attention} \end{algorithm}
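Algorithm~\ref{algorithm:self-attention} translates almost line by line into code. The sketch below shows a single head with arbitrary dimensions and is purely illustrative; multi-head attention repeats it per head with separate weights and concatenates the results.
\begin{verbatim}
import torch

def scaled_dot_product_attention(x, w_q, w_k, w_v):
    """Steps 1-4 of Algorithm 1 for a single head (sketch)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v      # step 2: Q = XW_Q, K = XW_K, V = XW_V
    d_k = k.shape[-1]
    scores = (q @ k.transpose(-2, -1)) / d_k**0.5  # step 3: scaled dot product
    return torch.softmax(scores, dim=-1) @ v       # step 4: weight the values

x = torch.randn(10, 64)                      # 10 tokens, 64-dim embeddings
w_q, w_k, w_v = (torch.randn(64, 64) for _ in range(3))
out = scaled_dot_product_attention(x, w_q, w_k, w_v)  # 10 x 64 output
\end{verbatim}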
\vspace{1.5mm} \noindent \textbf{Standalone self-attention}: As stated above, convolutional features do not consider the global information due to their local-biased receptive fields. Instead of augmenting attentional features to the convolutional ones, Ramachandran~\latinphrase{et~al.}\xspace~\cite{m_standalone} proposed a fully-attentional network that replaces spatial convolutions with self-attention modules. The convolutional stem (the first few convolutions) is used to capture spatial information. They designed a small kernel (\latinphrase{e.g.}\xspace $n \times n$) instead of processing the whole image simultaneously. This design yields a computationally efficient model that enables processing images at their original sizes without downsampling; the computational complexity is reduced to $\mathcal{O}(hwn^2)$, where $h$ and $w$ denote the height and width, respectively. For each position, a query is extracted, while the keys and values come from its local patch. Calculating the attention maps follows the same steps as in Algorithm \ref{algorithm:self-attention}. Although stand-alone self-attention shows competitive results compared to convolutional models, it struggles to encode positional information. \vspace{1.5mm} \noindent \textbf{Clustered Attention}: To address the computational inefficiency of transformers, Vyas~\latinphrase{et~al.}\xspace~\cite{m_clsutered} proposed a clustered attention mechanism that relies on the idea that correlated queries follow the same distribution around Euclidean centers. Based on this idea, they use the K-means algorithm with fixed centers to group similar queries together. Instead of calculating attention for the individual queries, it is calculated for the clusters' centers. Therefore, the total complexity is reduced to a linear form $\mathcal{O}(qc)$, where $q$ is the number of queries and $c$ is the number of clusters. \vspace{1.5mm} \noindent \textbf{Slot Attention}: Locatello~\latinphrase{et~al.}\xspace~\cite{slot} proposed slot attention, an attention mechanism that learns the objects' representations in a scene. In general, it learns to disentangle the image into a series of slots; as shown in Figure~\ref{fig:self_attentions}(b), each slot represents a single object. The slot attention module is applied to a learned representation $h \in \mathbb{R}^{W \times H \times D}$, where $H$ is the height, $W$ is the width, and $D$ is the representation size. It has two main steps: learning $n$ slots using an iterative attention mechanism, and representing individual objects (slots). In each iteration, two operations are implemented: 1) slot competition using softmax, followed by normalization along the slot dimension, \begin{equation} a = \mathrm{softmax} \bigg( \frac{1}{\sqrt{D}}\, k(h) \cdot q(c)^T\bigg); \end{equation} 2) an aggregation process for the attended representations with a weighted mean, \begin{equation} r = \mathrm{WeightedMean} \big(a, v(h) \big), \end{equation} where $k, q, v$ are learnable mappings, as in \cite{m_transformers}. Then, a feed-forward layer is used to predict the slot representations $s=fc(r)$. Slot attention is based on transformer-like attention \cite{m_transformers} on top of CNN feature extractors. Given an image $\mathbb{I}$, slot attention parses the scene into a set of slots, each referring to an object $(z, x, m)$, where $z$ is the object feature, $x$ is the input image, and $m$ is the mask. In the decoders, convolutional networks are used to learn the slot representations and object masks. The training process is guided by an $\ell_2$ loss \begin{equation} \mathbb{L} = \bigg\lVert \bigg( \sum_{k=1}^K m x_k\bigg) - \mathbb{I} \bigg\rVert_2^2. \end{equation} Following the slot-attention module, Li~\latinphrase{et~al.}\xspace developed an explainable classifier based on slot attention \cite{scouter_slot}. This method aims to find the positive and negative supports for a class $l$.
In this way, the classifier itself can be explained rather than remaining a complete black-box. The primary entity of this work is the xSlot, a variant of slot attention~\cite{slot}, which is related to a category and gives the confidence for including this category in the input image. \vspace{1.5mm} \noindent \textbf{Efficient Attention Using Asymmetric Clustering (SMYRF)}: Daras~\latinphrase{et~al.}\xspace~\cite{SMYRF} proposed symmetric Locality Sensitive Hashing (LSH) clustering in a novel way to reduce the size of the attention maps and thereby develop efficient models. They observed that attention weights are sparse and that the attention matrix is low-rank; as a result, the values of pre-trained models exhibit decay. In SMYRF, this is exploited by approximating attention maps through balanced clustering, produced by asymmetric transformations and an adaptive scheme. SMYRF is a drop-in replacement for normal dense attention in pre-trained models. Even without retraining the models after integrating this module, SMYRF shows significant effectiveness in memory, performance, and speed; therefore, the feature maps can be scaled up to include more contextual information. In some models, memory usage is reduced by $50\%$. Although SMYRF improves memory usage in self-attention models, the improvement over efficient attention models is marginal (see Figure~\ref{fig:self_attentions}(a)). \vspace{1.5mm} \noindent \textbf{Random Feature Attention}: Transformers have a major shortcoming with regard to time and memory complexity, which hinders scaling attention up and thus limits higher-order interactions. Peng~\latinphrase{et~al.}\xspace~\cite{peng2021random} proposed reducing the space and time complexity of transformers from quadratic to linear by enhancing the softmax approximation with random functions. Random Feature Attention (RFA) uses a variant of softmax \cite{rawat2019sampled} that is sampled from simple distributions based on Fourier random features \cite{rahimi2007random, yang2014quasi}. Using the kernel trick $\exp(x \cdot y) \approx \phi(x)\cdot\phi(y)$ of \cite{hofmann2008kernel}, the softmax approximation is reduced to a linear form, as shown in Figure~\ref{fig:self_attentions}(c). Moreover, the similarity between RFA connections and recurrent networks helps in developing a gating mechanism to learn recency bias \cite{lstm, cho2014learning, schmidhuber1992learning}. RFA can easily be integrated into backbones to replace the normal softmax, with only a $0.1\%$ increase in the number of parameters. Plugging RFA into a transformer shows results comparable to softmax, while gated RFA outperforms it on language models; RFA also executes 2$\times$ faster than a conventional transformer. \vspace{1.5mm} \noindent \textbf{Non-local Networks}: Recent breakthroughs in the field of artificial intelligence are mostly based on the success of Convolutional Neural Networks (CNNs) \cite{m_deep_learning, resnet}. In particular, they can be processed in parallel and provide inductive biases for the extracted features. However, CNNs fail to learn the context of the whole image due to their local-biased receptive fields; therefore, long-range dependencies are disregarded in CNNs. In \cite{m_nonlocal}, Wang~\latinphrase{et~al.}\xspace proposed non-local networks to alleviate the bias of CNNs towards local information and to fuse global information into the network. A non-local block augments each pixel of the convolutional features with contextual information, the weighted sum of the whole feature map.
In this manner, the correlated patches in an image are encoded in a long-range fashion. Non-local networks showed significant improvements in long-range interaction tasks such as video classification \cite{m_kinect}, as well as in low-level image processing \cite{m_non_denoise, non_local_advers}. Non-local networks model attention in a graphical fashion \cite{m_graph_atten}. However, stacking multiple non-local modules in the same stage leads to instability and an ill-posed training process \cite{m_non_local_diffuse}. In~\cite{liu2020learning}, Liu~\latinphrase{et~al.}\xspace use non-local networks to form self-mutual attention between two modalities (RGB and depth) to learn global contextual information. The idea is straightforward, \latinphrase{i.e.}\xspace to sum the corresponding features before softmax normalization, such that $\mathrm{softmax}(f^r (\mathbb{X}^r)+\alpha^d \bigodot f^d (\mathbb{X}^d))$ for the RGB attention, and vice versa. \vspace{1.5mm} \noindent \textbf{Non-Local Sparse Attention (NLSA)}: Mei~\latinphrase{et~al.}\xspace \cite{mei2021image} proposed a sparse non-local network to combine the benefits of non-local modules, which encode long-range dependencies, with those of sparse representations, which provide robustness. The deep features are split into different groups (buckets) with high inner correlations. Locality Sensitive Hashing (LSH) \cite{gionis1999similarity} is used to find the features similar to each bucket, and the non-local block then processes each pixel within its bucket together with the similar ones. NLSA reduces the complexity from quadratic to asymptotically linear and uses the power of sparse representations to focus only on informative regions. \vspace{1.5mm} \noindent \textbf{X-Linear Attention}: Bilinear pooling is a calculation process that computes the outer product between two entities rather than the inner product \cite{bilinear4, bilinear3, bilinear2, bilinear1}; it has shown the ability to encode higher-order interactions and thus encourages more discriminability in the models. Moreover, it yields compact models that retain the required details even while compressing the representations \cite{bilinear2}. In particular, bilinear applications have shown significant improvements in fine-grained visual recognition \cite{fine3, fine2, fine1} and visual question answering \cite{yu2017multi}. As Figure~\ref{fig:self_attentions}(d) depicts, a low-rank bilinear pooling is performed between queries and keys so that the $2^{nd}$-order interactions between keys and queries are encoded. Through this query-key interaction, spatial-wise and channel-wise attention are aggregated with the values; the channel-wise attention is the same as squeeze-and-excitation attention \cite{se}. The final output of the X-Linear module is aggregated with the low-rank bilinear pooling of keys and values \cite{pan2020x}. The authors claim that encoding higher-order interactions requires only repeating the X-Linear module accordingly (\latinphrase{e.g.}\xspace three iterative X-Linear blocks for $4^{th}$-order interactions), while infinity-order interactions can be modeled using the Exponential Linear Unit \cite{barron2017continuously}. The X-Linear attention module is thus a novel attention mechanism, different from the transformer \cite{m_transformers}: it is able to encode the relations between input tokens without positional encoding, with only linear complexity as opposed to the quadratic complexity of the transformer.
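To make the complexity discussion above concrete, the following sketch shows a bare non-local operation over flattened feature maps. It materializes the full $N \times N$ affinity matrix, which is exactly the quadratic cost that NLSA and the efficient mechanisms below avoid; the embedding convolutions and other details of \cite{m_nonlocal} are omitted for brevity.
\begin{verbatim}
import torch

def non_local(x):
    """Bare non-local operation (sketch): each position is augmented with
    a weighted sum over all positions of the feature map."""
    b, c, h, w = x.shape
    flat = x.flatten(2).transpose(1, 2)              # B x N x C, with N = H*W
    affinity = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)  # B x N x N
    context = affinity @ flat                        # weighted sum: O(N^2) cost
    return x + context.transpose(1, 2).view(b, c, h, w)  # residual augmentation

y = non_local(torch.randn(2, 64, 16, 16))
\end{verbatim}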
\vspace{1.5mm} \noindent \textbf{Axial-Attention}: Wang~\latinphrase{et~al.}\xspace~\cite{axial_attention} proposed axial attention to encode global information and long-range context. Although conventional self-attention methods use fully-connected layers to encode non-local interactions, they are very expensive given their dense connections \cite{m_transformers,bert,detr,image_transfomer}. Axial attention applies self-attention in a non-local way without such constraints: simply put, it factorizes 2D self-attention into two 1D self-attentions along the two axes (width and height). In this way, axial attention shows effectiveness in attending over wide regions. Moreover, unlike \cite{bam, ramachandran2019stand, hu2019local}, axial attention uses positional information to include contextual information in an agnostic way. With axial attention, the computational complexity is reduced to $\mathcal{O}(hwm)$. Axial attention also showed competitive performance, not only in comparison to full-attention models \cite{attention_augmented,m_standalone} but to convolutional ones as well \cite{resnet, huang2017densely}. \vspace{1.5mm} \noindent \textbf{Efficient Attention Mechanism}: Conventional attention mechanisms are built on double matrix multiplication, which yields quadratic complexity $n \times n$, where $n$ is the number of positions. Many methods propose efficient architectures for attention \cite{kitaev2019reformer, Efficient_attention, wu2021centroid, kim2020fastformers}. In \cite{Efficient_attention}, Shen~\latinphrase{et~al.}\xspace used the associative property of matrix multiplication and suggested efficient attention. Formally, instead of using a dot-product of the form $\rho (QK^T)V$, they compute it in the efficient order $\rho_q(Q)(\rho_k(K)^T V)$, where $\rho$ denotes a normalization step; the $\mathrm{softmax}$ normalization is performed twice, rather than once at the end. Hence, the complexity is reduced from quadratic $\mathcal{O}(n^2)$ to linear $\mathcal{O}(n)$. Through this simple change, the processing complexity and memory usage are reduced, enabling the integration of attention modules in large-scale tasks.
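The associativity trick is small enough to show in full. The sketch below contrasts the two multiplication orders; note that, under softmax normalization, the two outputs are comparable in effect but not numerically identical, and the normalization axes used here follow our reading of \cite{Efficient_attention}.
\begin{verbatim}
import torch

n, d = 4096, 64
q, k, v = (torch.randn(n, d) for _ in range(3))

# Standard order: rho(Q K^T) V builds an n x n map -> O(n^2) memory and time.
standard = torch.softmax(q @ k.T, dim=-1) @ v

# Efficient order: rho_q(Q) (rho_k(K)^T V) never forms the n x n map -> O(n).
# Queries are normalized per row (feature axis), keys per column (positions).
efficient = torch.softmax(q, dim=-1) @ (torch.softmax(k, dim=0).T @ v)
\end{verbatim}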
\begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.25\paperwidth]{figures/eff_attention.PNG} \\ \small (a) Efficient attention~\cite{Efficient_attention} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.16\paperwidth]{figures/slot.PNG} \\ \small (b) Slot attention module \cite{slot} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.30\paperwidth]{figures/RFA.png} \\ \small (c) RFA~\cite{peng2021random} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.27\paperwidth]{figures/xlinear.png} \\ \small (d) X-Linear~\cite{pan2020x} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.27\paperwidth]{figures/axial.png} \\ \small (e) Axial~\cite{axial_attention} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.18\paperwidth]{figures/transformer.PNG} \\ \small (f) Transformer~\cite{m_transformers} \end{tabular} \caption{The core structure of self-attention methods: Transformers~\cite{m_transformers}, Axial attention~\cite{axial_attention}, X-Linear \cite{pan2020x}, Slot \cite{slot}, and RFA \cite{peng2021random}. All of these are self-attention methods, which generate the scores by measuring the similarity between two maps of the same input; however, they differ in their way of processing. The images are taken from the original papers and are best viewed in color.} \label{fig:self_attentions} \end{figure*} \subsection{Hard (Stochastic) Attention} Instead of using the weighted average of the hidden states, hard attention selects one of the states as the attention score. Proposing hard attention depends on answering two questions: (1) how to model the problem, and (2) how to train it without vanishing gradients. In this part, hard attention methods are discussed along with their training mechanisms, including Bayesian attention, variational inference, reinforced attention, and Gaussian attention. The main idea of Bayesian and variational attention is to use latent random variables as attention scores. Reinforced attention replaces softmax with a Bernoulli-sigmoid unit \cite{williams1992simple}, whereas Gaussian attention uses a 2D Gaussian kernel instead. Similarly, self-critic attention~\cite{chen2019self} employs a reinforcement technique to generate the attention scores, whereas expectation-maximization uses EM to generate them. \subsubsection{Bayesian Attention Modules (BAM)} In contrast to deterministic attention modules, Fan~\latinphrase{et~al.}\xspace~\cite{fan2020bayesian} proposed a stochastic attention method based on Bayesian graph models. Keys and queries are aligned to form the distribution parameters of the attention weights, which are treated as latent random variables. The whole model is trained by reparameterization, which results from normalizing the weights with Lognormal or Weibull distributions. The Kullback–Leibler (KL) divergence is used as a regularizer to introduce a contextual prior distribution in the form of a function of the keys. Their experiments illustrate that BAM significantly outperforms the state-of-the-art in various fields such as visual question answering, image captioning, and machine translation; however, this improvement comes at the expense of computational cost and memory usage. Compared to deterministic attention models, it is in general an efficient alternative, showing consistent effectiveness in language-vision tasks. \vspace{1.5mm} \noindent \textbf{Bayesian Attention Belief Networks}: Zhang~\latinphrase{et~al.}\xspace~\cite{zhang2021bayesian} proposed using Bayesian belief modules to generate attention scores, given their ability to model highly structured data along with uncertainty estimations. As shown in Figure~\ref{fig:stochastics}(b), they introduced a simple structure to change any deterministic attention model into a stochastic one through four steps: 1) using Gamma distributions to build the decoder network; 2) using Weibull distributions along with stochastic and deterministic paths for the downward and upward passes, respectively; 3) parameterizing the BABN distributions from the queries and keys of the current network; 4) using the evidence lower bound to optimize the encoder and decoder. The whole network is differentiable because of the existence of the Weibull distributions in the encoder. In terms of accuracy and uncertainty, BABN demonstrated improvements over the state-of-the-art in NLP tasks. \vspace{1.5mm} \noindent \textbf{Repulsive Attention}: Multi-head attention (MHA) \cite{m_transformers} is the core of the attention used in transformers. However, MHA may cause attention collapse when the heads extract the same features \cite{an2020repulsive, prakash2019repr, han2016dsd}, and consequently the discriminative power of the feature representation will not be diverse.
To address this issue, An~\latinphrase{et~al.}\xspace~\cite{an2020repulsive} adapted MHA to a Bayesian network with underlying stochastic attention. MHA is considered a special case without parameter sharing, and a particle-optimization sampling method is used to perform Bayesian inference on the attention parameters, which imposes attention repulsiveness \cite{liu2016stein}. Through this sampling method, each head is considered a sample that seeks to approximate the posterior distribution while staying far from the other heads. \subsubsection{Variational Attention} In a study to improve latent variable alignments, Deng~\latinphrase{et~al.}\xspace~\cite{deng2018latent} proposed using a variational attention mechanism. A latent variable is crucial because it encodes the dependencies between entities, and variational inference methods can represent it in a stochastic manner \cite{salimbeni2019deep, drori2020deep}. Soft attention, on the other hand, can encode alignments but has poor representation because of the nature of softmax; stochastic methods show better performance when optimized well \cite{lin2003toward, wang2020survey}. The main idea is to propose variational attention while keeping the training tractable. They introduced two types of variational attention: categorical (hard) attention, which uses amortized variational inference based on policy gradients with soft attention for variance reduction; and relaxed (probabilistic soft) attention, which uses a Dirichlet distribution that allows attending over multiple sources. Regarding reparameterization, the Dirichlet distribution is not directly reparameterizable, and thus the gradients have high variance \cite{jankowiak2018pathwise}. Inspired by \cite{deng2018latent}, Bahuleyan~\latinphrase{et~al.}\xspace developed stochastic attention-based variational inference \cite{bahuleyan2018variational}, but using a normal distribution instead of the Dirichlet distribution. They observed that variational encoder-decoders should not have a direct connection; otherwise, traditional attention serves as a bypass connection. \subsubsection{Reinforced Self-Attention} Shen~\latinphrase{et~al.}\xspace~\cite{shen2018reinforced} used a reinforcement technique to combine soft and hard attention in one method. Soft attention has shown effectiveness in modeling local and global dependencies through dot-product similarity \cite{bahdanau2014neural}. However, soft attention is based on the softmax function, which assigns values to every item, even the non-attended ones, weakening the whole attention. On the other hand, hard attention \cite{show_attend} attends only to the important regions or tokens and disregards the others; despite its importance for textual tasks, it is non-differentiable and inefficient to train \cite{williams1992simple}. Shen~\latinphrase{et~al.}\xspace~\cite{shen2018reinforced} used hard attention to extract rich information and then fed it into soft attention for further processing; simultaneously, soft attention is used to reward hard attention and hence stabilize the training process. Specifically, they used hard attention to encode tokens from the input in parallel while combining it with soft attention \cite{shen2018disan}, without any CNN/RNN modules. In \cite{karianakis2018reinforced}, reinforced attention was proposed to extract better temporal context from video. Specifically, this attention module uses a Bernoulli-sigmoid unit \cite{williams1992simple}, a stochastic module.
Thus, to train the whole system, the REINFORCE algorithm is used to stabilize the gradients \cite{jankowiak2018pathwise}. \subsubsection{Gaussian-based attention} \vspace{1.5mm} \noindent \textbf{Self-Supervised Gaussian Attention}: Most soft-attention models use softmax to predict the attention of the feature maps \cite{m_transformers, image_transfomer, zhang2020resnest}, which suffers from various drawbacks. In~\cite{niu2020gatcluster}, Niu~\latinphrase{et~al.}\xspace proposed replacing the classical softmax with a Gaussian attention module. As shown in Figure~\ref{fig:stochastics}(d), they build a 2D Gaussian kernel to generate the attention maps instead of softmax, $K = \exp\big(-\frac{1}{\alpha}(u - \mu)^T \Sigma^{-1} (u - \mu)\big)$ for each individual element, where $u = [x, y]^T$ and $\mu = [\mu_x, \mu_y]^T$. The extracted features are passed through a fully connected layer, and then the Gaussian kernel is used to predict the attention scores. Using Gaussian kernels proved effective in discriminating the important features. Since it does not require any further learning steps, such as fully connected layers or convolutions, this significantly reduces the number of parameters. As stochastic training models need careful design because of the SGD mismatch~\cite{stochastic1, stochastic2, stochastic3}, the Gaussian attention model was developed with a binary classification loss that takes normalized logits to suppress the low scores and discriminate the high ones. This normalization uses a modified version of softmax, where the input is squared and divided by a temperature value (\latinphrase{e.g.}\xspace the batch size). \vspace{1.5mm} \noindent \textbf{Uncertainty-Aware Attention}: Since attention is generated without full supervision (\latinphrase{i.e.}\xspace in a weakly-supervised manner), it lacks full reliability \cite{li2018tell}. To fix this issue, \cite{heo2018uncertainty} proposed the use of input-dependent uncertainty: the model generates varied attention maps according to the input and therefore learns higher variance for uncertain inputs. A Gaussian distribution is used to model the attention weights, such that it gives small values in the case of high confidence and vice versa \cite{kendall2017uncertainties}. A Bayesian network is employed to build the model, with variational inference as a solution~\cite{zhang1994simple, blei2017variational}. Note that this model is stochastic, so the SGD backpropagation flow cannot work properly due to the randomness \cite{kingma2013auto}; for this reason, they used the reparameterization trick \cite{gal2017concrete, kingma2015variational} to train their model. \subsubsection{Self-Critic attention} Chen~\latinphrase{et~al.}\xspace \cite{chen2019self} proposed a self-critic attention model that generates attention using an agent and re-evaluates the gain from this attention using the REINFORCE algorithm. They observed that most attention modules are trained in a weakly-supervised manner; therefore, the attention maps are not always discriminative and lack supervisory signals during training \cite{lee2015deeply}. To supervise the generation of attention maps, they used a reinforcement algorithm to guide the whole process. As shown in Figure~\ref{fig:stochastics}(a), the feature maps are evaluated to predict whether they need self-correction. \subsubsection{Expectation-Maximization attention} Traditional soft attention mechanisms can encode long-range dependencies by comparing each position to all other positions, which is computationally very expensive~\cite{m_nonlocal}.
In this regard, Li~\latinphrase{et~al.}\xspace~\cite{li2019expectation} proposed using expectation-maximization to build an attention method that iteratively forms a set of bases from which the attention maps are computed \cite{dempster1977maximum}. The main intuition is to use expectation-maximization to select a compact basis set instead of using all the pixels as bases, as in \cite{m_nonlocal, li2020spatial} (see Figure~\ref{fig:stochastics}(c)). These bases are regarded as the learnable parameters, whereas the latent variables serve as the attention maps. The output is the weighted sum of the bases, and the attention maps are the weights. The estimation step is defined by $z_{nk}=\frac{\mathbb{K}(x_n, \mu_k)}{\sum_j\mathbb{K}(x_n, \mu_j)}$, where $\mathbb{K}$ denotes a kernel function. The maximization step then updates $\mu$ through data likelihood maximization, such that $\mu_k = \frac{\sum_n z_{nk}\, x_n}{\sum_n z_{nk}}$. Finally, the features are multiplied by the attention scores, $\mathbb{X} = \mathbb{Z} \mu$. Since EMA is a stochastic model, training the whole model needs special care. Firstly, the authors average $\mu$ over the mini-batch and update it in the maximization step to train stably. Secondly, within the iterations $(1, T)$, they normalize $\mu$ with the $\ell_2$-norm. EMA has shown the ability to remove noisy representations and to give promising results after the third step. It is also worth noting that the complexity is reduced to a linear form $\mathcal{O}(NK)$ from a quadratic one $\mathcal{O}(N^2)$. \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.36\paperwidth]{figures/self_critic.PNG} \\ \small (a) Self-Critic Attention~\cite{chen2019self}\\ \includegraphics[width=.36\paperwidth]{figures/EMA.png} \\ \small (c) Expectation-Maximization Attention~\cite{li2019expectation} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/babn.PNG}\\ \small (b) Bayesian Attention Belief Networks \cite{zhang2021bayesian} \tabularnewline\\ \includegraphics[width=.3\paperwidth]{figures/gaussian_attention.PNG} \\ \small (d) Gaussian attention module~\cite{gaussian_attention} \tabularnewline \end{tabular} \caption{The core structure of the stochastic-based attention methods EMA~\cite{li2019expectation}, the Gaussian attention module~\cite{gaussian_attention}, Self-critic \cite{chen2019self}, and the Bayesian module \cite{zhang2021bayesian}. The core function for attention score generation is not softmax. The images are taken from the original papers and are best viewed in color.} \label{fig:stochastics} \end{figure*}
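A compact sketch of the EM iteration described above is given below, assuming the exponential inner-product kernel for $\mathbb{K}$ (which makes the E-step a softmax); the numbers of bases and iterations are arbitrary choices for illustration.
\begin{verbatim}
import torch

def em_attention(x, mu, iters=3):
    """EMA sketch: x is N x C features, mu is K x C bases (illustrative)."""
    for _ in range(iters):
        z = torch.softmax(x @ mu.T, dim=-1)             # E-step: responsibilities
        mu = (z.T @ x) / z.sum(dim=0, keepdim=True).T   # M-step: weighted means
        mu = torch.nn.functional.normalize(mu, dim=-1)  # l2-normalize the bases
    return z @ mu                                       # re-estimated features, O(NK)

x_hat = em_attention(torch.randn(1024, 64), torch.randn(8, 64))
\end{verbatim}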
\subsection{Arithmetic Attention} This part introduces arithmetic attention methods such as dropout, mirror, reverse, inverse, and reciprocal attention. We name this category arithmetic because these methods, although they build on the cores of the techniques above, produce the final attention scores from simple arithmetic equations, such as the reciprocal of the attention. \vspace{1.5mm} \noindent \textbf{Attention-based Dropout Layer}: In weakly-supervised object localization, detecting the whole object without location annotations is a challenging task \cite{wsol1, wsol2}. Choe~\latinphrase{et~al.}\xspace \cite{choe2019attention} proposed using a dropout layer to improve the localization accuracy through two steps: hiding the most discriminative part so that localization spreads over the whole object, and attending over the whole area to improve the recognition performance. As Figure~\ref{fig:arithmetic}(a) shows, ADL has two branches: 1) a drop mask to conceal the discriminative part, produced by thresholding with a hyperparameter, where values bigger than this threshold are set to zero and vice versa; and 2) an importance map to weight the channel contributions using a sigmoid function. Although the proposed idea is simple, experiments showed that it is effective (gaining 15$\%$ over the state-of-the-art). \vspace{1.5mm} \noindent \textbf{Mirror Attention}: In a line detection application \cite{jin2020semantic}, Lee~\latinphrase{et~al.}\xspace developed mirrored attention to learn more semantic features. They flip the feature map around the candidate line and then concatenate the feature maps together; in case the line is not aligned, zero padding is applied. \vspace{1.5mm} \noindent \textbf{Reverse Attention}: Huang~\latinphrase{et~al.}\xspace~\cite{BMVC2017_18} proposed using the negative context (\latinphrase{i.e.}\xspace what is not related to the class) during training to learn semantic features. They were motivated by the low discriminability between classes in the high-level semantic representations and by the weak response to the correct class in the latent representations. The network is composed of two branches: the first learns discriminative features using convolutions for the target class, and the second learns reverse attention scores that are not associated with the target class. These scores are aggregated together to form the final attention, as shown in Figure~\ref{fig:arithmetic}(b). A deeper look inside the reverse attention shows that it mainly depends on negating the extracted convolutional features followed by a sigmoid, $\mathrm{sigmoid}(-F_{conv})$. However, for the purpose of convergence, this simple equation is changed to $\mathrm{sigmoid}(\frac{1}{ReLU(F_{conv})+0.125} - 4)$. On semantic segmentation datasets, reverse attention achieved significant improvements over the state-of-the-art. In similar work, Chen~\latinphrase{et~al.}\xspace~\cite{chen2018reverse} proposed using reverse attention for salient object detection. The main intuition is to erase the final predictions of the network and hence learn the missing parts of the objects. However, the calculation of the attention scores differs from \cite{BMVC2017_18}: they use $1 - \mathrm{sigmoid}(F_{i+1})$, where $F_{i+1}$ denotes the features of the next stage. \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.5\paperwidth]{figures/adl.PNG} \\ \small (a) Attention-Based Dropout Layer \cite{choe2019attention} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.28\paperwidth]{figures/ran.png} \\ \small (b) Reverse Attention \cite{BMVC2017_18} \end{tabular} \caption{The core structure of the arithmetic-based attention methods Attention-based Dropout~\cite{choe2019attention} and Reverse Attention \cite{BMVC2017_18}. These methods use arithmetic operations, such as reverse, dropout, or reciprocal operations, to generate the attention scores. The images are taken from the original papers and are best viewed in color.} \label{fig:arithmetic} \end{figure*}
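Since these arithmetic methods reduce to one-line score computations, the sketch below shows the two reverse-attention variants described above side by side; the feature shapes are arbitrary.
\begin{verbatim}
import torch

f_conv = torch.randn(2, 64, 32, 32)  # features from a convolutional stage

# Reverse attention of Huang et al.: negated features, rescaled for convergence.
rev_scores = torch.sigmoid(1.0 / (torch.relu(f_conv) + 0.125) - 4.0)

# Reverse attention of Chen et al.: erase the next stage's prediction.
f_next = torch.randn(2, 1, 32, 32)   # prediction of the following stage
erase_scores = 1.0 - torch.sigmoid(f_next)
\end{verbatim}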
\subsection{Multi-modal attentions} \label{sec:Multi-modal} As the name reveals, multi-modal attention is proposed to handle multi-modal tasks, using different modalities, such as text and image, to generate the attentions. It should be noted that some attention methods below, such as Perceiver~\cite{jaegle2021perceiver} and Criss-Cross \cite{context1, context2}, are transformer types~\cite{m_transformers} customized for multi-modal tasks by including text, audio, and image. \vspace{1.5mm} \noindent \textbf{Cross Attention Network}: In \cite{hou2019cross}, a cross attention module (CAN) was proposed to enhance the overall discrimination of few-shot classification \cite{sung2018learning}. Inspired by the human behavior of recognizing novel images, the similarity between the seen and unseen parts is identified first. CAN learns to encode the correlation between the query and the target object. As Figure~\ref{fig:multimodals} (a) shows, the features of the query and the target are extracted independently, and then a correlation layer computes the interaction between them using cosine distance. Next, 1D convolution is applied to fuse the correlations (GAP is performed first) with the attentions, followed by softmax normalization. The output is reshaped to give a single-channel feature map that preserves the spatial representations. Although experiments show that CAN produces state-of-the-art results, it depends on non-learnable functions such as the cosine correlation. Also, the design is suitable for few-shot classification but is not general, because it depends on two streams (query and target).
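The correlation step of CAN can be sketched as follows, assuming single-image $(C, H, W)$ feature maps and omitting the GAP and 1D-convolution fusion; the tensor layout and function name are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def cross_correlation(query_feat, target_feat):
    # query_feat, target_feat: (C, H, W)
    q = F.normalize(query_feat.flatten(1).t(), dim=1)    # (HW, C)
    t = F.normalize(target_feat.flatten(1).t(), dim=1)   # (HW, C)
    corr = q @ t.t()                 # cosine similarity of every position pair
    return F.softmax(corr, dim=-1)   # attention over the target positions
\end{verbatim}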
\vspace{1.5mm} \noindent \textbf{Criss-Cross Attention}: Contextual information remains very important for scene understanding \cite{context1, context2}. Criss-cross attention~\cite{huang2019ccnet} encodes the context of each pixel along its criss-cross path. By building recurrent modules of criss-cross attention, the whole context is encoded for each pixel. This module is more efficient than the non-local block \cite{m_nonlocal} in memory and time: memory is reduced by $11\times$ and GFLOPs by $85\%$. Since this survey focuses on the core attention ideas, we show the criss-cross module in Figure~\ref{fig:multimodals} (b). Initially, three $1 \times 1$ convolutions are applied to the feature maps, and two of the outputs are multiplied together (the first map with each row of the second) to produce criss-cross attentions for each pixel. Then, softmax is applied to generate the attention scores, which are aggregated with the output of the third convolution. However, the encoded context captures only information along the criss-cross directions, not the whole image. For this reason, the authors repeat the attention module with shared weights to form recurrent criss-cross, which includes the whole context. \vspace{1.5mm} \noindent \textbf{Perceiver}: Traditional CNNs have achieved high performance in handling several tasks \cite{resnet, chen2011multi, hassanin2021mitigating}; however, they are designed and trained for a single domain rather than multi-modal tasks \cite{modal1, modal2, modal3}. Inspired by biological systems that understand the environment through various modalities simultaneously, Jaegle~\latinphrase{et~al.}\xspace proposed Perceiver, which leverages the relations between these modalities iteratively. The main concept behind Perceiver is to form an attention bottleneck composed of a set of latent units. This avoids the quadratic scaling of traditional transformers and encourages the model to focus on important features through iterative processing. To compensate for the spatial context, a Fourier transform is used to encode the features \cite{mildenhall2020nerf, kandel2000principles, stanley2007compositional, parmar2018image}. As Figure~\ref{fig:multimodals} (c) shows, the Perceiver is similar to an RNN because of its weight sharing. It comprises two main components: a cross-attention module that maps the input image or input vector to a latent vector, and a transformer tower that maps the latent vector to another latent vector of the same size. The architecture reveals that Perceiver is an attention bottleneck that learns a mapping function from high-dimensional data to a low-dimensional space, which is then passed to the transformer \cite{m_transformers}. The cross-attention module has multi-byte attend layers to enrich the context, which might otherwise be limited by such a mapping. This design reduces the quadratic cost $\mathcal{O}(M^2)$ to $\mathcal{O}(MN)$, where $M$ is the sequence length and $N$ is a hyperparameter that can be chosen smaller than $M$. Additionally, sharing the weights of the iterative attention reduces the parameters to one-tenth and enhances the model's generalization.
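A compressed sketch of the Perceiver-style bottleneck is shown below: $N$ learned latents cross-attend to $M$ inputs, so the cost is $\mathcal{O}(MN)$ rather than $\mathcal{O}(M^2)$. The single-layer latent transformer, the absence of layer normalization and MLPs, and all dimensions are simplifying assumptions.
\begin{verbatim}
import torch
from torch import nn

class LatentBottleneck(nn.Module):
    def __init__(self, dim=256, num_latents=64, heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, inputs, iterations=4):
        # inputs: (B, M, dim), e.g. flattened pixels plus Fourier features
        z = self.latents.unsqueeze(0).expand(inputs.size(0), -1, -1)
        for _ in range(iterations):       # weights are shared across iterations
            z, _ = self.cross_attn(z, inputs, inputs)  # latents query the inputs
            z, _ = self.self_attn(z, z, z)             # latent transformer step
        return z                          # (B, num_latents, dim)
\end{verbatim}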
\vspace{1.5mm} \noindent \textbf{Stacked Cross Attention}: Lee~\latinphrase{et~al.}\xspace~\cite{stacked_cross} proposed a method to attend between an image and a sentence context. Given an image and a sentence, it learns the attention of the words in the sentence for each region of the image and then scores the image regions by comparing each region to the sentence. This way of processing enables stacked cross attention to discover all possible alignments between text and image. Firstly, image-text cross attention is computed in a few steps as follows: a) compute the cosine similarity for all image-text pairs \cite{karpathy2014deep}, followed by $\ell_2$ normalization \cite{wang2017normface}; b) compute the weighted sum of these pairwise attentions, where the image attention is calculated by softmax \cite{chorowski2015attention}; c) compute the final similarity between these pairs using LogSumExp pooling \cite{he2008discriminative, huang2018learning}. The same steps are repeated to get the text-image cross attention, but the attention in the second step uses a text-based softmax. Although stacked attention enriches the semantics of multi-modal tasks by attending to text over image and vice versa, shared semantics might lead to misalignment when similarity is lacking. With slight changes to the main concept, several works in various paradigms, such as question answering and image captioning \cite{cross_attention1, cross_attention2, cross_attention4, cross_attention3, cross_attention5}, have used stacked cross attention. \vspace{1.5mm} \noindent \textbf{Boosted Attention}: While top-down attention mechanisms~\cite{lu2017knowing} fail to focus on regions of interest without prior knowledge, visual stimuli methods~\cite{tavakoli2017paying,sugano2016seeing} alone are not sufficient to generate captions for images. For this reason, as shown in Figure~\ref{fig:multimodals} (d), the authors proposed a boosted attention model that combines both in one approach, focusing on top-down signals from the language and attending to the salient regions from stimuli independently. Firstly, they integrate stimulus attention with the visual features, $I' = W I \circ \log(W_{sal}I+\epsilon)$, where $I$ denotes the features extracted from the backbone, $W_{sal}$ denotes the weight of the layer that produces the stimulus attention, and $W$ is the weight of the layer that outputs the visual features. Boosted attention is achieved using the Hadamard product on $I'$. Their experiments showed that boosted attention improves performance significantly. \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.5\paperwidth]{figures/cros.png} \\ \small (a) Cross Attention Module~\cite{hou2019cross} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/criss-cross.PNG} \\ \small (b) Criss-Cross Attention~\cite{huang2019ccnet} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.48\paperwidth]{figures/perceiver.png} \\ \small (c) Perceiver~\cite{jaegle2021perceiver} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/boosted.png} \\ \small (d) Boosted Attention~\cite{chen2018boosted} \end{tabular} \caption{The core structure of cross-based (multi-modal) attention methods: Perceiver~\cite{jaegle2021perceiver}, Criss-Cross attention~\cite{huang2019ccnet}, Boosted attention \cite{chen2018boosted}, and the cross attention module \cite{hou2019cross}. These methods belong to soft or hard attention, but they use multiple modalities to generate the attention scores. The images are taken from the original papers. Best viewed in color.} \label{fig:multimodals} \end{figure*} \subsection{Logical Attention} Similar to how human beings pay more attention to the crucial features, some methods have been proposed to use recurrence to encode better relationships. These methods rely on RNNs or other types of sequential networks to calculate the attentions. We name them logical methods because they use architectures similar to logic gates. \vspace{1.5mm} \noindent \textbf{Sequential Attention Models}: Inspired by the primate visual system, Zoran~\latinphrase{et~al.}\xspace~\cite{zoran2020towards} proposed a soft, sequential, spatial, top-down attention method (S3TA) to focus more on attended regions of an image \cite{mott2019towards} (as shown in Figure~\ref{fig:logical} (b)). At each step of the sequential process, the model queries the input and refines the total score based on spatial information in a top-down manner. Specifically, the backbone extracts feature channels \cite{resnet, huang2017densely, pham2018efficient} that are split into keys and values. A Fourier transform encodes the spatial information for these two sets to keep the spatial information from vanishing for later use. The main module is a top-down controller, a version of the Long Short-Term Memory (LSTM) \cite{lstm} model, whose previous state is decoded into query vectors. The size of each query vector equals the sum of the channels in the keys and the spatial basis. At each spatial location, the similarity between these vectors is calculated through the inner product, and then softmax yields the attention scores. These attention scores are multiplied by the values, and the summation produces the corresponding answer vector for each query. All these steps take place within the current LSTM step and are then passed to the next step. Note that the input of the attention module is the output of the LSTM state, to focus more on the relevant information, and that the attention map comprises only one channel to preserve the spatial information. Empirical evaluations show that attention is crucial for adversarial robustness, because adversarial perturbations drag the object's attention away to degrade the model's performance. Such an attention model proved its ability to resist strong attacks \cite{madry2018towards} and natural noise \cite{hendrycks2019natural}. Although S3TA provides a novel method to empower attention modules using recurrent networks, it is computationally inefficient.
\vspace{1.5mm} \noindent \textbf{Permutation Invariant Attention}: Initially, Zaheer~\latinphrase{et~al.}\xspace~\cite{deep_sets} suggested handling deep networks in the form of sets rather than ordered lists of elements, for instance, performing pooling over sets of extracted features, \latinphrase{e.g.}\xspace $\rho(\mathrm{pool}(\{\phi(x_1), \phi(x_2), \cdots, \phi(x_n)\}))$, where $\rho$ and $\phi$ are continuous functions and pool can be the $sum$ function. Formally, a function $f$ over sets is permutation equivariant if $f(\pi x) = \pi f(x)$ for any permutation $\pi$. Hence, Lee~\latinphrase{et~al.}\xspace~\cite{permutation_invariant} proposed an attention-based method that processes sets of data. In \cite{deep_sets}, simple functions ($sum$ or $mean$) are proposed to combine the different branches of the network, but they lose important information due to squashing the data. To address these issues, the set transformer~\cite{permutation_invariant} parameterizes the pooling functions and provides richer representations that can encode higher-order interactions. It introduces three main contributions: a) the Set Attention Block (SAB), which is similar to the Multi-head Attention Block (MAB) layer \cite{m_transformers} but without positional encoding and dropout; b) the Induced Set Attention Block (ISAB), which reduces the complexity from $\mathcal{O}(n^2)$ to $\mathcal{O}(mn)$, where $m$ is the number of induced point vectors; and c) Pooling by Multihead Attention (PMA), which applies MAB over a learnable set of seed vectors.
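The following is a minimal ISAB sketch under stated simplifications: the residual connections and row-wise feed-forward layers of the full MAB are omitted, and the dimensions are illustrative.
\begin{verbatim}
import torch
from torch import nn

class ISAB(nn.Module):
    def __init__(self, dim=128, heads=4, num_inducing=16):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(num_inducing, dim))
        self.mab1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mab2 = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, n, dim), an unordered set of n elements
        i = self.inducing.unsqueeze(0).expand(x.size(0), -1, -1)
        h, _ = self.mab1(i, x, x)    # m inducing points summarize the set: O(mn)
        out, _ = self.mab2(x, h, h)  # elements read the summary back: O(mn)
        return out                   # (B, n, dim), permutation equivariant
\end{verbatim}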
\vspace{1.5mm} \noindent \textbf{Show, Attend and Tell}: Xu~\latinphrase{et~al.}\xspace~\cite{xu2015show} introduced two types of attention, hard attention and soft attention, to attend to specific image regions for generating a sequence of captions aligned with the image using LSTM~\cite{zaremba2014recurrent}. Hard attention is applied to the latent variable after assigning a multinoulli distribution to learn the likelihood $\log p(y|a)$, where $a$ is the latent variable. By using the multinoulli distribution and reducing the variance of the estimator, they trained their model by maximizing a variational lower bound, as pointed out in \cite{mnih2014recurrent, ba2015multiple}, provided that the attentions sum to $1$ at every point, \latinphrase{i.e.}\xspace $\sum_i \alpha_{ti} = 1$, where $\alpha$ refers to the attention scores. For soft attention, they used softmax to generate the attention scores, but for $p(s_t|a)$ as in \cite{baldi2014dropout}, where $s_t$ is the extracted feature at this step. The training of soft attention is easily done by standard backpropagation, minimizing the penalized negative log-likelihood $-\log(p(y|a))+\sum_i(1 - \sum_t\alpha_{ti})^2$. This model achieved the benchmark for visual captioning at the time and paved the way for visual attention to progress. \vspace{1.5mm} \noindent \textbf{Kalman Filtering Attention}: Liu~\latinphrase{et~al.}\xspace identified two main limitations that hinder using attention in fields where there is insufficient learning or history~\cite{kalman}: 1) the object's attention for input queries is biased by past training; and 2) conventional attention does not encode hierarchical relationships between similar queries. To address these issues, they proposed Kalman filtering attention. Moreover, they introduced KFAtt-freq to capture the homogeneity of the same queries, correcting the bias towards frequent queries. \vspace{1.5mm} \noindent \textbf{Prophet Attention}: In prophet attention~\cite{prophet}, the authors noticed that conventional attention models are biased and deviate their focus in sequence tasks, especially image captioning~\cite{captioning1, captioning2} and visual grounding~\cite{grounding1, grounding2}. This deviation happens because attention models utilize the previous input of a sequence to attend to the image rather than the output to be generated. As shown in Figure~\ref{fig:logical} (a), the model attends to \enquote{yellow and umbrella} instead of \enquote{umbrella and wearing}. In a self-supervision-like manner, they calculate the attention vectors based on the words generated in the future. Then, they guide the training process using these correct attentions, which can be considered a regularization of the whole model. Simply put, this method sums the attentions of the subsequent words in the same sentence to eliminate the deviated focus towards the inputs. Overall, prophet attention addresses the bias of sequence models towards history while disregarding the future. \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.40\paperwidth]{figures/prophet.jpg} \\ \small (a) Prophet~\cite{prophet} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.40\paperwidth]{figures/s3ta.png} \\ \small (b) S3TA~\cite{zoran2020towards} \end{tabular} \caption{The core structure of logic-based attention methods, such as Prophet attention~\cite{prophet} and S3TA \cite{zoran2020towards}. These methods use sequential networks, such as RNNs, to infer the attention scores. The images are taken from the original papers. Best viewed in color.} \label{fig:logical} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/tell.PNG}% \caption{An example of category-based attention (Guided Attention Inference Networks \cite{li2018tell}).}% \label{fig:class} \end{figure} \subsection{Category-Based Attentions} The above methods generate the attention scores from the features regardless of the presence of a class. In contrast, some methods use class annotations to force the network to attend over specific regions. \vspace{1.5mm} \noindent \textbf{Guided Attention Inference Network}: In \cite{li2018tell}, the authors proposed class-aware attention, namely Guided Attention Inference Networks (GAIN), guided by the labels. Instead of focusing only on the most discriminative parts of the image~\cite{zhou2016learning}, GAIN includes the contextual information in the feature maps. Following~\cite{selvaraju2017grad}, GAIN obtains the attention maps from an inference branch, and they are then used for training. As shown in Figure~\ref{fig:class}, the important features $A^c$ for each class are extracted through 2D convolutions, global average pooling, and ReLU. Following this, the masked features of each class are obtained as $I - (T(A^c)\bigodot I)$, where $\bigodot$ is element-wise multiplication and $T(A^c) = \frac{1}{1+\exp(-w(A^c - \sigma))}$, with $\sigma$ a threshold parameter and $w$ a scaling parameter. Their experiments showed that, without recursive runs, GAIN attains significant improvement over the state of the art.
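GAIN's soft masking step can be sketched in a few lines; the default values of $w$ and $\sigma$ and the tensor shapes are assumptions for illustration.
\begin{verbatim}
import torch

def gain_mask(image, class_attention, w=8.0, sigma=0.5):
    # image: (B, 3, H, W); class_attention A^c: (B, 1, H, W), resized to the image
    t = torch.sigmoid(w * (class_attention - sigma))  # soft threshold T(A^c)
    return image - t * image  # erase the regions currently attended for class c
\end{verbatim}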
\vspace{1.5mm} \noindent \textbf{Curriculum Enhanced Supervised Attention Network}: The majority of attention methods are trained in a weakly supervised manner, and hence the attention scores are still far from the best representations \cite{m_transformers, m_nonlocal}. In \cite{zhu2020curriculum}, the authors introduced a novel idea to generate a Supervised-Attention Network (SAN). Using the convolution layers, they set the number of outputs of the last layer equal to the number of classes; performing attention with global average pooling \cite{lin2013network} then yields a weight for each category. In a similar study, Fukui~\latinphrase{et~al.}\xspace proposed a network composed of three branches to obtain class-specific attention scores: a feature extractor to learn the discriminative features, an attention branch to compute the attention scores based on a response model, and a perception branch to output the attention scores of each class using the first two modules. The main objective was to increase the visual explainability \cite{zhou2016learning} of CNNs, and the method showed significant improvements in various fields such as fine-grained recognition and image classification. \vspace{1.5mm} \noindent \textbf{Attentional Class Feature Network}: Zhang~\latinphrase{et~al.}\xspace~\cite{zhang2019acfnet} introduced ACFNet, a novel idea that exploits contextual information to improve semantic segmentation. Unlike conventional methods that learn spatially based global information \cite{chen2017rethinking}, this contextual information is category-based: the class-center concept is first presented and then employed to aggregate all the corresponding pixels to form a specific class representation. In the training phase, ground-truth labels are used to learn the class centers, while coarse segmentation results are used in the test phase. Finally, the class-attention maps result from the class centers and the coarse segmentation outcomes. The results show significant improvement in semantic segmentation using ACFNet. \section{Attention in Vision} The primary purpose of attention in vision is to imitate the human visual cognitive system and focus on the essential features~\cite{hermann2015teaching} in the input image. We categorize attention methods based on the main function used to generate the attention scores, such as softmax or sigmoid. \subsection{Soft (Deterministic) Attention} This section reviews soft-attention methods such as channel attention, spatial attention, and self-attention. In channel attention, the scores are calculated channel-wise, because each feature map (channel) attends to specific parts of the input. In spatial attention, the main idea is to attend to the critical regions in the image: attending over regions of interest facilitates object detection, semantic segmentation, and person re-identification. In contrast to channel attention, spatial attention attends to the important parts of the spatial map (bounded by width and height). It can be used independently or as a complementary mechanism to channel attention. On the other hand, self-attention is proposed to encode higher-order interactions and contextual information by extracting the relationships between input sequence tokens. It differs from channel attention in how it generates the attention scores, as it mainly calculates the similarity between two maps (K, Q) of the same input, whereas channel attention generates the scores from a single map. Nevertheless, self-attention and channel attention both operate on channels. Soft attention methods calculate the attention scores as the weighted sum of all the input entities~\cite{luong2015effective} and mainly use soft functions such as softmax and sigmoid. Since these methods are differentiable, they can be trained through back-propagation techniques. However, they suffer from other issues such as high computational complexity and assigning weights to non-attended objects.
\subsubsection{Channel Attention} \label{sec:channel} \vspace{1.5mm} \noindent \textbf{Squeeze \& Excitation Attention}: The Squeeze-and-Excitation (SE) block~\cite{hu2018squeeze}, shown in Figure~\ref{fig:channels}(a), is a unit designed to perform dynamic channel-wise feature attention. SE attention takes the output of a convolution block and converts each channel to a single value via global average pooling; this process is called the \enquote{squeeze}. The channel dimension is reduced after passing through a fully connected layer followed by ReLU for added non-linearity. The features are then passed through a second fully connected layer, followed by a sigmoid function to achieve a smooth gating operation. The convolutional block's feature maps are weighted based on the output of this side network, called the \enquote{excitation}. The process can be summarized as \begin{equation} f_s = \sigma( FC (ReLU( FC(f_g)) )), \label{eq:SE_att} \end{equation} where $FC$ denotes a fully connected layer, $f_g$ is the globally average-pooled feature, and $\sigma$ is the sigmoid operation. The main intuition is to choose the best representation of each channel in order to generate the attention scores.
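Equation~\ref{eq:SE_att} translates directly into a few lines of PyTorch; the reduction ratio and module layout below are common conventions rather than the exact original code.
\begin{verbatim}
import torch
from torch import nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        # x: (B, C, H, W)
        s = x.mean(dim=(2, 3))    # squeeze: global average pooling -> (B, C)
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))  # excitation
        return x * s.view(x.size(0), -1, 1, 1)  # channel-wise reweighting
\end{verbatim}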
\vspace{1.5mm} \noindent \textbf{Efficient Channel Attention (ECA)~\cite{wang2020eca}}: ECA is based on the squeeze \& excitation network~\cite{hu2018squeeze} and aims to increase efficiency and decrease model complexity by removing the dimensionality reduction. ECA (see Figure~\ref{fig:channels}(g)) achieves local cross-channel interaction by analyzing each channel and its $k$ neighbors, following channel-wise global average pooling but with no dimensionality reduction. ECA accomplishes efficient processing via fast 1D convolutions. The size $k$ represents the number of neighbors that can participate in the attention prediction of one channel, \latinphrase{i.e.}\xspace the coverage of local cross-channel interaction. \vspace{1.5mm} \noindent \textbf{Split-Attention Networks}: ResNest~\cite{zhang2020resnest}, a variant of ResNet~\cite{resnet}, uses split-attention blocks, as shown in Figure~\ref{fig:channels}(h). Attention is obtained by summing the inputs from the previous modules and applying global pooling, followed by a composite function, \latinphrase{i.e.}\xspace convolutional layer--batch normalization--ReLU activation. The output is again passed through convolutional layers. Afterwards, softmax is applied to normalize the values, which are then multiplied with the corresponding inputs. Finally, all the features are summed together. This mechanism is similar to squeeze \& excitation attention~\cite{hu2018squeeze}; ResNest is a special type of squeeze \& excitation that squeezes the channels using average pooling and sums the split channels. \vspace{1.5mm} \noindent \textbf{Channel Attention in CBAM}: The Convolutional Block Attention Module (CBAM)~\cite{woo2018cbam} employs channel attention and exploits the inter-channel feature relationships, as each feature map channel is considered a feature detector focusing on the \enquote{what} of the input image. The input feature map's spatial dimensions are squeezed for computing the channel attention, using both average-pooling and max-pooling to obtain two descriptors. These descriptors are forwarded to a shared multi-layer perceptron (MLP) with one hidden layer to generate the attention maps. The MLP outputs for the two descriptors are summed element-wise and then passed through a sigmoid function, as shown in Figure~\ref{fig:channels}(b). In summary, the channel attention is computed as \begin{equation} f_{ch} = \sigma( MLP(MaxPool(f)) + MLP(AvgPool(f))), \label{eq:ch-atten} \end{equation} where $\sigma$ denotes the sigmoid function and $f$ represents the input features. A ReLU activation is employed in the MLP after the first fully connected layer. Channel attention in CBAM reduces to Squeeze-and-Excitation (SE) attention~\cite{se} if only average pooling is used.
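Equation~\ref{eq:ch-atten} can be sketched as below; the shared two-layer MLP and the reduction ratio follow common practice and are assumptions rather than the original code.
\begin{verbatim}
import torch
from torch import nn

class CBAMChannel(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, f):
        # f: (B, C, H, W)
        avg = self.mlp(f.mean(dim=(2, 3)))   # MLP(AvgPool(f))
        mx = self.mlp(f.amax(dim=(2, 3)))    # MLP(MaxPool(f)), shared weights
        scores = torch.sigmoid(avg + mx)     # (B, C)
        return f * scores.view(f.size(0), -1, 1, 1)
\end{verbatim}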
\vspace{1.5mm} \noindent \textbf{Second-order Attention Network}: For single-image super-resolution, the authors of~\cite{Dai_2019_CVPR} presented a second-order channel attention module, abbreviated SOCA, to learn feature interdependencies via second-order feature statistics. A covariance matrix ($\Sigma$) is first computed from the feature maps of the previous network layers and normalized to obtain discriminative representations. The symmetric positive semi-definite covariance matrix is decomposed as $\Sigma = U\Lambda U^T$, where $U$ is orthogonal and $\Lambda$ is the diagonal matrix of non-increasing eigenvalues. The power of the eigenvalues, $\Sigma^\alpha = U\Lambda^\alpha U^T$, helps achieve the attention mechanism: if $\alpha < 1$, the eigenvalues larger than 1.0 nonlinearly shrink while the others are stretched. The authors chose $\alpha = \frac{1}{2}$ based on previous work~\cite{li2017second}. The subsequent attention mechanism is similar to SE~\cite{hu2018squeeze}, as shown in Figure~\ref{fig:channels}(c), but instead of first-order statistics (\latinphrase{i.e.}\xspace global average pooling), the authors furnish second-order statistics (\latinphrase{i.e.}\xspace global covariance pooling). \vspace{1.5mm} \noindent \textbf{High-Order Attention}: To encode global information and contextual representations, Ding~\latinphrase{et~al.}\xspace \cite{ding2020high} proposed High-order Attention (HA) with adaptive receptive fields and dynamic weights. HA mainly constructs a feature map for each pixel, including its relationships to other pixels. HA is needed to address the issue of fixed-shape receptive fields, which cause false predictions in the case of similarly shaped objects. Specifically, after calculating the attention maps for each pixel, graph transduction is used to form the final feature map. This feature representation is used to update each pixel position with the weighted sum of contextual information. The high-order attention maps are calculated using the Hadamard product \cite{horn1990hadamard, kim2016hadamard}. HA is classified as channel attention because it generates the attention scores from channels, as in SE \cite{se}. \vspace{1.5mm} \noindent \textbf{Harmonious Attention}: Harmonious attention \cite{harmonious} proposes a joint attention module of soft pixel attention and hard regional attention. The main idea is to tackle the limitation of previous attention modules in person re-identification by learning attention selection and feature representation jointly, hence solving the misalignment calibration issue caused by constrained attention mechanisms \cite{harmony1, harmony2, harmony3, harmony4}. Specifically, harmonious attention learns two types of soft attention (spatial and channel) in one branch and hard attention in the other. Moreover, it proposes cross-interaction attention, which harmonizes these two attention types, as shown in Figure~\ref{fig:channels}(i). \input{sections/table} \vspace{1.5mm} \noindent \textbf{Auto Learning Attention}: Ma~\latinphrase{et~al.}\xspace \cite{NEURIPS2020_103303dd} introduced a novel idea for designing attention automatically. The module, named Higher-Order Group Attention (HOGA), takes the form of a Directed Acyclic Graph (DAG) \cite{pham2018efficient, dag1, dag2, dag3}, where each node represents a group and each edge represents a heterogeneous attention operation. The nodes are connected sequentially to represent hybrids of attention operations; thus, these connections can be represented as K-order attention modules, where K is the number of attention operations. DARTS \cite{liu2018darts} is customized to facilitate the search process efficiently. This auto-learned module can be integrated into legacy architectures and performs better than manually designed ones. However, the core idea of the attention modules remains the same as in previous architectures, \latinphrase{i.e.}\xspace SE \cite{se}, CBAM \cite{woo2018cbam}, splat \cite{zhang2020resnest}, and mixed \cite{chen2019mixed}. \vspace{1.5mm} \noindent \textbf{Double Attention Networks}: Chen~\latinphrase{et~al.}\xspace~\cite{chen20182} proposed Double Attention Networks (A2-Nets), which attend over the input image in two steps. The first step gathers the required features using bilinear pooling to encode the second-order relationships between entities, and the second step distributes the features over the various locations adaptively. In this architecture, the second-order statistics of the pooled features, which are mostly lost with functions such as the average pooling of SE~\cite{se}, are captured first by bilinear pooling. The attention scores are then calculated not from the whole image, as in \cite{m_nonlocal}, but from a compact bag, hence enriching the objects with the required context only. The first step, \latinphrase{i.e.}\xspace feature gathering, uses the outer product $\sum_{\forall i} a_i b_i^T$, and then softmax is used for attending to the discriminative features. The second step, \latinphrase{i.e.}\xspace distribution, complements each location with the required features, whose attention weights sum to $1$. The complete design of A2-Nets is shown in Figure~\ref{fig:channels}(d). Experimental comparisons demonstrated that A2-Nets improve performance over SE and non-local networks while being more efficient in terms of memory and time.
\vspace{1.5mm} \noindent \textbf{Dual Attention Network}: Fu~\latinphrase{et~al.}\xspace~\cite{fu2019dual} presented a dual attention network for scene segmentation, composed of position attention and channel attention working in parallel. The position attention aims to encode the contextual features into local ones. The attention process is straightforward: the input features $f_A$ are passed through three convolutional layers to generate three feature maps ($f_B$, $f_C$, and $f_D$), which are reshaped. Matrix multiplication is performed between $f_B$ and the transpose of $f_C$, followed by softmax, to obtain the spatial attention map. Again, matrix multiplication is performed between the generated $f_D$ features and the spatial attention map. Finally, the output is multiplied by a scalar and summed element-wise with the input features $f_A$, as shown in Figure~\ref{fig:channels}(e). Although channel attention involves similar steps to position attention, it is different in that the features are used directly without passing through convolutional layers. The input features $f_A$ are reshaped, transposed, multiplied (\latinphrase{i.e.}\xspace $f_A \times f_A'$), and then passed through a softmax layer to obtain the channel attention map. The input features are then multiplied by the channel attention map, followed by an element-wise summation, to give the final output, as shown in Figure~\ref{fig:channels}(f). \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.2\paperwidth, height=.05\paperheight]{figures/SE_att.png} \\ \small (a) SENet \cite{hu2018squeeze} \tabularnewline \includegraphics[width=.22\paperwidth, height=.05\paperheight]{figures/Channel-Attention.png} \\ \small (b) CBAM \cite{woo2018cbam} \tabularnewline \includegraphics[width=.22\paperwidth, height=.08\paperheight]{figures/soca.png} \\ \small (c) SOCA \cite{Dai_2019_CVPR} \tabularnewline \includegraphics[width=.22\paperwidth, height=.08\paperheight]{figures/double_attention.PNG} \\ \small (d) A$^2-$Net \cite{chen20182} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.27\paperwidth, height=.09\paperheight]{figures/dan_position.PNG} \\ \small (e) DAN Positional\cite{fu2019dual} \tabularnewline \includegraphics[width=.27\paperwidth, height=.09\paperheight]{figures/dan_channel.PNG} \\ \small (f) DAN Channel \cite{fu2019dual} \tabularnewline \includegraphics[width=.25\paperwidth, height=.09\paperheight]{figures/ECA.png} \\ \small (g) ECA-Net \cite{wang2020eca} \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.2\paperwidth, height=.16\paperheight]{figures/ResNest_att.PNG} \\ \small (h) RESNest \cite{zhang2020resnest} \tabularnewline \includegraphics[width=.2\paperwidth, height=.14\paperheight]{figures/harmonious.png} \\ \small (i) Harmonious \cite{harmonious} \end{tabular} \caption{Core structures of the channel-based attention methods, with different ways to generate the attention scores, including squeeze and excitation \cite{se}, splitting and squeezing \cite{zhang2020resnest}, calculating second-order statistics \cite{Dai_2019_CVPR}, and efficient squeeze and excitation \cite{wang2020eca}. Images are taken from the original papers and are best viewed in color.} \label{fig:channels} \end{figure*}
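The position-attention branch described above can be sketched as follows; the channel reduction for $f_B$ and $f_C$ and the learnable residual scalar follow the description, while the exact shapes are illustrative.
\begin{verbatim}
import torch
from torch import nn
import torch.nn.functional as F

class PositionAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv_b = nn.Conv2d(channels, channels // 8, 1)
        self.conv_c = nn.Conv2d(channels, channels // 8, 1)
        self.conv_d = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable scalar

    def forward(self, f_a):
        # f_a: (B, C, H, W)
        b, c, h, w = f_a.shape
        fb = self.conv_b(f_a).flatten(2).transpose(1, 2)  # (B, HW, C/8)
        fc = self.conv_c(f_a).flatten(2)                  # (B, C/8, HW)
        attn = F.softmax(fb @ fc, dim=-1)                 # (B, HW, HW)
        fd = self.conv_d(f_a).flatten(2)                  # (B, C, HW)
        out = (fd @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + f_a  # scaled residual with the input
\end{verbatim}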
\vspace{1.5mm} \noindent \textbf{Frequency Channel Attention}: Channel attention requires global average pooling as a pre-processing step. Qin~\latinphrase{et~al.}\xspace~\cite{qin2020fcanet} argued that the global average pooling operation can be replaced with frequency components. Frequency attention views the discrete cosine transform as a weighted sum of the inputs with cosine parts. As global average pooling is a particular case of frequency-domain feature decomposition, the authors use various frequency components of the 2D discrete cosine transform, including the zero-frequency component, \latinphrase{i.e.}\xspace global average pooling. \subsubsection{Spatial Attention} \label{sec:spatial} Different from channel attention, which mainly generates channel-wise attention scores, spatial attention focuses on generating attention scores from spatial patches of the feature maps rather than from the channels. However, the sequence of operations to generate the attentions is similar. \\ \noindent \textbf{Spatial Attention in CBAM} uses the inter-spatial feature relationships to complement the channel attention~\cite{woo2018cbam}. The spatial attention focuses on an informative part of the input and is computed by applying average pooling and max pooling channel-wise, followed by concatenating both to obtain a single feature descriptor. A convolution layer is then applied to the concatenated feature descriptor to generate a 2D spatial attention map that encodes where to emphasize or suppress. The overall process is shown in Figure~\ref{fig:spatial}(a) and computed as \begin{equation} f_{sp} = \sigma( Conv_{7\times 7}([MaxPool(f); AvgPool(f)])), \label{eq:sp-atten} \end{equation} where $Conv_{7\times 7}$ denotes a convolution operation with a 7 $\times$ 7 kernel size and $\sigma$ represents the sigmoid function.
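Equation~\ref{eq:sp-atten} can be written compactly as below; pooling along the channel axis and the $7\times 7$ convolution follow the text, while the padding is an assumption to keep the spatial size.
\begin{verbatim}
import torch
from torch import nn

class CBAMSpatial(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, f):
        # f: (B, C, H, W)
        avg = f.mean(dim=1, keepdim=True)  # AvgPool over channels: (B, 1, H, W)
        mx = f.amax(dim=1, keepdim=True)   # MaxPool over channels: (B, 1, H, W)
        scores = torch.sigmoid(self.conv(torch.cat([mx, avg], dim=1)))
        return f * scores                  # broadcast over the channel axis
\end{verbatim}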
\vspace{1.5mm} \noindent \textbf{Co-attention \& Co-excitation}: Hsieh~\latinphrase{et~al.}\xspace~\cite{NEURIPS2019_92af93f7} proposed co-attention and co-excitation to detect all the instances that belong to the same target in one-shot detection. The main idea is to enrich the extracted feature representation using non-local networks, which encode long-range dependencies and second-order interactions \cite{m_nonlocal}. Co-excitation is based on the squeeze-and-excite network \cite{se}, as shown in Figure~\ref{fig:spatial}(c). While squeeze uses global average pooling \cite{lin2013network} to reweight the spatial positions, co-excite serves as a bridge between the features of the query and the target. Encoding high-contextual representations using co-attention and co-excitation improves the one-shot detector performance, achieving state-of-the-art results. \vspace{1.5mm} \noindent \textbf{Spatial Pyramid Attention Network}, abbreviated as SPAN~\cite{hu2020span}, was proposed for localizing multiple types of image manipulations. It is composed of three main blocks, \latinphrase{i.e.}\xspace a feature extraction (head) module, a pyramid spatial attention module, and a decision (tail) module. The head module employs a wider and deeper VGG network as the backbone, while Bayer and SRM layers extract features from visual artifacts and noise patterns. The spatial relationships of the pixels are achieved through five local self-attention blocks applied recursively; to preserve the details, the input of each self-attention block is added to its output. These features are then fed into the final tail module of 2D convolutional blocks to generate the output mask after employing a sigmoid activation. \vspace{1.5mm} \noindent \textbf{Spatial-Spectral Self-Attention}: Spatial-spectral self-attention is composed of two attention modules, namely spatial attention and spectral attention, both utilizing self-attention. \begin{enumerate} \item {Spatial Attention:} To model the non-local region information, Meng~\latinphrase{et~al.}\xspace~\cite{meng2020end} utilize a 3$\times$3 kernel to fuse the input features, indicating the region-based correlation, followed by a convolutional network mapping the fused features into Q \& K. The number of kernels indicates the number of heads, and the kernel size denotes the dimension. Moreover, the dimension-specified features from Q \& K build the related attention maps, which then modulate the corresponding dimension in a sequence to achieve the order-independent property. Finally, to complete the spatial correlation modeling, the features are forwarded to a deconvolution layer. \item{Spectral Attention:} First, the spectral channel samples are convolved with one kernel and flattened into a single dimension, set as the feature vector for that channel. The input feature is converted to Q \& K, building attention maps for the spectral axis. Adjacent channels have a higher correlation due to the image patterns at the same location, which is denoted via a spectral smoothness on the attention maps. The similarity is indicated by a normalized cosine distance as a spectral embedding, where each similarity score is scaled and summed with the coefficients in the attention maps; these then modulate the \enquote{Value} in self-attention, inducing a spectral smoothness constraint. \end{enumerate} \vspace{1.5mm} \noindent \textbf{Pixel-wise Contextual Attention} (PiCANet)~\cite{liu2018picanet} aims to learn accurate saliency detection. PiCANet generates an attention map at each pixel over its context region and constructs an accompanying contextual feature to enhance the feature representability at both local and global levels. To generate global attention, each pixel needs to \enquote{see} the whole image via ReNet~\cite{visin2015renet}, with four recurrent neural networks sweeping horizontally and vertically. The contexts from the four directions, obtained using biLSTM, are blended, propagating the information of each pixel to all other pixels. Next, a convolutional layer transforms the feature maps to different channels, further normalized by a softmax function and used to weight the feature maps. Local attention is performed on a local neighborhood, forming a local feature cube where each pixel needs to \enquote{see} every other pixel in the local area, using a few convolutional layers having the same receptive field as the patch. The features are then transformed channel-wise and normalized using softmax, and a weighted sum gives the final attention. \vspace{1.5mm} \noindent \textbf{Pyramid Feature Attention} extracts features from different levels of VGG~\cite{zhao2019pyramid}. The low-level features extracted from the lower layers of VGG are provided to the spatial attention mechanism~\cite{woo2018cbam}, and the high-level features obtained from the higher layers are supplied to a channel attention mechanism~\cite{woo2018cbam}. The term pyramid feature attention originates from the VGG features being obtained from different layers. \vspace{1.5mm} \noindent \textbf{Spatial Attention Pyramid}: For unsupervised domain adaptation, Li~\latinphrase{et~al.}\xspace~\cite{li2020spatial} introduced a spatial attention pyramid that takes features from multiple average pooling layers with various sizes operating on the feature maps. These features are forwarded to spatial attention followed by channel-wise attention. All the features after attention are concatenated to form a single semantic vector.
\vspace{1.5mm} \noindent \textbf{Region Attention Network} (RANet)~\cite{shen2020ranet} was proposed for semantic segmentation. It consists of novel network components, the Region Construction Block (RCB) and the Region Interaction Block (RIB), for constructing the contextual representations, as illustrated in Figure~\ref{fig:spatial}(b). The RCB analyzes the boundary score and the semantic score maps jointly to compute the attention region score for each image-pixel pair. A high attention score indicates that the two pixels are from the same object region; in this way, the image is divided into various object regions. Subsequently, the RIB takes the region maps and selects representative pixels in the different regions, where each representative pixel receives the context from other pixels to effectively represent the object region's local content. Furthermore, capturing the spatial and category relationships between various objects by communicating the representative pixels across the different regions yields the global contextual representation, which augments the pixels and eventually forms the contextual feature map for segmentation. \begin{figure*} \centering \begin{tabular}[b]{c} \includegraphics[width=.3\paperwidth]{figures/Spatial-attention.png}\\ \small (a) Spatial Attention \cite{woo2018cbam} \tabularnewline \includegraphics[width=.29\paperwidth]{figures/RANet.PNG} \small \\(b) RANet~\cite{shen2020ranet} \tabularnewline \end{tabular} \begin{tabular}[b]{c} \includegraphics[width=.32\paperwidth]{figures/co-excite.png} \small \\(c) Co-excite \cite{NEURIPS2019_92af93f7} \end{tabular} \caption{The core structure of spatial-based attention methods: spatial attention in CBAM~\cite{woo2018cbam}, RANet~\cite{shen2020ranet}, and Co-excite \cite{NEURIPS2019_92af93f7}. These methods focus on attending to the most important parts of the spatial map. The images are taken from the original papers. Best viewed in color.} \label{fig:spatial} \end{figure*} \subsubsection{Self-attention} \label{sec:self} Self-attention, also known as \emph{intra-attention}, is an attention mechanism that encodes the relationships between all the input entities. It is a process that enables input sequences to interact with each other and aggregate the attention scores, which illustrate how similar they are. The main idea is to replicate the feature maps into three copies and then measure the similarity between them. Apart from channel-wise and spatial-wise attention, which use the physical feature maps directly, self-attention replicates the feature copies to measure long-range dependencies; nonetheless, self-attention methods use channels to calculate the attention scores. Cheng~\latinphrase{et~al.}\xspace extracted the correlations between the words of a single sentence using Long Short-Term Memory (LSTM) \cite{m_self_attention}. An attention vector is produced from each hidden state during the recurrent iteration, which attends to all the responses in the sequence for this position. In \cite{m_self_attention_parikah}, a decomposable solution was proposed to divide the input into sub-problems, which improved the processing efficiency compared to \cite{m_self_attention}. The attention vector is calculated as an alignment factor to the content (bag-of-words). Although these methods introduced the idea of self-attention, they are very expensive in terms of resources and do not consider contextual information. Also, RNN models process the input sequentially; hence, it is difficult to parallelize them or process large-scale schemas efficiently.
\vspace{1.5mm} \noindent \textbf{Transformers}: Vaswani~\latinphrase{et~al.}\xspace~\cite{m_transformers} proposed a new method, called the transformer, based on the self-attention concept without convolution or recurrent modules. As shown in Figure~\ref{fig:self_attentions} (f), it is mainly composed of encoder-decoder layers, where the encoder comprises a self-attention module followed by a position-wise feed-forward layer, and the decoder is the same as the encoder except that it has an encoder-decoder attention layer in between. Positional encoding, represented by sine waves, is added to the inputs before the linear layers to incorporate order information. This positional encoding serves as a generalization term to help recognize unseen sequences and encodes relative positions rather than absolute representations. Algorithm~\ref{algorithm:self-attention} shows the detailed steps of calculating self-attention (multi-head attention) with transformers. Although transformers have achieved much progress in text-based models, they lack the ability to encode the full context of a sentence, because each word's attention is calculated only over the left-side sequence. To address this issue, Bidirectional Encoder Representations from Transformers (BERT) learns the contextual information by encoding both sides of the sentence jointly \cite{m_BERT}. \begin{algorithm}[t] \SetAlgoLined \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input {set of sequences $(x_1, x_2, ..., x_n)$ of an entity $\mathbf{X} \in \mathbf{R}$} \Output{attention scores of $\mathbf{X}$ sequences.} { Initialize weights: Key ($\mathbf{W_K}$), Query ($\mathbf{W_Q}$), Value ($\mathbf{W_V}$) for each input sequence. \\ Derive Key, Query, and Value for each input sequence from its corresponding weights, such that $\mathbf{Q = XW_Q}$, $\mathbf{K = XW_K}$, $\mathbf{V = XW_V}$, respectively.\\ Compute the attention scores by calculating the dot product between the query and the key.\\ Compute the scaled dot-product attention for these scores and the Values $\mathbf{V}$, \[ \mathrm{softmax} \left( \frac{\mathbf{QK^T}}{\sqrt{d_k}}\right)\mathbf{V}.\]\\ Repeat steps 2 to 4 for all the heads. \\ } \caption{The main steps of generating self-attention by transformers (multi-head attention)} \label{algorithm:self-attention} \end{algorithm}
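Algorithm~\ref{algorithm:self-attention} corresponds to the following sketch, where the per-head projections are folded into three $(d, d)$ weight matrices; the final output projection of the transformer is omitted.
\begin{verbatim}
import torch
import torch.nn.functional as F

def multi_head_attention(x, w_q, w_k, w_v, num_heads):
    # x: (B, n, d); w_q, w_k, w_v: (d, d)
    b, n, d = x.shape
    dk = d // num_heads
    def split(t):                    # (B, n, d) -> (B, heads, n, dk)
        return t.view(b, n, num_heads, dk).transpose(1, 2)
    q, k, v = split(x @ w_q), split(x @ w_k), split(x @ w_v)
    scores = F.softmax(q @ k.transpose(-2, -1) / dk ** 0.5, dim=-1)
    out = scores @ v                 # (B, heads, n, dk)
    return out.transpose(1, 2).reshape(b, n, d)  # concatenate the heads
\end{verbatim}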
\vspace{1.5mm} \noindent \textbf{Standalone Self-Attention}: As stated above, convolutional features do not consider the global information due to their locally biased receptive fields. Instead of augmenting the convolutional features with attentional ones, Ramachandran~\latinphrase{et~al.}\xspace~\cite{m_standalone} proposed a fully attentional network that replaces spatial convolutions with self-attention modules. A convolutional stem (the first few convolutions) is used to capture spatial information. They designed a small kernel (\latinphrase{e.g.}\xspace $n \times n$) instead of processing the whole image simultaneously. This design yields a computationally efficient model that enables processing images at their original sizes without downsampling, with the computational complexity reduced to $\mathcal{O}(hwn^2)$, where $h$ and $w$ denote the height and width, respectively. A patch is extracted as a query around each local patch, while the image itself serves as the Value and the Key. Calculating the attention maps follows the same steps as in Algorithm~\ref{algorithm:self-attention}. Although standalone self-attention shows competitive results compared to convolutional models, it suffers in encoding positional information. \vspace{1.5mm} \noindent \textbf{Clustered Attention}: To address the computational inefficiency of transformers, Vyas~\latinphrase{et~al.}\xspace~\cite{m_clsutered} proposed a clustered attention mechanism that relies on the idea that correlated queries follow the same distribution around Euclidean centers. Based on this idea, they use the K-means algorithm with fixed centers to group similar queries together. Instead of calculating the attention for all the queries, it is calculated only for the clusters' centers. Therefore, the total complexity is reduced to a linear form $\mathcal{O}(qc)$, where $q$ is the number of queries and $c$ is the number of clusters. \vspace{1.5mm} \noindent \textbf{Slot Attention}: Locatello~\latinphrase{et~al.}\xspace~\cite{slot} proposed slot attention, an attention mechanism that learns the objects' representations in a scene. In general, it learns to disentangle the image into a series of slots; as shown in Figure~\ref{fig:self_attentions}(b), each slot represents a single object. The slot attention module is applied to a learned representation $h \in \mathbb{R}^{W \times H \times D}$, where $H$ is the height, $W$ is the width, and $D$ is the representation size. It has two main steps: learning $n$ slots using an iterative attention mechanism, and representing individual objects (slots). Inside each iteration, two operations are implemented: 1) slot competition using softmax, followed by normalization according to the slot dimension, using \begin{equation} a = \mathrm{softmax} \bigg( \frac{1}{\sqrt{D}}\, k(h) \cdot q(c)^T\bigg); \end{equation} 2) an aggregation process for the attended representations with a weighted mean, \begin{equation} r = \mathrm{WeightedMean} \bigg(a, v(h) \bigg), \end{equation} where $k, q, v$ are learnable projections as shown in \cite{m_transformers}. Then, a feed-forward layer is used to predict the slot representations, $s=fc(r)$. Slot attention is based on transformer-like attention \cite{m_transformers} on top of CNN feature extractors. Given an image $\mathbb{I}$, slot attention parses the scene into a set of slots, each one referring to an object $(z, x, m)$, where $z$ is the object feature, $x$ is the input image, and $m$ is the mask. In the decoders, convolutional networks are used to learn the slot representations and object masks. The training process is guided by an $\ell_2$ reconstruction loss \begin{equation} \mathbb{L} = \bigg\lVert \bigg( \sum_{k=1}^K m_k x_k\bigg) - \mathbb{I} \bigg\rVert_2^2. \end{equation} Following the slot-attention module, Li~\latinphrase{et~al.}\xspace developed an explainable classifier based on slot attention \cite{scouter_slot}. This method aims to find the positive supports and the negative ones for a class $l$; in this way, the classifier can be explained rather than being a complete black box. The primary entity of this work is xSlot, a variant of slot attention~\cite{slot}, which is related to a category and gives the confidence for the inclusion of this category in the input image.
\vspace{1.5mm} \noindent \textbf{Efficient Attention using Asymmetric Clustering (SMYRF)}: Daras~\latinphrase{et~al.}\xspace~\cite{SMYRF} proposed symmetric Locality Sensitive Hashing (LSH) clustering in a novel way to reduce the size of attention maps and hence develop efficient models. They observed that attention weights are sparse and that the attention matrix is low-rank; as a result, the attention values of pre-trained models exhibit decay. In SMYRF, this issue is addressed by approximating attention maps through balanced clustering, produced by asymmetric transformations and an adaptive scheme. SMYRF is a drop-in replacement for normal dense attention in pre-trained models: even without retraining the models after integrating this module, SMYRF shows significant effectiveness in memory, performance, and speed. Therefore, the feature maps can be scaled up to include contextual information, and in some models the memory usage is reduced by $50\%$. Although SMYRF enhances the memory usage of self-attention models, its improvement over other efficient attention models (see Figure~\ref{fig:self_attentions} (a)) is marginal. \vspace{1.5mm} \noindent \textbf{Random Feature Attention}: Transformers have a major shortcoming with regard to time and memory complexity, which hinders scaling attention up and thus limits higher-order interactions. Peng~\latinphrase{et~al.}\xspace~\cite{peng2021random} proposed reducing the space and time complexity of transformers from quadratic to linear by simply enhancing the softmax approximation with random functions. Random Feature Attention (RFA) \cite{rawat2019sampled} uses a variant of softmax that is sampled from simple distribution-based Fourier random features \cite{rahimi2007random, yang2014quasi}. Using the kernel trick $\exp(x \cdot y) \approx \phi(x) \cdot \phi(y)$ of \cite{hofmann2008kernel}, the softmax approximation is reduced to a linear form, as shown in Figure~\ref{fig:self_attentions} (c). Moreover, the similarity of RFA connections to recurrent networks helps in developing a gating mechanism to learn recency bias \cite{lstm, cho2014learning, schmidhuber1992learning}. RFA can easily be integrated into backbones to replace the normal softmax, with only a $0.1\%$ increase in the number of parameters. Plugging RFA into a transformer shows results comparable to softmax, while gated RFA outperforms it in language models. RFA also executes 2$\times$ faster than a conventional transformer. \vspace{1.5mm} \noindent \textbf{Non-local Networks}: Recent breakthroughs in the field of artificial intelligence are mostly based on the success of Convolutional Neural Networks (CNNs) \cite{m_deep_learning, resnet}. In particular, they can be processed in parallel and provide inductive biases for the extracted features. However, CNNs fail to learn the context of the whole image due to their locally biased receptive fields; therefore, long-range dependencies are disregarded in CNNs. In \cite{m_nonlocal}, Wang~\latinphrase{et~al.}\xspace proposed non-local networks to alleviate the bias of CNNs towards local information and fuse global information into the network. A non-local block augments each pixel of the convolutional features with contextual information, the weighted sum of the whole feature map. In this manner, the correlated patches in an image are encoded in a long-range fashion. Non-local networks showed significant improvement in long-range interaction tasks such as video classification \cite{m_kinect}, as well as in low-level image processing \cite{m_non_denoise, non_local_advers}. Non-local networks model attention in a graphical fashion \cite{m_graph_atten}. However, stacking multiple non-local modules in the same stage makes the training process unstable and ill-posed \cite{m_non_local_diffuse}. In~\cite{liu2020learning}, Liu~\latinphrase{et~al.}\xspace use non-local networks to form self-mutual attention between two modalities (RGB and depth) to learn global contextual information. The idea is straightforward, \latinphrase{i.e.}\xspace to sum the corresponding features before the softmax normalization, such that $\mathrm{softmax}(f^r (\mathbb{X}^r)+\alpha^d \bigodot f^d (\mathbb{X}^d))$ for RGB attention, and vice versa.
\vspace{1.5mm} \noindent \textbf{Non-Local Sparse Attention (NLSA)}: Mei~\latinphrase{et~al.}\xspace \cite{mei2021image} proposed a sparse non-local network that combines the benefits of non-local modules, which encode long-range dependencies, with the robustness of sparse representations. The deep features are split into different groups (buckets) with high inner correlations. Locality Sensitive Hashing (LSH) \cite{gionis1999similarity} is used to find the features similar to each bucket, and the non-local block then processes each pixel within its bucket together with the similar ones. NLSA reduces the complexity from quadratic to asymptotically linear, and it uses the power of sparse representations to focus on the informative regions only. \vspace{1.5mm} \noindent \textbf{X-Linear Attention}: Bilinear pooling is a calculation process that computes the outer product between two entities rather than the inner product \cite{bilinear4, bilinear3, bilinear2, bilinear1}; it has shown the ability to encode higher-order interactions and thus encourages more discriminability in the models. Moreover, it yields compact models with the required details, even though it compresses the representations \cite{bilinear2}. In particular, bilinear applications have shown significant improvements in fine-grained visual recognition \cite{fine3, fine2, fine1} and visual question answering \cite{yu2017multi}. As Figure~\ref{fig:self_attentions}(d) depicts, a low-rank bilinear pooling is performed between the queries and keys, and hence the $2^{nd}$-order interactions between keys and queries are encoded. Through this query-key interaction, spatial-wise and channel-wise attention are aggregated with the values; the channel-wise attention is the same as in squeeze-excitation attention \cite{se}. The final output of the X-Linear module is aggregated with the low-rank bilinear pooling of the keys and values \cite{pan2020x}. The authors claim that encoding higher-order interactions requires only repeating the X-Linear module accordingly (\latinphrase{e.g.}\xspace three iterative X-Linear blocks for $4^{th}$-order interactions), while modeling infinity-order interactions can be achieved with the Exponential Linear Unit \cite{barron2017continuously}. The X-Linear attention module proposes a mechanism different from the transformer \cite{m_transformers}: it is able to encode the relations between the input tokens without positional encoding, with only linear complexity, as opposed to the quadratic complexity of the transformer. \vspace{1.5mm} \noindent \textbf{Axial-Attention}: Wang~\latinphrase{et~al.}\xspace~\cite{axial_attention} proposed axial attention to encode global information and long-range context for the subject. Although conventional self-attention methods use fully connected layers to encode non-local interactions, they are very expensive given their dense connections \cite{m_transformers,bert,detr,image_transfomer}. Axial attention uses self-attention in a non-local way without any constraints: it simply factorizes the 2D self-attention into two 1D self-attentions along the two axes (width and height). In this way, axial attention is effective in attending over wide regions. Moreover, unlike \cite{bam, ramachandran2019stand, hu2019local}, axial attention uses positional information to include contextual information in an agnostic way. With axial attention, the computational complexity is reduced to $\mathcal{O}(hwm)$. Axial attention showed competitive performance not only in comparison to full-attention models \cite{attention_augmented,m_standalone}, but to convolutional ones as well \cite{resnet, huang2017densely}.
Many methods propose efficient architectures for attention \cite{kitaev2019reformer, Efficient_attention, wu2021centroid, kim2020fastformers}. In \cite{Efficient_attention}, Zhuoran~\latinphrase{et~al.}\xspace exploited the associative property of matrix multiplication and suggested efficient attention. Formally, instead of using the dot-product form $\rho (QK^T)V$, they compute it in the order $\rho_q(Q)(\rho_k(K)^T V)$, where $\rho$ denotes a normalization step. The $\mathrm{softmax}$ normalization is performed twice, on the queries and the keys separately, instead of once at the end. Hence, the complexity is reduced from quadratic $\mathcal{O}(n^2)$ to linear $\mathcal{O}(n)$. Through this simple change, the processing and memory costs are reduced enough to enable the integration of attention modules in large-scale tasks.
\begin{figure*}
\centering
\begin{tabular}[b]{c}
\includegraphics[width=.25\paperwidth]{figures/eff_attention.PNG} \\
\small (a) Efficient Attention~\cite{Efficient_attention}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.16\paperwidth]{figures/slot.PNG} \\
\small (b) Slot Attention~\cite{slot}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.30\paperwidth]{figures/RFA.png} \\
\small (c) RFA~\cite{peng2021random}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.27\paperwidth]{figures/xlinear.png} \\
\small (d) X-Linear~\cite{pan2020x}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.27\paperwidth]{figures/axial.png} \\
\small (e) Axial~\cite{axial_attention}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.18\paperwidth]{figures/transformer.PNG} \\
\small (f) Transformer~\cite{m_transformers}
\end{tabular}
\caption{The core structure of self-attention methods: Transformers~\cite{m_transformers}, Axial attention~\cite{axial_attention}, X-Linear~\cite{pan2020x}, Slot~\cite{slot} and RFA~\cite{peng2021random}. All of these methods are self-attention methods that generate the scores by measuring the similarity between two maps of the same input; however, they differ in the way of processing. The images are taken from the original papers. Best viewed in color.}
\label{fig:self_attentions}
\end{figure*}
\subsubsection{Arithmetic Attention}
\label{sec:arithmetic}
This part introduces arithmetic attention methods such as dropout, mirror, reverse, inverse, and reciprocal attention. We name them arithmetic because, although they build on the core techniques above, they mainly produce the final attention scores from simple arithmetic operations, such as taking the reciprocal of the attention scores.

\vspace{1.5mm}
\noindent \textbf{Attention-based Dropout Layer}: In weakly-supervised object localization, detecting the whole object without location annotation is a challenging task \cite{wsol1, wsol2}. Choe~\latinphrase{et~al.}\xspace \cite{choe2019attention} proposed using a dropout layer to improve the localization accuracy through two steps: spreading the attention over the whole object by hiding its most discriminative part, and attending over the whole area to improve the recognition performance. As Figure~\ref{fig:arithmetic} (a) shows, ADL has two branches: 1) a drop mask that conceals the discriminative part, obtained by thresholding, where values larger than a threshold hyperparameter are set to zero and the rest to one, and 2) an importance map that weights the channel contributions using a sigmoid function.
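A minimal sketch of these two branches might look as follows (our own illustrative re-implementation, not the authors' code; the hyperparameter names \texttt{gamma} and \texttt{drop\_rate} are our assumptions):
\begin{verbatim}
import numpy as np

def adl(feature_map, gamma=0.9, drop_rate=0.75):
    """feature_map: (C, H, W) activations."""
    attention = feature_map.mean(axis=0)       # channel-wise pooling
    # Branch 1: drop mask, hiding the most discriminative region.
    drop_mask = (attention < gamma * attention.max()).astype(
        feature_map.dtype)
    # Branch 2: importance map via a sigmoid.
    importance = 1.0 / (1.0 + np.exp(-attention))
    # One branch is selected stochastically during training.
    selected = drop_mask if np.random.rand() < drop_rate else importance
    return feature_map * selected
\end{verbatim}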
Although the proposed idea is simple, experiments showed it to be effective (gaining 15$\%$ over the state-of-the-art).

\vspace{1.5mm}
\noindent \textbf{Mirror Attention}: In a line-detection application \cite{jin2020semantic}, Lee~\latinphrase{et~al.}\xspace developed mirror attention to learn more semantic features. They flip the feature map around the candidate line and then concatenate the feature maps together; in case the line is not aligned, zero padding is applied.

\vspace{1.5mm}
\noindent \textbf{Reverse Attention}: Huang~\latinphrase{et~al.}\xspace~\cite{BMVC2017_18} proposed using the negative context (\latinphrase{i.e.}\xspace what is not related to the class) during training to learn semantic features. They were motivated by the low discriminability between classes in the high-level semantic representations and by the weak response of the latent representations to the correct class. The network is composed of two branches: the first learns discriminative features for the target class using convolutions, and the second learns the reverse attention scores that are not associated with the target class. These scores are aggregated together to form the final attention, as shown in Figure~\ref{fig:arithmetic} (b). A deeper look inside reverse attention shows that it mainly depends on negating the extracted convolutional features followed by a sigmoid, $\mathrm{sigmoid}(-F_{conv})$. However, for the purpose of convergence, this simple equation is changed to $\mathrm{sigmoid}(\frac{1}{\mathrm{ReLU}(F_{conv})+0.125} - 4)$. On semantic segmentation datasets, reverse attention achieved significant improvement over the state-of-the-art. In a similar work, Chen~\latinphrase{et~al.}\xspace~\cite{chen2018reverse} proposed using reverse attention for salient object detection. The main intuition was to erase the final predictions of the network and hence learn the missing parts of the objects. However, the calculation of the attention scores differs from \cite{BMVC2017_18}: they use $1 - \mathrm{sigmoid}(F_{i+1})$, where $F_{i+1}$ denotes the features of the next stage.
\begin{figure*}
\centering
\begin{tabular}[b]{c}
\includegraphics[width=.5\paperwidth]{figures/adl.PNG} \\
\small (a) Attention-Based Dropout Layer \cite{choe2019attention}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.28\paperwidth]{figures/ran.png} \\
\small (b) Reverse Attention \cite{BMVC2017_18}
\end{tabular}
\caption{The core structure of arithmetic-based attention methods: Attention-based Dropout~\cite{choe2019attention} and Reverse Attention \cite{BMVC2017_18}. These methods use arithmetic operations, such as reversing, dropout, or taking the reciprocal, to generate the attention scores. The images are taken from the original papers. Best viewed in color.}
\label{fig:arithmetic}
\end{figure*}
\subsubsection{Multi-modal attentions}
\label{sec:Multi-modal}
As the name reveals, multi-modal attention is proposed to handle multi-modal tasks, using different modalities, such as text and image, to generate attentions. It should be noted that some of the attention methods below, such as Perceiver~\cite{jaegle2021perceiver} and Criss-Cross \cite{huang2019ccnet}, are transformer types~\cite{m_transformers}, but are customized for multi-modal tasks by including text, audio, and image.

\vspace{1.5mm}
\noindent \textbf{Cross Attention Network}: In \cite{hou2019cross}, a cross attention network (CAN) was proposed to enhance the overall discrimination of few-shot classification \cite{sung2018learning}.
Inspired by the human behavior of recognizing novel objects, the similarity between seen and unseen parts is identified first. CAN learns to encode the correlation between the query and the target object. As Figure~\ref{fig:multimodals} (a) shows, the features of the query and the target are extracted independently, and then a correlation layer computes the interaction between them using cosine distance. Next, 1D convolution is applied to fuse the correlations (global average pooling is performed first) and attentions, followed by softmax normalization. The output is reshaped to give a single-channel feature map that preserves the spatial representations. Although experiments show that CAN produces state-of-the-art results, it depends on non-learnable functions such as the cosine correlation. Also, the design is suitable for few-shot classification but is not general, because it depends on two streams (query and target).

\vspace{1.5mm}
\noindent \textbf{Criss-Cross Attention}: Contextual information remains crucial for scene understanding \cite{context1, context2}. Criss-cross attention \cite{huang2019ccnet} encodes the context of each pixel in the image along its criss-cross path; by building recurrent criss-cross modules, the whole context is encoded for each pixel. This module is more efficient than the non-local block \cite{m_nonlocal} in memory and time: the memory is reduced by $11\times$ and the GFLOPs by $85\%$. Since this survey focuses on the core attention ideas, we show the criss-cross module in Figure~\ref{fig:multimodals} (b). Initially, three $1 \times 1$ convolutions are applied to the feature maps; two of their outputs are multiplied together (the first map with each row of the second) to produce the criss-cross attention for each pixel. Then, softmax generates the attention scores, which are aggregated with the outcome of the third convolution. However, the encoded context captures only information along the criss-cross direction and not the whole image. For this reason, the authors repeat the attention module with shared weights to form a recurrent criss-cross module, which covers the context of the whole image.

\vspace{1.5mm}
\noindent \textbf{Perceiver}: Traditional CNNs have achieved high performance in handling several tasks \cite{resnet, chen2011multi, hassanin2021mitigating}; however, they are designed and trained for a single domain rather than for multi-modal tasks \cite{modal1, modal2, modal3}. Inspired by biological systems that understand the environment through various modalities simultaneously, Jaegle~\latinphrase{et~al.}\xspace proposed the Perceiver, which leverages the relations between these modalities iteratively. The main concept behind the Perceiver is to form an attention bottleneck composed of a set of latent units. This avoids the quadratic scaling of the traditional transformer and encourages the model to focus on important features through iterative processing. To compensate for the missing spatial context, Fourier features are used to encode positional information \cite{mildenhall2020nerf, kandel2000principles, stanley2007compositional, parmar2018image}. As Figure~\ref{fig:multimodals} (c) shows, the Perceiver is similar to an RNN because of its weight sharing. It comprises two main components: a cross-attention module that maps the input image or input vector to a latent vector, and a transformer tower that maps the latent vector to another latent vector of the same size.
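A schematic sketch of this latent bottleneck, simplified to single-head, unprojected cross-attention (all shapes and the number of iterations are illustrative assumptions, not the published configuration):
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attend(latent, byte_array):
    """Queries come from the small latent array (N x D); keys and
    values from the large byte array (M x D): cost O(MN), not O(M^2)."""
    scores = softmax(latent @ byte_array.T / np.sqrt(latent.shape[-1]))
    return scores @ byte_array

M, N, D = 4096, 512, 64             # large input vs. small latent set
byte_array = np.random.randn(M, D)  # e.g. flattened image pixels
latent = np.random.randn(N, D)      # learned latent units

for _ in range(4):                  # iterative attention, shared weights
    latent = cross_attend(latent, byte_array)
    # ... latent transformer tower omitted ...
\end{verbatim}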
The architecture reveals that the Perceiver is an attention bottleneck that learns a mapping function from high-dimensional data to low-dimensional data and then passes it to the transformer \cite{m_transformers}. The cross-attention module attends over the input byte array multiple times to enrich the latent context, which might otherwise be limited by such a mapping. This design reduces the quadratic processing $\mathcal{O}(M^2)$ to $\mathcal{O}(MN)$, where $M$ is the sequence length and $N$ is a hyperparameter that can be chosen smaller than $M$. Additionally, sharing the weights of the iterative attention reduces the parameters to one-tenth and enhances the model's generalization.

\vspace{1.5mm}
\noindent \textbf{Stacked Cross Attention}: Lee~\latinphrase{et~al.}\xspace~\cite{stacked_cross} proposed a method to attend between an image and a sentence context. Given an image and a sentence, it learns the attention of the words in the sentence for each region in the image and then scores the image regions by comparing each region to the sentence. This way of processing enables stacked cross attention to discover all possible alignments between text and image. Firstly, image-text cross attention is computed in a few steps: a) compute the cosine similarity for all image-text pairs \cite{karpathy2014deep} followed by $\ell_2$ normalization \cite{wang2017normface}; b) compute the weighted sum of these pair attentions, where the image attention is calculated by softmax \cite{chorowski2015attention}; c) compute the final similarity between these pairs using LogSumExp pooling \cite{he2008discriminative, huang2018learning}. The same steps are repeated to obtain the text-image cross attention, but the attention in the second step uses a text-based softmax. Although stacked attention enriches the semantics of multi-modal tasks by attending text over image and vice versa, the shared semantics might lead to misalignment when similarity is lacking. With slight changes to the main concept, several works in various paradigms such as question answering and image captioning \cite{cross_attention1, cross_attention2, cross_attention4, cross_attention3, cross_attention5} used stacked cross attention.

\vspace{1.5mm}
\noindent \textbf{Boosted Attention}: While top-down attention mechanisms~\cite{lu2017knowing} fail to focus on regions of interest without prior knowledge, visual stimuli methods~\cite{tavakoli2017paying,sugano2016seeing} alone are not sufficient to generate captions for images. For this reason, in Figure~\ref{fig:multimodals} (d), the authors proposed a boosted attention model that combines both in one approach, focusing on top-down signals from the language while independently attending to the salient regions from stimuli. Firstly, they integrate stimulus attention with the visual features, $I^{'} = W I \circ \log(W_{sal}I+\epsilon)$, where $I$ denotes the features extracted from the backbone, $W_{sal}$ denotes the weights of the layer that produces the stimulus attention, and $W$ denotes the weights of the layer that outputs the visual features. Boosted attention is achieved using the Hadamard product on $I^{'}$. Their experiments showed that boosted attention improves performance significantly.
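Under our reading of this formula, a minimal sketch is the following (the shapes and the non-negative random weights are assumptions made so that the logarithm is well-defined; this is not the authors' code):
\begin{verbatim}
import numpy as np

def boosted_attention(I, W, W_sal, eps=1e-6):
    """I' = (W I) o log(W_sal I + eps): combine top-down visual
    features with bottom-up stimulus attention via a Hadamard
    (element-wise) product."""
    visual = I @ W                       # top-down visual features
    stimulus = np.log(I @ W_sal + eps)   # bottom-up saliency signal
    return visual * stimulus             # element-wise product

d, k = 2048, 512
I = np.abs(np.random.randn(196, d))      # 14x14 regions from a backbone
W, W_sal = np.random.rand(d, k), np.random.rand(d, k)
I_boosted = boosted_attention(I, W, W_sal)
\end{verbatim}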
\begin{figure*}
\centering
\begin{tabular}[b]{c}
\includegraphics[width=.5\paperwidth]{figures/cros.png} \\
\small (a) Cross Attention~\cite{hou2019cross}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.3\paperwidth]{figures/criss-cross.PNG} \\
\small (b) Criss-Cross Attention~\cite{huang2019ccnet}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.48\paperwidth]{figures/perceiver.png} \\
\small (c) Perceiver~\cite{jaegle2021perceiver}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.3\paperwidth]{figures/boosted.png} \\
\small (d) Boosted Attention~\cite{chen2018boosted}
\end{tabular}
\caption{The core structure of cross-based (multi-modal) attention methods: Perceiver~\cite{jaegle2021perceiver}, Criss-Cross \cite{huang2019ccnet}, Boosted attention \cite{chen2018boosted}, and the Cross-attention module \cite{hou2019cross}. These methods belong to soft or hard attention but use multiple modalities to generate the attention scores. The images are taken from the original papers. Best viewed in color.}
\label{fig:multimodals}
\end{figure*}
\subsubsection{Logical Attention}
\label{sec:logical}
Similar to how human beings pay more attention to the crucial features, some methods have been proposed that use recurrence to encode better relationships. These methods rely on RNNs or other types of sequential networks to calculate the attentions. We name them logical methods because they use architectures similar to logic gates.

\vspace{1.5mm}
\noindent \textbf{Sequential Attention Models}: Inspired by the primate visual system, Zoran~\latinphrase{et~al.}\xspace~\cite{zoran2020towards} proposed a soft, sequential, spatial, top-down attention method (S3TA) to focus more on attended regions of an image \cite{mott2019towards} (as shown in Figure~\ref{fig:logical} (b)). At each step of the sequential process, the model queries the input and refines the total score based on spatial information in a top-down manner. Specifically, the backbone \cite{resnet, huang2017densely, pham2018efficient} extracts feature channels that are split into keys and values. A Fourier-based spatial basis encodes the spatial information of these two sets, preserving it for later use. The main module is a top-down controller, a version of the Long Short-Term Memory (LSTM) \cite{lstm}, whose previous state is decoded into query vectors. The size of each query vector equals the number of channels in the keys plus those of the spatial basis. At each spatial location, the similarity between these vectors is calculated through the inner product, and then a softmax yields the attention scores. These attention scores are multiplied by the values, and the summation produces the corresponding answer vector for each query, which is then passed to the next LSTM step. Note that the input of the attention module is derived from the LSTM state so that the model focuses on the relevant information, and the attention map comprises only one channel to preserve the spatial information. Empirical evaluations show that attention is crucial for adversarial robustness, because adversarial perturbations drag the object's attention away to degrade the model performance. Such an attention model proved its ability to resist strong attacks \cite{madry2018towards} and natural noise \cite{hendrycks2019natural}. Although S3TA provides a novel method to empower attention modules using recurrent networks, it is inefficient.
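To make the query-key-value interplay concrete, here is a heavily simplified single step of such a top-down controller (one query, the LSTM state replaced by a random vector, illustrative shapes only; not the authors' code):
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

H, W, C, S = 14, 14, 64, 8
keys = np.random.randn(H * W, C)       # backbone channels -> keys
values = np.random.randn(H * W, C)
spatial = np.random.randn(H * W, S)    # fixed Fourier spatial basis
keys_sp = np.concatenate([keys, spatial], axis=-1)

# In S3TA the query is decoded from the LSTM controller state;
# here it is drawn at random purely for illustration.
query = np.random.randn(C + S)
attn = softmax(keys_sp @ query)        # one-channel attention map
answer = attn @ values                 # fed back to the controller
\end{verbatim}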
\vspace{1.5mm}
\noindent \textbf{Permutation Invariant Attention}: Initially, Zaheer~\latinphrase{et~al.}\xspace~\cite{deep_sets} suggested handling the inputs of deep networks as sets rather than as ordered lists of elements, for instance, performing pooling over sets of extracted features, \latinphrase{e.g.}\xspace $\rho(\mathrm{pool}(\{\phi(x_1), \phi(x_2), \cdots, \phi(x_n)\}))$, where $\rho$ and $\phi$ are continuous functions and pool can be the $sum$ function. Formally, a function $f$ on sets is permutation-invariant if $f(\pi x) = f(x)$, and permutation-equivariant if $f(\pi x) = \pi f(x)$, for any permutation $\pi$. Building on this, Lee~\latinphrase{et~al.}\xspace~\cite{permutation_invariant} proposed an attention-based method that processes sets of data. In \cite{deep_sets}, simple functions ($sum$ or $mean$) are proposed to combine the different branches of the network, but they lose important information by squashing the data. To address these issues, the set transformer~\cite{permutation_invariant} parameterizes the pooling functions and provides richer representations that can encode higher-order interactions. It introduces three main components: a) the Set Attention Block (SAB), which is similar to the Multi-head Attention Block (MAB) layer \cite{m_transformers} but without positional encoding and dropout; b) the Induced Set Attention Block (ISAB), which reduces the complexity from $\mathcal{O}(n^2)$ to $\mathcal{O}(mn)$, where $m$ is the number of induced point vectors; and c) Pooling by Multihead Attention (PMA), which applies MAB over a learnable set of seed vectors.

\vspace{1.5mm}
\noindent \textbf{Show, Attend and Tell}: Xu~\latinphrase{et~al.}\xspace~\cite{xu2015show} introduced two types of attention that attend to specific image regions for generating a sequence of captions aligned with the image, using an LSTM~\cite{zaremba2014recurrent}: hard attention and soft attention. Hard attention is applied to the latent variable $a$ after assigning a multinoulli distribution to learn the likelihood $\log p(y|a)$. By using the multinoulli distribution and reducing the variance of the estimator, they trained their model by maximizing a variational lower bound, as pointed out in \cite{mnih2014recurrent, ba2015multiple}, provided that the attentions sum to $1$ at every point, \latinphrase{i.e.}\xspace $\sum_i \alpha_{ti} = 1$, where $\alpha$ refers to the attention scores. For soft attention, they used softmax to generate the attention scores, but for $p(s_t|a)$ as in \cite{baldi2014dropout}, where $s_t$ is the extracted feature at this step. Soft attention is trained by standard backpropagation, minimizing the penalized negative log-likelihood $-\log p(y|a)+\sum_i(1 - \sum_t\alpha_{ti})^2$. This model set the benchmark for visual captioning at the time and paved the way for further progress on visual attention.

\vspace{1.5mm}
\noindent \textbf{Kalman Filtering Attention}: Liu~\latinphrase{et~al.}\xspace identified two main limitations that hinder the use of attention in fields where learning data or history is insufficient~\cite{kalman}: 1) the attention for input queries is covered only by past training data, and 2) conventional attention does not encode hierarchical relationships between similar queries. To address these issues, they proposed Kalman filtering attention. Moreover, a variant, KFAtt-freq, captures the homogeneity of identical queries, correcting the bias towards frequent queries.
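As a brief aside, the doubly stochastic penalty of the soft variant of Show, Attend and Tell described above translates directly into code (a sketch in our own notation; \texttt{lam} is an assumed weighting hyperparameter):
\begin{verbatim}
import numpy as np

def soft_attention_loss(log_likelihood, alphas, lam=1.0):
    """alphas: (T, L) attention over L regions for T decoding steps.
    Penalizes regions whose attention does not sum to ~1 over time,
    encouraging the model to look at every part of the image."""
    penalty = ((1.0 - alphas.sum(axis=0)) ** 2).sum()
    return -log_likelihood + lam * penalty
\end{verbatim}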
\vspace{1.5mm}
\noindent \textbf{Prophet Attention}: In prophet attention~\cite{prophet}, the authors noticed that conventional attention models are biased and deviate the focus in sequence tasks such as image captioning~\cite{captioning1, captioning2} and visual grounding~\cite{grounding1, grounding2}. This deviation happens because attention models use the previous inputs of a sequence, rather than the outputs, to attend to the image. As shown in Figure~\ref{fig:logical} (a), the model attends to \enquote{yellow and umbrella} instead of \enquote{umbrella and wearing}. In a self-supervision-like manner, they calculate the attention vectors based on the words generated in the future and then guide the training process using these corrected attentions, which can be considered a regularization of the whole model. Simply put, this method sums the attentions over the later words of the same sentence to eliminate the deviation of focus towards the inputs. Overall, prophet attention addresses the bias of sequence models towards history while disregarding the future.
\begin{figure*}
\centering
\begin{tabular}[b]{c}
\includegraphics[width=.40\paperwidth]{figures/prophet.jpg} \\
\small (a) Prophet~\cite{prophet}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.40\paperwidth]{figures/s3ta.png} \\
\small (b) S3TA~\cite{zoran2020towards}
\end{tabular}
\caption{The core structure of logic-based attention methods such as Prophet attention~\cite{prophet} and S3TA \cite{zoran2020towards}. These attention types use sequential networks such as RNNs to infer the attention scores. The images are taken from the original papers. Best viewed in color.}
\label{fig:logical}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/tell.PNG}%
\caption{An example of category-based attention (Guided Attention Inference Networks \cite{li2018tell})}%
\label{fig:class}
\end{figure}
\subsubsection{Category-Based Attentions}
\label{sec:Category}
The above methods generate the attention scores from the features regardless of the class. In contrast, some methods use class annotations to force the network to attend over specific regions.

\vspace{1.5mm}
\noindent \textbf{Guided Attention Inference Network}: In \cite{li2018tell}, the authors proposed class-aware attention, namely Guided Attention Inference Networks (GAIN), guided by the labels. Instead of focusing only on the most discriminative parts of the image~\cite{zhou2016learning}, GAIN includes the contextual information in the feature maps. Following~\cite{selvaraju2017grad}, GAIN obtains the attention maps from an inference branch, and they are then used for training. As shown in Figure~\ref{fig:class}, through 2D convolutions, global average pooling, and ReLU, the important features $A^c$ are extracted for each class. Following this, the features of each class are obtained as $I - (T(A^c)\bigodot I)$, where $\bigodot$ denotes element-wise multiplication and $T(A^c) = \frac{1}{1+\exp(-w(A^c - \sigma))}$, with $\sigma$ a threshold parameter and $w$ a scaling parameter. Their experiments showed that, even without recursive runs, GAIN achieves a significant improvement over the state-of-the-art.

\vspace{1.5mm}
\noindent \textbf{Curriculum Enhanced Supervised Attention Network}: The majority of attention methods are trained in a weakly supervised manner, and hence the attention scores are still far from the best representations \cite{m_transformers, m_nonlocal}.
In \cite{zhu2020curriculum}, the authors introduced a novel idea for generating a Supervised-Attention Network (SAN): the number of outputs of the last convolution layer is set equal to the number of classes, so that performing attention via global average pooling \cite{lin2013network} yields a weight for each category. In a similar study, Fukui~\latinphrase{et~al.}\xspace proposed a network composed of three branches to obtain class-specific attention scores: a feature extractor to learn the discriminative features, an attention branch to compute the attention scores based on a response model, and a perception branch to output the attention scores of each class using the first two modules. The main objective was to improve the visual explainability \cite{zhou2016learning} of CNNs, and the method showed significant improvements in various fields such as fine-grained recognition and image classification.

\vspace{1.5mm}
\noindent \textbf{Attentional Class Feature Network}: Zhang~\latinphrase{et~al.}\xspace~\cite{zhang2019acfnet} introduced ACFNet, a novel idea that exploits contextual information to improve semantic segmentation. Unlike conventional methods that learn spatial-based global information \cite{chen2017rethinking}, this contextual information is category-based: the class-center concept is presented first and then employed to aggregate all the corresponding pixels into a specific class representation. In the training phase, ground-truth labels are used to learn the class centers, while coarse segmentation results are used in the test phase. Finally, the class-attention maps result from the class centers and the coarse segmentation outcomes. The results show significant improvement for semantic segmentation using ACFNet.
\subsection{Hard (Stochastic) Attention}
Instead of using the weighted average of the hidden states, hard attention selects one of the states as the attention score. Proposing a hard-attention mechanism depends on answering two questions: (1) how to model the problem, and (2) how to train it without vanishing gradients. In this part, hard attention methods are discussed along with their training mechanisms, including Bayesian attention, variational inference, reinforced attention, and Gaussian attention. The main idea of Bayesian and variational attention is to use latent random variables as attention scores. Reinforced attention replaces softmax with a Bernoulli-sigmoid unit \cite{williams1992simple}, whereas Gaussian attention uses a 2D Gaussian kernel instead. Similarly, self-critic attention~\cite{chen2019self} employs a reinforcement technique to generate the attention scores, whereas Expectation-Maximization attention uses the EM algorithm to generate the scores.
\subsubsection{Bayesian Attention Modules (BAM)}
\label{sec:bayesian}
In contrast to deterministic attention modules, Fan~\latinphrase{et~al.}\xspace~\cite{fan2020bayesian} proposed a stochastic attention method based on Bayesian graph models. Keys and queries are aligned to form the distribution parameters of the attention weights, which are treated as latent random variables. The whole model is trained via reparameterization, with the attention weights normalized and drawn from Lognormal or Weibull distributions. Kullback–Leibler (KL) divergence is used as a regularizer to introduce a contextual prior distribution expressed as a function of the keys.
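As an illustration of the reparameterized sampling involved, attention weights can be drawn from a Weibull distribution by inverse-CDF sampling, as in the following minimal sketch (our own rendering; the actual module also learns the distribution parameters and a contextual prior, which are omitted here):
\begin{verbatim}
import numpy as np

def sample_weibull(scale, shape_k):
    """Reparameterized Weibull sample: the uniform noise is separated
    from the parameters, keeping the sample differentiable in them."""
    u = np.random.rand(*scale.shape)
    return scale * (-np.log(1.0 - u)) ** (1.0 / shape_k)

# Query-key alignment scores parameterize the scale; the sampled
# weights are normalized into stochastic attention scores.
align = np.abs(np.random.randn(8, 16))   # hypothetical (queries x keys)
w = sample_weibull(scale=align, shape_k=2.0)
attention = w / w.sum(axis=-1, keepdims=True)
\end{verbatim}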
Their experiments illustrate that BAM significantly outperforms the state-of-the-art in various fields such as visual question answering, image captioning, and machine translation. However, this improvement comes at the expense of computational cost and memory usage. Still, compared to deterministic attention models, it is an effective alternative in general, showing consistent gains in language-vision tasks.

\vspace{1.5mm}
\noindent \textbf{Bayesian Attention Belief Networks}: Zhang~\latinphrase{et~al.}\xspace~\cite{zhang2021bayesian} proposed using Bayesian belief modules to generate attention scores, given their ability to model highly structured data along with uncertainty estimation. As shown in Figure~\ref{fig:stochastics} (b), they introduced a simple structure to change any deterministic attention model into a stochastic one through four steps: 1) using Gamma distributions to build the decoder network; 2) using Weibull distributions along with stochastic and deterministic paths for the downward and upward passes, respectively; 3) parameterizing the BABN distributions from the queries and keys of the current network; and 4) using the evidence lower bound to optimize the encoder and decoder. The whole network is differentiable because of the Weibull distributions in the encoder. In terms of accuracy and uncertainty, BABN improves over the state-of-the-art in NLP tasks.

\vspace{1.5mm}
\noindent \textbf{Repulsive Attention}: Multi-head attention \cite{m_transformers} is the core of the attention used in transformers. However, MHA may cause attention collapse when the heads extract the same features \cite{an2020repulsive, prakash2019repr, han2016dsd}, and consequently the feature representations lose diversity and discriminative power. To address this issue, An~\latinphrase{et~al.}\xspace~\cite{an2020repulsive} adapted MHA into a Bayesian network with underlying stochastic attention: MHA is considered a special case without parameter sharing, and performing Bayesian inference on the attention parameters with a particle-optimization sampling method imposes attention repulsiveness \cite{liu2016stein}. Under this sampling method, each head is treated as a sample that seeks to approximate the posterior distribution while staying far from the other heads.
\begin{figure*}
\centering
\begin{tabular}[b]{c}
\includegraphics[width=.36\paperwidth]{figures/self_critic.PNG} \\
\small (a) Self-Critic Attention~\cite{chen2019self}\\
\includegraphics[width=.36\paperwidth]{figures/EMA.png} \\
\small (c) Expectation-Maximization Attention~\cite{li2019expectation}
\end{tabular}
\begin{tabular}[b]{c}
\includegraphics[width=.3\paperwidth]{figures/babn.PNG}\\
\small (b) Bayesian Attention Belief Networks \cite{zhang2021bayesian} \tabularnewline\\
\includegraphics[width=.3\paperwidth]{figures/gaussian_attention.PNG} \\
\small (d) Gaussian Attention~\cite{gaussian_attention} \tabularnewline
\end{tabular}
\caption{The core structure of stochastic-based attention methods: EMA~\cite{li2019expectation}, the Gaussian attention module~\cite{gaussian_attention}, Self-critic attention \cite{chen2019self}, and the Bayesian module \cite{zhang2021bayesian}. In these methods, the core function generating the attention scores is not softmax. The images are taken from the original papers. Best viewed in color.}
\label{fig:stochastics}
\end{figure*}
\subsubsection{Variational Attention}
\label{sec:variational}
In a study aimed at improving latent variable alignments, Deng~\latinphrase{et~al.}\xspace~\cite{deng2018latent} proposed a variational attention mechanism.
A latent variable is crucial because it encodes the dependencies between entities, and variational inference methods represent it in a stochastic manner \cite{salimbeni2019deep, drori2020deep}. Soft attention, on the other hand, can encode alignments, but its representations are poor because of the nature of softmax, and stochastic methods show better performance when optimized well \cite{lin2003toward, wang2020survey}. The main idea is to propose variational attention while keeping the training tractable. They introduced two types of variational attention: categorical (hard) attention, which uses amortized variational inference based on policy gradients with soft attention for variance reduction; and relaxed (probabilistic soft) attention, which uses a Dirichlet distribution that allows attending over multiple sources. Regarding reparameterization, the Dirichlet distribution is not directly reparameterizable, and thus its gradients have high variance \cite{jankowiak2018pathwise}. Inspired by \cite{deng2018latent}, Bahuleyan~\latinphrase{et~al.}\xspace developed stochastic attention-based variational inference \cite{bahuleyan2018variational}, but using a normal distribution instead of the Dirichlet distribution. They observed that variational encoder-decoders should not have a direct connection; otherwise, traditional attention serves as a bypass connection.
\subsubsection{Reinforced Self-Attention}
\label{sec:reinforced}
Shen~\latinphrase{et~al.}\xspace~\cite{shen2018reinforced} used a reinforcement technique to combine soft and hard attention in one method. Soft attention has shown effectiveness in modeling local and global dependencies, which are derived from dot-product similarity \cite{bahdanau2014neural}. However, soft attention is based on the softmax function, which assigns values to every item, even the non-attended ones, weakening the whole attention. On the other hand, hard attention \cite{show_attend} attends to important regions or tokens only and disregards the others; despite its importance for textual tasks, it is inefficient in terms of time and is not differentiable \cite{williams1992simple}. Shen~\latinphrase{et~al.}\xspace~\cite{shen2018reinforced} therefore used hard attention to extract rich information and then fed it into soft attention for further processing; simultaneously, soft attention is used to reward hard attention and hence stabilize the training process. Specifically, hard attention encodes tokens from the input in parallel while being combined with soft attention \cite{shen2018disan}, without any CNN/RNN modules. In \cite{karianakis2018reinforced}, reinforced attention was proposed to extract better temporal context from video. Specifically, this attention module uses a Bernoulli-sigmoid unit \cite{williams1992simple}, a stochastic module; thus, the REINFORCE algorithm is used to stabilize the gradients when training the whole system \cite{jankowiak2018pathwise}.
\subsubsection{Gaussian-based attention}
\label{sec:Gaussian}
\vspace{1.5mm}
\noindent \textbf{Self-Supervised Gaussian Attention}: Most soft-attention models use softmax to predict the attention over feature maps \cite{m_transformers, image_transfomer, zhang2020resnest}, which suffers from various drawbacks. In~\cite{niu2020gatcluster}, Niu~\latinphrase{et~al.}\xspace proposed replacing the classical softmax with a Gaussian attention module.
As shown in Figure~\ref{fig:stochastics} (d), they build a 2D Gaussian kernel to generate the attention maps instead of softmax, $K = \exp\big(-\frac{1}{\alpha}(u - \mu)^T \Sigma^{-1} (u - \mu)\big)$ for each individual element, where $u = [x, y]^T$ and $\mu = [\mu_x, \mu_y]^T$. The extracted features are passed through a fully connected layer, and then the Gaussian kernel predicts the attention scores. Gaussian kernels proved effective at discriminating the important features, and since no further learnable components such as fully connected layers or convolutions are required, the number of parameters is significantly reduced. As stochastic training models need careful design because of the SGD mismatch~\cite{stochastic1, stochastic2, stochastic3}, the Gaussian attention model comes with a binary classification loss that takes normalized logits to suppress the low scores and accentuate the high ones. This normalization uses a modified version of softmax, where the input is squared and divided by a temperature value (\latinphrase{e.g.}\xspace the batch size).

\vspace{1.5mm}
\noindent \textbf{Uncertainty-Aware Attention}: Since attention is generated without full supervision (\latinphrase{i.e.}\xspace in a weakly-supervised manner), it lacks full reliability \cite{li2018tell}. To fix this issue, \cite{heo2018uncertainty} proposed the use of input-dependent uncertainty: the model generates varied attention maps according to the input and therefore learns higher variance for uncertain inputs. A Gaussian distribution is used to model the attention weights, such that it gives small variance in the case of high confidence and vice versa \cite{kendall2017uncertainties}. A Bayesian network is employed to build the model, with variational inference as the solution~\cite{zhang1994simple, blei2017variational}. Note that this model is stochastic, and the SGD backpropagation flow cannot work properly due to the randomness \cite{kingma2013auto}; for this reason, the reparameterization trick \cite{gal2017concrete, kingma2015variational} is used to train the model.
\subsubsection{Self-Critic attention}
\label{sec:self-critic}
Chen~\latinphrase{et~al.}\xspace \cite{chen2019self} proposed a self-critic attention model that generates attention using an agent and re-evaluates the gain from this attention using the REINFORCE algorithm. They observed that most attention modules are trained in a weakly-supervised manner; therefore, the attention maps are not always discriminative and lack supervisory signals during training \cite{lee2015deeply}. To supervise the generation of the attention maps, they used a reinforcement algorithm to guide the whole process. As shown in Figure~\ref{fig:stochastics} (a), the feature maps are evaluated to predict whether they need self-correction or not.
\subsubsection{Expectation-Maximization attention}
\label{sec:EM}
Traditional soft attention mechanisms can encode long-range dependencies by comparing each position to all other positions, which is computationally very expensive~\cite{m_nonlocal}. In this regard, Li~\latinphrase{et~al.}\xspace~\cite{li2019expectation} proposed using expectation maximization to build an attention method that iteratively forms a set of bases from which the attention maps are computed \cite{dempster1977maximum}. The main intuition is to use expectation maximization to select a compact basis set instead of using all the pixels as in \cite{m_nonlocal, li2020spatial} (see Figure~\ref{fig:stochastics} (c)).
These bases are regarded as the learnable parameters, whereas the latent variables serve as the attention maps. The output is the weighted sum of the bases, where the attention maps are the weights. The estimation (E) step is defined by $z_{nk}=\frac{\mathbb{K}(x_n, \mu_k)}{\sum_j\mathbb{K}(x_n, \mu_j)}$, where $\mathbb{K}$ denotes a kernel function. The maximization (M) step updates $\mu$ through data likelihood maximization, such that $\mu_k = \frac{\sum_n z_{nk}\, x_n}{\sum_n z_{nk}}$. Finally, the features are multiplied by the attention scores, $\mathbb{X} = \mathbb{Z} \mu$. Since EMA is a stochastic model, training the whole model needs special care. Firstly, the authors average $\mu$ over the mini-batches in the maximization step to train it stably. Secondly, they normalize the value of $\mu$ with the $\ell_2$-norm within the iterations (from 1 to $T$). EMA has shown the ability to remove noisy representations and to give promising results after only three iterations. Also, it is worth noting that the complexity is reduced to a linear form $\mathcal{O}(NK)$ from a quadratic one $\mathcal{O}(N^2)$.
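The two alternating steps translate directly into code. The following sketch is our own minimal rendering (an exponentiated inner-product kernel is assumed, and the mini-batch moving average is omitted):
\begin{verbatim}
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def em_attention(X, K=64, T=3):
    """X: (N, C) flattened feature map. Returns re-estimated features
    Z @ mu with O(N*K) cost instead of the O(N^2) of non-local blocks."""
    N, C = X.shape
    mu = np.random.randn(K, C)
    mu /= np.linalg.norm(mu, axis=-1, keepdims=True)
    for _ in range(T):
        Z = softmax(X @ mu.T)                    # E-step: attention maps
        mu = (Z.T @ X) / Z.sum(axis=0)[:, None]  # M-step: update bases
        mu /= np.linalg.norm(mu, axis=-1, keepdims=True)  # l2-normalize
    return Z @ mu
\end{verbatim}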
\section{Introduction}
A text is not a simple collection of isolated sentences. These sentences generally appear in a certain order and are connected with each other through logical or semantic means to form a coherent whole. In recent years, modelling beyond the sentence level has attracted more attention, and different natural language processing (NLP) tasks use discourse-aware models to obtain better performance, such as sentiment analysis~\citep{bhatia-etal-2015-better}, automatic essay scoring~\citep{nadeem-etal-2019-automated}, machine translation~\citep{sim-smith-2017-integrating}, text summarization~\citep{xu-etal-2020-discourse} and so on. As discourse information typically involves the interaction of different levels of linguistic phenomena, including syntax, semantics, pragmatics and information structure, it is difficult to represent and annotate. Different discourse theories and discourse annotation frameworks have been proposed. Accordingly, discourse corpora annotated under different frameworks show considerable variation, and a corpus can hardly be used together with another corpus for natural language processing (NLP) tasks or for discourse analysis in linguistics. Discourse parsing is the task of uncovering the underlying structure of text organization, and deep-learning-based approaches have been adopted for it in recent years. However, discourse annotation takes the whole document as the basic unit and is a laborious task, while boosting the performance of neural models typically requires a large amount of data. Due to the above issues, the unification of discourse annotation frameworks has been a topic of discussion for a long time. Researchers have proposed varied methods to unify discourse relations and have debated over whether trees are a good representation of discourse~\citep{egg-redeker-2010-complex, lee2008departures, wolf-gibson-2005-representing}. However, existing research either focuses on mapping or unifying discourse relations of different frameworks~\citep{bunt2016iso, benamara-taboada-2015-mapping, sanders2018unifying, demberg2019compatible}, or on finding a common discourse structure~\citep{yi-etal-2021-unifying} without giving sufficient attention to the issue of relation mapping. There is still no comprehensive approach that considers unifying both discourse structure and discourse relations. Another approach to tackling the task is to use multi-task learning, so that information from a discourse corpus annotated under one framework can be used to solve a task in another framework, thus achieving synergy between different frameworks. However, existing studies adopting this method~\citep{liu2016implicit, braud-etal-2016-multi} do not show significant performance gains from incorporating partial discourse information from a corpus annotated under a different framework. How to leverage discourse information from different frameworks remains a challenge. Discourse information may also be used in downstream tasks.~\citet{huang-kurohashi-2021-extractive} and~\citet{xu-etal-2020-discourse} use both coreference relations and discourse relations for text summarization with graph neural networks (GNNs). The ablation study by~\citet{huang-kurohashi-2021-extractive} shows that using coreference relations only brings little performance improvement, but incorporating discourse relations achieves the highest performance gain.
While different kinds of discourse information can be used, how to encode the different types of discourse information to improve the discourse-awareness of neural models is a topic that merits further investigation. The above challenges motivate our research on unifying different discourse annotation frameworks. We will focus on the following research questions:

\textbf{RQ1:} Which structure can be used to represent discourse in the unified framework?

\textbf{RQ2:} What properties of different frameworks should be kept and what properties should be ignored in the unification?

\textbf{RQ3:} How can entity-based models and lexical-based models be incorporated into the unified framework?

\textbf{RQ4:} How can the unified framework be evaluated?

The first three questions are closely related to each other. Automatic means will be used, although we do not preclude semi-automatic means, as exemplified by~\citet{yi-etal-2021-unifying}. We will start with the methods suggested by existing research and focus on the challenges of incorporating different kinds of discourse information in multi-task learning and graphical models. The unified framework can be used for the following purposes:
\begin{enumerate}
\item A corpus annotated under one framework can be used jointly with a corpus annotated under a different framework to augment data, for developing discourse parsing models or for discourse analysis. We can train a discourse parser on a corpus annotated under one framework and compare its performance with the case when it is trained on augmented data, similar to~\citet{yi-etal-2021-unifying}.
\item Each framework has its own theoretical foundation and focus. A unified framework may have the potential of combining the strengths of different frameworks. Experiments can be done with multi-task learning so that discourse parsing tasks of different frameworks can be solved jointly. We can also investigate how to enable GNNs to better capture different kinds of discourse information.
\item A unified framework may provide a common ground for exploring the relations between different frameworks and for validating the annotation consistency of a corpus. We can perform comparative corpus analysis and obtain a new understanding of how information expressed in one framework is conveyed in another, thus validating corpus annotation consistency and finding clues for solving problems in one framework with signals from another, similar to~\citet{polakova2017signalling} and~\citet{bourgonje-zolotarenko-2019-toward}.
\end{enumerate}
\section{Related Work}
\subsection{An Overview of Discourse Theories}
A number of discourse theories have been proposed.
The theory by~\citet{grosz-sidner-1986-attention} is one of the earlier few whose linguistic claims about discourse are also computationally significant~\citep{mann1987rhetorical}. Under this theory, discourse structure is believed to be composed of three separate but interrelated components: linguistic structure, intentional structure and attentional structure. The linguistic structure focuses on cue phrases and discourse segmentation. The intentional structure mainly deals with why a discourse is performed (discourse purpose) and how a segment contributes to the overall discourse purpose (discourse segment purpose). The attentional structure is not related to the discourse participants; it records the objects, properties and relations that are salient at each point in the discourse. These three aspects capture discourse phenomena in a systematic way, and other discourse theories may be related to this theory in some way. For instance, the Centering Theory~\citep{grosz-etal-1995-centering} and the entity-grid model~\citep{barzilay-lapata-2008-modeling} focus on the attentional structure, and the Rhetorical Structure Theory (RST)~\citep{mann1988rhetorical} focuses on the intentional structure. The theory proposed by~\citet{halliday2014cohesion} studies how various lexical means are used to achieve cohesion, these means being reference, substitution, ellipsis, lexical cohesion and conjunction. Cohesion realized through the first four lexical means is in essence anaphoric dependency, and conjunction is the only source of discourse relations under this theory~\citep{webber2006accounting}. The other discourse theories can be divided into two broad types: relation-based discourse theories and entity-based discourse theories~\citep{jurafsky2018speech}. The former studies how coherence is achieved with discourse relations, and the latter focuses on local coherence achieved through shifts of focus, abstracting a text into a set of entity transition sequences~\citep{barzilay-lapata-2008-modeling}. RST is one of the most influential relation-based discourse theories, and the RST Discourse Treebank (RST-DT)~\citep{carlson-etal-2001-building} is annotated based on it. In the RST framework, discourse can be represented by a tree structure whose leaves are Elementary Discourse Units (EDUs), typically clauses, and whose non-terminals are adjacent spans linked by discourse relations. The discourse relations can be symmetric or asymmetric, the former being characterized by equally important spans connected in parallel, and the latter typically having a nucleus and a satellite, which are assigned based on their importance in conveying the intended effects. An RST tree is built recursively by connecting adjacent discourse units, forming a hierarchical structure covering the whole text. An example of an RST discourse tree can be seen in Figure~\ref{rst-tree}.
\begin{figure*}[h!]
\vspace{-6\baselineskip}
\noindent\begin{minipage}{\linewidth}
\centering
\includegraphics[width=0.9\textwidth,height=0.4\textheight, scale=1.2]{rst-example.pdf}
\vspace{-4\baselineskip}
\caption{An RST discourse tree, originally from~\citet{marcu-2000-rhetorical}.}
\label{rst-tree}
\end{minipage}
\end{figure*}
Another influential framework is the Penn Discourse Treebank (PDTB) framework, which is represented by the Penn Discourse Treebank~\citep{prasad-etal-2008-penn, prasad-etal-2018-discourse}. Unlike the RST framework, the PDTB framework does not aim at achieving complete annotation of the text but focuses on local discourse relations anchored by structural connectives or discourse adverbials. When there is no explicit connective, the annotators read the adjacent sentences and decide whether a connective can be inserted to express the relation. The annotation is not committed to any specific structure at the higher level. PDTB 3.0 adopts a three-layer sense hierarchy: the highest level consists of four general categories called classes, the middle layer contains more specific divisions called types, and the lowest layer captures the directionality of the arguments through subtypes. An example of PDTB-style annotation is shown as follows~\citep{ldcexample}:

\textit{The Soviet insisted that aircraft be brought into the talks,}(implicit=but)\{arg2-as-denier\}~\textbf{then argued for exempting some 4,000 Russian planes because they are solely defensive.}

The first argument is shown in italics and the second argument is shown in bold for distinction. As the discourse relation is implicit, the annotator adds a connective that is considered suitable for the context. The Segmented Discourse Representation Theory (SDRT)~\citep{asher2003logics} is based on the Discourse Representation Theory~\citep{Kamp1993-KAMFDT}, with discourse relations added, and discourse structure is represented with directed acyclic graphs (DAGs). Elementary discourse units may be combined recursively to form a complex discourse unit (CDU), which can be linked with another EDU or CDU~\citep{asher2017annodis}. The set of discourse relations developed in this framework overlaps partly with that of the RST framework, but some relations are motivated by pragmatic and semantic considerations. In~\citet{asher2003logics}, a precise dynamic semantic interpretation of the rhetorical relations is defined. An example of discourse representation in the SDRT framework is shown in Figure~\ref{sdrt-graph}, which illustrates that the SDRT framework provides full annotation, similar to the RST framework, and assumes a hierarchical structure of text organization. The vertical arrow-headed lines represent subordinate relations, and the horizontal lines represent coordinate relations. The textual units in solid-line boxes are EDUs, and $\pi$\textquotesingle\ and $\pi$\textquotesingle\textquotesingle\ represent CDUs. The relations are shown in bold.
\begin{figure}
\begin{center}
\hbox{\hspace{-7.5em}
\includegraphics[width=0.9\textwidth,height=0.4\textheight, scale=1.2]{sdrt-new-example}}
\vspace{-9\baselineskip}
\caption{SDRT representation of the text~\textit{a. Max had a great evening last night. b. He had a great meal. c. He ate salmon. d. He devoured lots of cheese. e. He then won a dancing competition.} The example is taken from~\citet{asher2003logics}.}
\label{sdrt-graph}
\end{center}
\end{figure}
\subsection{Research on Relations between Different Frameworks}
The correlation between different frameworks has been a topic of interest for a long time. Some studies explore how different frameworks are related, either in discourse structures or in relation sets. Other studies take a step further and try to map the relation sets of different frameworks.
\subsubsection{Comparison/unification of discourse structures of different frameworks}
~\citet{stede-etal-2016-parallel} investigate the relations between RST, SDRT and argumentation structure. For the purpose of comparing the three layers of annotation, the EDU segmentation in RST and SDRT is harmonized, and an ``argumentatively empty'' JOIN relation is introduced to address the issue that the basic unit of the argumentation structure is coarser than that of the other two layers. The annotations are converted to a common dependency graph format for calculating correlations. To transform RST trees to the dependency structure, the method introduced by~\citet{li-etal-2014-text} is used: the RST trees are binarized and the left-most EDU is treated as the head. In the transformation of SDRT graphs to the dependency structure, the CDUs are simplified by a \textit{head replacement strategy}. The authors compare the dependency graphs in terms of common edges and common connected components. The relations of the argumentation structure are compared with those of RST and SDRT, respectively, through a co-occurrence matrix. Their research shows the systematic relations between the argumentation structure and the two discourse annotation frameworks. The purpose is to investigate whether discourse parsing can contribute to automatic argumentation analysis. The authors exclude the PDTB framework because it does not provide full discourse annotation.
~\citet{yi-etal-2021-unifying} try to unify two Chinese discourse corpora, annotated under the PDTB framework and the RST framework respectively, with a corpus annotated under the dependency framework. They use semi-automatic means to transform the corpora into the discourse dependency structure presented in~\citet{li-etal-2014-text}. Their work shows that the major difficulty is the transformation from the PDTB framework to the discourse dependency structure, which requires re-segmenting texts and complementing some relations to construct complete dependency trees. They use the same method as~\citet{stede-etal-2016-parallel} to transform the RST trees to the dependency structure. Details about relation mapping across the frameworks are not given.
\subsubsection{Comparison/unification of discourse relations of different frameworks}
The methods of mapping discourse relations of different frameworks presented by~\citet{scheffler2016mapping},~\citet{demberg2019compatible} and~\citet{bourgonje-zolotarenko-2019-toward} are empirically grounded. The main approach is to make use of the same texts annotated under different frameworks.
~\citet{scheffler2016mapping} focus on mapping between explicit PDTB discourse connectives and RST rhetorical relations.
The Potsdam Commentary Corpus~\citep{stede-neumann-2014-potsdam}, which contains annotations under both frameworks, is used. It is found that the majority of the PDTB connectives in the corpus match exactly one RST relation, and that mismatches are caused by different segment definitions and focuses, i.e., PDTB focuses on local/lexicalized relations while RST focuses on global structural relations. As the Potsdam Commentary Corpus only contains explicit relations under the PDTB framework,~\citet{bourgonje-zolotarenko-2019-toward} try to induce implicit relations from the corresponding RST annotation. Since RST trees are hierarchical and the PDTB annotation is shallow, RST relations that connect complex spans are discarded. Moreover, because the arguments of explicit and implicit relations under the PDTB framework are determined based on different criteria, only RST relations that are signalled explicitly are considered in the experiment. It is shown that differences in segmentation and partially overlapping relations pose challenges for the task.
~\citet{demberg2019compatible} propose a method of mapping RST and PDTB relations. Since the number of PDTB relations is much smaller than that of RST relations for the same text, the PDTB relations are used as the starting point for the mapping. They aim to map as many relations as possible while making sure that the relations connect the same segments. Six cases are identified: direct mapping, which is the easiest case; when PDTB arguments are non-adjacent, the Strong Compositionality hypothesis~\citep{marcu2000theory} (i.e., if a relation holds between two textual spans, that relation also holds between the most important units of the constituent spans) is used to check whether there is a match when the complex span of an RST relation is traced along the nucleus path to its nucleus EDU; in the case of multi-nuclear relations, it is checked whether a PDTB argument can be traced to the nucleus of the RST relation along the nucleus path; mismatches caused by different segmentation granularity are considered innately unalignable and discarded; centrally embedded EDUs in RST-DT are treated as a whole and compared with an argument of the PDTB relation; and the PDTB E\textsc{nt}R\textsc{el} relation is included to test its correlation with some RST relations that tend to be associated with cohesion. Other studies are more theoretical.~\citet{hovy-1990-parsimonious} is the first to attempt to unify the discourse relations proposed by researchers from different areas and suggests adopting a hierarchy of relations, with the top level being more general (from the functional perspective: ideational, interpersonal and textual) and with no restrictions on adding fine-grained relations, as long as they can be subsumed under the existing taxonomy. The number of researchers who propose a specific relation is taken as a vote of confidence in that relation in the taxonomy. The study serves as a starting point for research in this direction.
There are a few other proposals for unifying discourse relations of different frameworks to facilitate cross-framework discourse analysis, including: introducing a hierarchy of discourse relations, similar to~\citet{hovy-1990-parsimonious}, where the top level is general and fixed while the lowest level is more specific and allows variations based on genre and language~\citep{benamara-taboada-2015-mapping}; finding dimensions, based on cognitive evidence, along which relations can be compared with each other and re-grouped~\citep{sanders2018unifying}; and formulating a set of core relations that are shared by existing frameworks but are open and extensible in use, the outcome being ISO-DR-Core~\citep{bunt2016iso}. When the PDTB sense hierarchy is mapped to the ISO-DR-Core, it is found that the directionality of relations cannot be captured by the existing ISO-DR-Core relations, and it remains a question whether to extend the ISO-DR-Core relations or to redefine the PDTB relations so that the directionality of arguments can be captured~\citep{prasad-etal-2018-discourse}.

\section{Research Plan}
RST-DT is annotated on texts from the Penn Treebank~\citep{marcus-etal-1993-building} that have also been annotated in PDTB. The texts are formally written Wall Street Journal articles. The English corpora annotated under the SDRT framework, i.e., the STAC corpus~\citep{asher-etal-2016-discourse} and the Molweni corpus~\citep{li-etal-2020-molweni}, were created for analyzing multi-party dialogues, which makes them difficult to use together with the other two corpora. Therefore, in addition to RST-DT and PDTB 3.0, we will use the ANNODIS corpus~\citep{pery-woodley-etal-2009-annodis}, which consists of formally written French texts. We will first translate the texts into English with an MT system and then manually check the translated texts to reduce errors.

In the following, the research questions and the approach in our plan will be discussed. These questions are closely related to each other, and the research on one question is likely to influence how the other questions should be addressed. They are presented separately only for ease of description.

\textbf{RQ1:} Which structure can be used to represent discourse in the unified framework?

Although there is a lack of consensus on how to represent discourse structure, in a number of studies the dependency structure is taken as a common structure that the other structures can be converted to~\citep{muller-etal-2012-constrained, hirao-etal-2013-single, venant-etal-2013-expressivity, li-etal-2014-text, yoshida-etal-2014-dependency, stede-etal-2016-parallel, morey-etal-2018-dependency, yi-etal-2021-unifying}. This choice is mainly inspired by research in the field of syntax, where dependency grammar is better studied and its computational and representational properties are well understood\footnote{In communication with Bonnie Webber, January, 2022.}. The research by~\citet{venant-etal-2013-expressivity} provides a common language for comparing discourse structures of different formalisms, which is used in the transformation procedure presented by~\citet{stede-etal-2016-parallel}. Another possibility is the constrained directed acyclic graph introduced by~\citet{danlos-2004-discourse}. While~\citet{venant-etal-2013-expressivity} focus on the expressivity of different structures, the constrained DAG is motivated from the perspective of strong generative capacity~\citep{danlos2008strong}.
Although neither of the two studies deals with the PDTB framework, both are semantically driven, and we believe it is possible to deal with the PDTB framework using either of the two structures. We will start with an investigation of these two structures. Another issue is how to maintain a one-to-one correspondence when transforming between the original structures and the unified structure. As indicated by~\citet{stede-etal-2016-parallel}, the transformation from RST or SDRT structures into dependency structures always produces the same structure, but going back to the initial RST or SDRT structures is ambiguous. \citet{morey-etal-2018-dependency} introduce head-ordered dependency trees from syntactic parsing~\citep{fernandez-gonzalez-martins-2015-parsing} to reduce this ambiguity. We may start with a similar method. As is clear from Section 2, using the dependency structure as a common ground for studying the relations between different frameworks is not new in the existing literature, but comparing the RST, PDTB and SDRT frameworks with this method has not yet been done. This approach will be our starting point, and the suitability of the dependency structure for representing discourse will be investigated empirically. The SciDTB corpus~\citep{yang-li-2018-scidtb}, which is annotated under the dependency framework, will be used for this purpose.

\textbf{RQ2:\footnote{In communication with Bonnie Webber, January, 2022. We thank her for pointing out this aspect.}} What properties of different frameworks should be kept and what properties should be ignored in the unification?

We present a non-exhaustive list of properties which we consider to have considerable influence on the unified discourse structure.
\begin{enumerate}
\item Nuclearity: \citet{marcu1996building} uses the nuclearity principle as the foundation for a formal treatment of compositionality in RST, meaning that two adjacent spans can be joined into a larger span by a rhetorical relation if and only if the relation holds between the most salient units of those spans. This assumption is criticized by~\citet{stede2008disentangling}, whose remedy is to separate different levels of discourse information, in line with the suggestions in~\citet{Knott00beyondelaboration:} and~\citet{moore-pollack-1992-problem}. Our strategy is to keep this property in the initial stage of experimentation. The existing methods for transforming RST trees to the dependency structure~\citep{hirao-etal-2013-single, li-etal-2014-text} rely heavily on the nuclearity principle; we will use these methods in the transformation and see what kinds of problems this procedure causes, particularly with respect to the PDTB framework, which does not enforce a hierarchical structure for complete coverage of the text.
\item Sentence-boundedness: The RST framework does not enforce well-formed discourse sub-trees for each sentence. However, it is found that 95\% of the discourse parse trees in RST-DT have well-formed sub-trees at the sentence level~\citep{soricut-marcu-2003-sentence}. For the PDTB framework, there is no restriction on how far an argument can be from its corresponding connective: it can be in the same sentence as the connective, in the sentence immediately preceding that of the connective, or in some non-adjacent sentence~\citep{Prasad2006ThePD}.
Moreover, the arguments are determined based on the \textit{Minimality Principle}: clauses and/or sentences that are minimally required for the interpretation of the relation should be included in the argument, while other spans that are relevant but not necessary can be annotated as supplementary information, labeled according to which argument they are supplementary to~\citep{prasad-etal-2008-penn}. The SDRT framework developed in~\citet{asher2003logics} does not specify the basic discourse unit, but in the annotation of the ANNODIS corpus, EDU segmentation follows principles similar to those of RST-DT. The formation of CDUs and the attachment of relations are where SDRT differs significantly from RST. A segment can be attached to another segment from the same sentence, the same paragraph or a larger context, and by one or possibly more relations. A CDU can be of any size and can contain segments that are far apart in the text, and relations may be annotated within the CDU\footnote{See section 3 of the ANNODIS annotation manual, available through \url{http://w3.erss.univ-tlse2.fr/textes/publications/CarnetsGrammaire/carnGram21.pdf}}. The differences between the RST framework and the PDTB framework in the criteria on location and extent for basic discourse unit identification and relation labeling may be partly attributed to different annotation procedures. In RST, EDU segmentation is performed first, and EDU linking and relation labelling are performed later. The balance between consistency and granularity is the major concern behind the strategy for EDU segmentation~\citep{carlson-etal-2001-building}. In contrast, in PDTB, the connectives are identified first, and their arguments are determined afterwards. Semantic relatedness is given greater weight, and the location and extent of the arguments can be determined more flexibly. On the whole, neither SDRT nor PDTB shows any tendency towards sentence-boundedness. We will investigate to what extent the tendency towards sentence-boundedness complicates the unification and what the consequences are if entity-based models and lexical-based models are incorporated.
\item Multi-sense annotation: As shown above, SDRT and PDTB allow multi-sense annotation while RST only allows one relation to be labeled. The single-sense constraint actually gives rise to ambiguity because of the multi-faceted nature of local coherence~\citep{stede2008disentangling}. For the unification task, we assume that multi-sense annotation is useful. However, we agree with the view expressed in~\citet{stede2008disentangling} that incrementally adding more relations as phenomena are recognized is not a promising direction. There are two possible approaches: one is to separate different dimensions of discourse information~\citep{stede2008disentangling}, and the other is to represent different kinds of discourse information simultaneously, similar to the approach adopted in~\citet{Knott00beyondelaboration:}. While multi-level annotation may reveal the interaction between discourse and other linguistic phenomena, it is less helpful for developing a discourse parser and requires more effort in annotation. The second approach may be conducive to computationally cheaper discourse processing when proper constraints are introduced.
\end{enumerate}

\textbf{RQ3:} How can entity-based models and lexical-based models be incorporated into the unified framework?
In the PDTB framework, lexical-based discourse relations are taken to be associated with anaphoric dependencies, which are anchored by discourse adverbials~\citep{anaphora-discourse} and annotated as a type of explicit relation. As for entity-based relations, PDTB uses the E\textsc{nt}R\textsc{el} label for this type of relation when neither explicit nor implicit relations can be identified and only entity-based coherence relations are present. In the RST framework, the ELABORATION relation is actually a relation between entities. However, it is encoded in the same way as the other relations between propositions, which bedevils the framework~\citep{Knott00beyondelaboration:}. Further empirical studies may be needed to identify how different frameworks represent these different kinds of discourse information. The main challenge is to use a relatively simple structure to represent different types of discourse information while keeping the complexity relatively low.

\textbf{RQ4:} How can the unified framework be evaluated?

We will use intrinsic evaluation to assess the complexity of the discourse structure. Extrinsic evaluation will be used to assess the effectiveness of the unified framework. The downstream tasks in the extrinsic evaluation include text summarization and document discrimination, two typical tasks for evaluating discourse models. The document discrimination task requires a coherence score to be assigned to a document: the originally written document is considered the most coherent, and the more its sentences are permuted, the less coherent it is considered to be. For comparison with previous studies, we will use the CNN and Dailymail dataset~\citep{cnndailymaildataset15} for the text summarization task, and the method and dataset\footnote{\url{https://github.com/AiliAili/Coherence_Modelling}} of~\citet{shen-etal-2021-evaluating} to control the degree of coherence for the document discrimination task. Previous studies that use multi-task learning and GNNs to encode different types of discourse information will be re-investigated to test the effectiveness of the unified framework. As we may have to ignore some properties, we will examine what might be lost with the unified framework.
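The evaluation protocol for document discrimination can be made concrete with a short sketch. In the following Python code, \texttt{score} is a placeholder for any coherence model, and random shuffling stands in for the controlled permutation scheme of~\citet{shen-etal-2021-evaluating}; both are illustrative assumptions.

\begin{verbatim}
# Minimal sketch of the document-discrimination protocol: a coherence
# model should rank an original document above permutations of its
# sentences.
import random
from typing import Callable, List

def discrimination_accuracy(docs: List[List[str]],
                            score: Callable[[List[str]], float],
                            n_perms: int = 20,
                            seed: int = 0) -> float:
    """Fraction of (original, permuted) pairs ranked correctly."""
    rng = random.Random(seed)
    correct, total = 0, 0
    for sentences in docs:
        original_score = score(sentences)
        for _ in range(n_perms):
            shuffled = sentences[:]
            rng.shuffle(shuffled)
            if shuffled == sentences:   # skip identity permutations
                continue
            correct += original_score > score(shuffled)
            total += 1
    return correct / total if total else 0.0
\end{verbatim}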
\section{Conclusion}
We propose to unify the RST, PDTB and SDRT frameworks, which may enable discourse corpora annotated under different frameworks to be used jointly and realize the potential synergy between the frameworks. The major challenges include determining which structure to use in the unified framework, choosing which properties to keep and which to ignore, and incorporating entity-based models and lexical-based models into the unified framework. We will start from existing research and try to find a computationally less expensive approach to the task. Extensive experiments will be conducted to investigate how effective the unified framework is and how it can be used. An empirical evaluation of what might be lost through the unification will be performed.

\section{Acknowledgements}
We thank Bonnie Webber for valuable feedback that greatly shaped the work. We are grateful to the anonymous reviewers for detailed and insightful comments that improved the work considerably, and to Mark-Jan Nederhof for proof-reading the manuscript. The author is funded by a University of St Andrews--China Scholarship Council joint scholarship (No.~202008300012).

\section{Ethical Considerations and Limitations}
The corpora are used in compliance with the licence requirements. The ANNODIS corpus is available under Creative Commons BY-NC-SA 3.0. RST-DT is distributed by the Linguistic Data Consortium: Carlson, Lynn, Daniel Marcu, and Mary Ellen Okurowski. RST Discourse Treebank LDC2002T07. Web Download. Philadelphia: Linguistic Data Consortium, 2002. PDTB 3.0 is also distributed by the Linguistic Data Consortium: Prasad, Rashmi, et al. Penn Discourse Treebank Version 3.0 LDC2019T05. Web Download. Philadelphia: Linguistic Data Consortium, 2019.

\textbf{Bender Rule} English is the language studied in this work.
\section*{Introduction} Let $R$ be a commutative ring, $I=(f_1,\ldots ,f_n)$ a finitely generated ideal and $f$ an arbitrary element of $R$. A very natural and important question, not only from the theoretical but also from the computational point of view, is to determine whether $f$ belongs to the ideal $I$ or to some closure of it (for example to the radical, the integral closure, the plus closure, the solid closure, the tight closure, among others). To answer this question the concept of a forcing algebra, introduced by Mel Hochster in the context of solid closure \cite{hochstersolid}, is important (for more information on forcing algebras see \cite{brennerforcingalgebra}, \cite{brenneraffine}):

\begin{definitionintro}
Let $R$ be a commutative ring, $I=(f_1, \ldots , f_n)$ an ideal and $f \in R$ another element. Then the \emph{forcing algebra} of these (forcing) data is
\[ A=R[T_1, \ldots, T_n] /(f_1T_1 + \ldots + f_nT_n+f ) \, . \]
\end{definitionintro}

Intuitively, when we divide by the forcing equation $f_1T_1 + \ldots + f_nT_n+f$ we are ``forcing'' the element $f$ to belong to the expansion of $I$ in $A$. Besides, the forcing algebra has the universal property that for any $R$-algebra $S$ such that $f\in IS$, there exists a (non-unique) homomorphism of $R$-algebras $\theta: A\rightarrow S$. Furthermore, the formation of forcing algebras commutes with arbitrary change of base. Formally, if $\alpha: R\rightarrow S$ is a homomorphism of rings, then
\[S\otimes_R A\cong S[T_1,\ldots,T_n]/(\alpha(f_1)T_1+\cdots+\alpha(f_n)T_n+\alpha(f))\]
is the forcing algebra for the forcing data $\alpha(f_1),\ldots ,\alpha(f_n),\alpha(f)$. In particular, if ${\mathfrak p} \in X =\Spec R$, then the fiber of (the forcing morphism) $\varphi: Y:=\Spec A \rightarrow X:=\Spec R$ over ${\mathfrak p} $, $\varphi^{-1}( {\mathfrak p})$, is the scheme-theoretic fiber $\Spec (\kappa ({\mathfrak p})\otimes_R A)$, where $\kappa ({\mathfrak p})=R_{\mathfrak p}/ {\mathfrak p} R_{\mathfrak p}$ is its residue field. In this case, the fiber ring $\kappa ({\mathfrak p}) \otimes_R A$ is the forcing algebra over $\kappa ({\mathfrak p})$ corresponding to the forcing data $f_1({\mathfrak p}),\ldots ,f_n({\mathfrak p}),f({\mathfrak p})$, where we denote by $g({\mathfrak p})\in \kappa ({\mathfrak p})$ the image (the evaluation) of $g\in R$ in the residue field $\kappa ({\mathfrak p})=R_{{\mathfrak p}}/{\mathfrak p} R_{{\mathfrak p}}$. Also, note that for any $f_i$ we have $A_{f_i}\cong R_{f_i}[T_1,\ldots,\check{T_i},\ldots,T_n]$, via the $R_{f_i}$-homomorphism sending $T_i\mapsto -\sum_{j\neq i}(f_j/f_i)T_j-(f/f_i)$ and $T_r\mapsto T_r$ for $r\neq i$. An extreme case occurs when the forcing data consist only of $f$; then we define $I$ as the zero ideal, and therefore $A=R/(f)$. Besides, if $n=1$, then intuitively the forcing algebra $A=R[T_1]/(f_1T_1-f)$ can be considered as the graph of the ``rational'' function $f/f_1$. We will explore this example in more detail later on.

By means of forcing algebras and forcing morphisms one can rewrite the fact that the element $f$ belongs to a particular closure of $I$. We shall illustrate this now. Firstly, the fact that $f\in I$ is equivalent to the existence of a homomorphism of $R$-algebras $\alpha:A\rightarrow R$, which is in turn equivalent to the existence of a section $s:X\rightarrow Y$, i.e., $\varphi\circ s=Id_X$. Secondly, $f$ belongs to the radical of $I$ if and only if $\varphi$ is surjective.
In fact, suppose that $\varphi$ is surjective and fix a prime ideal ${\mathfrak p}\in X$ containing $I$. Then $\varphi^{-1}({\mathfrak p})={\rm Spec}\,(\kappa({\mathfrak p})\otimes_R A)\neq \emptyset$, that is, $\kappa({\mathfrak p})\otimes_R A=\kappa({\mathfrak p})[T_1,\ldots,T_n]/(f_1({\mathfrak p})T_1+\cdots+f_n({\mathfrak p})T_n+f({\mathfrak p}))\neq 0$. But each $f_i({\mathfrak p})=0$, since $f_i\in {\mathfrak p}$; therefore $f({\mathfrak p})$ is also zero, and thus $f\in {\mathfrak p}$. In conclusion, $f\in \bigcap_{{\mathfrak p}\in V(I)}{\mathfrak p}= \rad I$. Conversely, suppose that $f\in \rad I$ and take an arbitrary prime ${\mathfrak p}\in X$. If $I$ is not contained in ${\mathfrak p}$, then some $f_j({\mathfrak p})\neq0$, and so $\kappa({\mathfrak p})\otimes_R A\neq0$, that is, $\varphi^{-1}({\mathfrak p})\neq\emptyset$. Lastly, if $I\subseteq{\mathfrak p}$ then $f\in {\mathfrak p}$, and therefore $\kappa({\mathfrak p})\otimes_R A=\kappa({\mathfrak p})[T_1,\ldots,T_n]\neq0$, and thus $\varphi^{-1}({\mathfrak p})=\mathbb{A}_{\kappa({\mathfrak p})}^{n}\neq\emptyset$. In conclusion, $\varphi$ is surjective.

Thirdly, let us review the definition of the tight closure of an ideal $I$ of a commutative ring $R$ of characteristic $p>0$. We say that $u\in R$ belongs to the \emph{tight closure} of $I$, denoted by $I^*$, if there exists a $c\in R$ not in any minimal prime, such that for all $q=p^e\gg0$, $cu^q\in I^{[q]}$, where $I^{[q]}$ denotes the expansion of $I$ under the $e$-th iterated composition of the Frobenius homomorphism $F:R\rightarrow R$, sending $x\mapsto x^p$. Tight closure is one of the most important closure operations in commutative algebra and was introduced in the 1980s by M. Hochster and C. Huneke as an attempt to prove the ``Homological Conjectures'' (for more information see \cite{Hochsterhuneketightclosure}). Let $(R,m)$ be a normal local domain of dimension two, and suppose that $I=(f_1,\ldots,f_n)$ is an $m$-primary ideal and $f$ is an arbitrary element of $R$. Then $f\in I^*$ if and only if $D(IA)=\Spec\,A\smallsetminus V(IA)$ is not an affine scheme, i.e., not of the form $\Spec\,C$ for any commutative ring $C$ (see \cite[Corollary 5.4.]{brenneraffine}).

Fourthly, forcing algebras have their origin in the definition of solid closure, which was an effort to define a closure operation for any commutative ring, independently of the characteristic (see \cite{hochstersolid}). Explicitly, let $R$ be a Noetherian ring, let $I\subseteq R$ be an ideal and let $f\in R$. Then $f$ belongs to the \emph{solid closure} of $I$ if for every maximal ideal $m$ of $R$ and every minimal prime ideal ${\mathfrak q}$ of its completion $\widehat{R}_m$, the complete local domain $(R'=\widehat{R}_m/{\mathfrak q},m')$ satisfies $H_{m'}^d(A')\neq0$, where $A'$ is the forcing algebra obtained after the change of base $R\hookrightarrow R'$ and $d=\dim R'$ (see \cite[Definition 2.4., p. 15]{brennerbarcelona}).

Fifthly, let us consider an integral domain $R$ and an ideal $I\subseteq R$. Then $u$ belongs to the \emph{plus closure} of $I$, denoted by $I^+$, if there exists a finite extension of domains $R\hookrightarrow S$ such that $u\in IS$. If $R$ is a Noetherian domain, $I=(f_1,\ldots,f_n)\subseteq R$ is an ideal and $f\in R$, then $f\in I^+$ if and only if there exists an irreducible closed subscheme $\widetilde{Y}\subseteq Y=\Spec\,A$ such that $\dim\widetilde{Y}=\dim X$, $\varphi(\widetilde{Y})=X$ and, for each $x\in X$, $\varphi^{-1}(x)\cap\widetilde{Y}$ is finite (for a projective version of this criterion see \cite[Proposition 3.12]{brennerbarcelona}).
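Before completing this list with the integral closure, let us observe that the first two memberships above are effectively computable, which partly explains the computational relevance of the question: $f\in I$ can be decided by reduction modulo a Gr\"obner basis of $I$, and $f\in\rad I$ by the classical Rabinowitsch trick, namely $f\in\rad I$ if and only if $1$ belongs to the ideal generated by $I$ and $1-tf$ in $R[t]$. The following SymPy sketch illustrates both tests; the polynomial data are purely illustrative.

\begin{verbatim}
# Minimal sketch: ideal membership and radical membership over Q[x, y].
from sympy import symbols, groebner

x, y, t = symbols('x y t')
f1, f2 = x**2, y**3            # generators of I (illustrative)
f = x*y                        # candidate element

# f in I?  Reduce f modulo a Groebner basis of I.
G = groebner([f1, f2], x, y, order='grevlex')
print(G.contains(f))           # False: x*y is not in (x**2, y**3)

# f in rad(I)?  Rabinowitsch trick: adjoin 1 - t*f and test whether
# the resulting ideal is the unit ideal.
Grad = groebner([f1, f2, 1 - t*f], x, y, t, order='grevlex')
print(Grad.contains(1))        # True: (x*y)**3 = x*(x**2)*(y**3) lies in I
\end{verbatim}

Deciding membership in the integral, plus or tight closure is considerably harder, which is precisely where the geometry of the forcing morphism enters.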
Finally, if $R$ denotes an arbitrary commutative ring and $I\subseteq R$ is an ideal, then we say that $u$ belongs to the \emph{integral closure} of $I$, denoted by $\overline{I}$, if there exist $n\in \mathbb{N}$ and $a_i\in I^i$, for $i=1,\ldots,n$, with
\[u^n+a_1u^{n-1}+\cdots+a_n=0.\]
We proved in \cite[Chapter 2]{brennergomezconnected} that $f\in \overline{I}$, where $I=(f_1,\ldots,f_n)\subseteq R$, if and only if the corresponding forcing morphism $\varphi$ is universally connected, i.e., $\Spec (S\otimes_RA)$ is a connected space for any Noetherian change of base $R\rightarrow S$ such that $\Spec\,S$ is connected. From this we derive an integrality criterion for fractions $r/s\in K(R)$, where $R$ denotes a Noetherian domain, in terms of the universal connectedness of the natural forcing algebra $A:=R[T]/(sT+r)$.

In view of these results, it seems very natural to study in commutative algebra the question of finding a closure operation with ``good'' properties (see \cite{epsteinclosureguide}) in terms of finding suitable algebraic-geometric, topological or homological properties of the forcing morphism. This approach is close to the philosophy of Grothendieck's EGA of defining and studying objects in a relative context (see \cite{hartshornealgebraic} and \cite{EGAI}). A simple and deep example of this approach is the counterexample to one of the most basic and important open questions on tight closure, the Localization Problem, i.e., the question whether tight closure commutes with localization. This was done by H. Brenner and P. Monsky using vector bundle techniques and geometric deformations of tight closure (see \cite{brennerbarcelona}). Another good example going in this direction is a general definition of forcing morphisms for arbitrary schemes. Specifically, let $X$ and $Y$ be arbitrary schemes, suppose that $i:Z\rightarrow X$ is a closed subscheme and let $f\in \Gamma(X,{\mathcal O}_X)$ be a global section. Then a morphism $\varphi:Y\rightarrow X$ is a \emph{forcing morphism} for $f$ and $Z$ if \emph{i)} the pull-back of the restriction of $f$ to $Z$, $f_{|Z}=i_Z^{\sharp}(f)$, is zero, i.e., $\varphi_{|\varphi^{-1}(Z)}^{\sharp}(f_{|Z})=0$; \emph{ii)} for any morphism of schemes $\psi: W\rightarrow X$ with the same property, i.e., $\psi_{|\psi^{-1}(Z)}^{\sharp}(f_{|Z})=0$, there exists a (non-unique) morphism $\widetilde{\psi}:W\rightarrow Y$ such that $\psi=\varphi\circ\widetilde{\psi}$. This is a natural generalization of the universal property of a forcing algebra, but in the relative context and in a category including that of commutative rings with unity.
\newline\indent In general, for an integral base ring there are two kinds of irreducible components of the prime spectrum of a forcing algebra. In fact, let $R$ be a Noetherian domain, $I=(f_1 , \ldots ,f_n) $ an ideal, $f \in R$ and $A=R[T_1, \ldots , T_n]/(f_1T_1+ \ldots +f_nT_n+f)$ the forcing algebra for these data. For $I \neq 0$ there exists a unique irreducible component $H \subseteq \Spec A$ (``horizontal component'') with the property of dominating the base $\Spec R$ (i.e., the image of $H$ is dense). This component is given (inside $R[ T_1, \ldots , T_n]$) by
\[{\mathfrak p} = R[ T_1, \ldots , T_n] \cap (f_1T_1+ \ldots + f_nT_n+f) Q(R)[ T_1, \ldots , T_n] \, ,\]
where $Q(R)$ denotes the quotient field of $R$.
All other irreducible components of $\Spec A$ (``vertical components'') are of the form
\[V( {\mathfrak q} R[ T_1, \ldots , T_n] )\]
for some prime ideal ${\mathfrak q} \subseteq R$ which is minimal over $(f_1, \ldots , f_n,f)$ (for a complete proof of this fact and more information see \cite[Lemma 2.1.]{brennergomezconnected}).

Finally, let us briefly describe the content of the following sections of this paper. We study the case corresponding to a submodule $N$ of a finitely generated module $M$ and an arbitrary element $s\in M$. This case corresponds to forcing algebras with several forcing equations
\[A=R[T_1,\ldots ,T_n]/\left\langle \left(\begin{array}{ccc} f_{11}& \ldots &f_{1n} \\ \vdots& \ddots & \vdots \\ f_{m1} & \ldots & f_{mn} \end{array} \right) \cdot \left( \begin{array}{c}T_1 \\ \vdots \\ T_n \end{array}\right)+\left( \begin{array}{c}f_1\\ \vdots \\ f_m\end{array} \right) \right\rangle. \]
Even very basic properties of forcing algebras are not yet understood, and this paper deals to some extent with such questions. For example, we describe how to perform elementary row and column operations on the forcing algebra by means of elementary affine linear isomorphisms, and we establish a specific relation between regular sequences of forcing elements and the Fitting ideals of the corresponding forcing matrix (\S1). Besides, the irreducibility of the forcing algebra over a Noetherian domain can be obtained just by assuming that the height of $I$ is at least $2$ (\S2). Next, we show with two kinds of examples that reducedness of the base is not enough for reducedness of the forcing algebra; as a natural by-product, we see that a Noetherian ring is a finite product of fields if and only if every element belongs to the ideal generated by its square (\S3). Moreover, if we allow in addition the possibility that $I$ is the whole base ring $R$, then we get a complete characterization of the integrality of forcing algebras over UFDs (\S4). Furthermore, by approaching the problem through simple examples and increasing the dimension of the base space step by step, we obtain, in the case that our base is a ring of polynomials over a perfect field, a quite simple normality criterion for forcing algebras in terms of the codimensions of the ideal $I$ and of the ideal $I+D$, where $D$ is generated by the partial derivatives of the data. In the case that we work over an algebraically closed field and our base is the coordinate ring of an irreducible variety $X$, the normality of the (forcing) hyperplane defined by the forcing equation can be characterized by the condition that the codimension of the singular locus of $X$ in the whole affine space is at least three (\S5). Here it is worth noting that we present the formal proof of this criterion as well as the ``informal'' way in which it was originally found, i.e., by analyzing simple examples while gradually increasing the generality of the variables describing them, in order to slowly develop a deeper intuition of the phenomenon involved. As an instance of the importance of examples we analyze a specific forcing algebra, which we call the ``enlightening'' example, because it is a very natural recurring point at which to verify the different results that we have already studied. In this respect this example is no less important than the preceding results.
Rather, it is another valuable result where the different propositions and theorems come together (\S6). Moreover, we compute explicitly the normalization of a forcing algebra coming from the examples that guided us to the normality criterion, and in that process we deal with very elementary and fundamental questions related to normal domains and ideals of denominators (\S7).

\section{Forcing Algebras with several Forcing Equations}
Now we study just a few elementary properties of forcing algebras defined by several forcing equations, which lead us in a natural way to the understanding of linear algebra over the base ring $R$. This section can be understood as a simple invitation to this barely explored field of mathematics; for further reading we recommend \cite{brennerforcingalgebra}. In this case we can write the forcing algebra in matrix form:
\[A=R[T_1,\ldots ,T_n]/\left\langle \left(\begin{array}{ccc} f_{11}& \ldots &f_{1n} \\ \vdots& \ddots & \vdots \\ f_{m1} & \ldots & f_{mn} \end{array} \right) \cdot \left( \begin{array}{c}T_1 \\ \vdots \\ T_n \end{array}\right)+\left( \begin{array}{c}f_1\\ \vdots \\ f_m\end{array} \right) \right\rangle. \]
This corresponds to a submodule $N \subseteq M $ of finitely generated $R$-modules and an element $f \in M$ via a free representation of these data (see \cite[p. 3]{brennerforcingalgebra}).

Now we study how the forcing algebra behaves when we perform elementary row or column operations on the associated matrix $M$. Recall that the matrix notation in the forcing algebra just means that we consider the ideal generated by the rows of the resulting matrix, after performing the matrix multiplications and additions. First, let $l_1,\ldots,l_m$ denote the rows of $M$ and let $c\in R$ be an arbitrary constant. Performing the row operation $l_j\mapsto cl_i+l_j$ (with $i\neq j$; that is, replacing the $j$th row by $c$ times the $i$th row plus the $j$th row) just means changing the generators $h_1,\ldots,h_m$ to the new generators $h_1,\ldots,h_{j-1},ch_i+h_j,h_{j+1},\ldots,h_m$. The ideals generated by these two sets of forcing elements coincide, and therefore the associated forcing algebras are the same. Similarly, if we perform operations of the form $l_i\leftrightarrow l_j$ or $l_i\mapsto cl_i$, where $c$ is an invertible element of $R$ (that is, swapping two rows or multiplying a row by a unit of $R$), then the forcing algebra does not change.

For column operations the situation is a little more subtle. Let $\left\lbrace C_1,\ldots,C_n\right\rbrace$ be the columns of the matrix. Consider the column operation $C_j\mapsto dC_i+C_j$, where $d\in R$, and define the following automorphism $\varphi$ of the ring of polynomials $R[T_1,\ldots,T_n]$: it sends $T_s\mapsto T_s$, for $s\neq i$, and $T_i\mapsto dT_j+T_i$. Then
\[\varphi(h_r)=f_{r1}T_1+\cdots+f_{ri}(dT_j+T_i)+\cdots+f_{rn}T_n=\]
\[f_{r1}T_1+\cdots+(df_{ri}+f_{rj})T_j+\cdots+f_{rn}T_n,\]
and hence $\varphi$ induces an isomorphism between the forcing algebra with matrix $M$ and the forcing algebra whose matrix is obtained from $M$ by performing the previous column operation. Similarly, for operations of the form $C_i\leftrightarrow C_j$ and $C_i\mapsto dC_i$, where $d\in R$ is an invertible element, the resulting forcing algebras are isomorphic.
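The invariance under row operations can also be verified computationally: since reduced Gr\"obner bases are canonical for a fixed monomial order, two sets of forcing equations generate the same ideal exactly when their reduced bases coincide. The following SymPy sketch checks this on illustrative data.

\begin{verbatim}
# Minimal sketch: the elementary row operation l2 -> c*l1 + l2 on the
# forcing equations does not change the ideal they generate, hence not
# the forcing algebra.  The forcing data below are illustrative.
from sympy import symbols, groebner

x, y, T1, T2 = symbols('x y T1 T2')
h1 = x*T1 + y*T2 + x*y          # first forcing equation
h2 = y*T1 + x*T2 + x + y        # second forcing equation
c = x                           # the scalar may be any ring element

before = groebner([h1, h2], x, y, T1, T2, order='grevlex')
after = groebner([h1, c*h1 + h2], x, y, T1, T2, order='grevlex')
print(before.exprs == after.exprs)   # True: the ideals coincide
\end{verbatim}

Note that the scalar need not be a unit here, in contrast to the operation $l_i\mapsto cl_i$, which requires $c$ to be invertible.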
Now, if $R$ is a field and the rank of the associated matrix $M$ is $r$, where $r\leq {\rm min}(m,n)$, then by performing row and column operations on the associated matrix we can obtain a matrix formed by the $r\times r$ identity matrix in the upper-left corner and zeros elsewhere. Therefore the elements $h_i$ take the following simple form: $h_i=T_i+g_i$, for $i=1,\ldots,r$, and $h_i=g_i$, for $i>r$, for some $g_i\in R$ (these $g_i$ can appear only in the inhomogeneous case, corresponding to the changes made on the independent vector formed by the $f_j$). Thus the forcing algebra $A$ is isomorphic either to zero (in the case that there exists some $g_i\neq0$ with $i>r$) or to $k[T_{r+1},\ldots,T_n]$. This allows us to present the following lemma describing the fibers of a forcing algebra as affine spaces over the base residue field.

\begin{lemma} \label{forcingfiber}
Let $R$ be a commutative ring and let $A$ be the forcing algebra corresponding to the data $\left\lbrace f_{ij}, f_i \right \rbrace$. Let ${\mathfrak p} \in X$ be an arbitrary prime ideal of $R$ and $r$ the rank of the matrix $\left\lbrace f_{ij}({\mathfrak p}) \right\rbrace$. Then the fiber over ${\mathfrak p} $ is empty or isomorphic to the affine space ${ \mathbb A}^{n-r}_{\kappa ({\mathfrak p})}$.
\end{lemma}
\begin{proof}
We know from the comments in the introduction that the fiber ring over ${\mathfrak p}$ is $\kappa ({\mathfrak p})\otimes_RA$, which is just
\[\kappa ({\mathfrak p})[T_1,\ldots ,T_n]/\left\langle \left(\begin{array}{ccc} f_{11}({\mathfrak p})&\ldots &f_{1n}({\mathfrak p})\\ \vdots& \ddots &\vdots \\f_{m1}({\mathfrak p})&\ldots &f_{mn}({\mathfrak p})\end{array} \right) \cdot \left( \begin{array}{c}T_1 \\ \vdots \\ T_n \end{array}\right)+\left( \begin{array}{c}f_1({\mathfrak p})\\ \vdots \\ f_m({\mathfrak p})\end{array} \right) \right\rangle \, .\]
Now, performing elementary row and column operations on the matrix $(f_{ij}({\mathfrak p}))$, as indicated before, we can obtain a matrix with zero entries except for the first $r$ entries of the principal diagonal, which are ones, plus an independent vector. In conclusion, after performing all the necessary elementary operations, we obtain an isomorphism from the fiber ring to a very simple forcing algebra
\[B=\kappa({\mathfrak p})[T_1,\ldots,T_n]/(T_1+g_1,\ldots,T_r+g_r,g_{r+1},\ldots,g_m),\]
corresponding to the matrix with zero entries except for the first $r$ entries of the principal diagonal, which are ones. But then $B$ is clearly isomorphic to the affine ring $\kappa({\mathfrak p})[T_{r+1},\ldots,T_n]$ if $g_{r+1}=\cdots=g_m=0$, and $B=0$ otherwise, proving our lemma.
\end{proof}

If $\kappa({\mathfrak p})$ is algebraically closed, then the fiber over a point ${\mathfrak p} \in \Spec R$ of this forcing algebra is just the solution set of the corresponding system of inhomogeneous linear equations over $\kappa ({\mathfrak p})$. If the vector $(f_1, \ldots , f_m)$ is zero, then we are dealing with a ``homogeneous'' forcing algebra. In this case there is a (zero- or ``horizontal'') section $s: X =\Spec R \rightarrow Y = \Spec A$ coming from the homomorphism of $R$-algebras from $A$ to $R$ sending each $T_i$ to zero. This section sends a prime ideal ${\mathfrak p} \in X$ to the prime ideal $(T_1,\ldots ,T_n)+{\mathfrak p} \in Y$.
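For a concrete instance of Lemma \ref{forcingfiber}, the rank computation and the consistency check can be carried out directly over the residue field. In the following SymPy sketch the evaluated matrix and vector are illustrative; here $n=3$, the rank is $r=1$, and the fiber is an affine plane.

\begin{verbatim}
# Minimal sketch of Lemma "forcingfiber" at a rational point: the
# fiber is empty iff the evaluated linear system M*T = -v is
# inconsistent; otherwise it is an affine space of dimension n - r.
from sympy import Matrix

M = Matrix([[1, 2, 0],
            [2, 4, 0]])     # (f_ij(p)) evaluated at a point p
v = Matrix([3, 6])          # (f_i(p)), the inhomogeneous part

r = M.rank()
if M.row_join(-v).rank() > r:   # augmented rank jumps: no solution
    print("empty fiber")
else:
    print("affine space of dimension", M.cols - r)   # prints 2
\end{verbatim}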
\begin{remark}
If all the $f_k$ are zero and $m=n$, then the ideal $\mathfrak{a}$ of forcing equations is generated by the linear forms $h_i=f_{i1}T_1+\cdots +f_{in}T_n$, and in this case multiplying by the adjoint matrix of $M$, denoted by ${\rm adj}M$, shows that the elements $\det M\,T_i$ belong to $\mathfrak{a}$. In fact,
\[\left( \begin{array}{c}\det M T_1\\\vdots\\ \det M T_n \end{array}\right)=\det M\cdot I_{nn}\cdot\left( \begin{array}{c}T_1\\\vdots\\T_n\end{array}\right) = {\rm adj}M\cdot M\cdot\left( \begin{array}{c}T_1\\\vdots\\T_n\end{array}\right)={\rm adj}M\cdot \left( \begin{array}{c}h_1\\\vdots\\h_n\end{array}\right), \]
where the entries of the last vector belong to $\mathfrak{a}$. From this fact we deduce that, when the determinant of $M$ is a unit in $R$, then $\mathfrak{a}=(T_1,\ldots,T_n)$ and the forcing algebra is isomorphic to the base ring $R$. Note that the previous argument also works in the inhomogeneous case.
\end{remark}

Now we study the homogeneous case in which the elements $\left\lbrace h_1,\ldots,h_m\right\rbrace$ form a regular sequence. First, we need the following general fact about the pure codimension of regular sequences in Noetherian rings: if $S$ is a Noetherian ring, $\left\lbrace r_1,\ldots,r_m\right\rbrace \subseteq S$ is a regular sequence and $I$ is the ideal generated by these elements, then the pure codimension of $I$ is $m$. For a proof see \cite{gomezramirezthesis}. Besides, for $j\in \left\lbrace 1,\ldots,\min(m,n) \right\rbrace$ we define the Fitting ideal $I_j$ as the ideal generated by the minors of size $j$ of the matrix $M$. This corresponds to the standard definition of Fitting ideals, regarding $M$ as an $R$-homomorphism of free modules (see \cite[p. 497]{eisenbud}).

\begin{proposition}
Let $R$ be a Noetherian integral domain and $A$ the homogeneous forcing algebra corresponding to the data $\left\lbrace f_{ij}\right\rbrace$, with $i=1,\ldots,m$ and $j=1,\ldots,n$. Suppose that the forcing equations $\left\lbrace h_1,\ldots,h_m \right\rbrace$ form a regular sequence in $B:=R[T_1,\ldots,T_n]$. Then $n\geq m$ and $I_{{\rm min}(m,n)}\neq (0).$
\end{proposition}

\begin{proof}
First, note that the ideal $I$ generated by the forcing elements is contained in the homogeneous ideal $P=(T_1,\ldots,T_n)$; hence $A=B/I$ surjects onto $B/P\cong R$, and therefore $\dim A\geq \dim R$. On the other hand, if we consider a saturated chain of primes in $A$,
\[P_0\subsetneqq P_1\subsetneqq\cdots\subsetneqq P_{\dim A},\]
where $P_0$ is a minimal prime over $I$, then by the former comments ${\rm ht}(P_0)=m$, and thus, completing this chain downwards with a saturated chain for $P_0$ in $B$ of length $m$, we see that $\dim B\geq m+\dim A.$ Since $\dim B=\dim R+n$, we get $\dim R+n\geq m+\dim A\geq m+\dim R$, which implies that $n \geq m$.

For the second part, let us consider the matrix $M$ over the field of fractions $K$ of $R$. It is an elementary fact that the rank of $M$ is $\leq s$ if and only if every minor of size $s+1$ of $M$ is zero; indeed, performing a row operation changes a fixed minor of size $s+1$ of the original matrix only by adding a constant multiple of another minor of the same size of the changed matrix (this is a concrete way of saying that a row operation amounts to multiplying by an invertible matrix, so whether the determinant is zero or not is independent of the row operation).
Now suppose, by contradiction, that $I_{{\rm min}(m,n)}=0$. Then the rank of $M$ over $K$ is strictly smaller than ${\rm min}(m,n)$, and thus the rows of $M$ are linearly dependent over $K$. Without loss of generality, we may assume that there is a $j\in \left\lbrace 1,\ldots,m\right\rbrace$ such that the $j$th row of $M$, $l_j$, is a linear combination of the preceding ones, that is, there exist $\alpha_i\in K$ such that $l_j=\sum_{i=1}^{j-1}\alpha_il_i$. After multiplying by a nonzero common multiple $\beta \in R$ of the denominators, we get an equation of the form $\beta l_j=\sum_{i=1}^{j-1}\gamma_il_i$, for some $\gamma_i \in R$. Multiplying this equality by the $n\times1$ vector given by the $T_i$, we see that $\beta h_j=\sum_{i=1}^{j-1}\gamma_i h_i$ in $B$, which implies that $h_j$ is a zero divisor in $B/(h_1,\ldots,h_{j-1})$, because $\beta \notin (T_1,\ldots,T_n)$ ($\beta$ is a nonzero constant polynomial in $B$) and therefore $\beta \notin (h_1,\ldots,h_{j-1})$. This contradicts the fact that $I$ is generated by a regular sequence.
\end{proof}

The converse of the previous proposition is false, as the following example shows.

\begin{example}
Consider $R=k[x]$, $B=R[T_1,T_2]$, $h_1=xT_1-xT_2$ and $h_2=xT_1+xT_2$, where $k$ is a field with ${\rm char}\,k\neq 2$. Then $m=n=2$ and the determinant of the associated matrix is $2x^2$, but the sequence $\left\lbrace h_1,h_2\right\rbrace$ is not regular. In fact, the ideal $I$ generated by these elements has height just one, because it is contained in the principal ideal $(x)$, and therefore, by the former comments, $\left\lbrace h_1,h_2\right\rbrace$ cannot be a regular sequence. Geometrically, the variety defined by $I$ is the union of a line $V(T_1,T_2)$ and a plane $V(x)$. Intuitively, this example comes from the following observation. Suppose that we have the forcing algebra with equations $h_1'=T_1+T_2$ and $h_2'=T_1-T_2$. If we consider the line $V(T_1-T_2,T_1+T_2)=V(T_1,T_2)$ (whose associated determinant is $2\neq0$) and multiply these equations by $x$, we obtain a variety that is automatically the union of this line with the plane $V(x)\subseteq k^3$, which has bigger dimension; the associated determinant of the new variety (our former example) is just $x^2$ times the former determinant. This process gives us a new variety with nonzero determinant but whose ideal has smaller codimension.
\end{example}

\section{Irreducibility}
Here we shall see that if $A$ is a forcing algebra over a Noetherian integral domain such that ${\rm ht}(f,f_1,\ldots,f_n)\geq 2$, where $\{f_1,\ldots,f_n,f\}$ is the forcing data, then $A$ is an irreducible ring (i.e., $A$ has just one minimal prime).

\begin{theorem}\label{irreducibility}
Let $R$ be a Noetherian integral domain,
\[A=R[T_1,\ldots,T_n]/(f_1T_1+\cdots+f_nT_n+f),\]
$h=f_1T_1+\cdots+f_nT_n+f$, where $f_1,\ldots,f_n,f\in R$, and $J=(f,f_1,\ldots,f_n)$. Assume that ${\rm ht}\,J\geq2$. Then $A$ is an irreducible ring.
\end{theorem}

\begin{proof}
By \cite[Lemma 2.1.(2)]{brennergomezconnected}, it is enough to see that for any minimal prime ${\mathfrak q}\subseteq R$ of $J$, the ideal ${\mathfrak q} B$ is not minimal over $(h)$, because in that case $A$ has just the horizontal component, and is therefore irreducible. So let ${\mathfrak q}\subseteq R$ be minimal over $J$. Then
\[{\rm ht}\,{\mathfrak q} B\geq {\rm ht}\,{\mathfrak q}\geq{\rm ht}\,J\geq2.\]
Therefore ${\mathfrak q} B$ is not minimal over $(h)$, since by Krull's Principal Ideal Theorem the minimal primes over a principal ideal have height at most one.
\end{proof}

\section{(Non)Reducedness}
In this section we study the (non)reducedness of forcing algebras over a reduced base ring $R$. First, for a base field $k$, \cite[Lemma 3.1]{brennergomezconnected} shows that any forcing algebra is isomorphic to a ring of polynomials over $k$ or to the zero algebra, and is therefore reduced.
\newline\indent
Now, if $R$ is a local ring, let us first state an elementary remark concerning a generalization of the Monomial Conjecture (MC) (see \cite{hochstercanonical}) in dimension one.
\newline\indent
In dimension one, (MC) just says that if $x\in m$ does not belong to any minimal prime ideal of $R$, then $x^n\notin (x^{n+1})$ for all nonnegative integers $n$. In the next remark we prove a generalization of this fact for a quasi-local ring, that is, a ring that is not necessarily Noetherian.

\begin{remark}\label{mcindimone}
Let $(R,m)$ be a quasi-local ring and $x\in m$. Then there exists a positive integer $n$ such that $x^n\in (x^{n+1})$ if and only if $x$ is nilpotent. In fact, one direction is trivial; for the other one, assume that $x$ is not nilpotent and that there exist $n\in \mathbb{N}$ and $y\in R$ such that $x^n=yx^{n+1}$. Then $x^n(1-yx)=0$, but $1-yx\notin m$, therefore it is a unit, and hence $x^n=0$, which is a contradiction.
\end{remark}

\begin{example}
Let $(R,m)$ be a quasi-local reduced ring which is not a field, and let $f\in m\smallsetminus \{0\}$. Then the trivial forcing algebra $A:=R/(f^2)$ is non-reduced: clearly $\overline{f}\in {\rm nil}\,A$, and by the previous remark $\overline{f}\neq 0$, since $f$ is neither zero nor nilpotent and hence $f\notin(f^2)$. So there are always non-reduced forcing algebras over quasi-local reduced base rings that are not fields.
\end{example}

Now we want to study in which generality we can guarantee the existence of an element $f\in R$ such that $f\notin(f^2)$. The following proposition gives a compact characterization of the fact that every element $f\in R$ belongs to $(f^2)$.

\begin{proposition}
A commutative ring with unity $R$ is reduced of dimension zero if and only if $f\in (f^2)$ holds for every element $f\in R$. In particular, if $R$ is Noetherian, this is equivalent to the fact that $R$ is a finite direct product of fields.
\end{proposition}

\begin{proof}
Assume that $R$ is a reduced zero-dimensional ring. Then it is enough to check the desired property locally. But in that case $R$ is a ring with a unique prime ideal which is at the same time reduced; therefore it is a field, and in particular $f\in (f^2)$ for all $f\in R$. For the other direction, let $P$ be a prime ideal of $R$. Clearly the same property holds for $R/P$; thus, for any $g\in R/P$ there exists $c\in R/P$ such that $g=cg^2$. Therefore $g(1-cg)=0$, implying that either $g=0$ or $1-cg=0$, because $R/P$ is an integral domain. So $R/P$ is a field, and hence $R$ has dimension zero. Finally, from the hypothesis it follows by iteration that for any $f\in R$, $f\in (f^{2^m})$ for every natural number $m$. In particular, if $f$ is nilpotent with $f^m=0$ for some $m\in\mathbb{N}$, then $f\in (f^{2^k})=(0)$ once $2^k\geq m$, and thus $f=0$. In conclusion, $R$ is reduced. The second part is a direct consequence of the Chinese Remainder Theorem.
\end{proof}

\begin{remark}
The previous proposition guarantees the existence of non-reduced forcing algebras over any Noetherian ring which is not a finite direct product of fields. Specifically, as before, we choose an element $f\in R$ such that $f\notin (f^2)$ and define $A:=R/(f^2)$.
\end{remark}

Finally, we present a more interesting example of an irreducible but non-reduced forcing algebra over an affine base ring $R$ such that $\codim((f_1,\ldots,f_n),R)$ is arbitrarily large.

\begin{example}
Consider $R=k[x_1,\ldots,x_{n+1},z]/(x_1z,\ldots,x_{n+1}z)$, $h=x_1T_1+\cdots+x_{n+1}T_{n+1}+z^2$ and $A=R[T_1,\ldots,T_{n+1}]/(h)$. Then
\[\codim((x_1,\ldots,x_{n+1}),R)=n,\]
because the ring of polynomials is catenary. Besides, it is straightforward to verify that $z\notin (h)$, while $z^3=zh\in(h)$, since $x_iz=0$ in $R$. Therefore $A$ is non-reduced.
\end{example}

\section{Integrity over a UFD}
Now we prove an integrality criterion for forcing algebras over a UFD base ring, involving just the height of the ideal generated by the forcing data.

\begin{lemma} \label{domain}
Let $R$ be a Noetherian UFD which is not a field, let $J=(f_1,\ldots,f_n,f)$, where some $f_i\neq0$, and let $A$ be the forcing algebra corresponding to these data and $B=R[T_1,\ldots,T_n]$. Then $A$ is an integral domain if and only if $J=R$ or ${\rm ht}\,J\geq2$.
\end{lemma}

\begin{proof}
Along the proof we use the basic fact that in a UFD the notions of prime and irreducible element coincide. We prove the negation of the equivalence $((h)\in \Spec B) \Leftrightarrow (J=R \vee{\rm ht}\,J\geq2)$, which is formally equivalent to $((h)\notin \Spec B) \Leftrightarrow (J\neq R \wedge{\rm ht}\,J\leq1)$. We can abbreviate the condition on the right-hand side as ${\rm ht}\,J\leq1$, assuming implicitly that ${\rm ht}\,J$ is well defined, i.e., that $J\neq R$. So we shall see that $A$ is not an integral domain if and only if ${\rm ht}\,J\leq1$.

Assume first that ${\rm ht}\,J\leq1$. Since some $f_i\neq0$ we have $J\neq0$, and therefore ${\rm ht}\,J=1$. Choose a prime ideal $P$ of $R$ such that $P$ contains $J$ and ${\rm ht}\,P=1$, and choose $a\neq0$ in $P$. Now, one of the prime factors of $a$, say $p$, belongs to $P$, and therefore $P=(p)$, due to the fact that both prime ideals have height one. Thus there exist $g_i,g\in R$ such that $f_i=pg_i$ and $f=pg$; hence $h=f_1T_1+ \ldots +f_nT_n+f=p(g_1T_1+\ldots +g_nT_n+g)$ is the product of $p$ and an element which is not a unit, since some $f_i$ is different from zero. Therefore $h$ is not irreducible, or equivalently, $h$ is not a prime element. In conclusion, $A$ is not an integral domain.

Conversely, assume that $A$ is not an integral domain, or equivalently, that $h=f_1T_1+ \ldots +f_nT_n+f$ is not irreducible. Hence there exist polynomials $Q_1,Q_2\in R[T_1,\ldots ,T_n]$, neither of them a unit, such that $h=Q_1Q_2$. Now, the degree of $h$ is the sum of the degrees of $Q_1$ and $Q_2$, because $R$ is an integral domain; hence one of the two factors has degree zero, say $Q_1$. Comparing coefficients we get $f_i=Q_1g_i$ and $f=Q_1g$, with $Q_2=g_1T_1+\ldots+g_nT_n+g$. In conclusion, $J\subseteq(Q_1)\subsetneq R$, and therefore by Krull's Principal Ideal Theorem ${\rm ht}(J)\leq1$.
\end{proof}

\section{A Normality Criterion for Polynomials over a Perfect Field}
Now we will try to understand under which conditions on the elements $f_1,\ldots ,f_n,f\in R$ the associated forcing algebra is a normal domain, in the case that $R$ is the ring of polynomials over a perfect field. For some examples, results and intuition we assume a very basic and modest knowledge of algebraic geometry, mainly concerning affine varieties (see, for example, \cite{goertzwedhornalgebraic} and \cite[Chapter I]{hartshornealgebraic}). We state explicitly a corollary of the Jacobian Criterion, which we use later; for proofs see \cite[Theorem 16.19, Corollary 16.20]{eisenbud}.
\begin{corollary}\label{corjacobian}
Let $R=k[x_1,\ldots,x_r]/I$ be an affine ring over a perfect field $k$ and suppose that $I$ has pure codimension $c$, i.e., the height of any minimal prime over $I$ is exactly $c$. Suppose that $I=(f_1,\ldots,f_n)$. If $J$ is the ideal of $R$ generated by the $c\times c$ minors of the Jacobian matrix $(\partial f_i/\partial x_j)$, then $J$ defines the singular locus of $R$, in the sense that a prime $P$ of $R$ contains $J$ if and only if $R_P$ is not a regular local ring.
\end{corollary}

Besides, let us recall Serre's Criterion for normality of an arbitrary Noetherian ring (see \cite[Theorem 11.2.]{eisenbud}). Remember that a ring is normal if it is a finite direct product of normal domains.

\begin{theorem}
A Noetherian ring $S$ is normal if and only if the following two conditions hold:
\begin{enumerate}
\item (S2) For any prime ideal $P$ of $S$,
\[ {\rm depth}(S_P)\geq {\rm min}(2,{\rm dim}(S_P)). \]
\item (R1) Every localization of $S$ at a prime of codimension at most one is a regular ring.
\end{enumerate}
\end{theorem}

\begin{remark}\label{derivatives}
If $R=k[x_1,\ldots ,x_r]$ and $h=f_1T_1 + \ldots + f_nT_n+f\in B:=R[T_1,\ldots ,T_n]$, $h\neq0$, then the forcing algebra $A=R[T_1,\ldots ,T_n]/(h)$ is equidimensional of dimension ${\rm dim}\,A=r+n-{\rm ht}((h))=r+n-1$, since $B=R[T_1,\ldots ,T_n]$ is catenary and $(h)$ has pure codimension one, because every minimal prime over $(h)$ has height one by Krull's Principal Ideal Theorem. Therefore, in the case that $k$ is a perfect field, we deduce from the corollary of the Jacobian Criterion that the singular locus of the forcing algebra is exactly the prime spectrum of the following ring:
\[ A_S=A/((\partial h/\partial x_j),(\partial h/ \partial T_i))=R[T_1,\ldots ,T_n]/(h,(\partial h/\partial x_j),(\partial h/ \partial T_i)).\]
Now, $\partial h/\partial x_j=\sum_{i=1}^n(\partial f_i/\partial x_j)T_i +(\partial f/\partial x_j)$ and $\partial h/ \partial T_i=f_i$. Thus we get
\[ J:=(h,(\partial h/\partial x_j),(\partial h/ \partial T_i))=(h,\sum_{i=1}^n(\partial f_i/\partial x_j)T_i+(\partial f/\partial x_j),f_i) \]
\[=(f,f_i,\sum_{i=1}^n(\partial f_i/\partial x_j)T_i +(\partial f/\partial x_j)),\]
where $i\in \{1,\ldots,n\}$ and $j\in \{1,\ldots,r\}$. We can write the last set of generators in a compact way using matrices:
\[\left( \begin{array}{ccc}\partial f_1/\partial x_1&\ldots& \partial f_n/\partial x_1\\ \vdots&&\vdots \\ \partial f_1/\partial x_r &\ldots&\partial f_n/\partial x_r\end{array}\right)\cdot\left( \begin{array}{c}T_1\\ \vdots \\T_n\end{array}\right)+\left( \begin{array}{c}\partial f/\partial x_1 \\ \vdots \\ \partial f/\partial x_r \end{array}\right).\]
We will denote by $\overline{J}$ the class of $J$ in $A$.
\end{remark}

Now we rewrite the normality condition for the forcing algebra $A$ in terms of the codimension of its singular locus $V(\overline{J})\subseteq \Spec A$, or in terms of the codimension of the corresponding closed subset $V(J)\subseteq \Spec (R[T_1,\ldots ,T_n])$; both are isomorphic as affine schemes. In this section we set
\[I=(f,f_1,\ldots,f_n)\subseteq R\]
and $D=(\partial f/\partial x_j,\ \partial f_i/\partial x_j)$ for $i\in \{1,\ldots,n\}$ and $j\in \{1,\ldots,r\}$. Note that $J\subseteq (I+D)B$; in particular, $V((I+D)B)=V(IB)\cap V(DB)\subseteq V(J)\subseteq\Spec B$.
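The generators of $J$ are easily produced symbolically. The following SymPy sketch does this for the data $f_1=x^2$, $f_2=y^2$, $f=xy$, i.e., the case $a=b=2$, $c=d=1$ of Example \ref{normalintuition} below; the data are purely illustrative.

\begin{verbatim}
# Minimal sketch of Remark "derivatives": generate the Jacobian ideal
# J = (h, dh/dx_j, dh/dT_i) symbolically from the forcing data.
from sympy import symbols, diff

x, y, T1, T2 = symbols('x y T1 T2')
f1, f2, f = x**2, y**2, x*y
h = f1*T1 + f2*T2 + f

gens_J = ([h]
          + [diff(h, w) for w in (x, y)]     # sum_i (df_i/dx_j)T_i + df/dx_j
          + [diff(h, T) for T in (T1, T2)])  # these are exactly f_1, f_2
print(gens_J)   # [h, 2*x*T1 + y, 2*y*T2 + x, x**2, y**2]
\end{verbatim}

For these data (in characteristic zero) one has $I+D=(x,y)\subsetneq R$, consistent with the fact that the exponents $a=b=2$, $c=d=1$ fall outside the seven normal cases of Example \ref{normalintuition}.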
First, let us consider the trivial case $R=k$. By the previous comments we know that if $A\neq0$, then $A$ is a ring of polynomials over $k$ (namely $k[T_1,\ldots,\check{T_i},\ldots,T_n]$ if some $f_i\neq0$), so $A$ is regular and thus a normal domain. In conclusion, for $R=k$, $A$ is a normal domain if and only if all the $f_i$ and $f$ are zero, or there exists some $f_i\neq0$.

\begin{lemma} \label{sing}
Let $R=k[x_1,\ldots ,x_r]$ be the ring of polynomials over a perfect field $k$, let $h=f_1T_1 + \ldots + f_nT_n+f\in B:=R[T_1,\ldots ,T_n]$, with $h\neq0$, and let $A=R[T_1,\ldots ,T_n]/(h)$. Then the following conditions are equivalent:
\begin{enumerate}
\item $A$ is a normal ring.
\item ${\rm codim}(\overline{J},A)\geq2$, or $\overline{J}=A$.
\item ${\rm codim}(J,B)\geq3$, or $J=B$.
\end{enumerate}
\end{lemma}

\begin{proof}
$(1)\Rightarrow(2)$ Assume that $A$ is a normal ring. Then Serre's Criterion tells us that for any prime ideal $q$ of $A$ with ${\rm ht}\,q\leq1$, the ring $A_q$ is regular (remember that in dimension zero regularity is equivalent to being a field). Now suppose that $\overline{J}\subsetneq A$. We know that for any prime $P$ of $A$ that contains $\overline{J}$, the ring $A_P$ is not regular; therefore ${\rm ht}\,P\geq2$, and thus ${\rm codim}(\overline{J},A)\geq2$.

$(2)\Rightarrow(1)$ We know that $A$ is Cohen--Macaulay, because it is the quotient of the Cohen--Macaulay ring $R[T_1,\ldots ,T_n]$ by an ideal $(h)$ of height one generated by a single element (see \cite[Theorem 18.13]{eisenbud}). Therefore, for any prime ideal $P$ of $A$, the local ring $A_P$ is Cohen--Macaulay. Then
\[ {\rm depth}(A_P)={\rm dim}(A_P)\geq {\rm min}(2,{\rm dim}(A_P)), \]
and thus $A$ satisfies condition (S2) of Serre's Criterion. Besides, $A$ satisfies condition (R1): any prime ideal $P$ of $A$ of height at most one does not contain $\overline{J}$, because ${\rm codim}(\overline{J},A)={\rm ht}_A(\overline{J})\geq2$ or $\overline{J}=A$; hence $P$ is not in the singular locus of $A$, which means that the local ring $A_P$ is regular.

Since $\overline{J}=A$ if and only if $J=B$, for the equivalence between (2) and (3) we may assume that $\overline{J}\subsetneq A$ (respectively $J\subsetneq B$).

$(2)\Rightarrow(3)$ Let $P$ be a prime ideal of $B$ that contains $J$; then by hypothesis ${\rm ht}_A(\overline{P})\geq2$. Let $\overline{P_0}\subsetneqq \overline{P_1} \subsetneqq \overline{P_2}=\overline{P}$ be a chain of primes in $A$. Adding the zero ideal, which is prime, we obtain the corresponding chain of prime ideals in $B$: $Q_0=(0)\subsetneqq Q_1=P_0\subsetneqq Q_2=P_1 \subsetneqq Q_3=P_2=P$. This means that ${\rm codim}(J,B)\geq3$.

$(3)\Rightarrow(2)$ Let $P$ be a prime ideal of $A$ that contains $\overline{J}$, and let $Q$ be the prime ideal of $B$ corresponding to $P$. Clearly $J\subseteq Q$ as subsets of $B$. We know that ${\rm ht}(Q)\geq3$ and $(h)\subseteq Q$; therefore $Q$ contains a minimal prime ideal of $(h)$, say $Q_0$, which has height one by Krull's Principal Ideal Theorem. Since $B$ is a catenary domain and ${\rm ht}(Q_0)=1$, there exists a saturated chain of prime ideals from $Q_0$ to $Q$ of length ${\rm ht}(Q)-1\geq2$, say
\[ Q_0\subsetneqq Q_1 \subsetneqq Q_2 \subseteq Q,\]
with $h\in Q_0$. Passing to $A$, we get the chain $\overline{Q_0}\subsetneqq\overline{Q_1}\subsetneqq\overline{Q_2}\subseteq P$, whence ${\rm ht}\,P\geq2$. In conclusion, ${\rm codim}(\overline{J},A)\geq2$.
\end{proof}

\begin{remark}\label{codimension}
An important fact is that for $R=k[x_1,\ldots ,x_r]$, an ideal $I$ of $R$ and $B=R[T_1,\ldots,T_n]$, we have ${\rm codim}(I,R)={\rm codim}(IB,B)$, because by the previous results we get
\[n+r-\codim(IB,B)={\rm dim}(B/IB)={\rm dim}((R/I)[T_1,\ldots,T_n])=\]
\[{\rm dim}(R/I)+n={\rm dim}\,R-{\rm codim}(I,R)+n=n+r-{\rm codim}(I,R).\]
\end{remark}

We want to find necessary and sufficient conditions on the forcing data $f_1,\ldots,f_n$ and $f$ over the base ring of polynomials $R=k[x_1,\ldots,x_r]$ such that the associated forcing algebra turns out to be a normal domain. The previous lemma gives a condition on $A$ and the Jacobian ideal $J$ of the partial derivatives of the forcing equation, which involves, as seen before, the forcing ideal again as well as new forcing equations defined by the partial derivatives of the original forcing data. This suggests that a suitable condition for normality over the base $R$ should involve the forcing data and their partial derivatives. The following collection of examples starts to give us a good first intuition of the phenomenon.

\begin{example}\label{normalintuition}
Let $k$ be a perfect field and define $R=k[x,y]$, $B=k[x,y,T_1,T_2]$, $A=B/(h)$ and
\[h=x^aT_1+y^bT_2+x^cy^d,\]
where $a,b,c$ and $d$ are nonnegative integers. After computations we have that the Jacobian ideal is
\[J=(x^a,y^b,x^cy^d,ax^{a-1}T_1+cx^{c-1}y^d,by^{b-1}T_2+dx^cy^{d-1}).\]
Let $D\subseteq R$ be the ideal generated by all the partial derivatives of the generators of the forcing ideal $I=(f_1,f_2,f)=(x^a,y^b,x^cy^d)$, i.e.,
\[D=(ax^{a-1},by^{b-1},cx^{c-1}y^d,dx^cy^{d-1}).\]
By Lemma \ref{domain}, $A$ is a domain for all nonnegative values of the exponents. After elementary considerations we see that ${\rm codim}(J,B)\geq3$ or $J=B$ if and only if one of the following seven cases occurs: i) $a=0$; ii) $a=1$; iii) $b=0$; iv) $b=1$; v) $c=d=0$; vi) $c=1$ and $d=0$; vii) $c=0$ and $d=1$. In fact, in any other case $J\subseteq (x,y)B$, and therefore ${\rm codim}(J,B)\leq 2$. Moreover, it is also elementary to see that these seven cases are exactly the ones in which the ideal $I+D$ equals $R$. In conclusion, in virtue of the previous lemma, $A$ is a normal domain if and only if $I+D=R$.
\end{example}

\begin{remark}
Suppose that $k$ is an algebraically closed field. Continuing with the notation of the former example, let us write $V=V(I)\subseteq k^2$, $W=V(D)\subseteq k^2$, $Y=V(h)\subseteq k^4$ and $S=V(J)\subseteq k^4$ for the corresponding affine varieties, and let $\pi:S\rightarrow V$ be the natural projection to the first two coordinates. Geometrically, Example \ref{normalintuition} suggests that the normality of the variety $Y$ (which is equivalent to the normality of the forcing algebra; see \cite[Exercise I.3.17]{hartshornealgebraic}) is related to the intersection of $V$ and $W$, because $V\cap W=\emptyset$ if and only if $I+D=R$. In fact, this is true for arbitrary polynomial data $f_1,f_2$ and $f\in R$, as we will see. First, by Lemma \ref{domain}, $A$ is an integral domain if and only if ${\rm ht}\,I\geq2$ or $I=R$. So let us assume that $A$ is a domain and $I\subsetneq R$; otherwise $V=\emptyset$ and $J=B$, and $A$ is normal by Lemma \ref{sing}. Thus ${\rm ht}\,I\geq2$, which means that the minimal prime ideals over $I$ are just finitely many maximal ideals, since $\dim R=2$. But by the Nullstellensatz (see \cite[Exercise 7.14]{atimac}) these points correspond exactly to the points of $V$. Therefore let us write $V=\{v_1,\ldots,v_r \}$.
\begin{remark}
Suppose that $k$ is an algebraically closed field. Continuing with the notation of the former example, let us write $V=V(I)\subseteq k^2$, $W=V(D)\subseteq k^2$, $Y=V(h)\subseteq k^4$ and $S=V(J)\subseteq k^4$ for the corresponding affine varieties, and let $\pi:S\rightarrow V$ be the natural projection to the first two coordinates. Geometrically, Example \ref{normalintuition} suggests that the normality of the variety $Y$ (which is equivalent to the normality of the forcing algebra, see \cite[Exercise I.3.17]{hartshornealgebraic}) is related to the intersection of $V$ and $W$, because $V\cap W=\emptyset$ if and only if $I+D=R$. In fact, this is true for arbitrary polynomial data $f_1,f_2$ and $f\in R$, as we will see. First, by Lemma \ref{domain}, $A$ is an integral domain if and only if ${\rm ht}\,I\geq2$ or $I=R$. So let us assume that $A$ is a domain and $I\subsetneq R$; otherwise $V=\emptyset$ and $J=B$, so that $A$ is normal by Lemma \ref{sing}. Thus ${\rm ht}\,I\geq2$, which means that the minimal prime ideals over $I$ are just finitely many maximal ideals, since $\dim R=2$. But by the Nullstellensatz (see \cite[Exercise 7.14]{atimac}) these points correspond exactly to the points of $V$. Therefore, let us write $V=\{v_1,\ldots,v_r \}$.

Moreover, $S$ is the singular locus of $Y$ when we consider $S$ as a subvariety of $Y$. By previous comments, $S$ is the finite union of its (singular) fiber varieties $S_{v_i}=\pi^{-1}(v_i)$. Now, by Lemma \ref{sing}, $Y$ is a normal variety if and only if ${\rm codim}(S,k^4)\geq3$ (which is equivalent to ${\rm codim}(S,Y)\geq 2$). Assume that $V\cap W\neq\emptyset$, i.e., $I+D\subsetneq R$, and let us prove that $Y$ is not normal. In fact, we know that $J\subseteq (I+D)B$. Therefore, by Remark \ref{codimension},
\[\codim (S,k^4)=\codim (J,B)\leq \codim((I+D)B,B)=\codim (I+D,R)\leq2,\]
implying that $Y$ is not normal. Conversely, assume that $V\cap W=\emptyset$. Then, for any point $v\in V$, there exists some $\partial f_i(v)/ \partial x_j\neq0$, because otherwise all the partial derivatives of the forcing data would be zero at $v$ (the elements $\partial f(v)/ \partial x_j$ are then also zero, because we can write them as linear combinations of the $\partial f_i(v)/ \partial x_j$, see Remark \ref{derivatives}), implying that $v\in W$, which is impossible. Clearly, $S_{v}=V(G)$, where $v=(a,b)\in k^2$ and
\[G=\bigl(x-a,\ y-b,\ (\partial f_1(v)/ \partial x)T_1+(\partial f_2(v)/ \partial x)T_2+\partial f(v)/ \partial x,\]
\[(\partial f_1(v)/ \partial y)T_1+(\partial f_2(v)/ \partial y)T_2+\partial f(v)/ \partial y\bigr).\]
But, under the condition that some $\partial f_i(v)/ \partial x_j\neq0$, it is elementary to see that $\codim (G,B)\geq3$. In conclusion, $\codim (S_{v}, k^4)\geq3$, implying that $\codim (S,k^4)$, being the minimum of the codimensions of its singular fibers, is greater than or equal to three, which means the normality of $Y$.
\end{remark}

Besides, if we move to the next dimension, i.e., $R=k[x_1,x_2,x_3]$ and $B=R[T_1,T_2,T_3]$, then it is possible to see in a natural way that a necessary condition for the normality of $Y$ is that $\dim(V\cap W)<1$ (here we assume that the dimension of the empty set is $-1$). For suppose, by contradiction, that $\dim (V\cap W)\geq1$. For any point $v\in V\cap W$, by Remark \ref{derivatives} and Lemma \ref{forcingfiber}, the fiber $S_v\cong \mathbb{A}^3_k$. Therefore, $(V\cap W)\times \mathbb{A}^3_k\subseteq S$. But $\dim ((V\cap W)\times \mathbb{A}^3_k)\geq1+3=4$, and so $\dim S\geq4$; thus $\codim (S,k^6)\leq2$, implying that $Y$ is not normal. Note that this argument works independently of the number of variables. However, this case was very suitable for obtaining the right intuition about the desired condition, i.e., $\dim(V\cap W)<r-2$. Heuristically, one can compute the dimension of $S$ by knowing the general behavior of the dimension of the fibers $S_v$ and the dimension of the base space $V$. Now, by Lemma \ref{forcingfiber}, the fibers $S_v$ have maximal dimension exactly when the rank of the forcing matrix is minimal, i.e., when the point $v$ belongs to $V\cap W$. Therefore, to guarantee that the dimension of $S$ is not too big (in order to keep its codimension big enough), we need to bound the dimension of the subvariety of $V$ with maximal-dimensional singular fibers, i.e., the dimension of $V\cap W$. In fact, assuming that $Y$ is irreducible, the right necessary and sufficient condition for $Y$ being an (irreducible) normal variety is that ($\dim V\leq r-2$ and) $\dim (V\cap W)\leq r-3$, where $V,W\subseteq k^r$. First, in order to get a better intuition about the fibers, the following proposition tells us that the points of $\Spec R$ with completely singular fibers are exactly the points of $V(I)\cap V(D)$.
\begin{proposition}
Let $R=k[x_1,\ldots ,x_r]$ be the ring of polynomials over a perfect field $k$; $B=R[T_1, \ldots ,T_n]$; $h=f_1T_1+\cdots+f_nT_n+f$; $f,f_1, \ldots , f_n \in R$; $A=B/(h)$; $I=(f,f_1, \ldots , f_n)$; $D=(\partial f/ \partial x_j, \partial f_i/ \partial x_j)$ and
\[J:=(h,(\partial h/\partial x_j),(\partial h/ \partial T_i)).\]
Let $\varphi: Y=\Spec A\rightarrow X=\Spec R$ be the forcing morphism. Choose a point $x\in X$ with nonempty fiber $\varphi^{-1}(x)$. Then $x\in X$ has a completely singular fiber, i.e., $\varphi^{-1}(x)\subseteq V(J)\subseteq Y$, if and only if $x\in V(I+D)\subseteq X$.
\end{proposition}

\begin{proof}
We know from the Corollary of the Jacobian Criterion that for any point $y\in Y$, $A_y$ is not regular if and only if $y\in V(J)$. Let $x\in V(I+D)$ and $Q\in \varphi^{-1}(x)$. Then $(I+D)B\subseteq Q$ and so $J\subseteq Q$, meaning that $Q\in V(J)$.

Conversely, let us consider a point $x\in X$ such that $\varphi^{-1}(x)\subseteq V(J)$. Now, it is elementary to see that the last condition means that $\varphi^{-1}(x)=V(J_x)$, where
\[\varphi^{-1}(x)=\Spec\bigl(k(x)[T_1, \ldots, T_n] /(f_1(x)T_1 + \ldots + f_n(x)T_n+f(x))\bigr),\]
and $J_x=\bigl(\sum_{i=1}^n(\partial f_i(x)/\partial x_j)T_i +\partial f(x)/\partial x_j : j\in \{1,\ldots,r\}\bigr)$.

Firstly, if $f_i\notin x$ for some $i$, then the fiber $\varphi^{-1}(x)$ is completely regular, because, by previous comments (Ch. 1 \S2), $\varphi^{-1}(x)\cong\mathbb{A}^{n-1}_{k(x)}$; since the fiber is nonempty, this contradicts $\varphi^{-1}(x)\subseteq V(J)$. Secondly, if $f\notin x$, then $f(x)\neq0$. But we now know that $f_1(x)=\cdots=f_n(x)=0$, therefore the fiber is empty, since $h=f(x)\neq0$ in $k(x)$; this contradicts our hypothesis. Note that, up to this point, we know that $h=f_1(x)T_1 + \ldots + f_n(x)T_n+f(x)=0$. Thirdly, suppose that $\partial f_i/\partial x_j\notin x$ for some $i,j$, that is, $\partial f_i(x)/\partial x_j\neq0$. We consider two cases. Suppose first that $\partial f(x)/\partial x_j\neq0$. Then, since $h=0$, the ideal $Q=(T_1,\ldots,T_n)$ belongs to $\varphi^{-1}(x)$, but
\[\sum_{i=1}^n(\partial f_i(x)/\partial x_j)T_i +\partial f(x)/\partial x_j\notin Q.\]
Therefore $Q\notin V(J_x)$, a contradiction. In the second case, i.e., $\partial f(x)/\partial x_j=0$, the prime ideal $Q'=(T_1,\ldots,T_i-1,\ldots,T_n)$ belongs to $\varphi^{-1}(x)$, but
\[\sum_{i=1}^n(\partial f_i(x)/\partial x_j)T_i +\partial f(x)/\partial x_j=\sum_{i=1}^n(\partial f_i(x)/\partial x_j)T_i\notin Q'.\]
So, again, $Q'\notin V(J_x)$, a contradiction. Lastly, if $\partial f(x)/\partial x_j\neq0$ for some $j$, then, due to the previous steps,
\[\sum_{i=1}^n(\partial f_i(x)/\partial x_j)T_i +\partial f(x)/\partial x_j=\partial f(x)/\partial x_j\in J_x\]
is a nonzero constant, thus $\varphi^{-1}(x)=V(J_x)=\emptyset$. But this is not possible, because the fiber is not empty. In conclusion, all the generators of $I+D$ belong to $x$, that is, $x\in V(I+D)$, as desired.
\end{proof}

Now we present the statement of the normality criterion for forcing algebras over the ring of polynomials with coefficients in a perfect field.

\begin{theorem}\label{normalcriterion}
Let $R=k[x_1,\ldots ,x_r]$ be the ring of polynomials over a perfect field $k$; $B=R[T_1, \ldots ,T_n]$; $f,f_1, \ldots , f_n \in R$; $I=(f,f_1, \ldots , f_n)$; $D=(\partial f/ \partial x_j, \partial f_i/ \partial x_j)$, for $i\in\{1,\ldots,n\}$ and $j\in\{1,\ldots,r\}$. Then the forcing algebra $A$ for this data is a normal domain if and only if the following two conditions hold:
\begin{enumerate}[(a)]
\item ${\rm codim}(I,R)\geq2$, or $I=R$.
\item ${\rm codim}(I+D,R)>2$, or $I+D=R$.
\end{enumerate}
Moreover, in the case that all $f_i=0$, condition (b) alone is necessary and sufficient for $A$ being a normal ring.
\end{theorem}

\begin{proof}
We have already proved in Lemma \ref{domain} that (a) is a necessary and sufficient condition for $A$ being an integral domain. Let us prove that (b) is equivalent to normality. Effectively, following Lemma \ref{sing}, we just need to see that condition (b) is equivalent to ${\rm codim}(J,B)>2$, or $J=B$; let us denote the latter condition by (b'). By Remark \ref{derivatives} we know that $J\subseteq (I+D)B$.

Suppose that (b') holds. First, if $J=B$, then $(I+D)B=B$, implying $I+D=R$. Second, if ${\rm codim}(J,B)>2$, then by Remark \ref{codimension} we get
\[{\rm codim}(I+D,R)={\rm codim}((I+D)B,B)\geq {\rm codim}(J,B)>2.\]

Conversely, assume that (b) holds and $J\neq B$. We prove that ${\rm codim}(J,B)>2$. Let $Q$ be a prime ideal of $B$ that contains $J$. First, assume that $(I+D)B\subseteq Q$; then $I+D\neq R$, therefore ${\rm codim}(I+D,R)>2$, so, again by Remark \ref{codimension}, ${\rm codim}((I+D)B,B)>2$, which implies that ${\rm codim}(Q,B)>2$. Second, suppose that $(I+D)B\nsubseteq Q$. Then necessarily one of the partial derivatives $\partial f/ \partial x_j$ or $ \partial f_i/\partial x_j$ is not contained in $Q$, because $IB\subseteq J\subseteq Q$. In fact, there exists some $b\in \{1, \ldots, n\}$ and some $c\in \{1, \ldots, r\}$ with $\partial f_b/\partial x_c\notin Q$, for otherwise all the $\partial f_i/ \partial x_j$ would be contained in $Q$, and also the elements $\sum_{i=1}^n(\partial f_i/\partial x_j)T_i+\partial f/\partial x_j$, and therefore $\partial f/\partial x_j$ for any $j$; thus $(I+D)B$ would also be contained in $Q$, which is not the case. For simplicity, suppose that $Q$ does not contain the element $\alpha:= \partial f_1/\partial x_1$, and let us write $l:=\sum_{i=1}^n(\partial f_i/\partial x_1)T_i+\partial f/\partial x_1$. Let $\psi$ be the following homomorphism of $R_{(\alpha)}$-algebras
\[\psi:B_{(\alpha)}\cong R_{(\alpha)}[T_1,\ldots ,T_n]\longrightarrow R_{(\alpha)}[T_2, \ldots, T_n], \]
sending $T_1$ to $g:=-\alpha^{-1}(\sum_{i=2}^n(\partial f_i/\partial x_1)T_i+\partial f/\partial x_1)$ and $T_j$ to $T_j$, for $j\geq 2$. Clearly, $\psi$ is surjective. Moreover, $\ker(\psi)=(T_1-g)$. To see this, let $S\in \ker(\psi)$. Then, using the binomial expansion, we can write it in the form
\[ S=S(x_1,\ldots, x_r,(T_1-g)+g,\ldots,T_n)=S_0(x_1,\ldots, x_r,T_1-g,\ldots,T_n)+S(x_1,\ldots, x_r,g,\ldots,T_n)\]
\[=S_0(x_1,\ldots, x_r,T_1-g,\ldots,T_n)+\psi(S)=S_0(x_1,\ldots, x_r,T_1-g,\ldots,T_n),\]
where the polynomial $S_0(x_1,\ldots, x_r,U,T_2,\ldots,T_n)$ is divisible by $U$, which implies that the former expression is divisible by $T_1-g$. Thus $S\in (T_1-g)$. On the other hand, in the ring $ R_{(\alpha)}[T_1,\ldots,T_n]$ we know that $(T_1-g)=(l)$, therefore $\psi$ induces an isomorphism between $ R_{(\alpha)}[T_1,\ldots,T_n]/(l)$ and $ R_{(\alpha)}[T_2,\ldots,T_n]$. Denote by $Q_0$ the image under $\psi$ of $QR_{(\alpha)}[T_1,\ldots,T_n]$, and assume, for the sake of contradiction, that ${\rm codim}(Q,B)\leq2$. Then we have the following chain of (in)equalities:
\[d:={\rm dim}(B/Q)={\rm dim}B-{\rm codim}(Q,B)=n+r-{\rm codim}(Q,B)\geq n+r-2.\]
Besides, $B$ is a Jacobson ring, hence there exists a maximal ideal $m$ containing $Q$ such that $\alpha \notin m$; otherwise $\alpha$ would be contained in the intersection of all the maximal ideals containing $Q$, which is $Q$, which is absurd.
Now, let us consider a saturated chain of prime ideals from $Q$ to $m$, which exists in virtue of Zorn's lemma. Besides, this chain has length exactly $d$, because $B/Q$ is an affine domain and therefore $d$ is the length of any saturated chain of primes on it (see the fundamental results in Chapter 1). Then,
\[ Q=Q'_0\subsetneqq Q'_1 \subsetneqq \ldots \subsetneqq Q'_{d-1} \subsetneqq Q'_d=m.\]
Now, we can consider this chain in $R_{(\alpha)}[T_1,\ldots,T_n]$, because no $Q'_i$ contains $\alpha$. This shows that ${\rm dim}(R_{(\alpha)}[T_1,\ldots,T_n]/Q^e)\geq d$ and, in fact, equality holds, because we are localizing, and thus the dimension cannot be bigger than the dimension of the original ring. Besides, $\psi$ induces an isomorphism between $R_{(\alpha)}[T_1,\ldots,T_n]/Q^e$ and $R_{(\alpha)}[T_2,\ldots,T_n]/Q_0$. Then, finally, recalling that ${\rm codim}(I,R)\geq2$ and that $l\in Q$, we get
\[d={\rm dim}(R_{(\alpha)}[T_1,\ldots,T_n]/Q^e)={\rm dim}(R_{(\alpha)}[T_2,\ldots,T_n]/Q_0)\leq {\rm dim}(R_{(\alpha)}[T_2,\ldots,T_n]/I^e) \]
\[\leq {\rm dim}(R[T_2,\ldots,T_n]/I^e)={\rm dim}((R/I)[T_2,\ldots, T_n])={\rm dim}(R/I)+n-1\]
\[={\rm dim}R-{\rm codim}(I,R)+n-1\leq r+n-1-2< n+r-2,\]
which contradicts the former estimate of $d$. Finally, if all $f_i=0$, then $J=(I+D)B$, and from the fact that ${\rm codim}(I+D,R)={\rm codim}((I+D)B,B)$ we deduce from Lemma \ref{sing} that condition (b) is equivalent to the normality of $A$.
\end{proof}

Now we state a direct application of the previous theorem to normal affine varieties. As said before, our convention is that $\dim\emptyset=-1$.

\begin{corollary}
Let $R=k[x_1,\ldots ,x_r]$ be the ring of polynomials over an algebraically closed field $k$; $B=R[T_1, \ldots ,T_n]$; $f,f_1, \ldots , f_n \in R$;
\[I=(f,f_1, \ldots , f_n)\]
and $D=(\partial f/ \partial x_j, \partial f_i/ \partial x_j)$. Assume that $(h)$ is a radical ideal, where $h=f_1T_1+\cdots+f_nT_n+f.$ Let us denote by $V=V(I)\subseteq k^r$ and $W=V(D)\subseteq k^r$ the affine varieties defined by $I$ and $D$, respectively. Then $X=V(h)\subseteq k^{n+r}$ is a normal (irreducible) variety if and only if the following two conditions hold simultaneously:
\begin{enumerate}[(1)]
\item ${\rm dim}\,V\leq r-2$.
\item ${\rm dim}(V\cap W)<r-2$.
\end{enumerate}
Moreover, in the case that all $f_i=0$, condition (2) alone is necessary and sufficient for $X$ being a normal (irreducible) variety.
\end{corollary}

\begin{proof}
Recall that a variety is normal if for any point $x\in X$ the stalk $\mathcal{O}_{X,x}$ is a normal domain (see \cite[Exercise I.3.17]{hartshornealgebraic}). Since $(h)$ is a radical ideal, we know that the forcing algebra $A=B/(h)$ is exactly the ring of coordinates of $X$. Since $X$ is affine and normality is a local property, we have that $X$ is a normal (irreducible) variety if and only if $A$ is a normal domain. Besides, from Hilbert's Nullstellensatz we get
\[{\rm dim}\,V={\rm dim}(R/I(V))={\rm dim}(R/\rad(I))={\rm dim}(R/I)\]
\[={\rm dim}R-{\rm codim}(I,R)=r-{\rm codim}(I,R),\]
and analogously
\[{\rm dim}(V\cap W)=r-{\rm codim}(I+D,R).\]
From this, and the fact that $V=\emptyset$ (respectively, $V\cap W=\emptyset$) if and only if $I=R$ (respectively, $I+D=R$), we can rewrite conditions (a) and (b) of the former theorem as (1) and (2).
\end{proof}

As a comment, we note that the discussion beginning at Example \ref{normalintuition} is essentially the way in which the above criterion of normality was discovered.
Lastly, in order to support the former intuition, we dedicate the next pair of sections to the study of two interesting and enlightening examples.

\section{An Enlightening Example}

In this section we study a specific example of a forcing algebra with several forcing equations and explore some of its interesting properties. This example shows how rich and interesting the formal study of forcing algebras is in its own right. Let $R=k[x,y]$ be the ring of polynomials over a (perfect) field $k$, $B=R[T_1,T_2]$, $A=B/H$, where
\[ H=(h_1,h_2)=(xT_1+yT_2,yT_1+xT_2)=\left( \left( \begin{array}{cc}x&y\\y&x\end{array}\right) \cdot \left( \begin{array}{c}T_1\\T_2 \end{array}\right)\right). \]
The determinant of the associated matrix $M$ is $x^2-y^2=(x+y)(x-y)$. It is easy to check that $h_1$ is irreducible and that $h_2$ does not belong to the ideal generated by $h_1$. Therefore $h_1,h_2\in B$ form a regular sequence and hence, by former comments, $H$ has pure codimension $2$.

Let $P$ be a minimal prime of $H$. Then, by a previous remark, $P$ contains the elements $(\det M)T_i=(x-y)(x+y)T_i$ for $i=1,2$. If $\det M\notin P$, then $T_i\in P$, and therefore $P=(T_1,T_2)$. Now assume that $\det M\in P$; then $x-y\in P$ or $x+y\in P$. In the first case, $h_1-T_1(x-y)=y(T_1+T_2)$ must be in $P$. But if $y\in P$, then $x=(x-y)+y\in P$, which implies that $P=(x,y)$. If $T_1+T_2\in P$, then it is easy to check that $P=(x-y,T_1+T_2)$, since this is a prime ideal containing $H$. On the other hand, if $x+y\in P$, then similarly we see that $P=(x,y)$ or $P=(x+y,T_1-T_2)$. In conclusion, the minimal primes of $H$ (which are, in fact, the associated primes of $H$, because $A$ is a Cohen-Macaulay ring) are the four ideals $P_1=(T_1,T_2)$, $P_2=(x,y)$, $P_3=(x-y,T_1+T_2)$ and $P_4=(x+y,T_1-T_2)$. This example shows that Theorem \ref{irreducibility} is false for several forcing equations, since $\Spec A$ is not an irreducible space although the ideal generated by the forcing data, $(x,y)$, has height two.

Let $V_i=V(P_i)\subseteq k^4$ be the affine variety defined by $P_i$; these correspond to the irreducible components of $V=V(H)$. Now, the intersections of any pair of these components consist of singular points of $V$ (we assume for a while that $k$ is algebraically closed, and we replace $H$ by $\rad H$ in order to work with the corresponding variety $V$), because the ring of coordinates of $V$ localized at the maximal ideal corresponding to such a point has at least two irreducible components, and therefore it is not an integral domain; in particular, it is not a regular local ring, since regular local rings are domains. This is a way to see geometrically the non-normality of $V$, because normality is a local property and the localization at these intersection points, say ${\mathfrak p}\in{\rm Spm}(A)$, is not a normal ring. In fact, a local ring clearly has a connected spectrum, therefore $A_{{\mathfrak p}}$ cannot be a direct product of normal domains (\cite{brennergomezconnected} \S1). Besides, by the former comment, $A_{{\mathfrak p}}$ cannot be a normal domain either. Returning to our computations, we see that the intersections of pairs of these irreducible components are, in general, defined by lines and, in two cases, by just one point. In fact, $V_1\cap V_2=V(x,y,T_1,T_2)$; $V_1\cap V_3=V(T_1,T_2,x-y)$; $V_1\cap V_4=V(T_1,T_2,x+y)$; $V_2\cap V_3=V(x,y,T_1+T_2)$; $V_2\cap V_4=V(x,y,T_1-T_2)$ and $V_3\cap V_4=V(x,y,T_1,T_2)$.
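These decomposition data are elementary to verify with a computer algebra system. The following \textsf{SymPy} sketch (ours; again over $\mathbb{Q}$ as a stand-in for $k$) confirms that each of the four listed primes contains $H=(h_1,h_2)$, and recomputes the reduced Gr\"obner basis of $P_3+P_4$, whose zero set is the single point of $V_3\cap V_4$:

\begin{verbatim}
from sympy import symbols, groebner

x, y, T1, T2 = symbols('x y T1 T2')
gens = (x, y, T1, T2)
h1, h2 = x*T1 + y*T2, y*T1 + x*T2

P1, P2 = [T1, T2], [x, y]
P3, P4 = [x - y, T1 + T2], [x + y, T1 - T2]

# Each minimal prime contains both forcing equations:
for P in (P1, P2, P3, P4):
    G = groebner(P, *gens, order='grevlex')
    assert G.contains(h1) and G.contains(h2)

# V_3 and V_4 meet only in the origin:
print(groebner(P3 + P4, *gens, order='grevlex'))
# -> reduced basis generating (x, y, T1, T2)
\end{verbatim}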
Furthermore, it is easy to see that $\Spec A$ is connected (see \cite[Proposition 1.2]{brennergomezconnected}), since we are in the homogeneous case. Moreover, $V(P_1)$ is a horizontal component, $V(P_2)$ a vertical component, and $V(P_3)$ and $V(P_4)$ behave like ``mixed'' components, i.e., they do not dominate the base nor are they the preimage of a subset of the base. Besides, $\Spec A$ is also locally (over the base) connected, because every pair of minimal components has non-empty intersection, together with the elementary fact that the minimal primes of a localization are exactly the minimal primes of the original ring not intersecting the multiplicative system.

In the case that $k$ is a perfect field, we can also use the Jacobian Criterion in order to prove again that $A$ is not a normal ring. In fact, as seen before, the pure codimension of $H$ is two, since $\left\lbrace h_1,h_2 \right\rbrace$ is a regular sequence. So the singular locus in $\Spec A$ is given by the $2\times2$ minors of the Jacobian matrix defined by the partial derivatives of the $h_i$, that is,
\[J=\left( T_1^2-T_2^2,x^2-y^2,yT_1-xT_2,xT_1-yT_2\right).\]
Thus, in order to test normality, we should find the codimension of $J$ in $A$ and determine whether it is greater than or equal to two. Since the pure codimension of $H$ is two, we can translate our problem to the ring of polynomials in four variables $B=k[x,y,T_1,T_2]$ and test whether the corresponding Jacobian ideal
\[J_0=\left( T_1^2-T_2^2,x^2-y^2,yT_1-xT_2,xT_1-yT_2,h_1,h_2\right)\]
has codimension greater than or equal to four (in general, the codimension of a prime ideal decreases by $n$ if we mod out by ideals of pure codimension $n$, mainly because an affine domain is catenary and its dimension is the length of any maximal chain of prime ideals, see \cite[Corollary 13.6]{eisenbud}). But after some computations we can show that the minimal prime ideals containing $J_0$ are exactly the ideals defining the varieties corresponding to the intersections of pairs of the irreducible components of $V$. That is, $(x,y,T_1,T_2)$, $(T_1,T_2,x-y)$, $(T_1,T_2,x+y)$, $(x,y,T_1+T_2)$ and $(x,y,T_1-T_2)$. Therefore $\codim(J_0,B)=3$, and then $\codim(\overline{J_0},A)=1<2$, implying that $A$ does not satisfy Serre's condition (R1). Hence, by Serre's Normality Criterion, $A$ is not a normal ring. Moreover, for the same reason, $B/\rad H$ is not a normal ring, and this is equivalent to the non-normality of the variety $V(H)\subseteq k^4$. Geometrically, if $k$ is an algebraically closed field, this means just that the singular points of $V$, which correspond to the maximal ideals containing $J_0$, are exactly the points in the intersections of the different irreducible components of the variety, which corresponds to the geometrical intuition of singularities.

This example suggests the following conjecture.

\begin{conjecture}
In the homogeneous case, assume that $R=k[x_1,\ldots,x_r]$, and suppose $H=(h_1,\ldots,h_m)=P_1\cap \ldots \cap P_s$, where the $P_i$ are the minimal primes, for $i=1,\ldots,s$. Then $V(P_i)\cap V(T_1,\ldots,T_n)\neq\emptyset$.
\end{conjecture}
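Before moving on, the Jacobian computation above can be checked mechanically. The sketch below (ours; over $\mathbb{Q}$) recomputes the $2\times2$ minors of the Jacobian matrix of $(h_1,h_2)$, which up to sign and repetition are the generators of $J$ listed above, and verifies that each of the five primes cutting out the pairwise intersections contains $J_0$:

\begin{verbatim}
from itertools import combinations
from sympy import symbols, Matrix, groebner

x, y, T1, T2 = symbols('x y T1 T2')
gens = (x, y, T1, T2)
h1, h2 = x*T1 + y*T2, y*T1 + x*T2

# Jacobian matrix of (h1, h2) with respect to (x, y, T1, T2):
Jac = Matrix([[h.diff(v) for v in gens] for h in (h1, h2)])
minors = [Jac[:, [i, j]].det() for i, j in combinations(range(4), 2)]
print(minors)   # T1^2 - T2^2, y*T1 - x*T2, x*T1 - y*T2,
                # x^2 - y^2 (up to sign and repetition)

J0 = minors + [h1, h2]
sing = [[x, y, T1, T2], [T1, T2, x - y], [T1, T2, x + y],
        [x, y, T1 + T2], [x, y, T1 - T2]]
for P in sing:
    G = groebner(P, *gens, order='grevlex')
    assert all(G.contains(g) for g in J0)
\end{verbatim}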
\section{An Example of Normalization}

In this section we will compute explicitly the normalization of a forcing algebra by elementary methods, illustrating how good examples lead us in a natural way to the study of general basic properties of normal domains. Let $k$ be a perfect field. Our example is a particular case of Example \ref{normalintuition}. Let $R=k[x,y]$, $B=R[t,s]$, $A=B/(h)$, where $h=x^2t+y^2s+xy$.

Now, with the notation of Section 3, $I=(x^2,y^2,xy)$, $D=(x,y)$, and so $I+D=(x,y)$. By Theorem \ref{normalcriterion}, $A$ is a non-normal domain, because $\codim(I,R)\geq2$ but $\codim(I+D,R)=2$. Besides, the integral closure, or normalization, $\overline{A}$ of $A$ is a module-finite extension of $A$ (in general, this is true for finitely generated algebras over complete local rings, see \cite[Exercise 9.8]{hunekeswanson}). Now we will give an explicit description of $\overline{A}$ as an affine domain.

First, let $K=K(A)$ be the field of fractions of $A$ and let $u=tx/y\in K$. Then, if we consider the forcing equation $h$ in $K[t,s]$, we get the following integral equation for $u$, after multiplication by $t/y^2$:
\[ (tx/y)^2+(tx/y)+st=0. \]
Let $A'=A[u]$ be the $A$-subalgebra of $K$ generated by $u$. So we rewrite $h$, considered in $A'$, by means of $yu=xt$, to obtain the equation $0=h=y(xu+ys+x)$. But $y\neq0$, therefore $xu+ys+x=0$. Let $C=k[X,Y,T,S,U]$ be the ring of polynomials. Define $\phi:C\rightarrow A'$ as the homomorphism of $k$-algebras sending each capital variable to its corresponding lowercase variable. Note that, from the previous considerations, the ideal $P=(YU-XT,XU+YS+X,U^2+U+TS)\subseteq \ker\phi$. We will see that $P=\ker\phi$.

Effectively, let us write $E=k[X,Y,U,T]/(YU-XT)$. Then $E$ is a forcing algebra, and by Theorem \ref{normalcriterion} it is a normal domain. First, we prove that $P$ is a prime ideal. Define $Q=K(E)$. Then, informally, if we consider the equations
\[XU+YS+X=U^2+U+TS=0\]
in the variable $S$ and solve them, we obtain the equality $S=-(U^2+U)/T=-(XU+X)/Y$ in a ``suitable'' field of fractions. And, in fact, it holds that
\[-(U^2+U)/T=-(XU+X)/Y\in Q,\]
because
\[-Y(U^2+U)=-TXU-XT=-T(XU+X)\ \mbox{in}\ E,\]
due to the fact that $YU=XT$ in $E$. Write $S'=-(U^2+U)/T=-(XU+X)/Y\in Q$ and consider the natural homomorphism $\psi: E[S]\rightarrow E[S']\subseteq Q$, where $E[S]$ denotes the ring of polynomials in the variable $S$. We will prove that $\ker \psi=(XU+YS+X,U^2+U+TS)$. For that we need the following basic lemma about normal domains:

\begin{lemma}\label{denominator}
Let $R$ be a normal domain, $q\in K(R)$, let
\[I=\bigl(\{bx-a\in R[x]: q=a/b,\ a,b\in R\}\bigr)\]
be the ideal of $R[x]$ generated by all such linear forms, and let $(R:q)=\{b\in R:bq\in R\}$ be the denominator ideal. Consider the homomorphism of $R$-algebras
\[\varphi:R[x]\rightarrow R[q]\subseteq K(R),\]
sending $x$ to $q$. Then the following hold:
\begin{enumerate}
\item If $q\notin R$, then $\codim((R:q),R)=1$.
\item Suppose that $(R:q)=(b_1,\ldots,b_m)$, and let $a_i\in R$ be such that $q=a_i/b_i$. Then $I=(b_1x-a_1,\ldots,b_mx-a_m)$.
\item $\ker \varphi=I$.
\end{enumerate}
\end{lemma}

\begin{proof}
(1) It is a well-known fact that any normal Noetherian domain is the intersection of its localizations at primes of height one (see \cite[Corollary 11.4]{eisenbud}). We argue by contradiction. If $\codim((R:q),R)\geq2$, then $(R:q)$ is not contained in any prime ideal $P\subseteq R$ of height one. In particular, for every such prime ideal $P$ there exists an element $b_P\notin P$ with $b_P\in (R:q)$, meaning that there is $a_P\in R$ with $q=a_P/b_P\in R_P$. In conclusion, $q\in \cap_{{\rm ht}P=1}R_P=R$, a contradiction.

(2) Let $bx-a\in R[x]$ with $q=a/b$. In particular, $b\in (R:q)$. So we can write $b=c_1b_1+\cdots+c_mb_m$, for some $c_i\in R$, $i=1,\ldots,m$. Now, since
\[a=bq=\sum_{i=1}^mc_ib_iq=\sum_{i=1}^mc_ia_i,\]
it is straightforward to verify that $bx-a=\sum_{i=1}^mc_i(b_ix-a_i)$, as desired.
(3) Clearly $I\subseteq \ker \varphi$. For the other containment, let $f\in\ker\varphi$; we argue by induction on the degree of $f$. Write $f=v_nx^n+\cdots+v_0$. The case $n\leq1$ is clear. So assume $n\geq2$. First, we know that
\[v_nq^n+\cdots+v_0=0\in K(R);\]
then, after multiplying by $v_n^{n-1}$, we get the integral equation for $v_nq$,
\[(v_nq)^n+v_{n-1}(v_nq)^{n-1}+v_{n-2}v_n(v_nq)^{n-2}+\cdots+v_0v_n^{n-1}=0.\]
So $v_nq\in R$, because $R$ is a normal domain. Therefore, there exists $d\in R$ such that $q=d/v_n$. Now, $f-x^{n-1}(v_nx-d)\in\ker\varphi$, and it has lower degree. Thus, by the induction hypothesis, $f-x^{n-1}(v_nx-d)\in I$, and then $f\in I$, because $v_nx-d\in I$.
\end{proof}

We continue with our discussion; by abuse of notation we denote by the same capital letters their classes in $E$. Now, we know that $Y,T\in (E:S')$. Besides, $(Y,T)\subseteq E$ is a prime ideal of codimension one in $E$; therefore, in virtue of Lemma \ref{denominator}(1), $(Y,T)=(E:S')$. Hence, applying again Lemma \ref{denominator}(2)-(3), we see that
\[\ker\psi=(YS+(XU+X),\,TS+(U^2+U)),\]
as desired. In conclusion,
\[E[S]/(XU+YS+X,U^2+U+TS)\cong E[S']\]
is an integral domain, and therefore so is
\[C/P\cong E[S]/(XU+YS+X,U^2+U+TS).\]
On the other hand, since the extension $A\rightarrow A'$ is integral, both rings have the same dimension (this is a direct consequence of Going Up, see \cite[Proposition 4.15]{eisenbud}). But $\dim A=\dim B-{\rm ht}(h)=3$, and then
\[3=\dim A'=\dim C/\ker\phi=5-{\rm ht}(\ker\phi),\]
implying ${\rm ht}(\ker\phi)=2$. Besides, it is easy to check that $P\subseteq \ker\phi$ is a (prime) ideal of height strictly bigger than one; therefore both ideals coincide.

Finally, we can apply Corollary \ref{corjacobian} to the affine domain $C/P$. After computations we verify that
\[(U+1)(2U+1),\,U(2U+1),\,U(U+1)+ST,\,ST\in J,\]
where $J$ denotes the Jacobian ideal defining the singular locus of $C/P$. But it is easy to check that
\[C=((U+1)(2U+1),U(2U+1),U(U+1)+ST,ST),\]
therefore the singular locus is empty; hence $C/P$ is regular and, in particular, normal. In conclusion, an explicit description of the normalization of $A$ as an affine ring is
\[\overline{A}\cong k[X,Y,T,S,U]/(YU-XT,XU+YS+X,U^2+U+TS).\]

\begin{remark}
One can go forward in a natural way by computing the normalization of forcing algebras with forcing equations of the form $h=x^nt+y^ns+xy$, for $n\geq2$. However, already for the case $n=3$, new methods seem to be needed. In particular, we get an ideal
\[P=(YU-X^2T,XU+X+Y^2S,U^2+U+XYST).\]
But, in order to apply Lemma \ref{denominator}, the most challenging part appears to be finding an explicit description of the generators of the corresponding denominator ideal, because
\[S'=-(X+UX)/Y^2=-(U^2+U)/XYT,\]
and therefore we just know that $Y^2,XYT\in(E:S')$, where
\[E=k[X,Y,U,T]/(YU-X^2T).\]
But in this case the ideal $(Y^2,XYT)$ is not prime, unlike in the argument before, where we obtained the prime ideal $(Y,T)$ as denominator ideal.
\end{remark}
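The two integral equations appearing in this section, the one for $u=tx/y$ in the main example and the one for $u=x^2t/y$ in the preceding remark, can be verified symbolically. In the following minimal \textsf{SymPy} check (ours), each candidate equation, multiplied by $y^2$, is confirmed to be a multiple of the corresponding forcing equation, hence zero in the respective domain:

\begin{verbatim}
from sympy import symbols, expand

x, y, t, s = symbols('x y t s')

# n = 2:  h = x^2 t + y^2 s + x y,  u = t x / y,  u^2 + u + st = 0
h2, u2 = x**2*t + y**2*s + x*y, t*x/y
assert expand(y**2*(u2**2 + u2 + s*t) - t*h2) == 0

# n = 3:  h = x^3 t + y^3 s + x y,  u = x^2 t / y,
#         u^2 + u + x y s t = 0
h3, u3 = x**3*t + y**3*s + x*y, x**2*t/y
assert expand(y**2*(u3**2 + u3 + x*y*s*t) - x*t*h3) == 0
\end{verbatim}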
This section on its own suggests a way for forthcoming research on computing the normalization of forcing algebras.

\section*{Acknowledgement}
Danny de Jes\'us G\'omez-Ram\'irez would like to thank his parents Jos\'e Omar G\'omez Torres and Luz Stella Ram\'irez Correa for all the assistance, love and inspiration. Besides, he wishes to thank the German Academic Exchange Service (DAAD) for the financial and academic support.
\section{Introduction}\label{sec:intro}

In game theory, turn-taking games are represented by game trees referred to as extensive-form games. It is well known that the concept of {\em game-theoretic rationality} provides a canonical approach towards solving extensive-form games with perfect information, namely {\em Backward Induction} (BI), which yields a subgame-perfect equilibrium (which is unique in generic turn-taking games with no payoff ties)~\cite{perea12,perea07}. However, BI is often criticized for not taking into account that a player may end up in one particular subgame rather than another, e.g. after a deviation of other players from their BI strategy. Thus, the past moves and reasoning of the players are not taken into consideration, only the future. All players commonly believe in everybody's future rationality, no matter how irrational players' past behavior has already proven. In some cases, such complete lack of consideration of previous moves may be suboptimal for players.

An alternative approach, that of {\em Forward Induction} (FI), does take these previous moves of the opponent(s) under consideration and tries to rationalize the opponent's past behavior for assessing his future moves. Thus, when a player is about to play in a subgame that has been reached due to some strategy of the opponent that is not consistent with both common knowledge (belief) of rationality for each of the players {\em and} his past behavior, the player may still rationalize the opponent's past behavior. She may attribute to her opponent a strategy that is optimal against a possible suboptimal strategy of hers, or a strategy that is optimal against some rational strategy of hers, which is only optimal against a suboptimal strategy of his, and so on. If the player pursues this kind of rationalizing reasoning to the highest extent possible and reacts accordingly, she ends up choosing what is called an Extensive-Form Rationalizable (EFR) strategy~\cite{pearce84,battigalli96,battigalli97}. For perfect-information games, to which we restrict our attention in this paper, we will be using Definition 1 in~\cite{hp14}.

Even though EFR strategies may be distinct from BI strategies (e.g., see~\cite{reny92}, Game 1 in Figure 1), in perfect-information games without relevant payoff ties it has been shown that the unique EFR outcome coincides with the unique BI outcome~\cite{battigalli96,battigalli97,hp14}. In case there are relevant payoff ties, however, EFR outcomes may form a---possibly strict---subset of the BI outcomes~\cite{chenmicali11,chenmicali13,perea12,hp14}; see Game 3 in Figure 1.

In~\cite{ghv14}, we asked the question: {\em Are people inclined to use forward induction when they play a game?} Our pivotal interest was to examine participants' behavior following a deviation from BI behavior by their opponent right at the beginning of the game. We designed a Marble Drop game experiment in which the participants played several rounds of turn-taking games with the computer. Related research has been done on how people reason in dynamic games, also focusing on forward induction~\cite{bn08,shahriar14,cachon1996,huck2005,chlass2016}; however, those games are not of perfect information, in contrast to our Marble Drop games. In these games, we programmed the computer so as to follow, in each repetition of each game, a strategy which is optimal with respect to some strategy of the human participant.
We provided a more intuitive framing of the game trees, inspired by Meijering et al.'s Marble Drop~\cite{meijering2010,meijering2011}. In~\cite{ghv14}, it turned out that in the aggregate, people's first decisions could be explained as EFR behavior. However, it also seemed that in many cases, cardinal effects could have played a role. Consequently, we wanted to design a new Marble Drop experiment which minimizes this cardinal effect and, in the process, improves certain other aspects of our previous experiment, based on our own findings and discussions with colleagues working in empirical research.

To summarize, the main differences between the current Marble Drop experiment and the one reported in~\cite{ghv14} are as follows. The aim is to have a better overall understanding of the participants' behavior:\\

\noindent - At the beginning of the experiment, we explained verbally, in a more comprehensive way, how the computer agent was programmed (including ``Imagine that you are playing against a different computer opponent each time'' and ``the computer thinks that you already have a plan for that game, and it plays the best response to the plan it thinks that you have for that game; however, the computer does not learn from previous games and does not take into account your choices during the previous games'').

\noindent - We changed the payoff structures of the experimental games in order to minimize cardinal effects on the behavior of the participants.

\noindent - We modified the questions asked to the Group A and Group B participants, to direct some of the participants towards strategic reasoning while playing the games and to note the effects of such prompts.

\noindent - We asked more structured and pointed questions about the participants' decision-making process at the end of the experiment, to bring out the reasoning behind their behavior in a more explicit manner.

\noindent - We changed the monetary incentive so that a wealth effect would be eradicated. By basing payment on the number of marbles in a randomly drawn game, the participants perceived a clear difference between earning $k$ and $k+1$ marbles in a game (not just 5 cents as in the previous experiment, but 3.75 euros).\\

\noindent We mainly focused on the following questions:\\

\noindent 1) Do participants now more clearly play according to FI?

\noindent 2) If not, what are they actually doing? What roles are played by risk attitudes and cooperativeness versus competitiveness?

\noindent 3) Can they be reasonably divided into types of players?\\

Additionally, we are interested in whether people take the perspective of their opponent and make use of \emph{theory of mind} while playing the Marble Drop games. Theory of mind refers to the ability to reason about unobservable mental content of others, such as beliefs, desires, or goals~\cite{Premack1978}. This theory of mind ability can even be used to reason about the theory of mind of others, and thus to reason about the way others reason about beliefs and goals. This \emph{second-order theory of mind} allows us to understand sentences such as ``Alice \emph{knows} that Bob \emph{wants} to get six marbles'', and to use this to adjust our predictions of the behavior of Alice.

The rest of this paper is structured as follows. In Section 2, we explain the new experimental games and the differences with the previous experiment~\cite{ghv14}.
Section 3 presents the experimental results about the participants' decisions in the games, as well as their reports about the reasoning behind their behavior. Section 4 discusses the results and provides suggestions for future research.

\section{A Marble Drop Experiment}\label{sec:expt}

In~\cite{ghv14}, we designed a Marble Drop game experiment to investigate whether people are inclined to use Forward Induction (FI) when they play dynamic perfect-information games. The participants played 8 rounds of turn-based games against a computer opponent, repeating in each round a set of 6 games that were distinct in terms of payoff structures. By letting participants play against a computer opponent, we ensured that each participant encounters the same situations, which allows us to eliminate \emph{variability due to the strategy of the opponent} in our analysis. In these two-player games, the players play alternately. Let $C$ denote the computer, and let $P$ denote the participant. In four of the games the computer plays first, followed by the participant, and each of the players can play at two decision nodes. In the remaining two games, which are truncated versions of two of the games described earlier, the participant gets the first chance to move.

It appeared that participants did apply FI in playing these games; that is, in all likelihood the participants responded in a way which is optimal with respect to the conjecture that the computer is after a larger prize than the one it has foregone, even when this necessarily meant that the computer had attributed future irrationality to the participant when the computer made the first move in the games. However, a closer look at the individual choices revealed that cardinal effects probably played a role in these choices.
\begin{figure*}[t] \begin{tabular}{cc} \begin{tikzpicture}[ dot/.style={shape=circle,fill=black,minimum size=0pt, inner sep=0pt,outer sep=0pt}, ] \matrix[matrix of nodes, column sep=3ex, row sep=4ex] { |[dot,label=above:C] (p1)| {} & |[dot,label=above:P] (p2)| {} & |[dot,label=above:C] (p3)| {} & |[dot,label=above:P] (p4)| {} & |[dot, label=right:{(6,3)}] (p5)| {}\\[1cm] |[dot,label=below:{(4,1)}] (p6)| {}&|[dot,label=below:{(1,2)}] (p7)| {} &|[dot,label=below:{(3,1)}] (p8)| {} & |[dot,label=below:{(1,4)}] (p9)| {} & \\ }; \draw (p1) edge node[above]{\scriptsize{$b$}} (p2); \draw (p2) edge node[above]{\scriptsize{$d$}} (p3); \draw (p3) edge node[above]{\scriptsize{$f$}} (p4); \draw (p4) edge node[above]{\scriptsize{$h$}} (p5); \draw (p1) edge node[left]{\scriptsize{$a$}} (p6); \draw (p2) edge node[left]{\scriptsize{$c$}} (p7); \draw (p3) edge node[left]{\scriptsize{$e$}} (p8); \draw (p4) edge node[left]{\scriptsize{$g$}} (p9); \end{tikzpicture}& \begin{tikzpicture}[ dot/.style={shape=circle,fill=black,minimum size=0pt, inner sep=0pt,outer sep=0pt}, ] \matrix[matrix of nodes, column sep=3ex, row sep=4ex] { |[dot,label=above:C] (p1)| {} & |[dot,label=above:P] (p2)| {} & |[dot,label=above:C] (p3)| {} & |[dot,label=above:P] (p4)| {} & |[dot, label=right:{(6,3)}] (p5)| {}\\[1cm] |[dot,label=below:{(4,1)}] (p6)| {}&|[dot,label=below:{(1,2)}] (p7)| {} &|[dot,label=below:{(3,1)}] (p8)| {} & |[dot,label=below:{(1,4)}] (p9)| {} & \\ }; \draw (p1) edge node[above]{\scriptsize{$b$}} (p2); \draw (p2) edge node[above]{\scriptsize{$d$}} (p3); \draw (p3) edge node[above]{\scriptsize{$f$}} (p4); \draw (p4) edge node[above]{\scriptsize{$h$}} (p5); \draw (p1) edge node[left]{\scriptsize{$a$}} (p6); \draw (p2) edge node[left]{\scriptsize{$c$}} (p7); \draw (p3) edge node[left]{\scriptsize{$e$}} (p8); \draw (p4) edge node[left]{\scriptsize{$g$}} (p9); \end{tikzpicture}\\ Game 1&Game 2\\[2mm] \begin{tikzpicture}[ dot/.style={shape=circle,fill=black,minimum size=0pt, inner sep=0pt,outer sep=0pt}, ] \matrix[matrix of nodes, column sep=3ex, row sep=4ex] { |[dot,label=above:C] (p1)| {} & |[dot,label=above:P] (p2)| {} & |[dot,label=above:C] (p3)| {} & |[dot,label=above:P] (p4)| {} & |[dot, label=right:{(6,3)}] (p5)| {}\\[1cm] |[dot,label=below:{(4,1)}] (p6)| {}&|[dot,label=below:{(1,2)}] (p7)| {} &|[dot,label=below:{(3,1)}] (p8)| {} & |[dot,label=below:{(1,4)}] (p9)| {} & \\ }; \draw (p1) edge node[above]{\scriptsize{$b$}} (p2); \draw (p2) edge node[above]{\scriptsize{$d$}} (p3); \draw (p3) edge node[above]{\scriptsize{$f$}} (p4); \draw (p4) edge node[above]{\scriptsize{$h$}} (p5); \draw (p1) edge node[left]{\scriptsize{$a$}} (p6); \draw (p2) edge node[left]{\scriptsize{$c$}} (p7); \draw (p3) edge node[left]{\scriptsize{$e$}} (p8); \draw (p4) edge node[left]{\scriptsize{$g$}} (p9); \end{tikzpicture}& \begin{tikzpicture}[ dot/.style={shape=circle,fill=black,minimum size=0pt, inner sep=0pt,outer sep=0pt}, ] \matrix[matrix of nodes, column sep=3ex, row sep=4ex] { |[dot,label=above:C] (p1)| {} & |[dot,label=above:P] (p2)| {} & |[dot,label=above:C] (p3)| {} & |[dot,label=above:P] (p4)| {} & |[dot, label=right:{(6,3)}] (p5)| {}\\[1cm] |[dot,label=below:{(4,1)}] (p6)| {}&|[dot,label=below:{(1,2)}] (p7)| {} &|[dot,label=below:{(3,1)}] (p8)| {} & |[dot,label=below:{(1,4)}] (p9)| {} & \\ }; \draw (p1) edge node[above]{\scriptsize{$b$}} (p2); \draw (p2) edge node[above]{\scriptsize{$d$}} (p3); \draw (p3) edge node[above]{\scriptsize{$f$}} (p4); \draw (p4) edge node[above]{\scriptsize{$h$}} (p5); \draw (p1) edge 
node[left]{\scriptsize{$a$}} (p6); \draw (p2) edge node[left]{\scriptsize{$c$}} (p7); \draw (p3) edge node[left]{\scriptsize{$e$}} (p8); \draw (p4) edge node[left]{\scriptsize{$g$}} (p9);
\end{tikzpicture}\\
Game 3&Game 4
\end{tabular}
\caption[]{Collection of the main games used in the experiment. The ordered pairs at the leaves represent payoffs for the computer ($C$) and the participant ($P$), respectively.}
\label{fig:maingames}
\end{figure*}

\begin{figure*}[t]
\begin{center}
\begin{tabular}{cc}
\begin{tikzpicture}[
dot/.style={shape=circle,fill=black,minimum size=0pt, inner sep=0pt,outer sep=0pt},
]
\matrix[matrix of nodes, column sep=3ex, row sep=4ex]
{
|[dot,label=above:P] (p2)| {} & |[dot,label=above:C] (p3)| {} & |[dot,label=above:P] (p4)| {} & |[dot, label=right:{(6,3)}] (p5)| {}\\[1cm]
|[dot,label=below:{(1,2)}] (p7)| {} &|[dot,label=below:{(3,1)}] (p8)| {} & |[dot,label=below:{(1,4)}] (p9)| {} & \\
};
\draw (p2) edge node[above]{\scriptsize{$d$}} (p3); \draw (p3) edge node[above]{\scriptsize{$f$}} (p4); \draw (p4) edge node[above]{\scriptsize{$h$}} (p5); \draw (p2) edge node[left]{\scriptsize{$c$}} (p7); \draw (p3) edge node[left]{\scriptsize{$e$}} (p8); \draw (p4) edge node[left]{\scriptsize{$g$}} (p9);
\end{tikzpicture}&
\begin{tikzpicture}[
dot/.style={shape=circle,fill=black,minimum size=0pt, inner sep=0pt,outer sep=0pt},
]
\matrix[matrix of nodes, column sep=3ex, row sep=4ex]
{
|[dot,label=above:P] (p2)| {} & |[dot,label=above:C] (p3)| {} & |[dot,label=above:P] (p4)| {} & |[dot, label=right:{(6,3)}] (p5)| {}\\[1cm]
|[dot,label=below:{(1,2)}] (p7)| {} &|[dot,label=below:{(3,1)}] (p8)| {} & |[dot,label=below:{(1,4)}] (p9)| {} & \\
};
\draw (p2) edge node[above]{\scriptsize{$d$}} (p3); \draw (p3) edge node[above]{\scriptsize{$f$}} (p4); \draw (p4) edge node[above]{\scriptsize{$h$}} (p5); \draw (p2) edge node[left]{\scriptsize{$c$}} (p7); \draw (p3) edge node[left]{\scriptsize{$e$}} (p8); \draw (p4) edge node[left]{\scriptsize{$g$}} (p9);
\end{tikzpicture}\\
Game $1'$&Game $3'$
\end{tabular}
\end{center}
\caption[]{Truncated versions of Game 1 and Game 3.}
\label{fig:auxgames}
\end{figure*}

To get more conclusive evidence for FI reasoning on the part of the participants, we designed the current experiment, in which the participants once again played 8 rounds of turn-based games, repeating in each round a set of 6 games as earlier. The main difference from the earlier experiment lies in the payoff structures of the 6 games used for the experiment. We revised the payoff structures so as to minimize the probable cardinal effects reported in~\cite{ghv14}. The main features of the new payoff structures (cf. Figures \ref{fig:maingames} and \ref{fig:auxgames}) are as follows:

\noindent - Game 1 differs from Game 2 only in the payoff of the computer ($C$) following action $a$ (ending the game right away). As a result, Game $1'$ is the truncation of Game 1 as well as of Game 2. This means that participants who get to play in Game 1 and Game 2 face exactly the same continuation game (Game $1'$). We assumed that these new payoffs would enable an unobstructed comparison between the participants' behavior across these games (i.e., without confounding effects of cardinal differences in payoffs, which may have interfered in the previous experiment~\cite{ghv14}).
In~\cite{ghv14}, Game 1 and Game 2 had the same tree structures as currently, but the payoffs of player $C$ following $a$ and $e$ were interchanged, so that Game $1'$ was not a truncation of Game 2; hence some cardinal effect may have been instrumental in participants' choices at the nodes of these games.

\noindent - The same holds for Game 3 and Game 4, respectively.

\noindent - Game 1 differs from Game 3 only in $P$'s payoff following $h$ at the very end. One can compare Game 2 and Game 4 in a similar manner. Previously, the payoff structure of Game 1 differed from that of Game 3, and the same for Game 2 and Game 4, respectively.

\noindent - We removed all the zero payoffs from these games, which some participants experienced as particularly ``bad'' in the previous experiment (according to their verbal reports).

\noindent - At $C$'s second chance to move, exiting by playing $e$ guarantees $C$ a payoff of 3, which is only slightly smaller than the expected payoff 3.5 of a fifty-fifty lottery between $g$ (with which $C$ gets 1) and $h$ (with which $C$ gets 6). This means that if, at an earlier node, $P$ believes that $C$ is ``confused'' (because $C$ has just deviated from its BI behavior), attributing to $C$ a fifty-fifty belief on $P$'s future behavior at the last node (if reached), then $P$ cannot conclude what $C$ will do next (choose $e$ or $f$) if $P$ believes that $C$ is mildly risk-averse, because the precise level of $C$'s presumed risk aversion would determine whether a sure payoff of 3 is preferable or not to a fifty-fifty lottery between 1 and 6 in $C$'s eyes. Furthermore, if, due to the above, participant $P$ has a 50-50 belief on $C$'s future choice between $e$ and $f$, and $P$ is herself mildly risk-averse, cardinal considerations on their own will not give her clear guidance on what to do: exiting by playing $c$ will guarantee the payoff 2, while continuing by playing $d$ yields a fifty-fifty lottery between 1 (if $C$ chooses $e$) and 4 (if $C$ chooses $f$), with a slightly higher expected payoff of 2.5. Thus, participants who are unsure how to interpret $C$'s initial deviation from BI will neither be able to come up with an easy forecast regarding $C$'s future behavior on the basis of cardinal payoff considerations, nor will they have a strong preference for $c$ or $d$ on the basis of cardinal payoff considerations---which was a potential confounding explanation for participants' behavior in the previous experiment, and one we would like to circumvent.

To get some idea about BI and FI strategies in the current experimental games, let us now concentrate on Game 1 (a variant of Reny's game~\cite{reny92}). (For the remaining games, the reader may adapt the reasoning strategies discussed in~\cite{ghv14}.) In Game 1, the unique Backward Induction (BI) strategy for player $C$ is $a;e$, while for player $P$ it is $c;g$. In case the last decision node of the game is reached, player $P$ will play $g$ (which will give $P$ a better payoff at that node), yielding 1 for $C$. Thus, at the previous node, if reached, $C$ will play $e$ to be better off. Continuing like this from the end to the start of the game by BI reasoning, it can be inferred that whoever is the current player will choose to end the game immediately.
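The BI reasoning just described is easy to make concrete in code. The following Python sketch (ours; it is not part of the experimental software, and the tree encoding and the function name are our own) solves Game 1 of Figure \ref{fig:maingames} by backward induction:

\begin{verbatim}
# Game 1 as a nested tree; payoff pairs are (C, P).
GAME1 = ('C', {'a': (4, 1),
               'b': ('P', {'c': (1, 2),
                           'd': ('C', {'e': (3, 1),
                                       'f': ('P', {'g': (1, 4),
                                                   'h': (6, 3)})})})})

def backward_induction(node, plan=None):
    """Return the BI outcome; record one optimal move per decision
    node in `plan` (listed from the last decision node backwards)."""
    if plan is None:
        plan = {'C': [], 'P': []}
    if not isinstance(node[1], dict):
        return node, plan                    # leaf: a payoff pair
    player, moves = node
    idx = 0 if player == 'C' else 1          # own payoff coordinate
    best_move, best_payoff = None, None
    for move in sorted(moves):
        payoff, _ = backward_induction(moves[move], plan)
        if best_payoff is None or payoff[idx] > best_payoff[idx]:
            best_move, best_payoff = move, payoff
    plan[player].append(best_move)
    return best_payoff, plan

outcome, plan = backward_induction(GAME1)
print(outcome, plan)  # (4, 1) {'C': ['e', 'a'], 'P': ['g', 'c']}
\end{verbatim}

The printed plan recovers the BI strategies $a;e$ for $C$ and $c;g$ for $P$, and the BI outcome $(4,1)$.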
Forward induction, in contrast, would proceed as follows. Among the two strategies of player $C$ that are compatible with reaching the first decision node of player $P$, namely $b;e$ and $b;f$, only the latter is rational for player $C$. This is because $b;e$ is dominated by $a;e$, while $b;f$ is optimal for player $C$ if she believes that player $P$ will play $d;h$ with a high enough probability. Attributing to player $C$ the strategy $b;f$ is thus player $P$'s best way to rationalize player $C$'s choice of $b$, and in reply, $d;g$ is player $P$'s best response to $b;f$. Thus, the unique Extensive-Form Rationalizable (EFR) strategy of $P$ is $d;g$, which is distinct from her BI strategy $c;g$. Nevertheless, player $C$'s best response to $d;g$ is $a;e$, which is therefore player $C$'s EFR strategy. Hence the EFR outcome of the game (with the EFR strategies $a;e$ and $d;g$) is identical to the BI outcome. A summary of the strategies in the games of Figures \ref{fig:maingames} and \ref{fig:auxgames} is given in Table \ref{game-summary}.

\begin{table}
\begin{center}
\begin{tabular}{ || l | l | l || }
\hline\hline
{\scriptsize\bf Game} & {\scriptsize\bf BI strategies} & {\scriptsize\bf EFR strategies} \\ \hline
{\scriptsize Game 1} & {\scriptsize C: $a;e$} & {\scriptsize C: $a;e$} \\
 & {\scriptsize P: $c;g$} & {\scriptsize P: $d;g$} \\ \hline
{\scriptsize Game 2} & {\scriptsize C: $a;e$} & {\scriptsize C: $a;e$} \\
 & {\scriptsize P: $c;g$} & {\scriptsize P: $c;g$} \\ \hline
{\scriptsize Game 3} & {\scriptsize C: $a;e, b;e, a;f, b;f$} & {\scriptsize C: $a;e, a;f, b;f$} \\
 & {\scriptsize P: $c;g, d;g, c;h, d;h$} & {\scriptsize P: $d;g, d;h$} \\ \hline
{\scriptsize Game 4} & {\scriptsize C: $a;e, b;e, a;f, b;f$} & {\scriptsize C: $a;e, b;e, a;f, b;f$} \\
 & {\scriptsize P: $c;g, d;g, c;h, d;h$} & {\scriptsize P: $c;g, d;g, c;h, d;h$} \\ \hline
{\scriptsize Game $1'$} & {\scriptsize C: $e$} & {\scriptsize C: $e$} \\
 & {\scriptsize P: $c;g$} & {\scriptsize P: $c;g$} \\ \hline
{\scriptsize Game $3'$} & {\scriptsize C: $e, f$} & {\scriptsize C: $e, f$} \\
 & {\scriptsize P: $c;g, d;g, c;h, d;h$} & {\scriptsize P: $c;g, d;g, c;h, d;h$} \\ \hline\hline
\end{tabular}
\end{center}
\caption{BI and EFR (FI) strategies for the six experimental games in Figures \ref{fig:maingames} and \ref{fig:auxgames}.}
\label{game-summary}
\end{table}
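The best-rationalization step behind EFR can likewise be checked by brute force. In the sketch below (ours; same payoff conventions as in the previous snippet), we enumerate all strategies of Game 1, collect the strategies of $C$ that start with $b$ and are optimal against at least one strategy of $P$, and then compute $P$'s best responses to them:

\begin{verbatim}
from itertools import product

C_STRATS = list(product('ab', 'ef'))    # (first move, second move)
P_STRATS = list(product('cd', 'gh'))

def outcome(c, p):
    """Payoff pair (C, P) in Game 1 under strategies c and p."""
    if c[0] == 'a': return (4, 1)
    if p[0] == 'c': return (1, 2)
    if c[1] == 'e': return (3, 1)
    return (1, 4) if p[1] == 'g' else (6, 3)

def c_best_responses(p):
    best = max(outcome(c, p)[0] for c in C_STRATS)
    return [c for c in C_STRATS if outcome(c, p)[0] == best]

def p_best_responses(c):
    best = max(outcome(c, p)[1] for p in P_STRATS)
    return [p for p in P_STRATS if outcome(c, p)[1] == best]

# C-strategies that reach P's first node and are optimal against
# at least one strategy of P:
rationalizing = {c for p in P_STRATS for c in c_best_responses(p)
                 if c[0] == 'b'}
print(rationalizing)                    # {('b', 'f')}
print(p_best_responses(('b', 'f')))     # [('d', 'g')]
\end{verbatim}

As the output shows, $b;f$ is the only way for $P$ to rationalize $C$'s choice of $b$, and $d;g$ is $P$'s unique best response to it, in line with the EFR strategies in Table \ref{game-summary}.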
We now discuss the experimental procedure, which is almost the same as reported in~\cite{ghv14}. A group of 50 Bachelor's and Master's students from different disciplines participated in the experiment. As in~\cite{ghv14}, the participants had little or no knowledge of game theory, so as to ensure that neither backward induction nor forward induction reasoning was already known to them. 96\% of the participants were aged between 18 and 32. 48\% of the participants were female, 52\% of the participants were male. All experiments were conducted at the Institute of Artificial Intelligence at the University of Groningen, in the ALICE Lab. The participants played the finite perfect-information games in a graphical interface on the computer screen (cf. Figure \ref{fig:interface}). In each game, a marble was about to drop, and both the participant and the computer determined its path: the participant controlled the orange trapdoors, and the computer controlled the blue trapdoors. The participant's (computer's) goal was that the marble should drop into the bin with as many orange (blue) marbles as possible.

\begin{figure}[h]
\begin{center}
\includegraphics[width=.5\textwidth]{instr-game-sz.pdf}
\end{center}
\caption[]{Graphical interface for the participants.}
\label{fig:interface}
\end{figure}

At the start of the experiment, participants received verbal instructions (based on an instruction sheet, see Appendix A) regarding the experiment. These instructions emphasized two facts in particular. Participants were advised to regard each game as if it were played against a new opponent. Furthermore, participants were notified about the fact that at the start of each new game, the computer had its strategy planned out, and that strategy was a best response to a strategy that the computer assumed the participant would play. The participants first played 14 practice games of increasing difficulty so as to get acquainted with the game setting. The experimental phase consisted of 8 rounds. In each round, the participants played the six games that were described above against a computer opponent. The order in which these 6 games were played in each round was randomized. In each new round, the graphical representation of each game was altered, so as to minimize the possibility for participants to recognize the games they had played in earlier rounds.

\begin{figure}[h]
\begin{center}
\includegraphics[width=.8\textwidth]{finalquestion.png}
\end{center}
\caption[]{Questionnaire based on a representation of Game 4.}
\label{fig:finalquestion}
\end{figure}

The participants were randomly divided into two groups: Group A and Group B, each consisting of 25 persons. While playing in certain rounds of a game, as they were about to make their choices at their first decision node, Group A participants were asked a multiple-choice question as follows. (i) For games 1--4: ``The computer just chose to go [direction computer just chose]. If you choose to go [direction corresponding to playing $d$], what do you think the computer would do next?'', or, (ii) For games $1', 3'$: ``It's your turn. If you choose to go [direction corresponding to playing $d$], what do you think the computer would do next?''. Three options were given regarding the likely choice of the computer: ``I think the computer would most likely open the left side'', ``I think the computer would most likely open the right side'', or ``Both answers seem equally likely''. The first two answers translated to the moves $e$ or $f$ of the computer, respectively. In case of the third answer, we assumed that the participant was undecided regarding the computer's next choice. For the Group B participants, questions were asked in certain rounds at the end of a game: (i) For games 1--4: ``The computer first chose to go [direction computer chose at its first decision point]. When you made your first choice, what did you think the computer would do next if you chose to go [direction corresponding to playing $d$]?'', or, (ii) For games $1', 3'$: ``When you made your first choice, what did you think the computer would do next if you chose to go [direction corresponding to playing $d$]?''. The answer choices were the same as above. At the end of the experiment, the participants were presented with two questionnaires, both based on an image of one of the experimental games (cf. Figure \ref{fig:finalquestion}, based on a representation of Game 4). All participants were asked these questions based on the same images.
Finally, they were paid according to the marbles they earned in one of the experimental games, selected randomly for each participant by a mechanism proposed by Allais~\cite{allais53} and supported as incentive-compatible by Azrieli et al.~\cite{healy12}. The reward varied from 3.75 euros to 15 euros, depending on the number of marbles (1--4) that the participant won in the randomly selected game.

\section{Results and analysis}\label{sec:results}

We did not find any significant difference in behavior between participants in Groups A and B. Instead, we found strong evidence that participants were equally likely to choose to continue (playing $d$) at the first decision node (Bayes Factor = 0.056). Consequently, we analyze the data of all 50 participants together.

\begin{figure}[p]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=.45\textwidth]{figures/game1_v2.png} &
\includegraphics[width=.45\textwidth]{figures/game2_v2.png}\\
\includegraphics[width=.45\textwidth]{figures/game3_v2.png} &
\includegraphics[width=.45\textwidth]{figures/game4_v2.png}\\
\includegraphics[width=.45\textwidth]{figures/game1p_v2.png} &
\includegraphics[width=.45\textwidth]{figures/game3p_v2.png}\\
\end{tabular}
\end{center}
\caption[]{Sequence of choices (across the 8 repetitions of each game) at the first decision node of games 1, 2, 3, 4, $1'$, $3'$, per participant (named A1 \ldots A25, B1 \ldots B25). The {\em dark grey} color corresponds to the rounds in which the participant played move $d$, and the {\em light grey} color corresponds to the rounds in which the participant played move $c$, whenever the participant's first decision node was reached. Note that white horizontal bands correspond to rounds in which the computer took option $a$, thereby ending the game.}
\label{fig:choices1}
\end{figure}

\subsection{Aggregate results: Is there a forward induction trend at participants' first node?}

Our main hypothesis was that if the participants played $c$ more in Game 2 than in Game 1, and played $c$ more in Game 4 than in Game 3, then we could safely assume that the participants' behavior showed some inclination towards FI reasoning. To test this hypothesis, we first perform a mixed-effects logistic regression on participant choices in Game 1 and Game 2. In this model, we determine whether the probability that a participant chooses $c$ in a given game depends on whether the game is an instance of Game 1 or an instance of Game 2. To account for individual differences, participants are treated as random effects. The results show that participants were indeed significantly more likely to play $c$ in Game 2 than in Game 1 ($p < 0.01$). A second logistic regression for Game 3 and Game 4 shows that participants were also significantly more likely to play $c$ in Game 4 than in Game 3 ($p < 0.02$). These results suggest that in the aggregate, participants' behavior shows inclination towards FI reasoning.
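As an indication of how such an analysis can be set up, the sketch below fits a mixed-effects logistic model in Python. The data file, the column names (\texttt{chose\_c}, \texttt{game}, \texttt{subject}), and the use of statsmodels' variational-Bayes mixed GLM as a stand-in for the frequentist mixed logit reported here are our assumptions, not a description of the original analysis pipeline:

\begin{verbatim}
# Sketch: P(chose_c) as a function of game identity (Game 1 vs.
# Game 2), with a random intercept per participant.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("choices.csv")               # hypothetical file:
df = df[df["game"].isin(["game1", "game2"])]  # one row per decision

model = BinomialBayesMixedGLM.from_formula(
    "chose_c ~ game",                  # fixed effect: game identity
    {"subject": "0 + C(subject)"},     # random intercepts
    df)
result = model.fit_vb()                # variational Bayes fit
print(result.summary())
\end{verbatim}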
Figure \ref{fig:choices1}) we found the following: Comparing Game 1 to Game 2, only 14 participants out of 50 played $c$ somewhat more in Game 2 than in Game 1 (meaning that the number of times such a participant played $c$ in the rounds of Game 2 minus the number of times he or she played $c$ in the rounds of Game 1 is more than 1). In contrast, 5 participants played $c$ somewhat less in Game 2 than in Game 1. For the remaining 31 participants, there was not much difference in their playing $c$ between Games 1 and 2. Comparing Game 3 to Game 4, only 10 participants out of 50 played $c$ somewhat more in Game 4 than they did in Game 3 (with no reverse cases), and for the remaining 40 participants there was not much difference in their playing $c$ across Games 3 and 4. We finally note that only 13 participants out of 50 played $c$ in more than half of the 8 rounds of Game $1'$. Interestingly, all these trends in participants' choices at their first decision points indicated a very low extent of game-theoretic strategic reasoning overall---be it BI or FI reasoning.

\begin{figure}[h]
\begin{center}
\begin{tabular}{ll}
\includegraphics[width=.45\textwidth]{finalanswer.pdf} & \includegraphics[width=.41\textwidth]{finalanswer2.pdf}\\
\end{tabular}
\includegraphics[width=.45\textwidth]{finalanswer3.pdf}
\end{center}
\caption{Reasoning behind participant behaviour. Answers to the final question classified on three dimensions: attitude towards risk (left), explicit order of theory of mind (middle), and level of cooperation (right).}
\label{fig:finalanswer}
\end{figure}

We should note here that in the current experiment, in all six games, the computer would gain 6 points if the game ended up at the rightmost leaf, compared to only 4 points at the rightmost leaves in the games of~\cite{ghv14}. This higher pay-off appears to have motivated the participants to think that the computer would take a risk to gain 6, and would therefore continue in the game.

Given these conflicting findings of the logistic regression and the visual inspection of the choice graphs, it is pertinent to ask: ``How exactly do participants reason while playing these games?'' In what follows, we attempt to answer this question.

\subsection{Dividing participants into groups: decisions and answers about opponent}

One way to divide the participants into groups is to analyze their answers to the questionnaire presented in Figure \ref{fig:finalquestion}. According to their answers, we classify the participants along the following three dimensions (see Figure \ref{fig:finalanswer}): (i) {\em risk-taking/risk-averse} considerations for self and opponents, (ii) explicit order of {\em theory of mind}, which has been shown to be important in strategic reasoning in dynamic games~\cite{meijering2010,meijering2011}, and (iii) {\em competitive/cooperative} considerations. To come up with our classification, we considered the decisions at each node of the game shown in Figure \ref{fig:finalquestion} and the participants' explanations of the reasoning behind their decisions.
As an example, consider the following sample answers of the participants A7 and B0 to the questionnaire given in Figure \ref{fig:finalquestion}:\footnote{The full answers of all participants to the final questionnaire will be provided at~\url{http://www.ai.rug.nl/SocialCognition/experiments/}.}

\medskip
\noindent {\em Participant A7:}\\
\noindent {\bf Direction A}: right
\noindent {\em Motivation:}\ B would probably open right, at which point C has the chance to get at least 3 marbles.
\noindent {\bf Direction B}: right
\noindent {\em Motivation:}\ If C opens left, B has a 100\% chance of getting 4 marbles. The chance that C opens left is big enough, since there are 6 marbles in D-right.
\noindent {\bf Direction C}: left
\noindent {\em Motivation:}\ Since the direction for D doesn't matter, and the amount of marbles of your opponent doesn't matter, the chance is quite big that D will choose right, giving you 6 marbles.
\noindent {\bf Direction D}: right
\noindent {\em Motivation:}\ I would pick right. Even against the computer it felt wrong to pick left (since the opponent's score doesn't matter for my own score).

\medskip
\noindent {\em Participant B0:}
\noindent {\bf Direction A}: right
\noindent {\em Motivation:}\ The player would hope for the largest amount of points he could get. 2 would not be enough when there is the chance to get three or six.
\noindent {\bf Direction B}: right
\noindent {\em Motivation:}\ The player would chose right because there his/her chances are good that he/she will end up with 4 balls. But there is the risk of having only one ball
\noindent {\bf Direction C}: right
\noindent {\em Motivation:}\ If the player decides to go left, he will let the other person choose how many balls he will end up with. This is a high risk since it is possible to get just one ball. Therefore the player would take the right path were he has the mean of 3.
\noindent {\bf Direction D}: right
\noindent {\em Motivation:}\ The player would chose right since both boxes contain 4 balls. Further it is not important to let the other player have less balls than you. Therefore I would say the player is nice and let the other person have the full amount of balls.\\

\medskip
\noindent Based on these answers, we came up with the classification shown in Figure \ref{fig:finalanswer}. Participant A7 considers the chance of getting more for both the players (risk-taker: self and opponent), takes into consideration how the opponent would play (first-order theory of mind), and does not want to wrong the opponent (cooperative). Participant B0 also considers the chance of getting more for both the players (risk-taker: self and opponent) and, in addition, considers safe play for the opponent (risk-averse: opponent), takes into consideration how the opponent would play (first-order theory of mind), and wants to be nice to the opponent (cooperative). Note that Directions A and C in Figure~\ref{fig:finalquestion} (blue trapdoors) are considered to be the computer's moves, and Directions B and D (orange trapdoors) are considered to be the participant's moves in the interpretation here.

\begin{figure}
\begin{center}
\includegraphics[width=.65\textwidth]{figures/lca_34.png}
\end{center}
\caption{Estimated probability of a stop decision (i.e.,
$c$ or $g$) according to the latent class analysis of participant choices in Game 3 and Game 4.}
\label{fig:lca34}
\end{figure}

\subsection{Dividing the participants by latent class analysis according to their decisions}

To further analyze participant choices, we used latent class analysis (LCA, \cite{mccutcheon1987latent}), allowing us to divide participants into classes that exhibit similar patterns of behavior. In our model, the behavior of a participant is described by whether they choose $c$ or $d$ at their first decision point of a game, and whether they choose $g$ or $h$ at their second decision point. LCA determines what classes of behavior best describe the observed participant data, given the number $n$ of such classes. Statistical model selection tools can be used to select the number of classes that is most appropriate for the data.

Our aim in the latent class analysis is to describe the behavior of participants at both their decision points. However, of the 181 times (out of the 300 times where the participants actually reached their first decision node) that a participant in Game 1 ended up at his or her second decision point, participants chose $h$ only 2 times. In addition, no participant in Game 2 ever reached the second decision point---the computer was programmed to play $e$ whenever its second decision node was reached. For this reason, we performed a latent class analysis on the two decision points of Game 3 and Game 4 only. The Bayesian Information Criterion (BIC) favors an LCA model with three classes, which we describe here. The results of the estimation parameters of the LCA are summarized in Figure \ref{fig:lca34}. In this figure, estimated probabilities of choosing $c$ at the first decision point or $g$ at the second decision point are averaged over repeated encounters of the same game.

According to the LCA, 46\% of the participants tend to choose to continue at the first decision point, but stop at the second decision point: Class 1. A further 34\% of the participants are represented by the second class, who prefer to continue at both decision points: Class 2. The final 20\% of the participants have a preference to stop at both decision points: Class 3. Thus, about 80\% of the participants belonged to Class 1 or 2, who tend to continue at the first decision node, and the remaining 20\% tend to stop. Out of these 41 participants assigned to Class 1 or Class 2 by LCA, 37 were classified as risk-takers (self), 37 were classified as risk-takers (opponent), while only two participants were classified as neither type of risk-taking, based on their final answers (see Figure \ref{fig:finalanswer}). Out of the 9 participants assigned to Class 3 by LCA, 7 were classified as risk-averse (self), 4 were classified as risk-averse (opponent), while only one participant did not mention any risk-aversion, once again based on their final answers.

As depicted in Figure \ref{fig:maingames}, the final decision point for Player $P$ in Game 3 and Game 4 only influences the payoff for the computer player $C$. Participants that ended up at this decision point more often chose the competitive option (60\%), in which the opponent only received a payoff of 1, than the cooperative option (40\%), which yielded the opponent a payoff of 6 (Bayes Factor = 132.7663).
In addition, of the 19 participants assigned to Class 2 by the LCA, who are therefore classified as being likely to choose the cooperative option at their second decision point, 15 were also classified as being cooperative based on their final answer (see Figure \ref{fig:finalanswer}). Moreover, of the 31 remaining participants, only 5 were classified as being cooperative based on their final answer. However, there was no clear mapping between classes 1 and 3 of the LCA results and the classifications `competitive' and `neither cooperative nor competitive' based on the final answers.

\begin{figure}
\begin{center}
\includegraphics[width=.65\textwidth]{figures/participantB4.png}
\end{center}
\caption{Example of participant choices at the first decision point across all games. This participant, like many others, tended to choose $c$ at the first decision point for Game 1, Game 2, and Game $1'$, but tended to choose $d$ for Game 3, Game 4, and Game $3'$.}
\label{fig:playergraph}
\end{figure}

\subsection{Individual participant decision patterns: differences and similarities}

Visual inspection of individual participant graphs shows a similar pattern. See Figure \ref{fig:playergraph} for an example of a graph of an individual participant's choices: In two of the rounds for games 1-4, selected randomly, the computer took the \emph{outside option}, that is, played $a$, and so the first decision node of the participant was not reached. It was noted that participants do differentiate between Game 1 and Game 2 on the one hand and Game 3 and Game 4 on the other hand: The participants played $c$ more often in Games 1 and 2 than in Games 3 and 4. Further, even though participants' behavior differed between Game 1 and Game 2 to some extent (not exceedingly so), participants appeared to behave similarly in Game 3 and Game 4. For the truncated games, behavior in Game $1'$ appeared similar to Game 1, and behavior in Game $3'$ appeared similar to both Game 3 and Game 4.

\section{Discussion}\label{sec:related}

The results outlined in Section \ref{sec:results} indicate that while in the aggregate, participant behavior is indicative of forward induction reasoning, most participants do not appear to be making use of either forward induction reasoning or backward induction reasoning. Apparently, game-theoretic rationality does not drive participants' choices in these turn-taking games. In this section, we take a look at some possible alternatives for the driving force behind people's choices in our experiment, in the light of the literature on human decision making and perspective taking in games.

\subsection{Level-$k$ reasoning}

In our experiment, we explicitly mention to the participants that the computer player acts based on its beliefs about future participant behavior, and that the computer player does not learn from the participant's previous actions. Participants are therefore playing a series of one-shot games against a computer opponent. In behavioral economics, the behavior of participants in such one-shot games has been successfully modeled through iterated best-response models such as level-$n$ theory \cite{stahl1995players,Bacharachstahl2000leveln,costa2001cognition,Nagel1995,bn08,Crawford2013}, cognitive hierarchies \cite{camerer2004cognitive}, quantal response equilibria \cite{mckelvey1995quantal}, and noisy introspection models \cite{goeree2004model}.
Similar to the theory of mind classification we describe in Section \ref{sec:results}, a participant's level of reasoning sophistication is measured by the maximum number of steps of iterated reasoning the participant considers. According to cognitive hierarchy models \cite{camerer2004cognitive}, a naive level-0 reasoner does not reason strategically at all, but instead chooses randomly among all available options. In contrast, a level-1 reasoner believes that all other participants are level-0 reasoners, and selects the option that is a best response to that belief. In terms of our Marble Drop game, level-1 reasoners would therefore maximize future payoffs without considering the opponent's past behavior. In the cognitive hierarchy model, a level-2 reasoner performs two steps of iterated reasoning. Such a reasoner believes that other individuals can be either level-0 reasoners that play randomly, or level-1 reasoners that maximize their expected payoff. In addition, level-2 reasoners form beliefs about the relative proportion of level-0 reasoners and level-1 reasoners. Over a range of one-shot non-repeated games, cognitive hierarchy models estimate participants to be level 1.5 reasoners on average \cite{camerer2004cognitive,costa2006cognition}. In terms of theory of mind reasoning, the vast majority of participants in these one-shot non-repeated games reason at zero-order or first-order theory of mind. Although some of the participants in those experiments were found to use more than two steps of iterated reasoning, only a few players were found to be well-described as higher-level agents \cite{wright2010beyond}.

\subsection{Considering the opponent's perspective}

Kawagoe has recently applied a level-$k$ analysis to centipede games to argue that participants' lack of backward induction may be due to their belief that the opponent could have made an error or cannot apply backward induction for the number of steps required~\cite{Kawagoe2012}. Evans and Krueger have shown that in a very simple dynamic trust game of perfect information, in which participants need to open windows in order to inspect payoffs, quite a few participants do not choose to inspect all the opponent's payoffs even though they do inspect all their own, thereby not applying even first-order theory of mind~\cite{Evans2011,Evans2014}. For higher orders of theory of mind, the situation seems even worse. It has also been shown that level-$k$ reasoning for higher $k$ is correlated with general cognitive ability~\cite{Gill2016}. These results could make one quite pessimistic about the value and actual use of higher-level perspective-taking in games.

Is the situation really that bad? De Weerd and colleagues have shown on the basis of agent simulations that both in competitive situations and in repeated mixed-motive interactions, agents attain higher pay-offs if they apply second-order theory of mind than if they apply first-order theory of mind, which is in turn more beneficial than zero-order theory of mind~\cite{Weerd2013,Weerd2017}, so the application of theory of mind is beneficial. Also theoretically, higher-order theory of mind is required in dynamic perfect information games~\cite{Pacuit2015}. But do people really use it?
De Weerd and colleagues~\cite{Weerd2017} showed that when people unknowingly play against a second-order theory of mind agent in a repeated negotiation game, they are enticed to apply second-order theory of mind, which became much more prevalent than when the participants played against zero-order or first-order agents.

In cognitive science, the application of perspective taking in turn-taking games of perfect information has been studied, especially on the basis of experiments in which the decision trees looked like our Games $1'$ and $3'$, but with a large variation of pay-off structures, not only those similar to centipede games. These various turn-taking games, when solved by participants without knowledge of the backward induction algorithm, do require second-order theory of mind: ``the opponent {\em thinks} that at my final move, I {\em intend} to go down''. Hedden and Zhang~\cite{hedden2002} showed that people have a hard time learning to apply second-order theory of mind in these games (around 60\% optimal decisions after many game items). Meijering and colleagues~\cite{meijering2010,meijering2011} proposed several supportive interventions that helped people to make much better decisions (up to around 90\% optimal ones at the end of playing many experimental games with different payoff structures). Even though the application of second-order theory of mind can apparently be supported and trained, Meijering and colleagues~\cite{meijering2014a} also showed on the basis of computational cognitive models that people do not start to apply higher orders of theory of mind spontaneously. Rather, people start to do this only when they receive negative feedback from the results of the games they play, for example, in the form of low pay-off, or the comment ``you could have chosen better''. They prefer reasoning that is ``as easy as possible, as complex as necessary''. And even when they do produce the backward induction {\em outcome}, analyses of their eye movements and reaction times show that their reasoning rather follows a stream of thinking from the root to its child nodes and then to the grandchildren in the decision tree, by ``forward reasoning plus backtracking'', giving special attention to decision points~\cite{meijering2012,Bergwerff2014}.

\subsection*{Coming back to the experiment}

The experiments done by~\cite{meijering2010,meijering2011,meijering2012} concern games in which the computer opponent has been programmed to make rational decisions, with the goal to attain the highest possible individual pay-off, and participants are told about this. In the games that participants play in the current paper, in contrast, the opponent often plays irrationally, and participants are told that in fact, the computer opponent is playing its best response to a strategy that it attributes to the participant. Let us see whether this leads to a difference in the application of theory of mind. For our previous experiment presented in \cite{ghv14}, we found that higher levels of theory of mind expressed in final answers were correlated positively with the number of marbles won in total in the games~\cite{Ghosh2017}, so it appears that theory of mind is indeed profitable in this type of Marble Drop game, in which the opponent sometimes acts in an unexpected way by not ending the game.
In their model-based analysis of strategies used by the participants of~\cite{meijering2011,flobbeverb}, Meijering and colleagues~\cite{meijering2014a} implemented one simple ``zero-order theory of mind'' strategy that ignores any future decisions and simply compares the immediate payoff, when stopping a game, against the maximum of all future possible payoffs, which they dubbed a ``simple risk-taking strategy''. A more advanced ``first-order theory of mind'' version of that model attributes the simple risk-taking strategy to the opponent. They showed that the sequences of decisions of many 9-year-old children playing the games of~\cite{flobbeverb} correspond to either the zero-order risk-taking strategy or the first-order version attributing risk-taking to the opponent; and that the adults' decisions in these games~\cite{meijering2011} correspond more closely to the first-order strategy of assigning the risk-taking strategy to the opponent, or to a second-order strategy.

It is interesting to see in Figure~\ref{fig:finalanswer} that in our games, many participants' answers also exemplify that they themselves ``take a risk'' or ``take a chance'' when moving to the right, namely 40 out of 50 participants; and a similar number of participants, namely 39 out of 50, attribute such risk-taking to their opponent. Also, in line with the work by Meijering and colleagues, most of our current adult participants (34 out of 50) appear to reason explicitly at the first or second order of theory of mind, at least when their answers are analyzed. Thus, it does appear that participants regularly think and act at least at first- or second-order theory of mind, which is rather high when considering how much more complicated our decision trees for games 1, 2, 3 and 4 are than those of~\cite{meijering2011,hedden2002} and especially~\cite{Evans2011}. For the current experiment, it would be useful to estimate more precisely which level of theory of mind each participant actually applies when making their own decisions, using computational techniques such as simulation of computational cognitive models~\cite{ghosh2014,Ghosh2017} or Bayesian strategy estimation~\cite{Weerdpress}.

\subsection{Moving toward cooperation}

It has long been noted in the literature on behavioral game theory that players' utilities do not correspond one-to-one to payoffs~\cite{mckelvey1992}. Healy~\cite{healy16} has elicited players' utilities over outcomes associated with centipede-like games, showing that there are many motivations that play a role. In our games, trying to get all the way to the right also corresponds to going for the outcomes with maximum social welfare; see Figures~\ref{fig:maingames} and~\ref{fig:auxgames}. It is quite possible that participants try to get there and aim to entice their opponent to move right as well (see also Nagel and Tang's experiment~\cite{nagel98}, in which participants may have reason to believe that their opponent could be an altruist). In this light, our participants could interpret the opponent's first ``irrational'' move to the right not as ``irrational'' at all, but as a first step towards cooperation to attain maximum social welfare.

\section{Conclusion}\label{sec:concl}

We made a number of improvements that, we thought, would make it easier than in~\cite{ghv14} for participants to apply backward or forward induction reasoning in the perfect-information games in a Marble Drop set-up.
We found that even though in the aggregate, participants in the new experiment still tend to slightly favor the forward induction choice at their first decision node, their verbalized strategies most often depend on their own attitudes towards risk and those they assign to the computer opponent, sometimes in addition to considerations about cooperativeness and competitiveness. When we analyzed the individual participants' decisions and their answers about the reasons behind their decisions, it turned out that many players do not find anything to rationalize when the computer does not take the safe and rational option to go down and stop the game at its first decision point. Instead, in almost all the games, many participants think that the computer is just taking a chance to gain a higher payoff later. Also, many of the players are willing to take such a risk at their own first decision point: very few players take the BI option of stopping the game there. Dividing the players into classes according to their decisions as well as according to how they reasoned about their own and the opponent's choices turned up several more nuanced patterns of reasoning.

In view of our findings for this experiment, our subsequent categorization of the elements of reasoning behind the behavior of participants, namely (i) risk-taking/risk-averseness tendency of self/opponent, (ii) competitive/cooperative tendency, and (iii) proficiency in applying higher-order theory of mind, has provided an understanding of a few possible player-types with respect to their strategic reasoning. In addition, the distinction of instinctive versus contemplative reasoners could be addressed in future investigations, requiring a detailed analysis of the time data~\cite{rubinstein2013,rubinstein2014}. Such an analysis of the temporal data could also provide new insights into the categorisation of the participants' behavior as mentioned above.

The take-home message that we would like to suggest is the following: In addition to investigating game-theoretic rationality as a guiding force for understanding people's choices in dynamic games (both perfect and imperfect information games), one should also look into risk-taking and risk-averse behaviours, theory of mind considerations, competitive and cooperative considerations, instinctive and contemplative behaviours, and similar reasoning tendencies, towards providing a better explanation of people's choices in turn-taking games.

\subsection{Acknowledgments}

We would like to thank Burkhard Schipper for his many insightful comments about our paper~\cite{ghv14} and his useful suggestions about improved ways to set up this experiment on forward versus backward induction, including ways to incentivize participants to take the games seriously. We would also like to thank Fokie Cnossen for her advice on how to formulate questions to participants. We are very grateful to Eric Jansen, who designed and implemented the new experiment including the Marble Drop interface, which was inspired by the one constructed by Damian Podareanu and Michiel van de Steeg for our previous experiment reported in~\cite{ghv14}. We would also like to thank Eric Jansen for performing the experiment for this study. The anonymous reviewers for TARK 2017 have given us very helpful advice to improve our paper, for which we would like to thank them.

\bibliographystyle{eptcs}
\section{Algorithms}
\label{sec:algorithms}

The problem of subtropical matrix factorization has some unique challenges that stem from the lack of linearity and smoothness of the max-times algebra. One such issue is that dominated elements in a decomposition have no impact on the final result. Namely, if we consider the subtropical product of two matrices $\matr{B}\in \Region^{n\times k}$ and $\matr{C}\in \Region^{k\times m}$, we can see that each entry $(\matr{B} \maxprod \matr{C})_{ij} = \max_{1 \le s \le k}{\matr{B}_{is} \matr{C}_{sj}}$ is completely determined by a single element with index $\argmax_{1 \le s \le k}{\matr{B}_{is} \matr{C}_{sj}}$. This means that all terms with index $t$ such that $\matr{B}_{it} \matr{C}_{tj}<\max_{1 \le s \le k}{\matr{B}_{is} \matr{C}_{sj}}$ do not contribute at all to the final decomposition. To see why this is a problem, observe that many optimization methods used in matrix factorization algorithms rely on local information to choose the direction of the next step (e.g. various forms of gradient descent). In the case of the subtropical algebra, however, the local information is practically absent, and hence we need to look elsewhere for effective optimization techniques.

A common approach to matrix decomposition problems is to update factor matrices alternatingly, which utilizes the fact that the problem $\min_{\matr{B}, \matr{C}}\norm{\matr{A} - \matr{B}\matr{C}}_F$ is biconvex. Unfortunately, the subtropical matrix factorization problem does not have the biconvexity property, which makes alternating updates less useful. Here we present a different approach that, instead of doing alternating factor updates, constructs the decomposition by adding one rank-1 matrix at a time, following the idea by \cite{kolda2000}. The corresponding algorithm is called \Equator (Algorithm~\ref{alg:equator}).

First observe that the max-times product can be represented as an elementwise maximum of rank-1 matrices (blocks)
\begin{equation}\label{blockwise}
\matr{B} \maxprod \matr{C} = \max\limits_{1\le s \le k}{\matr{B}^s \matr{C}_s}\;.
\end{equation}
Hence, Problem~\ref{problem:mdecomp} can be split into $k$ subproblems of the following form: given a rank-$(l-1)$ decomposition $\matr{B}\in \Region^{n\times (l-1)}$, $\matr{C}\in \Region^{(l-1)\times m}$ of a matrix $\matr{A} \in \Region^{n \times m}$, find a column vector $\vec{b}\in \Region^{n\times 1}$ and a row vector $\vec{c} \in \Region^{1\times m}$ such that the error
\begin{equation}\label{maxtimesrank1}
\norm{\matr{A} - \max\lbrace \matr{B}\maxprod \matr{C}, \vec{b} \vec{c} \rbrace}
\end{equation}
is minimized. We assume by definition that the rank-$0$ decomposition is an all-zero matrix of the same size as $\matr{A}$. The problem of rank-$k$ subtropical matrix factorization is then reduced to solving \eqref{maxtimesrank1} $k$ times. One should of course remember that this scheme is just a heuristic, and finding optimal blocks on each iteration does not guarantee convergence to a global minimum.

One prominent issue with the above approach is that an optimal rank-$(k-1)$ decomposition might not be very good when considered as a part of a rank-$k$ decomposition. This is because for smaller ranks we generally have to cover the data more crudely, whereas when the rank increases we can afford to use smaller and more refined blocks. In order to deal with this problem, we find and then update the blocks repeatedly, in a cyclic fashion; after discovering the last block, we go all the way back to block one, as illustrated in the sketch below.
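To make the cyclic scheme concrete, the following NumPy sketch (purely illustrative; the function and variable names are ours, and \texttt{update\_block} stands for whichever block-update routine is in use) shows the max-times reconstruction of \eqref{blockwise} and the block-by-block loop, here with a Frobenius error for concreteness (the text uses the $L_1$ norm for \Capricorn and the Frobenius norm for \Cancer):
\begin{verbatim}
import numpy as np

def max_times(B, C):
    # (B x C)_ij = max_s B_is * C_sj: elementwise maximum of rank-1 blocks
    return np.max(B[:, :, None] * C[None, :, :], axis=1)

def error(A, B, C):
    return np.linalg.norm(A - max_times(B, C), 'fro')

def equator(A, k, M, update_block):
    """Cyclic scheme: k rank-1 blocks, revisited in M full cycles."""
    n, m = A.shape
    B, C = np.zeros((n, k)), np.zeros((k, m))
    best = (error(A, B, C), B.copy(), C.copy())
    for count in range(k * M):
        l = count % k                        # index of the block to update
        B[:, l], C[l, :] = update_block(A, B, C, l)
        err = error(A, B, C)
        if err < best[0]:                    # keep the best factors seen
            best = (err, B.copy(), C.copy())
    return best[1], best[2]
\end{verbatim}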
The input parameter $M$ defines the number of full cycles we make.

\begin{algorithm}[tbp] \flushleft
\caption{\Equator}\label{alg:equator}
\begin{algorithmic}[1]
\Input $\matr{A} \in \Region^{n \times m}$, $k>0$, $M>0$
\Output $\Bbest \in \Region^{n \times k}$, $\Cbest \in \Region^{k \times m}$
\Function{\Equator}{\matr{A}, k, M}
\State $\matr{B} \gets 0^{n \times k}$, $\matr{C} \gets 0^{k \times m}$ \label{init}
\State $\Bbest \gets \matr{B}, \Cbest \gets \matr{C}$ \label{best:factor:init}
\State $\bestError \gets E(\matr{A}, \matr{B}, \matr{C})$ \label{error:init}
\For{$\mathit{count} \gets 1$ \textbf{to} $k\times M$} \label{begincyclic}
\State $l \gets (\mathit{count}-1) \pmod k + 1$ \Comment{Index of the current block} \label{getcurrentindexequator}
\State $[\matr{B}^l, \matr{C}_l] \gets \UpdateBlock(\matr{A}, \matr{B}, \matr{C}, \mathit{count})$ \label{Cancer:UpdateBlock}
\If{$E(\matr{A}, \matr{B}, \matr{C}) < \bestError$} \label{compbegin}
\State $\Bbest \gets \matr{B}, \Cbest \gets \matr{C}$
\State $\bestError \gets E(\matr{A}, \matr{B}, \matr{C})$ \label{compend}
\EndIf
\EndFor \label{finishloop}
\State \textbf{return} $\Bbest$, $\Cbest$
\EndFunction
\end{algorithmic}
\end{algorithm}

On a high level \Equator works as follows. First the factor matrices are initialized to all zeros (line~\ref{init}). Since the algorithm makes iterative changes to the current solutions that might in some cases lead to worsening of the results, it also stores the best reconstruction error and the corresponding factors found so far. They are initialized with the starting solution on lines~\ref{best:factor:init}--\ref{error:init}. The main work is done in the loop on lines~\ref{begincyclic}--\ref{finishloop}, where on each iteration we update a single rank-1 matrix in the current decomposition using the \UpdateBlock routine (line~\ref{Cancer:UpdateBlock}), and then check if the update improves the best result (lines~\ref{compbegin}--\ref{compend}).

We will present two versions of the \UpdateBlock function, one called \Capricorn and the other one \Cancer. \Capricorn is designed to work with discrete (or flipping) noise, where some of the elements in the data are randomly changed to different values. In this setting the level of noise is the proportion of the flipped elements relative to the total number of nonzeros. \Cancer, on the other hand, is robust against continuous noise, where many elements are affected (e.g. Gaussian noise). We will discuss both of them in detail in the following subsections. In the rest of the paper, especially when presenting the experiments, we will use the names \Capricorn and \Cancer not only for a specific variation of the \UpdateBlock function, but also for the \Equator algorithm that uses it.

\subsection{\Capricorn}

We first describe \Capricorn, which is designed to solve the subtropical matrix factorization problem in the presence of discrete noise, and minimizes the $L_1$ norm of the error matrix. The main idea behind the algorithm is to spot potential blocks by considering ratios of matrix rows. Consider an arbitrary rank-1 block $\matr{X} = \vec{b} \vec{c}$, where $\vec{b} \in \Region^{n \times 1}$ and $\vec{c} \in \Region^{1 \times m}$. For any indices $i$ and $j$ such that $\vec{b}_i>0$ and $\vec{b}_j>0$, we have $\matr{X}_j = \frac{\vec{b}_j}{\vec{b}_i} \matr{X}_i$. This is a characteristic property of rank-1 matrices -- all rows are multiples of one another.
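This property is easy to verify numerically; a small sketch (ours, purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
b = rng.uniform(0.1, 1.0, size=4)      # positive column vector
c = rng.uniform(0.1, 1.0, size=6)      # row vector
X = np.outer(b, c)                     # rank-1 block, X_ij = b_i * c_j

# Row j is a multiple of row i: X_j = (b_j / b_i) * X_i ...
print(np.allclose(X[2], (b[2] / b[0]) * X[0]))   # True
# ... so the elementwise ratio of two rows is constant:
print(X[2] / X[0])                     # every entry equals b[2] / b[0]
\end{verbatim}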
Hence, if a block $\matr{X}$ dominates some region $\Gamma$ of a matrix $\matr{A}$, then the rows of $\matr{A}$ should all be multiples of each other within $\Gamma$. These rows might have different lengths due to block overlap, in which case the rule only applies to their common part.

\UpdateBlock starts by identifying the index of the block that has to be updated at the current iteration (line~\ref{getcurrentindex}). In order to find the best new block we need to take into account that some parts of the data have already been covered, and we must ignore them. This is accomplished by replacing the original matrix with a residual $\matr{R}$ that represents what is left to cover. The building of the residual (line~\ref{capricorn:residual}) reflects the winner-takes-it-all property of the max-times algebra: if an element of $\matr{A}$ is approximated by a smaller value, it appears as such in the residual; if it is approximated by a value that is at least as large, then the corresponding residual element is \NaN, indicating that this value is already covered. We then select a seed row (line~\ref{seedrow}), with the intention of growing a block around it. We choose the row with the largest sum, as this increases the chances of finding the most prominent block. In order to find the best block $\matr{X}$ that the seed row passes through, we first find a binary matrix $\matr{H}$ that represents the pattern of $\matr{X}$ (line~\ref{getpattern}). Next, on lines \ref{startgetbc}--\ref{endgetbc} we choose an approximation of the block pattern with index sets ${b\_idx}$ and $c\_idx$, which define which elements of $\vec{b}$ and $\vec{c}$ should be nonzero. The next step is to find the actual values of the elements within the block with the function \RecoverBlock (line~\ref{recoverblock}). Finally, we inflate the found core block with \ExpandBlock (line~\ref{expandblock}).
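In NumPy terms, the residual construction and the seed-row selection can be sketched as follows (our illustration; \texttt{NaN} plays the same role as on line~\ref{capricorn:residual}):
\begin{verbatim}
import numpy as np

def residual(A, B, C, l):
    """Residual for updating block l: entries already covered by the
    other blocks (approximated by a value >= A_ij) become NaN."""
    Bl, Cl = np.delete(B, l, axis=1), np.delete(C, l, axis=0)
    if Bl.size == 0:                    # rank-1 case: nothing else covers A
        recon = np.zeros_like(A)
    else:
        recon = np.max(Bl[:, :, None] * Cl[None, :, :], axis=1)
    return np.where(recon < A, A, np.nan)

def seed_row(R):
    # row with the largest sum, ignoring already covered (NaN) entries
    return int(np.argmax(np.nansum(R, axis=1)))
\end{verbatim}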
\begin{algorithm}[tbp] \flushleft\small%
\caption{\UpdateBlock (\Capricorn)}\label{alg:updateblock}
\begin{algorithmic}[1]
\Input $\matr{A} \in \Region^{n \times m}$, $\matr{B} \in \Region^{n \times k}$, $\matr{C} \in \Region^{k \times m}$, $\mathit{count}>0$
\Output $\vec{b} \in \Region^{n \times 1}$, $\vec{c} \in \Region^{1 \times m}$
\Parameters $\bucketSize>0$, $\delta>0$, $\theta>0$, $\tau\in[0,1]$
\Function{\UpdateBlock}{\matr{A}, \matr{B}, \matr{C}, \mathit{count}}
\State $l \gets (\mathit{count}-1) \pmod k + 1$ \Comment{Index of the current block} \label{getcurrentindex}
\State $\matr{R}_{ij} \gets \begin{cases} \matr{A}_{ij} & (\matr{B}^{-l} \maxprod \matr{C}_{-l})_{ij} < \matr{A}_{ij}\\ \NaN & \text{otherwise} \end{cases}$ \Comment{Residual matrix} \label{capricorn:residual}
\State $\idx \gets \argmax_i \sum_j r_{ij}$ \label{seedrow}
\State $\matr{H} \gets \CorrelationsWithRow(\matr{R}, \idx, \bucketSize, \delta, \tau)$ \label{getpattern}
\State $r \gets \argmax_{i} \sum_j h_{ij}$ \label{startgetbc}
\State $c \gets \argmax_j \sum_i h_{ij}$
\State ${b\_idx} \gets \lbrace i \setcond \matr{H}_{i c} = 1\rbrace$
\State $c\_idx \gets \lbrace i \setcond \matr{H}_{r i} = 1\rbrace$ \label{endgetbc}
\State $[\vec{b}, \vec{c}] \gets \RecoverBlock(\matr{R}, {b\_idx}, c\_idx)$ \label{recoverblock}
\State $\vec{b} \gets \AddRows(\vec{b}, \vec{c}, \matr{A}, \theta, \bucketSize, \delta)$ \label{expandblock}
\State $\vec{c} \gets \AddRows(\vec{c}^T, \vec{b}^T, \matr{A}^T, \theta, \bucketSize, \delta)^T$
\State \textbf{return} $\vec{b}$, $\vec{c}$
\EndFunction
\end{algorithmic}
\end{algorithm}

The function \lword{\CorrelationsWithRow} (Algorithm~\ref{alg:correlationsWithRow}) finds the pattern of a new block. It does so by comparing a given seed row to the other rows of the matrix and extracting sets where the ratio of the rows is almost constant. As was mentioned before, if two rows locally represent the same block, then one should be a multiple of the other, and the ratios of their corresponding elements should remain constant. \lword{\CorrelationsWithRow} processes the input matrix row by row using the function \FindRowSet, which for every row outputs the most likely set of indices where it is correlated with the seed row (lines \ref{startcorr}--\ref{endcorr}). Since the seed row is obviously the most correlated with itself, we compensate for this by replacing its pattern with that of the second most correlated row (lines \ref{replaceseedbegin}--\ref{replaceseedend}). Finally, we drop some of the least correlated rows after comparing their correlation value $\phi$ to that of the second most correlated row (after the seed row). The correlation function $\phi$ is defined as follows:
\begin{equation}
\phi(\matr{H}, \idx, i) = \frac{\langle \matr{H}_i, \matr{H}_{\idx}\rangle}{\langle \matr{H}_i, \matr{H}_i\rangle + 1} \;.
\label{eq:phi}
\end{equation}
The parameter $\tau$ is a threshold determining whether a row should be discarded or retained.

The auxiliary function \FindRowSet (Algorithm~\ref{alg:FindRowSet}) compares two vectors and finds the biggest set of indices where their ratio remains almost constant. It does so by sorting the log-ratio of the input vectors into buckets of a fixed size and then choosing the bucket with the most elements. The notation $\vec{u} \vecdivide \vec{v}$ on line~\ref{getlogratios} means the elementwise ratio of the vectors $\vec{u}$ and $\vec{v}$. It accepts two additional parameters: $\bucketSize$ and $\delta$.
If the largest bucket has fewer than $\bucketSize$ elements, the function will return an empty set -- this is done because very small patterns do not reveal much structure and are mostly accidental. The width of the buckets is determined by the parameter $\delta$.

\begin{algorithm}[tbp] \flushleft\small%
\caption{\CorrelationsWithRow}\label{alg:correlationsWithRow}
\begin{algorithmic}[1]
\Input $\matr{R} \in \Region^{n \times m}$, $idx \in [n]$, $\bucketSize>0$, $\delta>0$, $\tau\in[0,1]$
\Output $\matr{H} \in \lbrace 0,\, 1 \rbrace^{n\times m}$
\Function{\CorrelationsWithRow}{\matr{R}, \idx, \bucketSize, \delta, \tau}
\State turn all $\NaN$ elements of $\matr{R}$ to 0
\State $\matr{H} \gets 0^{n \times m}$
\For {$i \gets 1$ \textbf{to} $n$} \label{startcorr}
\State $V_i \gets \FindRowSet(\matr{R}_{\idx}, \matr{R}_{i}, \bucketSize, \delta)$
\State $\matr{H}(i, V_i) \gets 1$ \label{endcorr}
\EndFor
\State $s \gets \argmax_{i \setcond i\neq \idx}\sum_j h_{ij}$ \label{replaceseedbegin}
\State $\matr{H}_{\idx} \gets \matr{H}_{s}$ \label{replaceseedend}
\For {$i \gets 1$ \textbf{to} $n$}
\If {$\phi(\matr{H}, \idx, i) < \phi(\matr{H}, \idx, s) - \tau$}
\State $\matr{H}_{i} \gets 0$
\EndIf
\EndFor
\State \textbf{return} $\matr{H}$
\EndFunction
\end{algorithmic}
\end{algorithm}

\begin{algorithm}[tbp] \flushleft\small%
\caption{\FindRowSet}\label{alg:FindRowSet}
\begin{algorithmic}[1]
\Input $\vec{u} \in \Region^m, \vec{v} \in \Region^m, \bucketSize > 0, \delta > 0$
\Output $V \subset [m]$
\Function{\FindRowSet}{\vec{u}, \vec{v}, \bucketSize, \delta}
\State $\vec{r} \gets \log(\vec{u} \vecdivide \vec{v})$ \label{getlogratios}
\State $\nBuckets \gets \ceil{(\max\{r\}-\min\{r\})/\delta}$
\For {$i \gets 0$ \textbf{to} $\nBuckets$}
\State $V_i \gets \{ \idx \in [m]\setcond \min\{\vec{r}\}+i\delta \le r_{\idx} < \min\{\vec{r}\}+(i+1)\delta\}$
\EndFor
\State $V \gets \argmax\{\abs{V_i} \setcond i=1,\ldots,\nBuckets\}$
\If {$\abs{V} < \bucketSize$}
\State $V \gets \emptyset$
\EndIf
\State \textbf{return} $V$
\EndFunction
\end{algorithmic}
\end{algorithm}

At this point we know the pattern of the new block, that is, the locations of its non-zeros. To fill in the actual values, we consider the submatrix defined by the pattern, and find the best rank-1 approximation of it. We do this using the \RecoverBlock function (Algorithm~\ref{alg:recoverblock}). It begins by setting all elements outside of the pattern to 0, as they are irrelevant to the block (line \ref{recovercancel}). Then it chooses one row to represent the block (lines~\ref{representbegin}--\ref{representend}), which will be used to find a good rank-1 cover. Finally, we find the optimal column vector for the block by computing the best weights to be used for covering different rows of the block with its representing row (line \ref{getb}). Here we optimize with respect to the Frobenius norm, rather than the $L_1$ matrix norm, since it allows us to solve the optimization problem in closed form.
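The closed form on line~\ref{getb} is a row-separable least-squares problem. A sketch of this step (ours; for the choice of the representing row we simply try every candidate, which matches the complexity analysis in Section~\ref{sec:algorithms} but may differ from the actual \RowRepresentingBlock heuristic, and we zero out \NaN{}s as a simplification):
\begin{verbatim}
import numpy as np

def best_column_given_row(R, c):
    """argmin_t ||R - t c||_F separates over rows: t_i = <R_i, c>/<c, c>."""
    return (R @ c) / (c @ c)

def recover_block(R, b_idx, c_idx):
    """Zero everything outside the pattern, then try each candidate row
    as the block's row vector c and keep the best rank-1 fit."""
    X = np.zeros_like(R)
    X[np.ix_(b_idx, c_idx)] = np.nan_to_num(R[np.ix_(b_idx, c_idx)])
    best = None
    for p in b_idx:
        c = X[p]
        if not c.any():
            continue
        b = best_column_given_row(X, c)
        err = np.linalg.norm(X - np.outer(b, c), 'fro')
        if best is None or err < best[0]:
            best = (err, b, c)
    return best[1], best[2]
\end{verbatim}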
\begin{algorithm}[tbp] \flushleft\small%
\caption{\RecoverBlock}\label{alg:recoverblock}
\begin{algorithmic}[1]
\Input $\matr{R} \in \Region^{n\times m}, \bIdx \subset [n], \cIdx \subset [m]$
\Output $\vec{b} \in \Region^{n\times 1}$, $\vec{c} \in \Region^{1\times m}$
\Function{\RecoverBlock}{\matr{R}, \bIdx, \cIdx}
\State turn $\matr{R}$ to 0 except elements with indices $(\bIdx, \cIdx)$ \label{recovercancel}
\State $p \gets \RowRepresentingBlock(\matr{R}, \bIdx)$ \label{representbegin}
\State $\vec{c} \gets \matr{R}_{p}$ \label{representend}
\State $\vec{b} \gets {{\argmin}_{\vec{t}\in \Region^{n \times 1}}\norm{\matr{R}-\vec{t}\vec{c}}_F}$ \label{getb}
\State \textbf{return} $\vec{b}$, $\vec{c}$
\EndFunction
\end{algorithmic}
\end{algorithm}

Since blocks often heavily overlap, we are susceptible to finding only fragments of patterns in the data -- some parts of a block can be dominated by another block and subsequently not recognized. Hence, we need to expand found blocks to make them complete. This is done separately for rows and columns in the method called \AddRows (Algorithm~\ref{alg:AddRows}), which, given a starting block $\matr{X}=\vec{b}\vec{c}$ and the original matrix $\matr{A}$, tries to add new nonzero elements to $\vec{b}$. It iterates through all rows of $\matr{A}$ and adds those that would make a positive impact on the objective without unnecessarily overcovering the data. In order to decide whether a given row should be added, it first extracts a set $V_i$ of indices where this row is a multiple of the row vector $\vec{c}$ of the block (if they are not sufficiently correlated, then the row does not belong to the block) (line~\ref{addrowlab1}). A row is added if the evaluation of the following function (line~\ref{impact})
\begin{equation} \label{myimpact}
\psi(\alpha) = \frac{\sum_{s \in V_i} \max\lbrace 0, \,\alpha c_s - \matr{A}_{is} \rbrace} {\sum_{s \in V_i} \matr{A}_{is} - \abs{\matr{A}_{is} - \alpha c_s}}
\end{equation}
is below the threshold $\theta$. In \eqref{myimpact} the numerator measures by how much the new row would overcover the original matrix, and the denominator reflects the improvement in the objective compared to a zero row.

\begin{algorithm}[tbp] \flushleft\small%
\caption{\AddRows}\label{alg:AddRows}
\begin{algorithmic}[1]
\Input $\vec{b} \in \Region^{n \times 1}$, $\vec{c} \in \Region^{1 \times m}$, $\matr{A} \in \Region^{n\times m}$, $\theta > 0$, $\bucketSize>0$, $\delta>0$
\Output $\vec{b} \in \Region^{n\times 1}$
\Function{\AddRows}{\vec{b}, \vec{c}, \matr{A}, \theta, \bucketSize, \delta}
\State ${b\_idx} \gets \lbrace t \setcond \vec{b}_t > 0\rbrace$
\For {$i \in [n]\setminus {b\_idx}$}
\State $V_i \gets \FindRowSet(\vec{c}, \matr{A}_i, \bucketSize, \delta)$ \label{addrowlab1}
\If {$V_i = \emptyset$}
\State \textbf{continue}
\EndIf
\State $\alpha \gets \mathrm{mean}(\matr{A}_{iV_i} \vecdivide \vec{c}_{V_i})$ \label{getalpha}
\State $\mathit{impact} \gets \frac{\sum_{s \in V_i} \max\{ 0,\, \alpha c_s - \matr{A}_{is} \} } { \sum_{s \in V_i} \matr{A}_{is} - \abs{\matr{A}_{is} - \alpha c_s}} $ \label{impact}
\If {$\mathit{impact} \le \theta$}
\State $\vec{b}_i \gets \alpha$ \label{getbi}
\EndIf
\EndFor
\State \textbf{return} $\vec{b}$
\EndFunction
\end{algorithmic}
\end{algorithm}

\textbf{Parameters}. \Capricorn has four parameters in addition to the common parameters in the \Equator framework: $\bucketSize>0$, $\delta>0$, $\theta>0$, and $\tau\in[0,1]$.
The first one, $\bucketSize$, determines the minimum number of elements in two rows that must have ``approximately'' the same ratio for them to be considered for building a block. The parameter $\delta$ defines the bucket width when computing row correlations. When expanding a block, $\theta$ is used to decide whether to add a row (or column) to it -- the decision is positive whenever the expression~\eqref{myimpact} is at most $\theta$. Finally, $\tau$ is used during the discovery of correlated rows. The value of $\tau$ belongs to the closed unit interval, and the higher it is, the more rows will be added.

\subsection{\Cancer}

We now present our second algorithm, \Cancer, which is a counterpart of \Capricorn specifically designed to work in the presence of high levels of continuous noise. The reason why \Capricorn cannot deal with continuous noise is that it expects the rows in a block to have an ``almost'' constant elementwise ratio, which is not the case when too many entries in the data are disturbed. For example, even low levels of Gaussian noise would make the ratios vary enough to hinder \Capricorn's ability to spot blocks. With \Cancer we take a new approach, which is based on a polynomial approximation of the objective. We also replace the $L_1$ matrix norm, which was used as an objective for \Capricorn, with the Frobenius norm. The reason is that when the noise is continuous, its level is defined as the total deviation of the noisy data from the original, rather than a count of the altered elements. This makes the Frobenius norm a good estimator for the amount of noise.

\Cancer conforms to the general framework of \Equator (Algorithm~\ref{alg:equator}), and differs from \Capricorn only in how it finds the blocks and in the objective function. Observe that in order to solve the problem \eqref{maxtimesrank1} we need to find a column vector $\vec{b} \in \Region^{n\times 1}$ and a row vector $\vec{c} \in \Region^{1\times m}$ such that they provide the best rank-1 approximation of the input matrix given the current factorization. The objective function is not convex in either $\vec{b}$ or $\vec{c}$ and is generally hard to optimize directly, so we have to simplify the problem, which we do in two steps. First, instead of doing full optimization of $\vec{b}$ and $\vec{c}$ simultaneously, we update only a single element of one of them at a time. This way the problem is reduced to single-variable optimization. Even then the objective is hard to minimize, and we replace it with a polynomial approximation, which is easy to optimize directly.

The \Cancer version of the \UpdateBlock function is described in Algorithm~\ref{alg:updateblockcancer}. It alternatingly updates the vectors $\vec{b}$ and $\vec{c}$ using the \AdjustOneElement routine. Both $\vec{b}$ and $\vec{c}$ will be updated $\lfloor f (n+m)/2\rfloor$ times. \UpdateBlock starts by finding the index of the block that has to be changed (line~\ref{getcurrentindexcancer}). Since the purpose of \UpdateBlock is to find the best rank-1 matrix to replace the current block, we also need to compute the reconstructed matrix without it, which is done on line~\ref{findN}. We then find the number of times \AdjustOneElement will be called (line~\ref{getfraccancer}) and change the degree of the polynomials used for the approximation of the objective function (line~\ref{computedegree}).
This is needed because high-degree polynomials are better at finalizing a solution that is already reasonably good, but tend to overfit the data and cause the algorithm to get stuck in local minima at the beginning. It is therefore beneficial to start with polynomials of lower degrees and then gradually increase the degree. The actual changes to $\vec{b}$ and $\vec{c}$ happen in the loop (lines~\ref{innercycle}--\ref{updatebcancer}), where we update them using \AdjustOneElement.

The \AdjustOneElement function (Algorithm~\ref{alg:adjustoneelement}) updates a single entry in either a column vector $\vec{b}$ or a row vector $\vec{c}$. Let us consider the case when $\vec{b}$ is fixed and $\vec{c}$ varies. In order to decide which element of $\vec{c}$ to change, we need to compare the best changes to all $m$ entries and then choose the one that yields the most improvement to the objective. A single element $\vec{c}_l$ only has an effect on the error along the column $l$. Assume that we are currently updating the block with index $q$ and let $\matr{N}$ denote the reconstruction matrix without this block, that is, $\matr{N} = \matr{B}^{-q} \maxprod \matr{C}_{-q}$. Minimizing $E(\matr{A}, \matr{B}, \matr{C})$ with respect to $\vec{c}_l$ is then equivalent to minimizing
\begin{equation} \label{eq:gamma}
\gamma(\matr{A}_l, \matr{N}_l, \vec{b}, \vec{c}_l) = \sum_{i=1}^n (\matr{A}_{il} - \max \lbrace \matr{N}_{il}, \vec{b}_i \vec{c}_l\rbrace)^2\; .
\end{equation}
Instead of minimizing~\eqref{eq:gamma} directly, we use polynomial approximation in the \PolyMin routine (line~\ref{line:polymin}). It returns the (approximate) error $\mathit{err}$ and the value $x$ that achieves it. Since we are only interested in the improvement of the objective achieved by updating a single entry of $\vec{c}$, we compute the improvement of the objective after the change (line~\ref{improvement}). After trying every entry of $\vec{c}$, we update only the one that yields the largest improvement.
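A minimal sketch of this single-entry update (ours; where the algorithm minimizes the fitted polynomial exactly, we use a simple grid search over $[0,1]$ for brevity):
\begin{verbatim}
import numpy as np

def gamma(a_col, n_col, b, x):
    """Error along one column (eq. for gamma) when c_l is set to x."""
    return np.sum((a_col - np.maximum(n_col, b * x)) ** 2)

def poly_min(a_col, n_col, b, deg, rng):
    """Fit a degree-`deg` polynomial to gamma at deg+1 points sampled
    from (0, 1), then minimize the fit; returns (approx. error, x)."""
    xs = rng.uniform(0.0, 1.0, size=deg + 1)
    ys = [gamma(a_col, n_col, b, x) for x in xs]
    coeffs = np.polyfit(xs, ys, deg)       # interpolating fit
    grid = np.linspace(0.0, 1.0, 201)
    vals = np.polyval(coeffs, grid)
    i = int(np.argmin(vals))
    return vals[i], grid[i]
\end{verbatim}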
\begin{algorithm}[tbp] \flushleft
\caption{\UpdateBlock (\Cancer)}\label{alg:updateblockcancer}
\begin{algorithmic}[1]
\Input $\matr{A} \in \Region^{n \times m}$, $\matr{B} \in \Region^{n \times k}$, $\matr{C} \in \Region^{k \times m}$, $\mathit{count}>0$
\Output $\vec{b} \in \Region^{n \times 1}$, $\vec{c} \in \Region^{1 \times m}$
\Parameters $t>2$, $0<f<1$
\Function{\UpdateBlock}{\matr{A}, \matr{B}, \matr{C}, \mathit{count}}
\State $l \gets (\mathit{count}-1) \pmod k + 1$ \Comment{Index of the current block} \label{getcurrentindexcancer}
\State $\matr{N} \gets \matr{B}^{-l} \maxprod \matr{C}_{-l}$ \Comment{Reconstructed matrix without the $l$-th block} \label{findN}
\State $\mathit{niters} \gets \lfloor f(n+m)/2 \rfloor$ \label{getfraccancer}
\State $\mathit{deg} \gets 2 + \lfloor(\mathit{count}-1) / k\rfloor \pmod t$ \label{computedegree}
\State $\vec{b} \gets \matr{B}^l$, $\vec{c} \gets \matr{C}_l$ \label{initbc}
\For{$\mathit{iter} \gets 1$ \textbf{to} $\mathit{niters}$} \label{innercycle}
\State $\vec{c} = \AdjustOneElement(\matr{A}, \matr{N}, \vec{b}, \vec{c}, \mathit{deg})$
\State $\vec{b} = \AdjustOneElement(\matr{A}^T, \matr{N}^T, \vec{c}^T, \vec{b}^T, \mathit{deg})^T$ \label{updatebcancer}
\EndFor
\State \textbf{return} $\vec{b}$, $\vec{c}$
\EndFunction
\end{algorithmic}
\end{algorithm}

\begin{algorithm}[tbp] \flushleft
\caption{\AdjustOneElement}\label{alg:adjustoneelement}
\begin{algorithmic}[1]
\Input $\matr{A} \in \Region^{n \times m}$, $\matr{N} \in \Region^{n \times m}$, $\vec{b} \in \Region^{n \times 1}$, $\vec{c} \in \Region^{1 \times m}$, $\mathit{deg} \ge 2$
\Output $\vec{c} \in \Region^{1 \times m}$
\Function{\AdjustOneElement}{\matr{A}, \matr{N}, \vec{b}, \vec{c}, \mathit{deg}}
\For{$j \gets 1$ \textbf{to} $m$}
\State $\mathit{baseError} \gets \sum_{i=1}^n \left(\matr{A}_{ij} - \max\lbrace\matr{N}_{ij}, \vec{b}_i \vec{c}_j \rbrace \right)^2$ \label{baseerror}
\State $[\mathit{err}, \vec{x}_j] \gets \PolyMin(\matr{A}^j, \matr{N}^j, \vec{b}, \mathit{deg})$ \label{line:polymin}
\State $\vec{u}_j \gets \mathit{baseError} - \mathit{err}$ \label{improvement}
\EndFor
\State $i \gets$ the index of the largest value of $\vec{u}$
\State $\vec{c}_i \gets \vec{x}_i$
\State \textbf{return} $\vec{c}$
\EndFunction
\end{algorithmic}
\end{algorithm}

The function $\gamma$ that we need to minimize in order to find the best change to the vector $\vec{c}$ in \AdjustOneElement is hard to work with directly since it is not convex, and also not smooth because of the presence of the maximum operator. To alleviate this, we approximate the error function $\gamma$ with a polynomial $g$ of degree $\mathit{deg}$. Notice that when updating $\vec{c}_l$, the other variables of $\gamma$ are fixed, and we only need to consider the function $\gamma'(x) = \gamma(\matr{A}_l, \matr{N}_l, \vec{b}, x)$. To build $g$ we sample $\mathit{deg}+1$ points from $(0,1)$ and fit $g$ to the values of $\gamma'$ at these points. We then find the $x\in\Region$ that minimizes $g(x)$ and return $g(x)$ (the approximate error) and $x$ (the optimal value).

\textbf{Parameters}. \Cancer has two parameters, $t>2$ and $0<f<1$, that control its execution. The first one, $t$, is the maximum allowed degree of the polynomials used for the approximation of the objective, which we set to 16 in all our experiments. The second parameter, $f$, determines the number of single-element updates we make to the row and column vectors of a block in \UpdateBlock.

\textbf{Generalized Cancer}\label{generalcancer}. The \Cancer algorithm can be adapted to optimize other objective functions.
Its general polynomial approximation framework allows for a wide variety of possible objectives, the only constraint being that they have to be additive (we call a function $E(\matr{A}, \matr{R})$ \textit{additive} if there exists a mapping $\phi\colon \Region \times \Region \rightarrow \Region$ such that for all $\matr{A} \in \Region^{n \times m}$ and $\matr{R} \in \Region^{n \times m}$ we have $E(\matr{A}, \matr{R}) = \sum_{ij}\phi(\matr{A}_{ij}, \matr{R}_{ij})$). Some examples of such functions are the $L_1$ and Frobenius matrix norms, as well as the Kullback--Leibler and Jensen--Shannon divergences. In order to use the generalized form of \Cancer one simply has to replace the Frobenius norm with another cost function wherever the error is evaluated.

\subsection{Time complexity}

The main work in \Equator is performed inside the \UpdateBlock routine, which is called $M k$ times. Since $M$ is a constant parameter, the complexity of \Equator is $k$ times the complexity of \UpdateBlock. In the following we find the theoretical bounds on the execution time of \UpdateBlock for both \Capricorn and \Cancer.

\textbf{Capricorn.} In the case of \Capricorn there are three main contributors to \UpdateBlock (Algorithm~\ref{alg:updateblock}): \CorrelationsWithRow, \RecoverBlock, and \AddRows. \lword{\CorrelationsWithRow} compares every row to the seed row, each time calling \FindRowSet, which in turn has to process all $m$ elements of both rows. This results in the total complexity of \lword{\CorrelationsWithRow} being $O(nm)$. To find the complexity of \RecoverBlock, first observe that any ``pure'' block $\matr{X}$ can be represented as $\matr{X}=\vec{b}\vec{c}$, where $\vec{b}\in \Region^{n'\times 1}$ and $\vec{c}\in \Region^{1\times m'}$ with $n'\le n$ and $m'\le m$. \RecoverBlock selects $\vec{c}$ from the rows of $\matr{X}$ and then finds the corresponding column vector $\vec{b}$ that minimizes $\norm{\matr{X}-\vec{b}\vec{c}}_F$. In order to select the best row, we have to try each of the $n'$ candidates, and since finding the corresponding $\vec{b}$ for each of them takes time $O(n'm')$, this gives the runtime of \RecoverBlock as $O(n')O(n'm') = O(n^2m)$. The most computationally expensive parts of \AddRows are \FindRowSet (line \ref{addrowlab1}), finding the mean (line \ref{getalpha}), and computing the impact (line \ref{impact}), which all run in $O(m)$ time. All of these operations have to be repeated $O(n)$ times, and hence the runtime of \AddRows is $O(nm)$. Thus, we can now estimate the complexity of \UpdateBlock to be $O(nm)+O(n^2m)+O(nm) = O(n^2m)$, which leads to a total runtime of $O(n^2mk)$ for \Capricorn.

\textbf{Cancer.} Here \UpdateBlock (Algorithm~\ref{alg:updateblockcancer}) is a loop that calls \AdjustOneElement $\lfloor f(n+m) \rfloor$ times. In \AdjustOneElement the contributors to the complexity are computing the base error (line~\ref{baseerror}) and a call to \PolyMin (line~\ref{line:polymin}). Both of them are performed $n$ or $m$ times depending on whether we supplied the column vector $\vec{b}$ or the row vector $\vec{c}$ to \AdjustOneElement. Finding the base error takes time $O(m)$ for $\vec{b}$ and $O(n)$ for $\vec{c}$. The complexity of \PolyMin boils down to that of evaluating the max-times objective at $\mathit{deg}+1$ points and then minimizing a degree-$\mathit{deg}$ polynomial. Hence, \PolyMin runs in time $O(m)$ or $O(n)$ depending on whether we are optimizing $\vec{b}$ or $\vec{c}$, and the complexity of \AdjustOneElement is $O(nm)$.
Since \AdjustOneElement is thus called $O(f(n+m))$ times and $f$ is a fixed parameter, this gives the complexity $O\bigl((n+m)nm\bigr)$ for \UpdateBlock and $O\bigl((n+m)nmk\bigr) = O(\max\{n,m\}nmk)$ for \Cancer. \section{Conclusions} \label{sec:conclusions} Subtropical low-rank factorizations are a novel approach for finding latent structure in nonnegative data. The factorizations can be interpreted using the winner-takes-it-all interpretation: the value of an element in the final reconstruction depends only on the largest of the values in the corresponding elements of the rank-1 components (cf.\ NMF, where the value in the reconstruction is the \emph{sum} of the corresponding elements). That the factorizations are different does not necessarily mean that they are better in terms of reconstruction error, although they can yield lower reconstruction error than even SVD. It does mean, however, that they find different structure in the data. This is an important advantage, as it allows the data analyst to use both the classical factorizations and the subtropical factorizations to get a broader understanding of the kinds of patterns that are present in the data. Working in the subtropical algebra is harder than in the normal algebra, though. The various definitions of the rank, for example, do not agree, and computing many of them -- including the subtropical Schein rank, which is arguably the most useful one for data analysis -- is computationally hard. That said, our proposed algorithms, \Capricorn and \Cancer, can find the subtropical structure when it is present in the data. Not every data set has subtropical structure, though, and due to the complexity of finding the optimal subtropical factorization we cannot distinguish between the cases where our algorithms fail to find the latent subtropical structure and those where it does not exist. Based on our experiments with synthetic data, our hypothesis is that the failure to find a good factorization indicates the lack of subtropical structure rather than a failure of the algorithms. That said, the presented algorithms are heuristics. Developing algorithms that achieve better reconstruction error is naturally an important direction for future work. In our \Equator framework, this hinges on the task of finding the rank-1 components. In addition, the scalability of the algorithms could be improved. A potential direction could be to take into account the sparsity of the factor matrices in dominated decompositions. This could allow one to concentrate only on the non-zero entries of the factor matrices. The connection between Boolean and (sub-)tropical factorizations raises potential directions for future work. The continuous framework could allow for easier optimization in the Boolean algebra. Also, the connection allows us to model combinatorial structures (e.g.\ cliques in a graph) using subtropical matrices. This could allow for novel approaches to finding such structures using continuous subtropical factorizations. \section{Experiments} \label{sec:experiments} We tested both \Capricorn and \Cancer on synthetic and real-world data. In addition, we compare against a variation of \Cancer that optimizes the Jensen--Shannon divergence, which we call \CancerJS. The purpose of the synthetic experiments is to evaluate the properties of the algorithms in controlled environments where we know the data has the max-times structure. They also demonstrate what kind of data each algorithm excels on and what their limitations are.
The purpose of the real-world experiments is to confirm that these observations also hold true in real-world data, and to study what kinds of data sets actually have max-times structure. The source code of \Capricorn and \Cancer and the scripts that run the experiments in this paper are freely available for academic use.\!\footnote{\url{http://people.mpi-inf.mpg.de/~pmiettin/tropical/}} \paragraph{Parameters of \Capricorn.} In both synthetic and real-world experiments we used the following default set of parameters: $M=4$, $\bucketSize=3$, $\delta=0.01$, $\theta=0.5$, and $\tau=0.5$. \paragraph{Parameters of \Cancer.} Both variations of \Cancer use the same set of parameters. For the synthetic experiments we used $M=14$, $t=16$, and $f=0.1$. For the real-world experiments we set $t=16$, $f=0.1$, and $M=40$ (except for \Eigenfaces, where we used $M=50$). \subsection{Other methods.} \label{sec:exp:other-methods} We compared our algorithms against \SVD and six versions of NMF. For \SVD, we used Matlab's built-in implementation. The first NMF method, called simply \NMF~\citep{kim2008toward}, is based on the block principal pivoting algorithm. The second form of NMF is a sparse NMF algorithm by \cite{hoyer04non-negative},\!\footnote{\url{https://github.com/aludnam/MATLAB/tree/master/nmfpack}, accessed 18 July 2017} which we call \SNMF. It defines the sparsity of a vector $\vec{x}\in\Region^n$ as \begin{equation} \label{eq:hoyer} \text{sparsity}(\vec{x}) = \frac{\sqrt{n} - \left(\sum_i\abs{\vec{x}_i}\right)/\sqrt{\sum_i\vec{x}_i^2}}{\sqrt{n}-1}\; , \end{equation} and returns factorizations where the sparsity of the factor matrices is user-controllable. In all of our experiments, we used the sparsity of \Cancer's factors as the sparsity parameter of \SNMF. We also compare against a standard alternating least squares algorithm called \ALS \citep{cichocki09nonnegative}. Next we have two versions of NMF that are essentially the same as \ALS, but use $L_1$ regularization for increased sparsity~\citep{cichocki09nonnegative}, that is, they aim at minimizing \[ \norm{\mA - \mB\mC}_F + \alpha\norm{\mB}_1 + \beta\norm{\mC}_1\; . \] The first method, called \ALSR, uses regularizer coefficients $\alpha=\beta=1$, and the other, called \ALSRfive, uses regularizer coefficients $\alpha=\beta=5$. The last NMF algorithm, \WNMF by \citet{li2013non}, is designed to work with missing values in the data. \subsection{Synthetic experiments.} \label{sec:synth-exper} The purpose of the synthetic experiments is to prove the concept, that is, that our algorithms are capable of identifying the max-times structure when it is there. In order to test this, we first generate data with pure max-times structure, then pollute it with some level of noise, and finally run the methods. The noise-free data is created by first generating random factors of some density, with nonzero elements drawn from a uniform distribution on the $[0, 1]$ interval, and then multiplying them using the max-times matrix product. We distinguish two types of noise. The first one is the discrete (or tropical) noise, which is introduced in the following way. Assume that we are given an input matrix $\matr{A}$ of size \by{n}{m}. We first generate an \by{n}{m} noise matrix $\matr{N}$ with elements drawn from a uniform distribution on the $[0, 1]$ interval. Given a level of noise $l$, we then turn $\lfloor (1 - l)nm \rfloor$ random elements of $\matr{N}$ to 0, so that its resulting density is $l$. Finally, the noise is applied by taking the elementwise maximum of the original data and the noise matrix, $\matr{F} = \max \lbrace \matr{A}, \matr{N}\rbrace$. This is the kind of noise that \Capricorn was designed to handle, so we expect it to perform better than \Cancer and the other comparison algorithms.
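As an illustration of this setup, the following Python/numpy sketch generates data with pure max-times structure and applies tropical noise. It is a simplification of the procedure above and not the authors' experiment scripts: the Bernoulli masks match the target densities only in expectation, and all names are ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def max_times(B, C):
    # Subtropical product: elementwise max over rank-1 outer products.
    return np.max(B[:, :, None] * C[None, :, :], axis=1)

def synthetic_tropical(n=1000, m=800, k=10, density=0.3, level=0.1):
    # Random factors with the given density of U(0, 1) nonzeros.
    B = rng.uniform(size=(n, k)) * (rng.random((n, k)) < density)
    C = rng.uniform(size=(k, m)) * (rng.random((k, m)) < density)
    A = max_times(B, C)
    # Tropical noise: a sparse uniform matrix of density `level`,
    # applied via the elementwise maximum.
    N = rng.uniform(size=(n, m)) * (rng.random((n, m)) < level)
    return np.maximum(A, N)
\end{verbatim}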
We also test against continuous noise, as it is arguably more common in the real world. For that we chose Gaussian noise with zero mean, where the noise level is defined to be its standard deviation. Since adding this noise to the data might result in negative entries, we truncate all values in the resulting matrix that are below zero. Unless specified otherwise, all matrices in the synthetic experiments are of size \by{1000}{800} with true max-times rank 10. All results presented in this section are averaged over 10 instances. For the reconstruction error tests, we compared our algorithms \Capricorn, \Cancer, and \CancerJS against \SVD, \NMF, \SNMF, \ALS, \ALSR, and \ALSRfive. The error is measured as the relative Frobenius norm $\norm*{\matr{\tilde{A}} - \matr{A}}_F/\norm{\matr{A}}_F$, where $\matr{A}$ is the data and $\matr{\tilde{A}}$ its approximation, as that is the measure both \SVD and \NMF aim at minimizing. We also report the sparsity $s$ of the factor matrices obtained by the algorithms, which is defined as the fraction of zero elements in the factor matrices, \begin{equation} \label{eq:sparsity} s(\mA) = \abs{\{(i, j) \setcond \mA_{ij} = 0\}}/(nm)\; , \end{equation} for an \by{n}{m} matrix $\mA$.
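For reference, these two quality measures can be computed as follows (a small numpy sketch with our variable names):
\begin{verbatim}
import numpy as np

def relative_error(A, A_hat):
    # Relative Frobenius error  ||A_hat - A||_F / ||A||_F.
    return np.linalg.norm(A_hat - A) / np.linalg.norm(A)

def sparsity(*factors):
    # Fraction of zero elements over the given factor matrices.
    total = sum(F.size for F in factors)
    return sum(int((F == 0).sum()) for F in factors) / total
\end{verbatim}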
For the experiments with tropical noise, the reconstruction errors are reported in Figure~\ref{fig:synth:reconstruct:frob} and the factor sparsity in Figure~\ref{fig:synth:sparsity}. For the Gaussian noise experiments, the reconstruction errors and factor sparsity are shown in Figure~\ref{fig:synth:err} and Figure~\ref{fig:synth:sparsity:gaussian}, respectively. \paragraph{Varying density with tropical noise.} In our first experiment we studied the effects of varying the density of the factor matrices in the presence of tropical noise. We changed the density of the factors from 10\% to 100\% with an increment of 10\%, while keeping the noise level at 10\%. Figure~\ref{density:cap} shows the reconstruction error and Figure~\ref{density:frob:sparse} the sparsity of the obtained factors. \Capricorn is consistently the best method, obtaining almost perfect reconstruction; only when the density approaches 100\% does its reconstruction error deviate slightly from 0. This is expected, since the data was generated with the tropical (flipping) noise that \Capricorn is designed to optimize for. Compared to \Capricorn all other methods clearly underperform, with \Cancer being the second best. With the exception of \ALSRfive, all NMF methods obtain results similar to those of \SVD, while having a somewhat higher reconstruction error than \Cancer. That \SVD and the NMF methods (except \ALSRfive) start behaving better at higher levels of density indicates that these matrices can be explained relatively well using the standard algebra. \Capricorn and \Cancer also have the highest sparsity of factors, with \Capricorn exhibiting a decrease in sparsity as the density of the input increases. This behaviour is desirable, since ideally we would prefer to find factors that are as close to the original ones as possible. For the NMF methods there is a trade-off between the reconstruction error and the sparsity of the factors -- the algorithms that were worse at reconstruction tend to have sparser factors. \paragraph{Varying tropical noise.} The amount of noise is always with respect to the number of nonzero elements in a matrix, that is, for a matrix $\matr{A}$ with $\kappa(\matr{A})$ nonzero elements and noise level $\alpha$, we flip $\alpha \kappa(\matr{A})$ elements to random values. There are two versions of this experiment -- one with factor density 30\% and the other with 60\%. In both cases we varied the noise level from 0\% to 110\% with increments of 10\%. Figure~\ref{noise:cap} and Figure~\ref{noise:frobhd} show the respective reconstruction errors, and Figure~\ref{noise:frob:sparse} and Figure~\ref{noise:frobhd:sparse} the corresponding sparsities of the obtained factors. In the low-density case, \Capricorn is consistently the best method, with essentially perfect reconstruction for up to $80\%$ of noise. In the high-density case, however, the noise has more severe effects, and in particular after $60\%$ of noise, \Cancer, \SVD, and all versions of NMF are better than \Capricorn. The severity of the noise is, at least partially, explained by the fact that in the denser data we flip more elements than in sparser data: for example, when the data matrices are full, at 50\% of noise we have already replaced half of the values in the matrices with random values. Further, the quick increase of the reconstruction error for \Capricorn hints strongly that the max-times structure of the data is mostly gone at these noise levels. \Capricorn also produces clearly the sparsest factors in the low-density case, and is mostly tied with \Cancer and \ALSRfive when the density is high. It should be noted, however, that \ALSRfive generally has the highest reconstruction error among all the methods, which suggests that its sparse factors come at the cost of recovering little structure from the data. \paragraph{Varying rank with tropical noise.} Here we test the effects of the (max-times) rank, with the assumption that higher-rank matrices are harder to reconstruct. The true max-times rank of the data varied from 2 to 20 with increments of 2. There are three variations of this experiment: with 30\% factor density and 10\% noise (Figure~\ref{dim:frob}), with 30\% factor density and 50\% noise (Figure~\ref{dim:frobhn}), and with 60\% factor density and 10\% noise (Figure~\ref{dim:frobhd}). The corresponding sparsities are shown in Figures~\ref{dim:frob:sparse:sparse}, \ref{dim:frobhn:sparse}, and \ref{dim:frobhd:sparse}. \Capricorn has a clear advantage in all settings, obtaining nearly perfect reconstruction. \Cancer is generally second best, except for the high-noise case, where it is mostly tied with several NMF methods. Interestingly, in the last two plots the reconstruction error actually drops for \Cancer, \SVD, and the NMF-based methods. This is a strong indication that at this point they can no longer extract meaningful structure from the data, and the improvement of the reconstruction error is largely due to uniformization of the data caused by high density and high noise levels. \begin{figure} [tp] \centering \subfigure[Varying density test.] {% \includegraphics[width=\subfigwidth]{density-cap.pdf}% \label{density:cap}% } \hspace{\subfigspace} \subfigure[Varying noise test.] {% \includegraphics[width=\subfigwidth]{noise-cap.pdf}% \label{noise:cap}% } \hspace{\subfigspace} \subfigure[Varying noise with high density.]
{% \includegraphics[width=\subfigwidth]{noisehd-cap.pdf}% \label{noise:frobhd}% } \hspace{\subfigspace} \\ \subfigure[Varying rank test with 10\% noise and 30\% factor density.] {% \includegraphics[width=\subfigwidth]{dim-cap.pdf}% \label{dim:frob}% } \hspace{\subfigspace} \subfigure[Varying rank test with 50\% noise and 30\% factor density.] {% \includegraphics[width=\subfigwidth]{dimhn-cap.pdf}% \label{dim:frobhn}% } \hspace{\subfigspace} \subfigure[Varying rank test with 10\% noise and 60\% factor density.] {% \includegraphics[width=\subfigwidth]{dimhd-cap.pdf}% \label{dim:frobhd}% } \caption{\textbf{Reconstruction errors on synthetic data with tropical noise}. $x$-axis is the parameter varied and $y$-axis is the relative Frobenius norm. All results are averages over 10 random matrices and the width of the error bars is twice the standard deviation. } \label{fig:synth:reconstruct:frob} \end{figure} \begin{figure} [tp] \centering \subfigure[Varying density test.] {% \includegraphics[width=\subfigwidth]{densitySp-cap.pdf}% \label{density:frob:sparse}% } \hspace{\subfigspace} \subfigure[Varying noise test.] {% \includegraphics[width=\subfigwidth]{noiseSp-cap.pdf}% \label{noise:frob:sparse}% } \hspace{\subfigspace} \subfigure[Varying noise with high density.] {% \includegraphics[width=\subfigwidth]{noisehdSp-cap.pdf}% \label{noise:frobhd:sparse}% } \hspace{\subfigspace} \\ \subfigure[Varying rank test with 10\% noise and 30\% factor density.] {% \includegraphics[width=\subfigwidth]{dimSp-cap.pdf}% \label{dim:frob:sparse:sparse}% } \hspace{\subfigspace} \subfigure[Varying rank test with 50\% noise and 30\% factor density.] {% \includegraphics[width=\subfigwidth]{dimhnSp-cap.pdf}% \label{dim:frobhn:sparse}% } \hspace{\subfigspace} \subfigure[Varying rank test with 10\% noise and 60\% factor density.] {% \includegraphics[width=\subfigwidth]{dimhdSp-cap.pdf}% \label{dim:frobhd:sparse}% } \caption{\textbf{Sparsity (fraction of zeroes) of the factor matrices for synthetic data with tropical noise.} $x$-axis is the parameter varied and $y$-axis is the sparsity of the factors. The markers are averages of 10 random matrices and the width of the error bars is twice the standard deviation.} \label{fig:synth:sparsity} \end{figure} \paragraph{Varying Gaussian noise.} Here we investigate how the algorithms respond to different levels of Gaussian noise, which was varied from 0 to 0.14 with increments of 0.01. The noise level is the standard deviation of the Gaussian noise used to generate the noise matrix, as described earlier. The factor density was kept at 50\%. The results are given in Figure~\ref{fig:err:noise} (reconstruction error) and Figure~\ref{fig:sparsity:noise} (sparsity of factors). Here \Cancer is generally the best method in reconstruction error, and second in sparsity only to \Capricorn. The only time it loses to any method is when there is no noise, and \Capricorn obtains a perfect decomposition. This is expected, since \Capricorn is by design better at spotting pure subtropical structure. \paragraph{Varying density with Gaussian noise.} In this experiment we studied what effect the density of the factor matrices used in data generation has on the algorithms' performance. For this purpose we varied the density from 10\% to 100\% with increments of 10\%, while keeping the other parameters fixed.
There are two versions of this experiment, one with a low noise level of 0.01 (Figures~\ref{fig:err:density} and \ref{fig:sparsity:density}), and a noisier case at 0.08 (Figures~\ref{fig:err:densityHN} and~\ref{fig:sparsity:densityHN}). \Cancer provides the lowest reconstruction error in this experiment, being clearly the best until the density reaches $0.7$, from which point on it is tied with \SVD and the NMF-based methods (the only exception being the least-dense high-noise case, where \ALSR obtains a slightly better reconstruction error). \Capricorn is the worst by a wide margin, but this is not surprising, as the data does not follow its assumptions. On the other hand, \Capricorn does produce generally the sparsest factorizations, but these are of little use given its bad reconstruction error. \Cancer produces the sparsest factors of the remaining methods, except in the first few cases where \ALSRfive is sparser (and worse in reconstruction error), meaning that \Cancer produces factors that are both the most accurate and very sparse. \paragraph{Varying rank with Gaussian noise.} The purpose of this test is to study the performance of the algorithms on data of different max-times ranks. We varied the true rank of the data from 2 to 20 with increments of 2. The factor density was fixed at 50\% and the Gaussian noise at 0.01. The results are shown in Figure~\ref{fig:err:dim} (reconstruction error) and Figure~\ref{fig:sparsity:dim} (sparsity of factors). The results are similar to those considered above, with \Cancer returning the most accurate and second-sparsest factorizations. \paragraph{Optimizing the Jensen--Shannon divergence.} By default \Cancer optimizes the Frobenius reconstruction error, but it can be replaced by an arbitrary additive cost function. We performed experiments with the Jensen--Shannon divergence, which is given by the formula \begin{equation} \label{obj:JS} J(\mA, \mB) = \sum_{ij} \mA_{ij}\log\left(\frac{2\mA_{ij}}{\mA_{ij}+\mB_{ij}}\right) + \mB_{ij}\log\left(\frac{2\mB_{ij}}{\mA_{ij}+\mB_{ij}}\right)\;. \end{equation} It is easy to see that \eqref{obj:JS} is an additive function, and hence it can be plugged into \Cancer. Figure~\ref{fig:synth:reconstruct:js} shows how this version of \Cancer compares to other methods. The setup is the same as in the corresponding experiments of Figure~\ref{fig:synth:err}, except that we have removed \ALSRfive because of its overall bad performance. In all these experiments it is apparent that this version of \Cancer is inferior to the one optimizing the Frobenius error, but it is generally on par with \SVD and the NMF-based methods. Also, for the varying density test (Figure~\ref{density:js}) it produces better reconstruction errors than \SVD and all the NMF methods until the density reaches 50\%, after which they become tied.
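As a concrete instance of an additive objective, divergence \eqref{obj:JS} can be evaluated as an elementwise sum, as in the following numpy sketch (the small constant guarding the logarithms at zero entries is our addition):
\begin{verbatim}
import numpy as np

def jensen_shannon(A, R, eps=1e-12):
    # J(A, R) = sum_ij A log(2A / (A + R)) + R log(2R / (A + R)),
    # with eps preventing log(0); terms with a zero factor vanish.
    S = A + R
    return float(np.sum(A * np.log((2.0 * A + eps) / (S + eps))
                        + R * np.log((2.0 * R + eps) / (S + eps))))
\end{verbatim}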
\begin{figure}[tp] \centering \subfigure[Varying noise with density 50\%]{% \includegraphics[width=\subfigwidth]{noise-can}% \label{fig:err:noise}% } \hspace{\subfigspace} \subfigure[Varying density with low Gaussian noise]{% \includegraphics[width=\subfigwidth]{density-can}% \label{fig:err:density}% } \\ \subfigure[Varying density with high Gaussian noise]{% \includegraphics[width=\subfigwidth]{densityHN-can}% \label{fig:err:densityHN}% } \hspace{\subfigspace} \subfigure[Varying rank; 50\% density and low Gaussian noise]{% \includegraphics[width=\subfigwidth]{dim-can}% \label{fig:err:dim}% } \caption{\textbf{Reconstruction error (Frobenius norm) for synthetic data with Gaussian noise.} The markers are averages of 10 random matrices and the width of the error bars is twice the standard deviation.} \label{fig:synth:err} \end{figure} \begin{figure} \centering \subfigure[Varying noise with density 50\%]{% \includegraphics[width=\subfigwidth]{noiseSparsity-can}% \label{fig:sparsity:noise}% } \hspace{\subfigspace} \subfigure[Varying density with low Gaussian noise]{% \includegraphics[width=\subfigwidth]{densitySparsity-can}% \label{fig:sparsity:density} } \\ \hspace{\subfigspace} \subfigure[Varying density with high Gaussian noise]{% \includegraphics[width=\subfigwidth]{densityHNSparsity-can}% \label{fig:sparsity:densityHN}% } \subfigure[Varying rank; 50\% density and low Gaussian noise]{% \includegraphics[width=\subfigwidth]{dimSparsity-can}% \label{fig:sparsity:dim}% } \caption{\textbf{Sparsity (fraction of zeroes) of the factor matrices for synthetic data with Gaussian noise.} The markers are averages of 10 random matrices and the width of the error bars is twice the standard deviation.} \label{fig:synth:sparsity:gaussian} \end{figure} \begin{figure} [tp] \centering \subfigure[Varying noise test.] {% \includegraphics[width=\subfigwidth]{noise-js.pdf}% \label{noise:js}% } \hspace{\subfigspace} \subfigure[Varying density test.] {% \includegraphics[width=\subfigwidth]{density-js.pdf}% \label{density:js}% } \hspace{\subfigspace} \subfigure[Varying rank test with 10\% noise and 30\% factor density.] {% \includegraphics[width=\subfigwidth]{dim-js.pdf}% \label{dim:js}% } \caption{\textbf{Comparison of \Cancer with the Jensen--Shannon objective and other methods on synthetic data with Gaussian noise.} $x$-axis is the parameter varied and $y$-axis is the relative Frobenius error. All results are averages over 10 random matrices and the width of the error bars is twice the standard deviation. } \label{fig:synth:reconstruct:js} \end{figure} \paragraph{Prediction.} In this experiment we choose a random holdout set and remove it from the data (the elements of this set are marked as missing values). We then try to learn the structure of the data from its remaining part using \Capricorn and \WNMF, and finally test how well they predict the values inside the holdout set. All input matrices are integer-valued, and since the recovered data produced by the algorithms can be continuous-valued, we round it to the nearest integer. The quality of the prediction is measured as the fraction of correct values in the holdout set, and the results are reported in Figure~\ref{fig:synth:predict}. \begin{figure} [tp] \centering \includegraphics[width=\subfigwidth]{predict} \caption{\textbf{Prediction rate on synthetic data}. $x$-axis represents the size of the holdout set and $y$-axis is the correct prediction rate (higher is better).
All results are averages over 10 random matrices and the width of the error bars is twice the standard deviation.}\label{fig:synth:predict} \end{figure} It is easy to see that as the fraction of held-out data increases, \Capricorn's results get worse, as expected, but it is still consistently better than \WNMF, which does not seem to be able to recover any specific structure. \paragraph{Discussion.} The synthetic experiments confirm that both \Capricorn and \Cancer are able to recover matrices with max-times structure. The main practical difference between them is that \Capricorn is designed to handle the tropical (flipping) noise, while \Cancer is meant for data that is perturbed with white (Gaussian) noise. While \Capricorn is clearly the best method when the data has only the flipping noise -- and is capable of tolerating very high noise levels -- its results deteriorate when we apply Gaussian noise. Hence, when the exact type of noise is not known a priori, it is advisable to try both methods. It is also important to note that \Cancer is actually a framework of algorithms, as it can optimize various objective functions. In order to demonstrate this, we performed experiments with the Jensen--Shannon divergence as the objective and obtained results that are, while inferior to those of \Cancer optimizing the Frobenius error, still slightly better than the rest of the algorithms. Overall we can conclude that \SVD and the NMF-based methods generally cannot recover the structure from subtropical data, that is, we cannot use existing methods as a substitute for finding the max-times structure, either for the reconstruction or for the prediction tasks. \subsection{Real-world experiments.} \label{sec:real-world-exper} The main purpose of the real-world experiments is to study to what extent \Capricorn and \Cancer can find max-times structure in various real-world data sets. Having established with the synthetic experiments that both algorithms are capable of finding the structure when it is present, here we look at what kind of results they obtain on real-world data. It is probably unrealistic to expect real-world data sets to have ``pure'' max-times structure, as in the synthetic experiments. Rather, we expect \SVD to be the best method (in the reconstruction error's sense), and our algorithms to obtain reconstruction errors comparable to the NMF-based methods. We will also verify that the results from the real-world data sets are intuitive. \subsubsection*{The datasets} \label{sec:real:data} \BasLP represents a linear program.\!\footnote{Submitted to the matrix repository by Csaba Meszaros.} It is available from the University of Florida Sparse Matrix Collection\footnote{\url{http://www.cise.ufl.edu/research/sparse/matrices/}, accessed 18 July 2017} \citep{davis11university}. \Trec is a brute force disjoint product matrix in tree algebra on $n$ nodes.\!\footnote{Submitted by Nicolas Thiery.} It can be obtained from the same repository as \BasLP. \Worldclim was obtained from the global climate data repository.\!\footnote{The raw data is available at \url{http://www.worldclim.org/}, accessed 18 July 2017.} It describes historical climate data across different geographical locations in Europe. Columns represent minimum, maximum, and average temperatures and precipitation, and rows are \by{50}{50} kilometer squares of land where the measurements were made.
We preprocessed every column of the data by first subtracting its mean, dividing by the standard deviation, and then subtracting its minimum value, so that the smallest value becomes 0. \NPAS is a nerdiness personality test that uses different attributes to determine the level of nerdiness of a person.\!\footnote{The dataset can be obtained from the online personality testing website, \url{http://personality-testing.info/_rawdata/NPAS-data.zip}, accessed 18 July 2017.} It contains answers by 1418 respondents to a set of 36 questions that asked them to self-assess various statements about themselves on a scale of 1 to 7. We preprocessed \NPAS analogously to \Worldclim. \Eigenfaces is a subset of the Extended Yale Face collection of face images~\citep{georghiades2000few}. It consists of \by{32}{32}-pixel images under different lighting conditions. We used the preprocessed data by Xiaofei He et al.\!\footnote{\url{http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html}, accessed 18 July 2017} We selected a subset of pictures with lighting from the left, and then preprocessed the input matrix by first subtracting from every column its smallest element and then dividing it by its standard deviation. \News is a subset of the {20Newsgroups} dataset,\!\footnote{\url{http://qwone.com/~jason/20Newsgroups/}, accessed 18 July 2017} containing the usage of 800 words over 400 posts from 4 newsgroups.\!\footnote{The authors are grateful to Ata Kab{\'a}n for pre-processing the data, see~\cite{miettinen09matrix}.} Before running the algorithms we represented the dataset as a TF-IDF matrix, and then scaled it by dividing each entry by the largest entry in the matrix. \HPI is a land registry house price index.\!\footnote{Available at \url{https://data.gov.uk/dataset/land-registry-house-price-index-background-tables/}, accessed 18 July 2017} Rows represent months, columns are locations, and entries are residential property price indices. We preprocessed the data by first dividing each column by its standard deviation and then subtracting its minimum, so that each column has minimum 0. \Movielense is a collection of user ratings for a set of movies. The original dataset\footnote{Available at \url{http://grouplens.org/datasets/movielens/100k/}, accessed 18 July 2017} consists of 100000 ratings from 1000 users on 1700 movies, with ratings ranging from 1 to 5. In order to be able to perform cross-validation on it, we had to preprocess \Movielense by removing users that rated fewer than 10 movies and movies that were rated fewer than 5 times. After that we were left with 943 users, 1349 movies, and 99287 ratings. The basic properties of these data sets are listed in Table~\ref{tab:real:specs_all}. \setlength{\tabcolsep}{0.5em} \begin{table}[tb] \centering \caption{Real-world dataset properties.} \label{tab:real:specs_all} \begin{tabular}{@{}lRRR@{}} \toprule Dataset & \text{Rows} & \text{Columns} & \text{Density} \\ \midrule \BasLP & 9825 & 5411 & 1.1\% \\ \Trec & 2726 & 551 & 10.0\% \\ \Worldclim & 2575 & 48 & 99.9\% \\ \NPAS & 1418 & 36 & 99.6\% \\ \Eigenfaces & 1024 & 222 & 97.0\% \\ \News & 400 & 800 & 3.5\% \\ \HPI & 253 & 177 & 99.5\% \\ \Movielense & 943 & 1349 & 7.8\% \\ \bottomrule \end{tabular} \end{table} \subsubsection*{Quantitative results: reconstruction error, sparsity, and convergence} \label{sec:real:quantitative} The following experiments are meant to test \Cancer and \Capricorn, and to see how they compare against other methods, such as \SVD and NMF.
Table~\ref{tab:real:world:error} provides the relative Frobenius reconstruction errors for various real-world data sets. We omitted \ALSRfive from these experiments due to its bad performance on the synthetic data. \SVD is, as expected, consistently the best method. Somewhat surprisingly, Hoyer's \SNMF is usually the second-best method, even though it did not show any advantage over the other methods in the synthetic experiments. \Cancer is usually the third-best method (with the exception of \News and \NPAS), and often very close to \SNMF in reconstruction error. Overall, it seems \Cancer is capable of finding max-times structure that is comparable to what the NMF-based methods provide. Consequently, we can study the max-times structure found by \Cancer, knowing that it is (relatively) accurate. On the other hand, \Capricorn has a high reconstruction error. The discrepancy between \Cancer's and \Capricorn's results indicates that the datasets used cannot be represented using ``pure'' subtropical structure. Rather, they are either a mix of NMF and subtropical patterns, or have relatively high levels of continuous noise. \begin{table}[tb] \centering \caption{Reconstruction error for various real-world datasets.} \label{tab:real:world:error} \begin{tabular}{@{}lRRRRR@{}} \toprule & \text{\Worldclim} & \text{\NPAS} & \text{\Eigenfaces} & \text{\News} & \text{\HPI} \\ $k=$ & 10 & 10 & 40 & 20 & 15 \\ \midrule \Cancer & 0.071 & 0.240 & 0.204 & 0.556 & 0.027 \\ \Capricorn & 0.392 & 0.395 & 0.972 & 0.987 & 0.217 \\ \SNMF & 0.046 & 0.225 & 0.178 & 0.546 & 0.023 \\ \ALS & 0.087 & 0.227 & 0.313 & 0.538 & 0.074 \\ \ALSR & 0.122 & 0.226 & 0.294 & 1.000 & 0.045 \\ \SVD & 0.025 & 0.209 & 0.140 & 0.533 & 0.015 \\ \bottomrule \end{tabular} \end{table} The sparsity of the factors for the real-world data is presented in Table~\ref{tab:real:sparsity_all} for all methods except \SVD. Here, \Cancer often returns the second-sparsest factors (being second only to \Capricorn), but with \News and \HPI, \ALSR obtains sparser decompositions. \begin{table}[tb] \centering \caption{Factor sparsity for various real-world datasets.} \label{tab:real:sparsity_all} \begin{tabular}{@{}lRRRRR@{}} \toprule & \text{\Worldclim} & \text{\NPAS} & \text{\Eigenfaces} & \text{\News} & \text{\HPI} \\ $k=$ & 10 & 10 & 40 & 20 & 15 \\ \midrule \Cancer & 0.645 & 0.528 & 0.571 & 0.812 & 0.422 \\ \Capricorn & 0.795 & 0.733 & 0.949 & 0.991 & 0.685 \\ \SNMF & 0.383 & 0.330 & 0.403 & 0.499 & 0.226 \\ \ALS & 0.226 & 0.120 & 0.434 & 0.513 & 0.331 \\ \ALSR & 0.275 & 0.117 & 0.480 & 1.000 & 0.729 \\ \bottomrule \end{tabular} \end{table} We also studied the convergence behavior of \Cancer using some of the real-world data sets. The results can be seen in Figure~\ref{fig:convergence}, where we plot the relative error with respect to the iterations over the main for-loop in \Cancer. As we can see, in both cases \Cancer obtains a good reconstruction error already after a few full cycles, with the remaining iterations providing only minor improvements. We can deduce that \Cancer quickly reaches an acceptable solution. \begin{figure*}[tp] \centering \subfigure[\NPAS]{ \includegraphics[width=\subfigwidth]{NPASconvergence-can} \label{noise} } \hspace{\subfigspace} \subfigure[\HPI]{ \includegraphics[width=\subfigwidth]{HPIconvergence-can} \label{density} } \caption{Convergence rate of \Cancer for two real-world datasets.
Each iteration is a single run of \UpdateBlock, that is, if a factorization has rank $k$, then one full cycle corresponds to $k$ iterations.} \label{fig:convergence} \end{figure*} \subsubsection*{Prediction} \label{sec:real:prediction} Here we investigate how well both \Capricorn and \Cancer can predict missing values in the data. In order to test \Capricorn, we ran missing value prediction tests on the \BasLP and \Trec datasets, and compared it against \NMF, \WNMF, and \SVD. The setup is as follows. A random holdout set is chosen that comprises 10\% of the nonzero elements, and it is then removed from the data. Since the input matrices are integer-valued, we round the output of the algorithms to the nearest integer and report the fraction of correctly predicted values. There are two versions of this experiment -- one where all elements in the data are taken into account and one where zero entries are ignored, that is, they do not contribute to the error. The motivation for this test is that \Capricorn always aims to extract subtropical patterns, sometimes even at the expense of covering zeros with nonzero values. We therefore want to see how well it performs when only the ``significant'' part of the data is counted. It is worth noting, though, that while \Capricorn and \WNMF have an option to ignore certain entries in an input matrix, \NMF does not. Hence the \NMF algorithm is at a disadvantage here, though we still show its results for completeness. The results for both prediction experiments, where zeros ``count'' and ``don't count'', are shown in Table~\ref{tab:real:accuracy_all}, left and right, respectively. In both cases \WNMF is the best method, whereas \Capricorn is normally the second best. As expected, \Capricorn's results improve greatly when zero elements are ignored. \begin{table}[tb] \centering \caption{Prediction accuracy on \BasLP and \Trec datasets. Left: accuracy is computed over all entries. Right: accuracy is computed over the non-zero entries.}\label{tab:real:accuracy_all} \begin{tabular}{@{}lRR@{}} \toprule Algorithm & \text{Bas1LP} & \text{Trec12} \\ \midrule \Capricorn & 74.0 & 19.8 \\ \NMF & 23.4 & 18.3 \\ \WNMF & 85.2 & 39.9 \\ \SVD & 28.2 & 20.5 \\ \bottomrule \end{tabular} \hspace*{3em} \begin{tabular}{@{}lRR@{}} \toprule Algorithm & \text{Bas1LP} & \text{Trec12} \\ \midrule \Capricorn & 85.2 & 39.3 \\ \NMF & 29.1 & 19.6 \\ \WNMF & 93.1 & 49.8 \\ \SVD & 29.1 & 22.5 \\ \bottomrule \end{tabular} \end{table} Next we conduct prediction experiments with \Cancer. We tested it on the \Movielense dataset and compared it against \WNMF. The choice of \WNMF is motivated by its ability to ignore elements in the input data and its generally good performance in the previous tests. To get a more complete view of how good the predictions are, we report various measures of quality: Frobenius error, root mean square error (RMSE), reciprocal rank, Spearman's $\rho$, mean absolute error (MAE), Jensen--Shannon divergence (JS), optimistic reciprocal rank, and Kendall's $\tau$. The tests can be divided into two categories. The first one, which comprises the Frobenius error, root mean square error, mean absolute error, and Jensen--Shannon divergence, aims to quantify the distance between the original data and the reconstructed matrix. The second group of tests finds the correlation between rankings of movies for each user. It includes Spearman's $\rho$, Kendall's $\tau$, reciprocal rank, and optimistic reciprocal rank. All these measures are well known, with perhaps only the reciprocal rank requiring some explanation.
Let us first denote by $U$ the set of all users. In the following, for each user $u\in U$ we only consider the set of movies $M(u)$ that this user has rated and that belong to the holdout set. The ratings by user $u$ induce a natural ranking on $M(u)$. On the other hand, both \Cancer and \WNMF produce approximations $r'(u, m)$ to the true ratings $r(u, m)$, which also induce a corresponding ranking of the movies. The reciprocal rank is a convenient way of comparing the rankings obtained by the algorithms to the original one. For any user $u \in U$, denote by $H(u)$ the set of movies that this user ranked the highest (that is, $H(u) = \lbrace m\in M(u) \, \vert \, r(u, m) = \max_{m'\in M(u)} r(u, m') \rbrace$). The reciprocal rank for user $u$ is now defined as \begin{equation} \label{recip:rank} RR(u) = \frac{1}{\min\limits_{m\in H(u)} R(u, m)}\;, \end{equation} where $R(u, m)$ is the rank of the movie $m$ within $M(u)$ according to the rating approximations given by the algorithm in question. The mean reciprocal rank is then defined as the average of the reciprocal ranks over all users, $\mathit{MRR} = \frac{1}{\abs{U}} \sum_{u\in U} RR(u)$. When computing the ranks $R(u, m)$, all tied elements receive the same rank, which is computed by averaging. That means that if, say, movies $m_1$ and $m_2$ have tied ranks of 2 and 3, then they both receive the rank of 2.5. An alternative way is to always assign the smallest possible rank; in the above example both $m_1$ and $m_2$ would receive rank 2. When the ranks $R(u, m)$ are computed like this, equation \eqref{recip:rank} defines the optimistic reciprocal rank.
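To make the two variants concrete, the following Python sketch computes the mean reciprocal rank from per-user arrays of held-out ratings and their predictions; the data layout and the function names are our assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

def mean_reciprocal_rank(true_r, pred_r, optimistic=False):
    # true_r, pred_r: lists of per-user rating arrays over the
    # user's held-out movies, in the same movie order.
    rrs = []
    for r, p in zip(true_r, pred_r):
        r, p = np.asarray(r), np.asarray(p)
        # Rank 1 = highest predicted rating; ties are averaged,
        # or get the smallest rank in the optimistic variant.
        R = rankdata(-p, method='min' if optimistic else 'average')
        H = np.flatnonzero(r == r.max())  # the user's top-rated movies
        rrs.append(1.0 / R[H].min())
    return float(np.mean(rrs))
\end{verbatim}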
To demonstrate this, we plotted the left factor matrices for the \Eigenfaces data for \Cancer and \ALS in Figure~\ref{fig:faces}. At first, it might look like \ALS provides more interpretable results, as most factors are easily identifiable as faces. This, however, is not very interesting result: we already knew that the data has faces, and many factors in the \ALS's result are simply some kind of `prototypical' faces. The results of \Cancer are harder to identify on the first sight. Upon closer inspection, though, one can see that they identify areas that are lighter in the different images, that is, have higher grayscale values. These factors tell us the variances in the lightning in the different photos, and can reveal information we did not know a priori. Further, as seen in Table~\ref{tab:real:accuracy_all}, \Cancer obtains better reconstruction error than \ALS with this data, confirming that these factors are indeed useful to recreate the data. \begin{figure} \centering \subfigure[\Cancer]{% \includegraphics[width=0.7\textwidth]{eigenfaces-cancer.pdf}% \label{fig:faces:cancer} } \\ \subfigure[\ALS]{% \includegraphics[width=0.7\textwidth]{eigenfaces-als.pdf} \label{fig:faces:als} } \caption{\Cancer finds the dominant patterns from the \Eigenfaces data. Pictured are the left factor matrices for the \Eigenfaces data.} \label{fig:faces} \end{figure} In Figure~\ref{fig:wc}, we show some factors from \Cancer when applied to the \Worldclim data. These factors clearly identify different bioclimatic areas from Europe: In Figure~\ref{fig:wc:1} we can identify the mountainous areas in Europe, including the Alps, the Pyrenees, the Scandes, and Scottish Highlands. In Figure~\ref{fig:wc:2} we can identify the mediterranean coastal regions, while in Figure~\ref{fig:wc:3} we see the temperate climate zone in blue, with the green color extending to the boreal zone. In all pictures, red corresponds to (near) zero values. As we can see, \Cancer identifies these areas crisply, making it easy for the analyst to know which areas to look at. \newlength{\oldsubfiglabelskip} \setlength{\oldsubfiglabelskip}{\subfiglabelskip} \subfiglabelskip=0pt \begin{figure*}[tp] \centering \subfigure[]{% \includegraphics[width=\subfigwidth]{worldclim-7.pdf}% \label{fig:wc:1}% } \hspace{\subfigspace} \subfigure[]{% \includegraphics[width=\subfigwidth]{worldclim-1.pdf}% \label{fig:wc:2}% } \hspace{\subfigspace} \subfigure[]{% \includegraphics[width=\subfigwidth]{worldclim-8.pdf}% \label{fig:wc:3} } \caption{\Cancer can find interpretable factors from the \Worldclim data. Shown are the values for three columns in the left-hand factor matrix $\mB$ on a map. Red is zero.} \label{fig:wc} \end{figure*} \subfiglabelskip=\oldsubfiglabelskip In order to interpret \NPAS we first observe that each column represents a single personality attribute. Denote by $\matr{A}$ the obtained approximation of the original matrix. For each rank-1 factor $\matr{X}$ and each column $\matr{A}_i$ we define the score $\sigma(i)$ as the number of elements in $\matr{A}_i$ that are determined by $\matr{X}$. By sorting attributes in descending order of $\sigma(i)$ we obtain relative rankings of the attributes for a given factor. The results are shown in Table~\ref{tab:real:npas_interp}. The first factor clearly shows introverted tendencies, while the second one can be summarized as having interests in fiction and games. 
\setlength{\tabcolsep}{0.5em} \begin{table}[tb] \centering \caption{Top three attributes for the first two factors of \NPAS.} \label{tab:real:npas_interp} \begin{tabular}{ll} \toprule Factor 1 & Factor 2 \\ \midrule I am more comfortable with my hobbies & I have played a lot of video games \\ \hspace{1cm} than I am with other people & \hspace{1cm} \\ I gravitate towards introspection & I collect books \\ I sometimes prefer fictional people to real ones & I care about super heroes \\ \bottomrule \end{tabular} \end{table} \section{Introduction} \label{sec:introduction} Finding simple patterns that can be used to describe the data is one of the main problems in data mining. The data mining literature knows many different techniques for this general task, but one of the most common pattern-finding techniques rarely gets classified as such. Matrix factorizations (or decompositions; the two terms are used interchangeably in this paper) represent the given input matrix $\mA$ as a product of two (or more) factor matrices, $\mA\approx \mB\mC$. This standard formulation of matrix factorizations makes their pattern mining nature less obvious, but let us write the matrix product $\mB\mC$ as a sum of rank-1 matrices, $\mB\mC = \mF_1 + \mF_2 + \cdots + \mF_k$, where $\mF_i$ is the outer product of the $i$th column of $\mB$ and the $i$th row of $\mC$. Now it becomes clear that the rank-1 matrices $\mF_i$ are the ``simple patterns'', and the matrix factorization is finding $k$ such patterns whose sum is a good approximation of the original data matrix. This so-called ``component interpretation''~\citep{skillicorn07understanding} is more appealing with some factorizations than with others. For example, the classical singular value decomposition (SVD) does not easily admit such an interpretation, as the components are not easy to interpret without knowing the earlier components. On the other hand, the motivation for nonnegative matrix factorization (NMF) often comes from the component interpretation, as can be seen, for example, in the famous ``parts of faces'' figures of \citet{lee_seung}. The ``parts-of-whole'' interpretation is at the heart of NMF: every rank-1 component adds something to the overall decomposition, and never removes anything. This aids the interpretation of the components, and is also often claimed to yield sparse factors, although this latter point is more contentious \citep{hoyer04non-negative}. Perhaps the reason why matrix factorization methods are not often considered as pattern mining methods is that the rank-1 matrices are summed together to build the full data. Hence, it is rare for any rank-1 component to explain any part of the input matrix alone. But the use of summation as the way to aggregate the rank-1 components can be considered to be ``merely'' a consequence of the fact that we are using the standard algebra. If we change the algebra -- in particular, if we change how we define the summation -- we change the operator used for the aggregation. In this work, we propose to use the \emph{maximum} operator to define the summation over the nonnegative matrices, giving us what is known as the \emph{subtropical algebra}. As the aggregation of the rank-1 factors is now the element-wise maximum, we obtain what we call the ``winner-takes-it-all'' interpretation: the final value of each element in the approximation is defined only by the largest value in the corresponding elements of the rank-1 matrices.
Not only does the subtropical algebra give us the intriguing winner-takes-it-all interpretation, it also provides guarantees about the sparsity of the factors, as we will show in Section~\ref{sec:sparsity}. Furthermore, the different algebra means that we are finding different factorizations compared to NMF (or SVD). The emphasis here is on the word \emph{different}: the factorizations can be better or worse in terms of the reconstruction error -- we will discuss this in Section~\ref{sec:other_alg} -- but the patterns they find are usually different to those found by NMF. Unfortunately, the related optimization problems are \NP-hard (see Section~\ref{sec:comput_complex}). In Section~\ref{sec:algorithms}, we will develop a general framework, called \Equator, for finding approximate, low-rank subtropical decompositions, and we will present two instances of this framework, tailored towards different types of data and noise, called \Capricorn and \Cancer.\!\footnote{This work is a combined and extended version of our preliminary papers that described these algorithms \citep{karaev16cancer,karaev16capricorn}.} \Capricorn assumes integer data with noise that randomly flips the value to some other integer, whereas \Cancer assumes continuous-valued data with standard Gaussian noise. Our experiments (see Section~\ref{sec:experiments}) show that both \Capricorn and \Cancer work well on datasets that have the kind of noise they are designed for, and they outperform SVD and different NMF methods when data has subtropical structure. On real-world data, \Cancer is usually the better of the two, although in terms of reconstruction error, neither of the methods can challenge SVD. On the other hand, both \Cancer and \Capricorn return interpretable results that show different aspects of the data compared to factorizations made under the standard algebra. \subsubsection{#1}} \expandafter\def\expandafter\UrlBreaks\expandafter{\UrlBreak \do\a\do\b\do\c\do\d\do\e\do\f\do\g\do\h\do\i\do\j% \do\k\do\l\do\m\do\n\do\o\do\p\do\q\do\r\do\s\do\t% \do\u\do\v\do\w\do\x\do\y\do\z\do\A\do\B\do\C\do\D% \do\E\do\F\do\G\do\H\do\I\do\J\do\K\do\L\do\M\do\N% \do\O\do\P\do\Q\do\R\do\S\do\T\do\U\do\V\do\W\do\X% \do\Y\do\Z} \newcommand\lword[1]{\leavevmode\nobreak\hskip0pt plus\linewidth\penalty50\hskip0pt plus-\linewidth\nobreak\textbf{#1}} \begin{document} \title{Algorithms for Approximate Subtropical Matrix Factorization} \author{ Sanjar Karaev and Pauli Miettinen \\ Max-Planck-Institut f\"ur Informatik\\ Saarland Informatics Campus\\ Saarbr\"ucken, Germany\\ \texttt{\{skaraev,pmiettin\}@mpi-inf.mpg.de} } \date{} \maketitle \begin{abstract} \input{abstract} \end{abstract} \input{introduction} \input{preliminaries} \input{theory} \input{algorithms} \input{experiments} \input{related} \input{conclusions} \bibliographystyle{abbrvnat} \section{Notation and Basic Definitions} \label{sec:notation} \paragraph{Basic notation.} Throughout this paper, we will denote a matrix by upper-case boldface letters ($\matr{A}$), and vectors by lower-case boldface letters ($\vec{a}$). The $i$th row of matrix $\matr{A}$ is denoted by $\matr{A}_{i}$ and the $j$th column by $\matr{A}^{j}$. The matrix $\mA$ with the $i$th column removed is denoted by $\mA^{-i}$, and $\mA_{-i}$ is the respective notation for $\mA$ with a removed row. Most matrices and vectors in this paper are restricted to the nonnegative real numbers $\Region = [0,\infty)$. We use the shorthand $[n]$ to denote the set $\{1, 2, \ldots, n\}$. 
\paragraph{Algebras.} In this paper we consider matrix factorization over the so-called \emph{max-times} (or \emph{subtropical}) \emph{algebra}. It differs from the standard algebra of real numbers in that addition is replaced with the operation of taking the maximum. Also, the domain is restricted to the set of nonnegative real numbers. \begin{definition} \label{def:max-times} The \emph{max-times} (or \emph{subtropical}) algebra is the set $\Region$ of nonnegative real numbers together with the operations $a \maxadd b = \max \lbrace a, b\rbrace$ (addition) and $a \maxmult b = ab$ (multiplication) defined for any $a, b \in \Region$. The identity element for addition is $0$ and for multiplication it is $1$. \end{definition} In what follows, we will use the notation $a\maxadd b$ and $\max \lbrace a, b\rbrace$, and the names \emph{max-times} and \emph{subtropical}, interchangeably. It is straightforward to see that the max-times algebra is a \emph{dioid}, that is, a semiring with idempotent addition ($a \maxadd a = a$). It is important to note that the subtropical algebra is anti-negative, that is, there is no subtraction operation. A very closely related algebraic structure is the \emph{max-plus} (\emph{tropical}) algebra \citep[see e.g.][]{akian07max-plus}. \begin{definition} \label{def:max-plus} The \emph{max-plus} (or \emph{tropical}) algebra is defined over the set of extended real numbers $\R \cup \{-\infty\}$ with operations $a \tropadd b = \max \lbrace a, b\rbrace$ (addition) and $a \tropmult b = a+b$ (multiplication). The identity elements for addition and multiplication are $-\infty$ and $0$, respectively. \end{definition} The tropical and subtropical algebras are isomorphic \citep{blondel2000approximating}, which can be seen by taking the logarithm of the subtropical algebra or the exponent of the tropical algebra (with the conventions that $\log 0 = -\infty$ and $\exp(-\infty) = 0$). Thus, most of the results we prove for the subtropical algebra can be extended to their tropical analogues, although caution should be used when dealing with approximate matrix factorizations. The latter is because, as we will see in Theorem~\ref{thm:max_plus_bound}, the \emph{reconstruction error} of an approximate matrix factorization under the two different algebras does not transfer directly. \paragraph{Matrix products and ranks.} The matrix product over the subtropical algebra is defined in the natural way: \begin{definition} \label{def:mprod} The \emph{max-times matrix product} of two matrices $\matr{B} \in \Region^{n \times k}$ and $\matr{C} \in \Region^{k \times m}$ is defined as \begin{equation} \label{eq:mprod} (\matr{B} \maxprod \matr{C})_{ij} = \max_{s = 1}^k \matr{B}_{is} \matr{C}_{sj} \;. \end{equation} \end{definition} We will also need the matrix product over the \emph{tropical} algebra. \begin{definition} \label{def:tropical:mprod} For two matrices $\matr{B} \in (\R \cup \{-\infty\})^{n \times k}$ and $\matr{C} \in (\R \cup \{-\infty\})^{k \times m}$, their \emph{tropical matrix product} is defined as \begin{equation} \label{eq:tropprod} (\matr{B} \tropprod \matr{C})_{ij} = \max_{s = 1}^k \lbrace \matr{B}_{is} + \matr{C}_{sj}\rbrace \;. \end{equation} \end{definition}
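To make the two products and the isomorphism between them concrete, here is a small numpy sketch (the helper names are ours):
\begin{verbatim}
import numpy as np

def max_times(B, C):
    # (B boxtimes C)[i, j] = max_s B[i, s] * C[s, j]
    return np.max(B[:, :, None] * C[None, :, :], axis=1)

def max_plus(B, C):
    # (B boxplus C)[i, j] = max_s (B[i, s] + C[s, j])
    return np.max(B[:, :, None] + C[None, :, :], axis=1)

B = np.array([[0.5, 2.0], [1.0, 0.1]])
C = np.array([[1.0, 0.2], [0.5, 3.0]])
# The isomorphism: log maps the max-times product to max-plus.
assert np.allclose(np.log(max_times(B, C)),
                   max_plus(np.log(B), np.log(C)))
\end{verbatim}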
The \emph{matrix rank} over the subtropical algebra can be defined in many ways, depending on which definition of the normal matrix rank is taken as the starting point. We will discuss the different subtropical ranks in detail in Section~\ref{sec:subtropical_ranks}. Here we give the main definition of the rank that we use throughout this paper, the so-called \emph{Schein} (or \emph{Barvinok}) \emph{rank} of a matrix. \begin{definition} \label{def:mrank} The \emph{max-times (Schein or Barvinok) rank} of a matrix $\matr{A}\in\R_+^{n\times m}$ is the least integer $k$ such that $\matr{A}$ can be expressed as an element-wise maximum of $k$ rank-1 matrices, $\matr{A} = \matr{F}_1 \maxadd \matr{F}_2 \maxadd\cdots\maxadd\matr{F}_k$. A matrix $\mF\in \Region^{n\times m}$ has subtropical (Schein/Barvinok) rank 1 if there exist column vectors $\vx\in\Region^n$ and $\vy\in\Region^m$ such that $\mF = \vx\vy^T$. Matrices with subtropical Schein (or Barvinok) rank 1 are called \emph{blocks}. \end{definition} When it is clear from the context, we will use the term \emph{rank} (or \emph{subtropical rank}) without other qualifiers to denote the subtropical Schein/Barvinok rank. \paragraph{Special matrices.} The final concepts we need in this paper are \emph{pattern matrices} and \emph{dominating matrices}. \begin{definition} \label{def:pattern} A \emph{pattern} of a matrix $\matr{A}\in\R^{n\times m}$ is an \by{n}{m} binary matrix $\matr{P}$ such that $\matr{P}_{ij} = 0$ if and only if $\matr{A}_{ij} = 0$, and otherwise $\matr{P}_{ij} = 1$. We denote the pattern of $\mA$ by $\pattern(\mA)$. \end{definition} \begin{definition} \label{def:dominate} Let $\matr{A}$ and $\matr{X}$ be matrices of the same size, and let $\Gamma$ be a subset of their indices. Then if for all indices $(i, j) \in \Gamma$ we have $\matr{X}_{ij} \ge \matr{A}_{ij}$, we say that \emph{$\matr{X}$ dominates $\matr{A}$ within $\Gamma$}. If $\Gamma$ spans the entire size of $\matr{A}$ and $\matr{X}$, we simply say that $\matr{X}$ \emph{dominates} $\matr{A}$. Correspondingly, $\matr{A}$ is said to be \emph{dominated by} $\matr{X}$. \end{definition} \paragraph{Main problem definition.} Now that we have sufficient notation, we can formally introduce the main problem considered in the paper. \begin{problem}[Approximate subtropical rank-$k$ matrix factorization] \label{problem:mdecomp} Given a matrix $\matr{A} \in \Region^{n \times m}$ and an integer $k>0$, find factor matrices $\matr{B} \in \Region^{n \times k}$ and $\matr{C} \in \Region^{k \times m}$ minimizing \begin{equation} \label{eq:mdecomp} E(\matr{A}, \matr{B}, \matr{C}) = \norm{\matr{A} - \matr{B} \maxprod \matr{C}}\;. \end{equation} \end{problem} Here we have deliberately not specified any particular norm. Depending on the circumstances, different matrix norms can be used, but in this paper we will consider the two most natural choices -- the Frobenius and $L_1$ norms.
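For concreteness, the objective \eqref{eq:mdecomp} can be evaluated with either norm as in the following sketch, reusing the \texttt{max\_times} helper from above:
\begin{verbatim}
import numpy as np

def subtropical_error(A, B, C, norm='fro'):
    # E(A, B, C) = ||A - B boxtimes C|| for the Frobenius norm
    # (norm='fro') or the elementwise L1 norm (norm='l1').
    R = np.max(B[:, :, None] * C[None, :, :], axis=1)
    if norm == 'fro':
        return np.linalg.norm(A - R)   # Frobenius by default
    return np.abs(A - R).sum()
\end{verbatim}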
A classic example of a real-valued matrix factorization is the singular value decomposition (SVD) \citep[see e.g.][]{golub}, which is very well known and finds extensive applications in various disciplines, such as signal processing and natural language processing. The SVD of a real $n$-by-$m$ matrix $\mA$ is a factorization of the form $\mA = \mU \matr{\Sigma} \mV^T,$ where $\mU\in \mathbb{R}^{n \times n}$ and $\mV \in \mathbb{R}^{m \times m}$ are orthogonal matrices, and $\matr{\Sigma} \in \mathbb{R}^{n \times m}$ is a rectangular diagonal matrix with nonnegative entries. An important property of SVD is that it provides the best low-rank approximation of a given matrix with respect to the Frobenius norm \citep{golub}, giving rise to the so-called truncated SVD. This property is frequently used to separate important parts of data from the noise. For example, it was used by \citet{jha2011denoising} to remove the noise from sensor data in electronic nose systems. Another prominent usage of the truncated SVD is in dimensionality reduction \citep[see for example][]{sarwar2000application, deerwester1990indexing}. Despite SVD being so ubiquitous, there are some restrictions to its usage in data mining due to the possible presence of negative elements in the factors. In many applications negative values are hard to interpret, and thus other methods have to be used. Nonnegative matrix factorization (NMF) is a way to tackle this problem. For a given nonnegative real matrix $\mA$, the NMF problem is to find a decomposition of $\mA$ into two matrices $\mA \approx \mB\mC$ such that $\mB$ and $\mC$ are also nonnegative. Its applications are extensive and include text mining \citep{pauca2004text}, document clustering \citep{xu2003document}, pattern discovery \citep{brunet2004metagenes}, and many others. This area drew considerable attention after a publication by \citet{lee_seung}, where they provided an efficient algorithm for solving the NMF problem. It is worth mentioning that even though the paper by \citeauthor{lee_seung} is perhaps the most famous in the NMF literature, it was not the first one to consider this problem. Earlier works include \citet{paatero1994positive} \citep[see also][]{paatero1997least}, \citet{paatero1999multilinear}, and \citet{cohen1993nonnegative}. \citet{berry2007algorithms} provide an overview of NMF algorithms and their applications. There exist various flavours of NMF that impose different constraints on the factors; for example \citet{hoyer04non-negative} used sparsity constraints. Though both NMF and SVD perform approximations of a fixed rank, there are also other ways to enforce a compact representation of data. For example, in maximum-margin matrix factorization constraints are imposed on the norms of the factors. This approach was exploited by \citet{srebro2004maximum}, who showed it to be a good method for predicting unobserved values in a matrix. The authors also indicate that posing constraints on the factor norms, rather than on the rank, yields a convex optimization problem, which is easier to solve. \subsection{Idempotent semirings.} The concept of the subtropical algebra is relatively new, and as far as we know, its applications in data mining are not yet well studied. Indeed, its only usage for data analysis that we are aware of was by \citet{weston13nonlinear}, where it was used as a part of a model for collaborative filtering. The authors modeled users as a set of vectors, where each vector represents a single aspect about the user (e.g.
a particular area of interest). The ratings are then reconstructed by selecting the highest scoring prediction using the $\max$ operator. Since their model uses $\max$ as well as the standard plus operation, it stands on the border between the standard and the subtropical worlds. Boolean algebra, despite being limited to the binary set $\{0,1\}$, is related to the subtropical algebra by virtue of having the same operations, and is thus a restriction of the latter to $\{0,1\}$. By the same token, when both factor matrices are binary, their subtropical product coincides with the Boolean product, and hence Boolean matrix factorization (BMF) can be seen as a degenerate case of the subtropical matrix factorization problem. The dioid properties of the Boolean algebra can be checked trivially. The motivation for Boolean matrix factorization comes from the fact that in many applications data is naturally represented as a binary matrix (e.g. transaction databases), which makes it reasonable to seek decompositions that preserve the binary character of the data. The conceptual and algorithmic analysis of the problem was done by \citet{miettinen09matrix}, with the focus mainly on the data mining perspective of the problem. For a linear algebra perspective see \citet{kim1982boolean}, where the emphasis is put on the existence of exact decompositions. A number of algorithms have been proposed for solving the BMF problem \citep{miettinen2008discrete, lu2008optimal, lucchese2014unifying, karaev2015getting}. \subsection{Tropical algebra.} Another close cousin of the max-times algebra is the max-plus, or so-called tropical, algebra, which uses addition in place of multiplication. It is also a dioid due to the idempotent nature of the $\max$ operation. As was mentioned earlier, the two algebras are isomorphic, and hence many of their properties are identical (see Sections \ref{sec:notation} and \ref{sec:theory} for more details). Despite the theory of the tropical algebra being relatively young, it has been thoroughly studied in recent years. The reason for this is that it finds extensive applications in various areas of mathematics and other disciplines. An example of such a field is the theory of discrete event systems (DES)~\citep{cassandras08introduction}, where the tropical algebra is ubiquitously used for modeling \citep[see e.g.][]{baccelli92synchronization,cohen99max-plus}. Other mathematical disciplines where the tropical algebra plays a crucial role are optimal control \citep{gaubert1997methods}, asymptotic analysis \citep{dembo2010large, maslov1992idempotent, akian1999densities}, and decidability \citep{simon1978limited, simon1994semigroups}. Research on tropical matrix factorization is of interest to us because of the above-mentioned isomorphism between the two algebras. However, as will be explained in Section~\ref{sec:theory}, approximate matrix factorizations are not directly transferable, as the errors can differ dramatically. It should be mentioned that in the general case the problem of tropical matrix factorization is NP-complete \citep[see e.g.][]{shitov2014complexity}. \Citet{de2002qr} demonstrated that if the max-plus algebra is extended in such a way that there is an additive inverse for each element, then it is possible to solve many of the standard matrix decomposition problems. Among other results the authors obtained max-plus analogues of QR and SVD. They also claimed that the techniques they propose can be readily extended to other types of classic factorizations (e.g.
Hessenberg and LU decomposition). Despite the apparent successes in the realm of tropical matrix factorization, its subtropical counterpart has not received much attention, and to the best of our knowledge the first work on the subject was done by \citet{karaev16capricorn}. The problem of solving tropical linear systems of equations arises naturally in numerous applications, and is also closely related to matrix factorization. In order to illustrate this connection, assume that we are given a tropical matrix $\mA \in \tropalg^{n\times m}$ and one of the factors $\mB \in \tropalg^{n\times k}$. Then the other factor $\mC \in \tropalg^{k\times m}$ can be found by solving the following set of problems \begin{equation} \label{alternating_updates} \mC_j = \argmin_{\vc \in \tropalg^k} \|\mB \tropprod \vc - \mA_j\|_F, \; j=1,\dots, m\;. \end{equation} Each problem in \eqref{alternating_updates} requires ``approximately'' solving a system of tropical linear equations. The minus operation in \eqref{alternating_updates} does not belong to the tropical semiring, so the approximation here should be understood in terms of minimizing the classical distance. The general form of tropical linear equations \begin{equation} \label{trop:lin} \mA \vx \tropadd \vb = \mC \vx \tropadd \vd \end{equation} is not always solvable \citep[see e.g.][]{gaubert1997methods}; however, various techniques exist for checking the existence of a solution for particular cases of \eqref{trop:lin}. For equations of the form $\mA \vx = \vb$, feasibility can be established, for example, through the so-called \emph{matrix residuation}. There is a general result that for an $n$-by-$m$ matrix $\mA$ over a complete idempotent semiring, the existence of a solution can be checked in $O(nm)$ time \citep[see][]{gaubert1997methods}. Although the tropical algebra is not complete, there is an efficient way of determining whether a solution exists \citep{cuninghame1979minimax, zimmermann2011linear}. It was shown by \citet{butkovivc2003max} that this type of tropical equation is equivalent to the set cover problem, which is known to be NP-hard. This directly affects the max-times algebra through the above-mentioned isomorphism and makes the problem of precisely solving max-times linear systems of the form $\mA \vx = \vb$ infeasible for high dimensions. Homogeneous equations $\mA\vx = \mB\vx$ can be solved using the \emph{elimination} method, which is based on the fact that the set of solutions of a homogeneous system is a finitely generated semimodule \citep{butkovic1984elimination} \citep[independently rediscovered by][]{gaubert1992theorie}. If only a single solution is required, then according to \citet{gaubert1997methods}, a method by \citet{walkup1998general} is usually the fastest in practice. Now let $\mA$ be a tropical square matrix of size $n\times n$. For complete idempotent semirings a solution to the equation $\vx = \mA \vx \tropadd \vb$ is given by $\vx = \matr{A^*}\vb$ \citep[see e.g.][]{salomaa2012automata}, where the operator $\matr{A^*}$ is defined as \[ \matr{A^*} = \tropadd_{k=1}^\infty \mA^k \;. \] Since the tropical semiring is not complete (it is missing the $\infty$ element), $\matr{A^*}$ cannot always be computed. However, when there are no positive weight circuits in the graph defined by $\mA$, then we have $\matr{A^*} = \mA^0 \tropadd \dots \tropadd \mA^{n-1}$, and all entries of $\matr{A^*}$ belong to the tropical semiring \citep{baccelli92synchronization}.
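As an illustration of this last formula, the following short NumPy sketch (our own, for exposition only) computes $\matr{A^*}$ by accumulating the truncated tropical sum $\mA^0 \tropadd \dots \tropadd \mA^{n-1}$; it assumes the no-positive-circuit condition just mentioned:
\begin{verbatim}
import numpy as np

def tropical_product(A, B):
    # (A (x) B)_ij = max_s (A_is + B_sj) over the (max, +) semiring.
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def tropical_star(A):
    # A* = A^0 (+) A^1 (+) ... (+) A^(n-1), where A^0 is the tropical
    # identity: 0 on the diagonal and -inf elsewhere.
    n = A.shape[0]
    star = np.full((n, n), -np.inf)
    np.fill_diagonal(star, 0.0)
    power = star.copy()
    for _ in range(n - 1):
        power = tropical_product(power, A)  # next tropical power of A
        star = np.maximum(star, power)      # tropical (max) summation
    return star
\end{verbatim}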
Computing the operator $\mA^*$ takes time $O(n^3)$ \citep[see e.g.][]{gondran1984graphs, gaubert1997methods}. Another important direction of research is the eigenvalue problem $\mA\vx = \lambda\vx$. Tropical analogues of the Perron--Frobenius theorem \citep[see e.g.][]{vorobyev2extremal, maslov1992idempotent} and of the Collatz--Wielandt formula \citep{bapat1995pattern, gaubert1992theorie} have been developed. For a general overview of the results in the $(\max, +)$ spectral theory, see for example \citet{gaubert1997methods}. Tropical algebra and tropical geometry were used by \citet{gartnertropical} to construct a tropical analogue of an SVM. Unlike in the classical case, tropical SVMs are localized, in the sense that the kernel at any given point is not influenced by all the support vectors. Their work also utilizes the fact that tropical hyperplanes are somewhat more complex than their counterparts in the classical geometry, which makes it possible to perform multi-category classification with a single hyperplane. \section{Theory} \label{sec:theory} Our main contributions in this paper are the algorithms for the subtropical matrix factorization. But before we present them, it is important to understand the theoretical aspects of subtropical factorizations. We will start by studying the computational complexity of Problem~\ref{problem:mdecomp}. After that, we will show that the dominated subtropical factorizations of sparse matrices are sparse. Finally, we compare the subtropical factorizations to factorizations over other algebras, and discuss different ways to define the subtropical rank, and the relationships between these ranks. \subsection{Computational complexity} \label{sec:comput_complex} The computational complexity of different matrix factorization problems varies. For example, SVD can be computed in polynomial time \citep{golub}, while NMF is \NP-hard \citep{vavasis09complexity}. Unfortunately, the subtropical factorization is also \NP-hard. \begin{theorem}\label{thm:npcomplete} Computing the max-times matrix rank is an \NP-hard problem, even for binary matrices. \end{theorem} The theorem is a direct consequence of the following theorem by \citet{kim2005factorization}: \begin{theorem}[\citealp{kim2005factorization}] \label{thm:trop_rank_nphard} Computing the max-plus (tropical) matrix rank is \NP-hard, even for matrices that take values only from $\{-\infty, 0\}$. \end{theorem} While computing the rank deals with exact decompositions, its hardness automatically makes any approximation algorithm with provable multiplicative guarantees unlikely to exist, as the following corollary shows. \begin{corollary} \label{corollary:approxnphard} It is \NP-hard to approximate Problem~\ref{problem:mdecomp} to within any polynomially computable factor. \end{corollary} \begin{proof} Any algorithm that can approximate Problem~\ref{problem:mdecomp} to within a factor $\alpha$ must find a decomposition of error $\alpha\cdot 0 = 0$ if the input matrix has an exact max-times rank-$k$ decomposition. As this implies computing the max-times rank, by Theorem~\ref{thm:npcomplete} it is only possible if \Poly=\NP. \end{proof} \subsection{Sparsity of the factors} \label{sec:sparsity} It is often desirable to obtain sparse factor matrices when the original data is sparse as well, and the sparsity of the factors is frequently mentioned as one of the benefits of using NMF~\citep[see, e.g.][]{hoyer04non-negative}.
In general, however, the factors obtained by NMF might not be sparse, but if we restrict ourselves to \emph{dominated} decompositions, \citet{gillis10using} showed that the sparsity of the factors cannot be less than the sparsity of the original matrix. The proof of \citet{gillis10using} relies on anti-negativity, and hence it is easy to adapt to the max-times setting. Let the \emph{sparsity} of an \by{n}{m} matrix $\matr{A}$, $s(\matr{A})$, be defined as \begin{equation}\label{fracnonzero} s(\matr{A}) = \frac{nm - \nnz{\matr{A}}}{nm}\;, \end{equation} where $\nnz{\matr{A}}$ is the number of nonzero elements in $\matr{A}$. Now we have \begin{theorem}\label{thm:sparsity} Let matrices $\matr{B} \in \Region^{n \times k}$ and $\matr{C} \in \Region^{k \times m}$ be such that their max-times product is dominated by an \by{n}{m} matrix $\matr{A}$. Then the following estimate holds \begin{equation}\label{nzeros} s(\matr{B}) + s(\matr{C} ) \ge s(\matr{A}) \;. \end{equation} \end{theorem} \begin{proof} The proof follows that of \citet{gillis10using}. We first prove \eqref{nzeros} for $k = 1$. Let $\vec{b} \in \Region^n$ and $\vec{c} \in \Region^m$ be such that $\vec{b}_i \vec{c}^T_j \le \matr{A}_{ij}$ for all $1 \le i \le n$, $ 1 \le j \le m$. Since $(\vec{b}\vec{c}^T)_{ij}>0$ if and only if $\vec{b}_i > 0$ and $\vec{c}_j > 0$, we have \begin{equation}\label{bwnnonzero} \nnz{\vec{b}\vec{c}^T} = \nnz{\vec{b}}\,\nnz{\vec{c}}\;. \end{equation} By \eqref{fracnonzero} we have $\nnz{\vec{b}\vec{c}^T} = nm(1-s(\vec{b}\vec{c}^T))$, $\nnz{\vec{b}} = n(1-s(\vec{b}))$ and $\nnz{\vec{c}} = m(1-s(\vec{c}))$. Plugging these expressions into \eqref{bwnnonzero} we obtain $(1 - s(\vec{b}\vec{c}^T)) = (1-s(\vec{b}))(1-s(\vec{c}))$. Hence, the sparsity in a rank-$1$ dominated approximation of $\matr{A}$ satisfies \begin{equation}\label{intermediate1} s(\vec{b}) + s(\vec{c}) \ge s(\vec{b}\vec{c}^T)\;. \end{equation} From \eqref{intermediate1} and the fact that the number of nonzero elements in $\vec{b}\vec{c}^T$ is no greater than in $\matr{A}$, it follows that \begin{equation}\label{intermediate2} s(\vec{b}) + s(\vec{c}) \ge s(\matr{A}) \;. \end{equation} Now let $\matr{B} \in \Region^{n \times k}$ and $\matr{C} \in \Region^{k \times m}$ be such that $\matr{B}\maxprod \matr{C}$ is dominated by $\matr{A}$. Then $\matr{B}_{il} \matr{C}_{lj} \le \matr{A}_{ij}$ for all $i \in [n]$, $j \in [m]$, and $l \in [k]$, which means that for each $l\in [k]$, $\matr{B}^l\matr{C}_l\,$ is dominated by $\matr{A}$. To complete the proof, observe that $s(\matr{B}) = k^{-1} \sum_{l = 1}^k s(\matr{B}^l)$ and $s(\matr{C}) = k^{-1} \sum_{l=1}^k s(\matr{C}_l)$ and that for each $l$ estimate \eqref{intermediate2} holds. \end{proof} \subsection{Relation to other algebras} \label{sec:other_alg} Let us now study how the max-times algebra relates to other algebras, especially the standard, the Boolean, and the max-plus algebras. For the first two, we compare the ranks, and for the last, the reconstruction error. Let us start by considering the Boolean rank of a binary matrix. The \emph{Boolean (Schein or Barvinok) rank} is defined via the following decision problem: \begin{problem}[Boolean rank] Given a matrix $\matr{A}\in\B^{n\times m}$ and an integer $k$, are there matrices $\matr{B}\in\B^{n\times k}$ and $\matr{C}\in\B^{k\times m}$ such that $\matr{A} = \matr{B}\bprod\matr{C}$, where $\bprod$ is the \emph{Boolean matrix product}, \[ (\matr{B}\bprod\matr{C})_{ij} = \bigvee_{l=1}^k \matr{B}_{il}\matr{C}_{lj}\; .
\] \end{problem} \begin{lemma} \label{lemma:brank_vs_strank} If $\mA$ is a binary matrix, then its Boolean and subtropical ranks are the same. \end{lemma} \begin{proof} We will prove the claim by first showing that the Boolean rank of a binary matrix is no less than the subtropical rank, and then showing that it is no larger, either. For the first direction, let the Boolean rank of $\mA$ be $k$, and let $\mB$ and $\mC$ be binary matrices such that $\mB$ has $k$ columns and $\mA = \mB\bprod\mC$. It is easy to see that $\mB\bprod\mC = \mB\maxprod\mC$, and hence, the subtropical rank of $\mA$ is no more than $k$. For the second direction, we will actually show a slightly stronger claim: Let $\mA\in \Region^{n \times m}$ and let $\pattern(\mA)$ be its pattern. Then the Boolean rank of $\pattern(\mA)$ is never more than the subtropical rank of $\mA$. As $\pattern(\mA) = \mA$ for a binary $\mA$, the claim follows. To prove the claim, let $\mA\in\Region^{n\times m}$ have subtropical rank of $k$ and let $\mB\in\Region^{n\times k}$ and $\mC\in\Region^{k\times m}$ be such that $\mA = \mB\maxprod\mC$. Let $(i,j)$ be such that $\mA_{ij} = 0$. By definition, $\max_{l=1}^k \mB_{il}\mC_{lj} = 0$, and hence \begin{equation} \label{eq:pattern_vals:0} \max_{l=1}^k \pattern(\mB)_{il}\pattern(\mC)_{lj} = \bigvee_{l=1}^k \pattern(\mB)_{il}\pattern(\mC)_{lj} = 0\; . \end{equation} On the other hand, if $(i,j)$ is such that $\mA_{ij} > 0$, then there exists $l$ such that $\mB_{il}, \mC_{lj} > 0$ and consequently, \begin{equation} \label{eq:pattern_vals:1} \max_{l=1}^k \pattern(\mB)_{il}\pattern(\mC)_{lj} = \bigvee_{l=1}^k \pattern(\mB)_{il}\pattern(\mC)_{lj} = 1\; . \end{equation} Combining~\eqref{eq:pattern_vals:0} and \eqref{eq:pattern_vals:1} gives us \begin{equation} \label{eq:pattern_bprod} \pattern(\mA) = \pattern(\mB)\bprod\pattern(\mC)\; , \end{equation} showing that the Boolean rank of $\pattern(\mA)$ is at most $k$. \end{proof} Notice that Lemma~\ref{lemma:brank_vs_strank} also furnishes us with another proof of Theorem~\ref{thm:npcomplete}, as the computation of the Boolean rank is an \NP-complete problem \citep[see, e.g.][]{miettinen09matrix}. Notice also that while the Boolean rank of the pattern is never more than the subtropical rank of the original matrix, it can be much less. This is easy to see by considering a matrix with no zeroes: it can have arbitrarily large subtropical rank, but its pattern has Boolean rank 1. Unfortunately, the Boolean rank does not help us with effectively estimating the subtropical rank, as its computation is an \NP-hard problem. The standard rank is (relatively) easy to compute, but the standard rank and the max-times rank are incommensurable, that is, there are matrices that have smaller max-times rank than standard rank and others that have higher max-times rank than standard rank. Let us consider an example of the first kind, \[ \begin{pmatrix} 1 & 2 & 0 \\ 2 & 4 & 1 \\ 0 & 4 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \\ 0 & 2 \end{pmatrix} \maxprod \begin{pmatrix} 1 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}\; . \] As the decomposition shows, this matrix has max-times rank of $2$, while its normal rank is easily verified to be $3$. Indeed, it is easy to see that the complement of the \by{n}{n} identity matrix $\bar{\matr{I}}_n$, that is, the matrix that has $0$s at the diagonal and $1$s everywhere else, has max-times rank of $O(\log n)$ while its standard rank is $n$ (the result follows from similar results regarding the Boolean rank, see, e.g.
\citealp{miettinen09matrix}). As we have discussed earlier, max-plus and max-times algebras are isomorphic, and consequently for any matrix $\mA \in \mathbb{R}_+^{n\times m}$ its max-times rank agrees with the max-plus rank of the matrix $\log(\mA)$. Yet, the errors obtained in approximate decompositions do not have to (and usually will not) agree. In what follows we characterize the relationship between max-plus and max-times errors. We denote by $\tropalg$ the extended real line $\R \cup \lbrace -\infty\rbrace$. \begin{theorem}\label{thm:max_plus_bound} Let $\matr{A} \in \tropalg^{n \times m}$, $\matr{B} \in \tropalg^{n \times k}$ and $\matr{C} \in \tropalg^{k \times m}$. Let $M = \exp\{N\}$, where \[ N = \max_{\substack{i\in [n]\\ j \in [m]}} \Bigl\{ \max \bigl\{ \matr{A}_{ij}, \max_{1 \le d \le k} \{ \matr{B}_{id} + \matr{C}_{dj} \} \bigr\} \Bigr\} \; . \] If an error can be bounded in max-plus algebra as \begin{equation}\label{max_plus_bound} \norm{\matr{A} - \matr{B} \tropprod \matr{C}}_F^2 \le \lambda\; , \end{equation} then the following estimate holds with respect to the max-times algebra: \begin{equation}\label{max_times_bound} \norm{\exp\{\matr{A}\} - \exp\{\matr{B}\} \maxprod \exp\{\matr{C}\}}_F^2 \le M^2 \lambda\; . \end{equation} \end{theorem} \begin{proof} Let $\alpha_{ij} = \max_{d=1}^{ k} \lbrace \matr{B}_{id} + \matr{C}_{dj}\rbrace$. From \eqref{max_plus_bound} it follows that there exists a set of numbers $\lbrace\lambda_{ij} \ge 0 \setcond i \in [n], j \in [m] \rbrace$ such that for any $i, j$ we have $(\matr{A}_{ij} - \alpha_{ij})^2 \le \lambda_{ij}$ and $\sum_{ij} \lambda_{ij} = \lambda$. By the mean-value theorem, for every $i$ and $j$ we obtain \[ \abs{\exp\{\matr{A}_{ij}\} - \exp\{\alpha_{ij}\} } = \abs{ \matr{A}_{ij} - \alpha_{ij} } \exp\{ \alpha_{ij}^*\} \le \sqrt{\lambda_{ij}} \exp\{\alpha_{ij}^*\}\; , \] for some $\min\lbrace \matr{A}_{ij}, \alpha_{ij} \rbrace \le \alpha_{ij}^* \le \max\lbrace \matr{A}_{ij}, \alpha_{ij} \rbrace$. Hence, \[ \left(\exp\{\matr{A}_{ij}\} - \exp\{\alpha_{ij}\}\right)^2 \le \lambda_{ij} (\exp \{\max \lbrace \matr{A}_{ij}, \alpha_{ij} \rbrace\})^2 \; . \] The estimate for the max-times error now follows from the monotonicity of the exponent: \[ \begin{split} \norm{\exp\{\matr{A}\} - \exp\{\matr{B}\} \maxprod \exp\{\matr{C}\}}_F^2 &\le \sum_{ij} \left(\exp\{\alpha_{ij}^*\}\right)^2 \lambda_{ij} \\ &\le \sum_{ij} \left(\exp\{\max\lbrace \matr{A}_{ij}, \alpha_{ij} \rbrace\}\right)^2 \lambda_{ij} \le M^2 \lambda\; , \end{split} \] proving the claim. \end{proof} \subsection{Different subtropical matrix ranks} \label{sec:subtropical_ranks} The definition of the subtropical rank we use in this work is the so-called Schein (or Barvinok) rank (see Definition~\ref{def:mrank}). As in standard linear algebra, this is not the only possible way to define the (subtropical) rank. Here we will review a few other forms of the subtropical rank that allow us to bound the Schein/Barvinok rank of a matrix. Following the literature, we will present the definitions in this section over the tropical algebra. Recall that due to the isomorphism, these definitions transfer directly to the subtropical case. Unless otherwise mentioned, the definitions are by \citet{guillon2015ultimate}; we refer the readers interested in more details to their work.
We begin with the tropical equivalent of the subtropical Schein/Barvinok rank: \begin{definition} \label{def:schein_barvinok_rank} The \emph{tropical Schein/Barvinok rank} of a matrix $\mA\in\tropalg^{n\times m}$, denoted $\rankSB(\mA)$, is defined to be the least integer $k$ such that there exist matrices $\mB\in\tropalg^{n\times k}$ and $\mC\in\tropalg^{k\times m}$ for which $\mA = \mB\tropprod\mC$. \end{definition} Analogously to the standard case, we can also define the rank as the number of linearly independent rows or columns. The following definition of linear independence of a family of vectors in a tropical space is due to \citet{gondran1984linear}. \begin{definition} \label{def:gondran:minoux:linear} A set of vectors $\vec{x}_1, \dots, \vec{x}_k$ from $\tropalg^n$ is called \emph{linearly dependent} if there exist disjoint sets $I, J \subset \{1,\dots, k\}$ and scalars $\{\lambda_i\}_{i\in I \cup J}$, such that $\lambda_i \ne -\infty$ for all $i$ and \begin{equation} \label{lin_depend} \max_{i\in I} \{\lambda_i + \vec{x}_i\} = \max_{j\in J} \{\lambda_j + \vec{x}_j\} \;. \end{equation} Otherwise the vectors $\vec{x}_1, \dots, \vec{x}_k$ are called \emph{linearly independent}. \end{definition} This gives rise to the so-called \emph{Gondran--Minoux ranks}: \begin{definition} \label{def:gondran:minoux:rank} The \emph{Gondran--Minoux} row (column) rank of a matrix $\matr{A} \in \tropalg^{n\times m}$ is defined as the maximal $k$ such that $\matr{A}$ has $k$ independent rows (columns). They are denoted by $\rankGMr(\matr{A})$ and $\rankGMc(\matr{A})$ respectively. \end{definition} Another way to characterize the rank of the matrix is to consider the space its rows or columns can span. \begin{definition} \label{def:convex} A set $X \subset \tropalg^n$ is called \emph{tropically convex} if for any vectors $\vec{x}, \vec{y} \in X$ and scalars $\lambda, \mu \in \tropalg$, we have $\max\{\lambda + \vec{x}, \mu + \vec{y}\} \in X$. \end{definition} \begin{definition} \label{def:convex:hull} The \emph{convex hull} $H(\vec{x}_1, \dots, \vec{x}_k)$ of a finite set of vectors $\{\vec{x}_i \}_{i=1}^k \in \tropalg^n$ is defined as follows \[ H(\vec{x}_1, \dots, \vec{x}_k) = \left\{ \max_{i=1}^k \{\lambda_i + \vec{x}_i\} \setcond \lambda_i \in \tropalg \right\} \;. \] \end{definition} \begin{definition} \label{def:weak:dim} The \emph{weak dimension} of a finitely generated tropically convex subset of $\tropalg^n$ is the cardinality of its minimal generating set. \end{definition} We can define the rank of the matrix by looking at the weak dimension of the (tropically) convex hull that its rows or columns span. \begin{definition} \label{def:row:rank} The \emph{row rank} and the \emph{column rank} of a matrix $\matr{A} \in \tropalg^{n\times m}$ are defined as the weak dimensions of the convex hulls of the rows and the columns of $\matr{A}$ respectively. They are denoted by $\rankRW(\matr{A})$ and $\rankCL(\matr{A})$. \end{definition} None of the above definitions coincide \citep[see][]{akian2009linear}, unlike in the standard algebra. We can, however, have a partial ordering of the ranks: \begin{theorem} \label{thm:rank:relations} \citep{guillon2015ultimate, akian2009linear} Let $\matr{A} \in \tropalg^{n\times m}$. Then the following relations hold for the above definitions of the rank of $\matr{A}$: \begin{equation} \label{rank:bounds} \left.
\begin{aligned} &\rankGMr(\mA)\\ &\rankGMc(\mA) \end{aligned}\right\} \le \rankSB(\mA) \le \left\{\!\begin{aligned} &\rankRW(\mA)\\ &\rankCL(\mA) \end{aligned}\right . \;. \end{equation} \end{theorem} The row and column ranks of an \by{n}{n} tropical matrix can be computed in $O(n^3)$ time \citep{butkovivc2010max}, allowing us to bound the Schein/Barvinok rank from above. Unfortunately, no efficient algorithm for the Gondran--Minoux rank is known. On the other hand, \citet{guillon2015ultimate} presented what they called the \emph{ultimate tropical rank} that lower-bounds the Gondran--Minoux rank and can be computed in time $O(n^3)$. We can also check if a matrix has full Schein/Barvinok rank in time $O(n^3)$ \citep[see][]{butkovivc1985condition}, even though computing any other value of the rank is \NP-hard. These bounds, together with Lemma~\ref{lemma:brank_vs_strank}, yield the following corollary regarding the bounding of the \emph{Boolean rank} of a square matrix: \begin{corollary} \label{corollary:brank_bounds} Given an \by{n}{n} binary matrix $\mA$, its Boolean rank can be bounded from below, using the ultimate rank, and from above, using the tropical column and row ranks, in time $O(n^3)$. \end{corollary}
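The pattern identity \eqref{eq:pattern_bprod} at the heart of Lemma~\ref{lemma:brank_vs_strank} is also easy to check numerically; the following sketch (ours, purely illustrative) verifies it for random sparse nonnegative factors:
\begin{verbatim}
import numpy as np

def maxtimes_product(B, C):
    return np.max(B[:, :, None] * C[None, :, :], axis=1)

def boolean_product(P, Q):
    # OR over l of (P_il AND Q_lj), for binary P and Q.
    return (np.max(P[:, :, None] * Q[None, :, :], axis=1) > 0).astype(int)

def pattern(A):
    return (A > 0).astype(int)

rng = np.random.default_rng(0)
B = rng.random((6, 3)) * (rng.random((6, 3)) < 0.5)  # sparse, nonnegative
C = rng.random((3, 7)) * (rng.random((3, 7)) < 0.5)
A = maxtimes_product(B, C)
# The pattern of a max-times product equals the Boolean product of
# the factor patterns, cf. the proof of the lemma.
assert np.array_equal(pattern(A), boolean_product(pattern(B), pattern(C)))
\end{verbatim}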
\section{Introduction} \label{sec:intro} Active Galactic Nuclei (AGNs) are high-energy astronomical objects that emit non-thermal radiation across all frequency ranges, namely radio, infrared, visible light, ultraviolet, X-rays, and gamma-rays \citep{Begelman84,Hughes91,Burgarella93,Tsiganos96,Ferrari98}. Their central engines are believed to be accreting supermassive black holes with relativistic jets whose bulk Lorentz factors are $\sim 10$ \citep{Biretta99,Asada14,Boccardi16}. The jets show strong time variability on timescales from days to years \citep{Fossati98,Abdo09a,Abdo10a,Abdo10b,Abdo10c,Abdo11,Ackermann10,Chen13,Edelson13}. In extreme cases, blazars show bursts lasting only hours \citep{Ackermann16,Britto16}. This radiation is believed to be emitted by bunches of electrons in strongly relativistic motion. The AGN jet system is also believed to be a cosmic-ray accelerator \citep{Dermer09}. Although it has the potential to accelerate particles to the highest energies, $\sim 10^{20}$ eV, the physical mechanism of the particle acceleration and the location of the acceleration site are still not well understood. Many studies have been devoted to the diffusive shock acceleration model based on the conventional Fermi acceleration mechanism \citep{Fermi54}. In Fermi acceleration, charged particles interact with magnetized clouds and are deflected in random directions by their magnetic fields. Since head-on collisions, in which the particles gain energy, are more frequent than rear-end collisions, in which they lose energy, the particles statistically gain energy step by step and eventually reach energies high enough to become cosmic-ray particles. However, this Fermi acceleration mechanism has difficulty explaining the highest energy particles of $\sim 10^{20}$ eV, because of 1) the large number of scatterings necessary to reach the highest energies, 2) energy losses through synchrotron emission at the bendings associated with scatterings, and 3) the difficulty in the escape of particles which are initially magnetically confined in the acceleration domain (e.g., \citet{Kotera11}). On the other hand, \citet{Tajima79} proposed that particles can be accelerated by the wakefield induced by an intense laser pulse; see the review by \citet{Tajima17}. This long-lasting, energy-elevated state of the wakefield may be regarded as a Higgs state \citep{Higgs64} of the plasma. In particular, the ponderomotive force, which is proportional to the gradient of ${\bm{E}^2}$, works to accelerate the charged particles effectively, where $\bm{E}$ is the electric field of the electromagnetic wave. Acceleration of electrons towards the relativistic regime by the ponderomotive force has been confirmed by recent experiments with ultra-intense lasers \citep{Leemans06,Nakamura07}. Positron acceleration driven by electron wakefields has also been reported by \citet{Corde15}. \citet{Takahashi00,Chen02,Chang09} applied this mechanism to magnetowave-induced plasma wakefield acceleration of ultra-high-energy cosmic rays. Recently, \citet{Ebisuzaki14,Ebisuzaki14b} applied this wakefield acceleration theory to the relativistic jets launched from an accreting black hole. In such astrophysical contexts, going far beyond the laboratory scales of wakefields (see, for example, \citet{Tajima17}), the relativistic factor that characterizes the dynamics, $a_0=eE/ m_e \omega c$, becomes far greater than unity. This regime has not been achieved in the laboratory yet, though simulations have begun to peek into it.
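For concreteness, the strength parameter can be evaluated with a few lines of Python (the constants are standard SI values; the function names are ours, and the printed number is merely an illustrative consistency check against the values quoted below):
\begin{verbatim}
# SI constants
e_charge = 1.602176634e-19   # elementary charge [C]
m_e      = 9.1093837015e-31  # electron mass [kg]
c        = 2.99792458e8      # speed of light [m/s]

def a0(E, omega):
    """Dimensionless strength parameter a0 = e E / (m_e omega c)."""
    return e_charge * E / (m_e * omega * c)

def field_for_a0(a0_target, omega):
    """Electric field [V/m] required to reach a given a0 at omega."""
    return a0_target * m_e * omega * c / e_charge

# For the AGN parameters discussed below (a0 ~ 1e10 at
# omega ~ 2.6e-5 s^-1), the implied field is only ~4e2 V/m:
print(field_for_a0(1e10, 2.6e-5))
\end{verbatim}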
In this regime \citep{Ebisuzaki14,Ebisuzaki14b} the ponderomotive acceleration has advantages over the Fermi mechanism. On closer scrutiny, the wakefields are composed of two parts: the frontal bow part and the following stern (wake) part, which we may simply call the bow wake and the stern wake. In a high-$a_0$ simulation, \citet{Lau15} showed that the bow wake (which is driven directly by the ponderomotive force) is dominant over the stern wake. The advantages of this bow wake acceleration over the Fermi mechanism are: \begin{enumerate} \item The ponderomotive field provides an extremely high accelerating field (including the wakefield). \item It does not require particle bending, which would cause strong synchrotron radiation losses at extreme energies. \item The accelerating fields and particles move in the collinear direction at the same velocity, the speed of light, so that the acceleration has a built-in coherence called ``relativistic coherence'' \citep{Tajima2010}; in contrast, the Fermi acceleration mechanism, based on multiple scatterings, is intrinsically incoherent and stochastic. \item No escape problem \citep{Kotera11} exists. Particles can escape from the acceleration region since the accelerating fields naturally decay out. \end{enumerate} They found that protons can be accelerated even above ZeV $\sim 10^{22}$ eV in the bow wake of a burst of Alfv\'en waves emitted by an accretion disk around a black hole with the mass of $10^8 $ M$_{\odot}$. \citet{Ebisuzaki14} used three major assumptions based on the standard $\alpha$-disk model \citep{Shakura73}. \begin{itemize} \item {Assumption A}: the magnetic field energy ${\mathcal E}_B$ contained in an Alfv\'en wave burst is assumed to be: \begin{equation} {{\mathcal E}_B}=(B_D^2/4\pi)\pi(10R_s)^2Z_D=1.6\times 10^{48}(\dot{m}/0.1)(m/10^8)^2 \quad{\rm erg}, \label{eqn:EB} \end{equation} where $B_{D}$ is the magnetic field stored in the innermost regions of the accretion disk, $R_{\rm s}=2R_{\rm g}$ is the Schwarzschild radius of the black hole, $Z_{\rm D}$ is the thickness of the disk, $R_{\rm g}$ is the gravitational radius, $\dot{m}$ is the accretion rate normalized by the Eddington luminosity, and $m$ is the mass of the black hole in units of the solar mass. \item{Assumption B}: they assumed that the angular frequency $\omega_{\rm A}$ of the Alfv\'en wave corresponds to that excited by the magnetorotational instability (MRI \citep{Velikhov59,Chandrasekhar60,Balbus91,Matsumoto95}), which takes place in a magnetized accretion disk, in other words: \begin{equation} \label{growthrate} \omega_{\rm A}=2\pi {{c_{\rm A}}_D}/\lambda_{\rm A}\sim 2.6\times10^{-5}(m/10^8)^{-1}\quad {\rm Hz}, \end{equation} where $\lambda_{\rm A}$ is the wavelength of the Alfv\'en wave, and ${c_{\rm A}}$ is the Alfv\'en speed. \citet{Ebisuzaki14} showed that the Alfv\'en shock gives rise to an electromagnetic wave pulse with $\omega=\omega_{\rm A}$ along the propagation in the jets through mode conversion, as the density and magnetic fields in the jets decrease during the jet propagation. \item{Assumption C}: the recurrence rate $\nu_{\rm A}$ of the Alfv\'en bursts is evaluated as: \begin{equation} \label{recurrencerate} \nu_{\rm A}=\eta {{c_{\rm A}}_D}/Z_{\rm D}\quad {\rm Hz}, \end{equation} where $\eta$ is an episode-dependent parameter of the order of unity.
\end{itemize} They found that the non-dimensional strength parameter $a_0=e E / m_e \omega c$ is as high as $10^{10}$ for the case of $\dot{m}=0.1$ and $m=10^8$, where $e$ is the electric charge, $E$ is the strength of the electric field, $m_e$ is the electron mass, and $c$ is the speed of light. The ponderomotive force of these extremely relativistic waves collinearly accelerates the jet particles up to the maximum energy: \begin{equation} W_{\rm max}=2.9\times 10^{22}q(\Gamma/20) (\dot{m}/0.1)^{4/3}(m/10^8)^{2/3}\quad {\rm eV}, \end{equation} where $q$ is the charge of the particle and $\Gamma$ is the bulk Lorentz factor of the jet. A recent one-dimensional particle-in-cell (PIC) simulation shows the maximum energy gain via the ponderomotive force in the bow wake, and the maximum energy is almost proportional to $a_0^2$ \citep{Lau15}. Based on the above estimation, \citet{Ebisuzaki14} concluded that an accreting supermassive black hole is a ZeV ($10^{22}$ eV) linear accelerator. GRMHD simulations of accretion flows onto a black hole have been performed since the early works by \citet{Koide99,Koide00}. Improvements in the numerical methods for solving the GRMHD equations made it possible to follow the long-term dynamics of magnetized accretion flows \citep{DeVilliers03,Gammie03}. 2D axisymmetric and full 3D simulations have been performed to study the properties of the accretion disk, the Blandford-Znajek efficiency, the jet, and so on. \citet{McKinney04} studied the outward-going electromagnetic power through the event horizon, i.e., the Blandford-Znajek process, by 2D axisymmetric simulations. \citet{McKinney06} studied the long-term propagation of magnetized jets launched from the disk and black hole system. The properties of the accretion flows and outflows were studied in a series of papers \citep{DeVilliers03,Hirose04,DeVilliers05,Krolik05}. Long-term 3D GRMHD simulations have been performed by \citet{Narayan12,McKinney12}. \citet{Beckwith08} pointed out that the initial magnetic field topology strongly affects the results, especially the outflows; see also \citet{Narayan12,Penna13,Foucart16}. Recently, \citet{Ressler15,O'Riordan16,Chandra17,Shiokawa17} have tried to compare the results of GRMHD simulations with observational results. The major motivation of the present paper is to verify the assumptions of \citet{Ebisuzaki14} (Equations (\ref{growthrate}) and (\ref{recurrencerate})) by 3D GRMHD simulations of an accretion disk around a supermassive black hole. This paper is organized as follows. We describe our physical models and numerical details in \S~2. The results are shown in \S~\ref{results}. The application to ultra-high-energy cosmic-ray acceleration, blazars, and gravitational waves is discussed in \S~\ref{sec:discussion} and \S~5. \section{GENERAL RELATIVISTIC MAGNETOHYDRODYNAMIC SIMULATION METHOD} \label{sec:basic_eq} \subsection{Basic Equations} \label{sec:basic_eq2} We numerically solve the general relativistic magnetohydrodynamic equations, assuming a fixed metric around a black hole. Units in which $GM_{\rm BH}$ and $c$ are unity are adopted, where $G$ is the gravitational constant and $M_{\rm BH}$ is the mass of the central black hole. The scales of length and time are $R_{\rm g}=R_{\rm s}/2=GM_{\rm BH} c^{-2}$ and $GM_{\rm BH} c^{-3}$, respectively. The mass and energy are scale-free; the achieved mass accretion rate at the event horizon can be used to set their scale, for example.
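To translate these code units into physical units, one can use a small helper like the following (a sketch of our own; CGS constants rounded):
\begin{verbatim}
G     = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
c     = 2.998e10    # speed of light [cm/s]
M_sun = 1.989e33    # solar mass [g]

def length_unit_cm(m):
    """R_g = G M_BH / c^2 for a black hole of m solar masses."""
    return G * m * M_sun / c**2

def time_unit_s(m):
    """G M_BH / c^3 for a black hole of m solar masses."""
    return G * m * M_sun / c**3

# For m = 1e8 (the fiducial mass above):
# length_unit_cm(1e8) ~ 1.5e13 cm, time_unit_s(1e8) ~ 4.9e2 s.
\end{verbatim}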
The metric around a rotating black hole whose dimensionless spin parameter is $a$ can be described in the Boyer-Lindquist (BL) or the Kerr-Schild (KS) coordinates as follows. \label{model} The line element in the BL coordinates is \begin{eqnarray} \label{BLmetric} ds_{\rm BL}^2={g_{tt}}dt^2+g_{rr}dr^2+g_{\theta\theta}d\theta^2 +g_{\phi\phi}d\phi^2+2g_{t\phi}dtd\phi, \end{eqnarray} where $g_{tt}=-(1-2r/\Sigma)$, $g_{rr}=\Sigma/\Delta$, $g_{\theta\theta}=\Sigma$, $g_{\phi\phi}=A\sin^2\theta/\Sigma$, $g_{t\phi}=-2ra\sin^2\theta/\Sigma$, $\Sigma=(r^2+a^2\cos^2 \theta)$, $\Delta=r^2-2r+a^2$, and $A=(r^2+a^2)^2-a^2\Delta\sin^2\theta$. We follow the standard notation used in \citet{Misner73}, i.e., the metric tensor for Minkowski space is diag$(-1,1,1,1)$. The line element in the Kerr-Schild coordinates is \begin{eqnarray} \label{KSmetric} ds_{\rm KS}^2={g_{tt}}dt^2+g_{rr}dr^2+g_{\theta\theta}d\theta^2 +g_{\phi\phi}d\phi^2+2g_{tr}dtdr \nonumber \\ +2g_{t\phi}dtd\phi+2g_{r\phi}drd\phi, \end{eqnarray} where $g_{tt}=-(1-2r/\Sigma)$, $g_{rr}=1+2r/\Sigma$, $g_{\theta\theta}=\Sigma$, $g_{\phi\phi}=A\sin^2\theta/\Sigma$, $g_{tr}=2r/\Sigma$, $g_{t\phi}=-2ra\sin^2\theta/\Sigma$, and $g_{r\phi}=-a\sin^2\theta(1+2r/\Sigma)$. We also use the so-called modified Kerr-Schild (mKS) coordinates $(x_0,x_1,x_2,x_3)$ so that the numerical grids are fine near the event horizon and the equator. The transformation between the Kerr-Schild and the modified Kerr-Schild coordinates is described as $t=x_0$, $r=\exp{x_1}$, $\theta=\pi x_2 +{1\over 2}(1-h)\sin(2\pi x_2)$, and $\phi=x_3$, where $h$ is a parameter which controls how the grids are concentrated around the equator. We have performed calculations with three different resolutions of the polar and azimuthal angle grids. A uniform polar-angle grid, i.e., $h=1$, is used for the two lower resolution cases. The grid numbers are $N_1=124$, $N_2=124$, and $N_3=60$, and $N_1=124$, $N_2=252$, and $N_3=28$, which are uniformly spaced in both cases. In the remaining case we set $h=0.2$ so that the polar grid concentrates around the equator. The grid numbers are $N_1=124$, $N_2=252$, and $N_3=60$, which are uniformly spaced. Since the resolution of the polar grid with $h=0.2$ is about 10 and 5 times better at the equator than that with $h=1$ for the cases with $N_2=124$ and $N_2=252$, respectively, we can capture shorter wavelength and faster growing modes of MRI in the poloidal direction in the highest resolution case. The highest resolution is comparable to those used in recent 3D GRMHD simulations \citep{McKinney12,Penna10}. The computational domain covers from inside the event horizon to $r=30000 R_{\rm g}$, $[0.01\pi, 0.99\pi]$ in polar angle, and $[0, 2\pi]$ in azimuthal angle. Contravariant vectors in the Boyer-Lindquist and Kerr-Schild coordinates are related by $u^{t}_{KS}=u^t_{BL}+(2r/\Delta)u^{r}_{BL}$, $u^{r}_{KS}=u^r_{BL}$, $u^{\theta}_{KS}=u^{\theta}_{BL}$, and $u^{\phi}_{KS}=(a/\Delta)u^r_{BL}+u^{\phi}_{BL}$. Contravariant vectors in the Kerr-Schild and modified Kerr-Schild coordinates are related by $u^{t}_{KS}=u^0_{mKS}$, $u^{r}_{KS}=ru^1_{mKS}$, $u^{\theta}_{KS}=(\pi(1+(1-h)\cos(2\pi x_2)))u^2_{mKS}$, and $u^{\phi}_{KS}=u^3_{mKS}$.
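The grid concentration can be made explicit by differentiating the coordinate map; the sketch below (our own, for illustration) encodes the transformation and the polar Jacobian factor that follows from the definition of $\theta(x_2)$ above:
\begin{verbatim}
import numpy as np

def mks_to_ks(x1, x2, h=0.2):
    # r = exp(x1), theta = pi*x2 + 0.5*(1 - h)*sin(2*pi*x2)
    r = np.exp(x1)
    theta = np.pi * x2 + 0.5 * (1.0 - h) * np.sin(2.0 * np.pi * x2)
    return r, theta

def dtheta_dx2(x2, h=0.2):
    # d(theta)/d(x2) = pi * (1 + (1 - h) * cos(2*pi*x2)); at the
    # equator (x2 = 0.5) this equals pi*h, so for h < 1 the polar
    # cells are concentrated around the equator.
    return np.pi * (1.0 + (1.0 - h) * np.cos(2.0 * np.pi * x2))
\end{verbatim}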
Mass and energy-momentum conservation laws are, \begin{eqnarray} \label{mass_conservation} {1\over \sqrt{-g}}\partial_\mu (\sqrt{-g}\rho u^\mu)=0,\\ \label{enery_momentum_conservation} \partial_\mu (\sqrt{-g}T^\mu _\nu)=\sqrt{-g}T^\kappa_\lambda {\Gamma^\lambda} _{\nu\kappa}, \end{eqnarray} where $T^{\mu \nu}$ is the energy-momentum tensor, $\rho$ is the rest mass density, $u^\mu$ is the fluid 4-velocity, $g$ is the determinant of the metric tensor, i.e., $g_{BL}=g_{KS}=-\Sigma^2\sin^2\theta$, and $g_{mKS}=-\pi^2r^2(1+(1-h)\cos{2\pi x_2})^2\Sigma^2\sin^2\theta$, and $\Gamma ^k _{ij}$ is the Christoffel symbol, which is defined as $\Gamma ^k _{ij} =(1/2) g^{kl} (\partial g_{jl}/\partial x^i +\partial g_{li}/\partial x^j -\partial g_{ij}/\partial x^l)$. The energy-momentum tensor, which includes matter and electromagnetic parts, is defined as follows \begin{eqnarray} T^{\mu \nu}=T_{\rm MA}^{\mu \nu}+T_{\rm EM}^{\mu \nu},\\ T_{\rm MA}^{\mu \nu}=\rho h u^\mu u^\nu + p_{\rm th} g^{\mu \nu},\\ T_{\rm EM}^{\mu \nu}=F^{\mu\gamma}F^{\nu}_{\gamma} -{1\over 4}g^{\mu\nu}F^{\alpha\beta}F_{\alpha\beta}, \end{eqnarray} where $h(\equiv 1+U/\rho+p_{\rm th}/\rho)$ is the specific enthalpy, $U$ is the thermal energy density, $p_{\rm th}$ is the thermal pressure, and $F^{\mu\nu}$ is the Faraday tensor; a factor of $\sqrt{4\pi}$ is absorbed into the definition of $F^{\mu\nu}$. The dual of the Faraday tensor is \begin{eqnarray} \label{Faraday_tensor} ^{*}F^{\mu\nu}={1\over 2}e^{\mu\nu\alpha\beta}F_{\alpha\beta}, \end{eqnarray} where $e^{\mu\nu\alpha\beta}=\sqrt{-g}\epsilon^{\mu\nu\alpha\beta}$ and $\epsilon^{\mu\nu\alpha\beta}$ is the completely antisymmetric Levi-Civita symbol ($\epsilon^{0123}=-\epsilon_{0123}=-1$). The magnetic field observed by a normal observer is \begin{eqnarray} {\mathcal B}^{\mu}=-^{*}F^{\mu\nu}n_{\nu}=\alpha~ {^{*}F^{\mu t}}, \end{eqnarray} where $n_\mu=(-\alpha, 0,0,0)$ is the normal observer's four-velocity and $\alpha \equiv\sqrt{-1/g^{tt}}$ is the lapse. Note that the time component of ${\mathcal B}$ is zero, since ${\mathcal B}^t=\alpha~ {^{*}F^{tt}}=0$. Here we introduce another magnetic field which is used in \citet{Gammie03,Noble06,Nagataki09} as \begin{eqnarray} \label{mag} B^{\mu}={^{*}F^{\mu t}}={{\mathcal B}^{\mu} \over \alpha}. \end{eqnarray} The time component of $B^\mu$ is also zero. We also introduce the four magnetic field $b^{\mu}$, which is measured by an observer at rest in the fluid, \begin{eqnarray} b^{\mu}=-{^{*}F^{\mu\nu}}u_{\nu}. \end{eqnarray} $B^{i}$ and $b^{\mu}$ are related by \begin{eqnarray} b^{t}\equiv B^\mu u_{\mu},\\ b^{i}\equiv (B^i+u^i b^t)/u^t. \end{eqnarray} $b^\mu u_{\mu}=0$ is satisfied. By using this magnetic four-vector, the electromagnetic component of the energy-momentum tensor and the dual of the Faraday tensor can be written as \begin{eqnarray} T_{\rm EM}^{\mu \nu}=b^2 u^\mu u^\nu + p_{\rm b} g^{\mu \nu}-b^\mu b^\nu,\\ \label{dual_Faraday} ^{*}F^{\mu\nu}=b^{\mu}u^{\nu}-b^{\nu}u^{\mu} = {{\mathcal B}^{\mu}u^{\nu}-{\mathcal B}^{\nu}u^{\mu} \over \alpha u^{t}}= {B^{\mu}u^{\nu}-B^{\nu}u^{\mu} \over u^{t}}, \end{eqnarray} where the magnetic pressure is $p_{\rm b}=b^\mu b_\mu/2 =b^2/2$. The Maxwell equations are written as \begin{eqnarray} \label{Maxwell_Eq} {^{*}F^{\mu\nu}}_{;\nu}=0. \end{eqnarray} By using Eq.~(\ref{dual_Faraday}), these equations give \begin{eqnarray} \partial_i \left(\sqrt{-g} {B^i}\right)=0,\\ \partial_t \left(\sqrt{-g} {B^i}\right) +\partial_j \left(\sqrt{-g} (b^i u^j-b^j u^i)\right)=0.
\end{eqnarray} These are the no-monopole constraint equation and the time evolution equations of the spatial magnetic field, i.e., the induction equations, respectively. In order to close the equations, the ideal gas equation of state $p_{\rm th}=(\gamma-1)U$ is adopted, where $\gamma$ is the specific heat ratio, which is assumed to be constant ($\gamma=4/3$) \footnote{The calculation with $\gamma=5/3$ shows some minor differences compared to that with $\gamma=4/3$, as discussed by \citet{McKinney04,Mignone07}. Similar time variabilities in the inflows and outflows discussed below are observed for both values of the specific heat ratio in the equation of state.}. We ignore the self-gravity of the gas around the black hole and any radiative processes, assuming a radiatively inefficient accretion flow (RIAF) in the disk \citep{Narayan94}, although the effects of radiation have been discussed by \citet{Ryan17}. We numerically solve these equations with the GRMHD code developed by one of the authors \citep{Nagataki09,Nagataki11}. The magnetohydrodynamic equations are solved by using a shock-capturing method (the HLL method), applying 2nd-order interpolation for the reconstruction of physical quantities at the cell surfaces and 2nd-order time integration by using the TVD (total variation diminishing) Runge-Kutta method; see also \citet{Gammie03,Noble06}. The boundary conditions are zero-gradient for $x_1$ and periodic for $x_3$. \subsection{Initial Condition} We adopt the Fishbone-Moncrief solution as the initial condition for the hydrodynamic quantities, as adopted in recent GRMHD simulations \citep{McKinney04,McKinney06,McKinney12,Shiokawa12}. This solution describes a hydrostatic torus around a rotating black hole. The gravitational force by the central black hole, the centrifugal force, and the pressure gradient force balance each other. There are some free parameters to specify a solution. We assume the disk inner edge at the equator is at $r=6.0~R_{\rm g}$ and a constant specific angular momentum ($l^{*}\equiv |u^{t}u_{\phi}|=4.45$). The 4-velocity is first given in the Boyer-Lindquist coordinates, then transformed to the Kerr-Schild and modified Kerr-Schild ones. The dimensionless spin parameter is assumed to be $a=0.9$. The radii of the event horizon and the innermost stable circular orbit (ISCO) at the equator are $r_{\rm H}(a=0.9)\sim 1.4~R_{\rm g}$ and $r_{\rm ISCO}(a=0.9)\sim 2.32~R_{\rm g}$, respectively. The initial disk profile on the plane including the polar axis is shown in Fig.~\ref{initialmass}, which shows the rest mass density contours. The disk is geometrically thick and thus differs from the standard accretion disk \citep{Shakura73}, for which a geometrically thin disk is assumed. \begin{figure} \begin{center} \rotatebox{0}{\includegraphics[angle=0,scale=0.4]{fig01_2.eps}} \caption{Log-scaled rest mass density profile on the $x-z$ plane at $t=0$. \label{initialmass}} \end{center} \end{figure} As we have discussed in Sec.~\ref{sec:intro}, magnetic fields play an important role not only in the dynamics of the accretion flows but also in the dynamics of the outflows. We initially impose a weak magnetic field inside the disk as a seed. This weak magnetic field breaks the initial static situation and is expected to be amplified by winding and MRI. Here we introduce the four-vector potential $\bm A$ of the electromagnetic field. The Faraday tensor is defined by this vector potential as follows. \begin{eqnarray} F_{\mu\nu}=\partial_\mu A_{\nu}-\partial_\nu A_{\mu}.
\end{eqnarray} In this study we assume an initially closed poloidal magnetic field, i.e., the toroidal component of the initial vector potential is \begin{eqnarray} \label{mag_initial} A_\phi \propto \max ( \rho/\rho_{\max}-0.2, 0 ), \end{eqnarray} where $\rho_{\max}$ is the maximum mass density in the initial torus. The other spatial components are zero, i.e., $A_r=A_{\theta}=0$. The minimum plasma $\beta$, which is the ratio of the thermal pressure to the magnetic pressure, is 100 in the disk. Since the vector potential has only a toroidal component, a poloidal magnetic field is imposed. To break the axisymmetry, a random perturbation with a maximum amplitude of 5\% is imposed on the thermal pressure, i.e., the thermal pressure is $p_{\rm th}= p_0 (0.95+0.1C)$, where $p_0$ is the equilibrium thermal pressure derived from the Fishbone-Moncrief solution and $C$ is a random number in the range $0\le C\le 1$. This perturbation breaks the axisymmetry of the system and triggers the generation of non-axisymmetric modes. \section{EPISODIC ERUPTION OF DISKS AND JETS} \label{results} At first we discuss the results based on the highest resolution calculation. The resolution effects, i.e., the comparison with the results of the lower resolution calculations, are briefly discussed later. The main properties discussed below, such as the amplification and dissipation of the magnetic field inside the disk, the Alfv\'en wave emission from the disk, and their time variabilities, are common to all calculations with different resolutions, although the characteristic timescales differ from each other. \begin{figure*} \begin{center} \rotatebox{0}{\includegraphics[angle=270,scale=0.6]{fig02.eps}} \caption{The disk evolution, (a) time evolution of the strength of the toroidal (purple) and poloidal (green) magnetic fields at the equator averaged over $r_{\rm ISCO} \le r \le 10R_{\rm g}$ and $0 \le \phi \le 2\pi$, and (b) time evolution of the mass accretion rate ($\dot M$) at the event horizon ($r=1.4 R_{\rm g}$). \label{magden}} \end{center} \end{figure*} \subsection{B-field amplification and mass accretion} The magnetic field in the disk is amplified by the winding effect and MRI, as follows. The initially imposed poloidal magnetic field is stretched in the toroidal direction, generating toroidal components, since there is differential rotation inside the accretion disk. Although the initially imposed magnetic field is weak, i.e., the minimum plasma $\beta$ is 100 inside the disk, the strength of the magnetic field quickly increases by the winding effect \citep{Duez06} and MRI. Soon, stratified filaments parallel to the equatorial plane appear around the equator in the magnetic pressure contours. Fig.~\ref{magden}(a) shows the volume-averaged strength of the magnetic field $\langle(B^{i}B_i)^{1/2} \rangle$ at the equator, i.e., $(B^{i}B_i)^{1/2}$ averaged over $r_{\rm ISCO} \le r \le 10R_{\rm g}$, $\theta=\pi/2$, and $0 \le \phi \le 2\pi$. Since the magnetic field is stretched by the differential rotation in the disk, the toroidal component dominates over the poloidal component after $t=100~{\rm GM_{BH}}c^{-3}$, though both components show strong time variability.
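For reference, the diagnostic plotted in Fig.~\ref{magden}(a) amounts to the following reduction (a schematic sketch of our own; the array layout is hypothetical):
\begin{verbatim}
import numpy as np

def mean_B_strength(Bcon, gcov, r, r_in=2.32, r_out=10.0):
    """Average of sqrt(B^i B_i) over equatorial cells with
    r_in < r < r_out.  Bcon: (3, N) contravariant B^i at the
    equator; gcov: (3, 3, N) spatial metric there; r: (N,)."""
    B2 = np.einsum('in,ijn,jn->n', Bcon, gcov, Bcon)
    mask = (r > r_in) & (r < r_out)
    return np.sqrt(B2[mask]).mean()
\end{verbatim}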
Fig.~\ref{magden}(b) shows the mass accretion rate at the event horizon $\dot M(r_{\rm H}, t)$, where the mass accretion rate at a radius is defined as \begin{eqnarray} \dot M(r_{\rm H}, t)=-\int \sqrt{-g_{\rm KS}} \rho(r_{\rm H},t) u^{r}(r_{\rm H},t) dA_{\rm KS}\nonumber \\ =-\int \sqrt{-g_{\rm mKS}} \rho u^{1} dA_{\rm mKS}, \end{eqnarray} where $dA$ is the area element, for example, $dA_{\rm mKS}=\Delta x^2 \Delta x^3$, and the sign is chosen so that mass inflow corresponds to positive mass accretion. The mass accretion rate $\dot M(r_{\rm H}, t)$ also shows strong time variability and is synchronized with the magnetic field strength near the ISCO. The bottom two panels in Fig.~\ref{plasmabeta_TEST600} show the contours of $1/\beta$ at the equator at $t=7550~{\rm GM_{\rm BH}}c^{-3}$ and $t=7640~{\rm GM_{\rm BH}}c^{-3}$. Between these two snapshots, the state of the disk near the disk inner edge ($r \sim 6 R_{\rm g}$) transitioned from a high $\beta$ state ($\beta^{-1} \sim 10^{-2}$) to a low $\beta$ state ($\beta^{-1} \sim 1$). Since we sometimes observe that the plasma $\beta$ in the disk is more than 100, we use the term ``low'' $\beta$ state for a disk with a plasma $\beta$ of order unity, at which the disk is still gas pressure supported. It should be noted that a ``low $\beta$ state'' is defined as a magnetically supported disk with $\beta \lesssim 1$ in, for example, \citet{Mineshige95}. The characteristics of the two states presented in \citet{Mineshige95} are listed at the top of Fig.~\ref{plasmabeta_TEST600}. The middle two panels in Fig.~\ref{plasmabeta_TEST600} show the toroidal magnetic field lines of two disks from the MHD simulations by \citet{Tajima87}. In the low $\beta$ state (right panels in Fig.~\ref{plasmabeta_TEST600}) the toroidal magnetic field is stretched to the limit and magnetic field energy is stored. Since some field lines are almost anti-parallel and very close to each other, reconnection will eventually happen. After the magnetic field energy is dissipated via reconnection, the system returns to the high $\beta$ state (left panels in Fig.~\ref{plasmabeta_TEST600}). \begin{figure*} \begin{tabular}{c|c|c} Parameter & High $\beta$ disk & Low $\beta$ disk\\ \hline $\beta\equiv$ $P_{\rm gas}/P_{\rm mag}$ & $\beta>1$ ($P_{\rm gas}$ supported) & $\beta\lesssim 1$ ($P_{\rm mag}$ supported) \\ Configuration & Optically thick disk & Optically thin disk \\ & (cooling-dominated) & (advection-dominated) \\ & + corona & consisting of blobs \\ Dissipation of magnetic fields & Escape via buoyancy & Reconnection\\ & and reconnection& \\ Dissipation of energy & Continuous & Sporadic \\ Spectrum & Soft + hard tail & Hard power law \\ Fluctuations & Small & Large \\ \hline & & \\ & \includegraphics[scale=0.33]{fig03_a.eps} & \includegraphics[scale=0.33]{fig03_b.eps} \\ & \includegraphics[scale=0.25]{fig03_c.eps} & \includegraphics[scale=0.25]{fig03_d.eps} \end{tabular} \caption{Table: Properties of high and low $\beta$ states of the disks taken from \citet{Mineshige95} (\copyright AAS. Reproduced with permission). Middle panels: Toroidal magnetic field lines at the high $\beta$ state (left) and the low $\beta$ state (right) taken from \citet{Tajima87} (\copyright AAS. Reproduced with permission). Bottom panels: Inverse of plasma beta ($\beta^{-1}$) contours at the equator shown on logarithmic scales at $t=7550 {\rm GM_{\rm BH}}c^{-3}$ (high $\beta$ state, left) and at $t=7640 {\rm GM_{\rm BH}}c^{-3}$ (low $\beta$ state, right). The shadowed regions in the circles indicate the inside of the event horizon.
\label{plasmabeta_TEST600}} \end{figure*} Bar structures near the disk inner edge can be seen (bottom panels in Fig.~\ref{plasmabeta_TEST600}). Non-axisymmetric modes are excited, as shown in global hydrodynamic and magnetohydrodynamic simulations of accretion disks, for example \citet{Tajima87,Machida03,Kiuchi11,McKinney12}. Figure~\ref{panels} shows the contours of $1/\beta$ (at the equator), mass density ($y-z$ plane), and magnetic pressure ($x-z$ plane) at the two different times shown in Fig.~\ref{plasmabeta_TEST600}. Along the polar axis a low-density, highly magnetized region appears. It corresponds to the Poynting flux dominated jet. Thus a baryonless, highly magnetized jet is formed along the polar axis. In this region the Alfv\'en speed is almost the speed of light, $\sim c$. A disk wind blows between the jet and the accretion disk. Filamentary structures excited by MRI can be seen in the magnetic pressure contours. The thickness of the filaments is $\sim 0.1 R_{\rm g}$, as shown in Fig.~\ref{panels}. \begin{figure*} \begin{center} \includegraphics[angle=0,scale=0.4]{fig04_a.eps} \includegraphics[angle=0,scale=0.4]{fig04_b.eps} \caption{Contours of the inverse of the plasma $\beta$ (at the equator), mass density ($y-z$ plane), and magnetic pressure ($x-z$ plane) at the two different times shown in Fig.~\ref{plasmabeta_TEST600}. The domain shows $-80 {\rm GM_{BH}} c^{-2}< x <0$, $-80 {\rm GM_{BH}} c^{-2}< y <0$, and $0< z <70 {\rm GM_{BH}} c^{-2}$. \label{panels}} \end{center} \end{figure*} Both the averaged strength of the magnetic field near the ISCO at the equator and the mass accretion rate at the event horizon show synchronous time variability. This is because the mass accretion rate at the event horizon is strongly affected by the activity of the magnetic field amplification near the disk inner edge. The magnetic field amplification via MRI enhances the specific angular momentum transfer inside the accretion disk, resulting in an increase of the accretion rate at the event horizon. This means that the amplification of magnetic fields acts as a viscosity, which is introduced as the $\alpha$-viscosity in \citet{Shakura73}. Typical increase timescales are $20-60 ~GM_{\rm BH} c^{-3}$. While the magnetic field is amplified, the mass accretion rate at the event horizon rapidly increases. As we will show later, the outflow properties also show intense time variability, which is strongly related to particle acceleration via wakefield acceleration. The mass accretion rate repeatedly shows sharp rises followed by gradual declines. The rising timescale for the quick increase of the mass accretion rate corresponds to the growth timescale of MRI \begin{eqnarray} \tau_1 \sim f_{\rm MRI}{\Omega_{\rm MRI}}^{-1}, \end{eqnarray} where $f_{\rm MRI}$ is of order unity, $\Omega_{\rm MRI}$ is the growth rate of MRI, $\Omega_{\rm MRI}=3\Omega_{\rm K}/4$ for the fastest growing mode, and $\Omega_{\rm K}(r)=r^{-3/2}$ is the Newtonian Keplerian angular velocity. The timescale for the fastest growing MRI mode is \begin{eqnarray} \tau_1 \sim 4.7 f_{\rm MRI} \left({r\over r_{\rm ISCO}}\right)^{3/2}[{\rm GM}_{\rm BH} c^{-3}]. \end{eqnarray} This timescale at around $r= 6-8 ~R_{\rm g}$ is almost the same as the timescale of the increase of the strength of the magnetic field in the disk.
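This estimate is easy to evaluate; the short sketch below (ours) reproduces the quoted range for $f_{\rm MRI}$ of order unity:
\begin{verbatim}
R_ISCO = 2.32  # ISCO radius for a = 0.9, in units of R_g

def mri_growth_time(r, f_mri=1.0):
    # tau_1 ~ f_MRI / Omega_MRI with Omega_MRI = (3/4) r**-1.5, i.e.
    # tau_1 ~ 4.7 f_MRI (r / R_ISCO)**1.5 in units of GM_BH c^-3.
    return 4.7 * f_mri * (r / R_ISCO)**1.5

for r in (6.0, 8.0):
    print(r, mri_growth_time(r))  # ~20 and ~30 GM_BH c^-3
\end{verbatim}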
According to the linear analysis of the MRI in Newtonian MHD, the fastest growing mode in the $z$ direction (parallel to the polar axis) in a Keplerian accretion disk satisfies $k_z {{c_{\rm A}}_z}=\sqrt{15/16}\Omega_{\rm K}\sim \Omega_{\rm K}$, where $k_z$ is the wavenumber and ${{c_{\rm A}}_z}$ is the $z$ component of the Alfv\'en speed. The volume averaged Alfv\'en speed $\langle {c_{\rm A}}_{z}\rangle \sim \langle \sqrt{b^{\theta} b_{\theta}/(\rho h +b^2)} \rangle$ near the ISCO ($r_{\rm ISCO}<r<10 ~R_{\rm g}$) at the equator is typically $3\times 10^{-3}~c$. Thus the wavelength of this mode is $\lambda=2\pi/k_z\sim 2\pi \langle {c_{\rm A}}_{z}\rangle /\Omega_{\rm K}\sim 0.022 (r/r_{\rm ISCO})^{1.5} R_{\rm g}$, while the grid size near the ISCO and at the equator is $r\Delta \theta=0.0057(r/r_{\rm ISCO})R_{\rm g}$ for the higher resolution case. Our simulation shows that the typical rising timescale of the poloidal magnetic field amplification is $\sim 30 {\rm GM_{\rm BH}} c^{-3}$. The corresponding wavelength of the MRI is estimated to be $\sim 0.14R_{\rm g}$ at $r=8 R_{\rm g}$. This structure is well resolved by more than $8$ grid points and is consistent with the thickness of the filamentary structures near the equator shown in the magnetic pressure panel in Fig.~\ref{panels}. The episodic period of this quick increase of the strength of the poloidal magnetic field, i.e., the peak-to-peak interval, is $\tau_2 \sim 100-400{\rm GM_{\rm BH}} c^{-3}$, which is about 2-6 times longer than the Keplerian orbital period at $r=6 {R_{\rm g}}$ near the ISCO ($\sim 22 (r/r_{\rm ISCO}){\rm GM_{\rm BH}} c^{-3}$). This recurrence timescale is roughly consistent with local shearing box analyses \citep{Stone96,Suzuki09,Shi10,O'Neill11}, in which a repeat timescale of about 10 orbital periods at the radius of magnetic field amplification is observed. Along the polar axis funnel nozzles appear (Fig.~\ref{panels}). The outward-going electromagnetic luminosity and the opening angle of this jet are also time variable, as is the mass accretion rate. The radial velocity just above the black hole changes sign and becomes positive at typically $10\lesssim r \lesssim 20 R_{\rm g}$, i.e., at the stagnation surfaces. Figure~\ref{ene_horizon}(b) shows the radial electromagnetic luminosity calculated by area integration over the polar region only ($0\le\theta \le 20^{\circ}$) at the radius $r=15 {\rm R_g}$. The electromagnetic luminosity shows short time variability similar to that of the magnetic field amplification near the disk inner edge. The typical rising timescale of the flares is the same as the rising timescale of the magnetic field in the disk, i.e., $\bar \tau_1 \sim 30{\rm GM_{BH}} c^{-3}$, and the typical cycle of the flares is the same as the repeat cycle of the magnetic field amplification, $\bar \tau_2 \sim 100{\rm GM_{BH}} c^{-3}$. We have observed some active phases in the electromagnetic luminosity in the jet. In these active phases, the averaged radial electromagnetic flux increases and becomes about a few tens of percent of the averaged disk Alfv\'en flux at the equator, at around $t=1300, 3000, 4000,$ and $8200 {\rm GM_{BH}}c^{-3}$, as shown in Fig.~\ref{ene_horizon} (a). The disk Alfv\'en flux at the equator is evaluated as the average of the $z$ component of the Alfv\'en energy flux at the equator $\langle E_{\rm EM}/dV\rangle$ times half of the Alfv\'en speed $\langle {c_{\rm A}}_z\rangle/2$ inside the disk ($r_{\rm ISCO}<r<10 R_{\rm g}$). 
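These estimates can be organized into two small helper functions. The following is an illustrative sketch (not the analysis code), in which the measured $\langle {c_{\rm A}}_{z}\rangle$ and the radius are input parameters:
\begin{verbatim}
# Sketch: MRI wavelength estimate and grid-resolution check
# (c_az in units of c, radii in R_g, times in G M_BH c^-3).
import math

def lambda_mri(c_az, r):
    # wavelength of the fastest growing vertical MRI mode,
    # lambda = 2 pi c_Az / Omega_K with Omega_K = r^{-3/2}
    return 2.0 * math.pi * c_az * r**1.5

def cells_per_wavelength(c_az, r, r_isco=2.32):
    # equatorial grid cells per wavelength for the high-resolution
    # grid spacing r * dtheta = 0.0057 (r / r_isco) R_g
    return lambda_mri(c_az, r) / (0.0057 * (r / r_isco))
\end{verbatim}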
A large fraction of the emitted Alfv\'en waves goes into the jet when the level of the electromagnetic luminosity in the jet becomes a few tens of percent of the Alfv\'en flux in the disk. This is almost consistent with assumption A, as we discuss in the next section. \begin{figure*} \begin{center} \rotatebox{0}{\includegraphics[angle=270,scale=0.8]{fig05.eps}} \caption{ The jet activities. (a) Time evolution of the area-averaged radial electromagnetic flux at $r=15 {R_{\rm g}}$ and $0<\theta<20^{\circ}$ (purple), and of the averaged electromagnetic flux inside the disk, calculated as $(\langle E_{\rm EM}\rangle/\langle dV \rangle) \langle {c_{\rm A}}_{z}\rangle /2$ at the equator and at $r_{\rm ISCO}<r<10 {R_{\rm g}}$ (green). (b) Time evolution of the electromagnetic (Poynting) power in the jet at $r=15 R_g$, calculated by area integration of the electromagnetic flux at $0<\theta<20^{\circ}$. (c) Same as (b), but for the period from $t=4000 GM_{\rm BH} c^{-3}$ to $t=6000 GM_{\rm BH} c^{-3}$. Rising and repeat timescales of the flares are indicated in the figure. \label{ene_horizon}} \end{center} \end{figure*} Figure~\ref{butterfly} shows the time evolution and the vertical structure of the averaged toroidal magnetic field ($\langle(B^{\phi}B_{\phi})^{1/2} \rangle$), as shown in \citet{Shi10,Machida13}, i.e., the butterfly diagram. The average is taken over $0\leq \phi \leq 2\pi$ and $3R_{\rm g}\leq R\leq 3.2R_{\rm g}$, where $R$ is the distance from the polar axis. Although initially no magnetic field is imposed at $R=3R_{\rm g}$, the magnetic field is soon transported to this site with the accreting gas due to MRI growth and angular momentum exchange. After that, the magnetic fields quasi-periodically rise to the north and descend to the south from above and below the equator due to magnetic buoyancy, i.e., the Parker instability \citep{Parker66}, as shown by \citet{Suzuki09,Shi10,Machida13}. Each episode corresponds to one cycle of the disk state transition. The magnetic field sometimes changes its sign, although this happens much less often than observed in \citet{Shi10}, who performed high resolution simulations in a local shearing box. Around the equator the magnetic fields rise with a speed $\sim 10^{-3}~c$, which corresponds to the averaged Alfv\'en speed of the $z$ component (${{c_{\rm A}}_z}$) there. Strong magnetic fields sometimes escape upward or downward from the equator to the outside of the disk. The appearance of a flare in the Poynting luminosity in the jet (Fig.~\ref{ene_horizon} (b) and (c)) corresponds to this strong magnetic field escape from the disk. \begin{figure*} \begin{center} \rotatebox{270} {\includegraphics[angle=0,scale=0.7] {fig06.eps}} \caption{ The time evolution and vertical structure of the averaged toroidal magnetic field at $0\leq \phi \leq 2\pi$ and $3R_{\rm g}\leq R\leq 3.2R_{\rm g}$, where $R$ is the distance from the polar axis. } \label{butterfly} \end{center} \end{figure*} The magnetic field lines in the jets are connected not with the disks but with the middle and high latitudes of the central black hole. The outward-going electromagnetic luminosity from the middle and high latitudes of the central black hole is low compared with that in the jet. The Alfv\'en waves emitted from the disks do not directly go into the jets as assumed in \citet{Ebisuzaki14}. As shown above, the time variability of the Poynting flux in the jet is the same as that of the magnetic field strength in the disk, and the two show a strong correlation. 
The amplification of the Poynting flux above the black hole and the disks is driven by the Alfv\'en fluxes from the disk. Another possibility is that blobs falling onto the black hole interact with magnetic fields which are connected with those in the jets. \subsection{Resolution effect} In this subsection we discuss the resolution effect. We have performed calculations with three different grid types in polar and azimuthal angle, as described in Sec.~\ref{sec:basic_eq2}. The highest resolution case, for which the results are shown above, uses non-uniform grids in polar angle that concentrate around the equator, resulting in a polar-angle resolution around the equator about 5 or 10 times better than in the other two cases, in which uniform polar-angle grids are adopted with about half or the same number of polar grid points. In all cases we have observed the properties discussed above, such as time variable magnetic field amplification in the disk, disk state transitions between the low and high plasma $\beta$ states, time variable mass accretion onto the black hole, and a Poynting flux dominated jet with flares. The timescale of the fastest growing mode ($30 {\rm GM_{BH}}c^{-3}$) is observed in the amplification of magnetic fields in the disk for the highest resolution case, whereas longer timescales (typically $50 {\rm GM_{BH}}c^{-3}$ for the second highest resolution case and $80 {\rm GM_{BH}}c^{-3}$ or longer for the lowest resolution case) are observed otherwise. The thinnest and most numerous filaments in the magnetic pressure contours around the equator are observed in the highest resolution case. These results mean that longer wavelength MRI modes are observed in the lower resolution cases. The recurrence timescale in the higher resolution case is also shorter than in the lower resolution cases. \section{Particle Acceleration} \label{sec:discussion} As shown in Fig.~\ref{ene_horizon} (b), flares in the electromagnetic power in the jet are observed, where the Alfv\'en speed is almost the speed of light because of the low mass density. Large amplitude Alfv\'en waves become electromagnetic waves by mode conversion of strongly relativistic waves \citep{Daniel97,Daniel98,Ebisuzaki14}. The interaction of the electromagnetic waves with the plasma can result in the acceleration of charged particles by the ponderomotive force, i.e., wakefield acceleration \citep{Tajima79}. The key to efficient wakefield acceleration is the Lorentz invariant dimensionless strength parameter of the wave \citep{Esarey09}, \begin{eqnarray} a_0={eE \over m_e \omega c}. \end{eqnarray} The oscillation velocity of the charged particles in the wave electric field approaches the speed of light when $a_0 \sim 1$. If the strength parameter $a_0$ greatly exceeds unity, the ponderomotive force accelerates the charged particles to the relativistic regime along the wave propagation direction. \subsection{Comparison with \citet{Ebisuzaki14}} In order to evaluate $a_0$, \citet{Ebisuzaki14} used three assumptions A, B and C. Based on the results of the numerical simulation, we now examine these three assumptions. First, assumption A states that the Alfv\'en flux in the jet is equal to that in the disk. As shown in Fig.~\ref{ene_horizon}(a), the electromagnetic flux in the jet becomes a few tens of percent of the Alfv\'en flux in the accretion disk during some active phases of the electromagnetic luminosity in the jet. Thus, at these epochs, most of the Alfv\'en waves emitted from the disk via the Alfv\'en burst go into the jet, in accordance with assumption A of \citet{Ebisuzaki14}, 
in which all Alfv\'en waves are assumed to go into the jet. Second, \citet{Ebisuzaki14} assumed that the magnetic field amplification occurs at $R=10 {\rm R_s}=20 {\rm R_g}$ for the standard disk model \citep{Shakura73}. In \citet{Ebisuzaki14} the timescale of the magnetic field amplification ($\tau_1$) and the frequency of the Alfv\'en wave are determined by the MRI growth rate (Eq.~(\ref{growthrate})). Although they evaluated it around $10~R_{\rm s}$, the magnetic field amplification occurs at any radius in the disk. Since the magnetic field amplification which affects the mass accretion rate and the time variability in the jet mainly occurs at smaller radii than assumed by \citet{Ebisuzaki14}, the timescales are shorter than theirs due to the faster rotation period. If we apply $R=6.4{\rm R_g}=3.2{\rm R_s}$ instead of $R=20{\rm R_g}=10{\rm R_s}$ in the \citet{Ebisuzaki14} model, the timescales are close to our numerical results, as shown in Table~\ref{comparisontable}. The magnetic field amplification at smaller radii may be due to the high spin of the black hole, i.e., $a=0.9$, for which both the event horizon and the radius of the ISCO are smaller than those for the non-spinning black hole case ($r_{\rm ISCO}(a=0)= 6 ~R_{\rm g}$). This timescale is well consistent with the rising timescales of the blazar flares observed for 3C454.3, which will be discussed in the next subsection. In other words, assumption B holds qualitatively, but Eq.~(\ref{growthrate}) must be revised to $\omega_A\sim 1.0\times10^{-4}(m/10^8)^{-1}{\rm Hz}$~(see also Table~\ref{comparisontable2}). Finally, \citet{Ebisuzaki14} estimated the repeat timescale as the crossing time of the Alfv\'en wave in the disk, i.e., $Z_{\rm D}/ {c_{\rm A}}_z$ (assumption C) for the standard disk model \citep{Shakura73}. When we apply the radius $R=6.4{\rm R_g}=3.2{\rm R_s}$ instead of $R=20{\rm R_g}=10{\rm R_s}$ in Eq.~(\ref{recurrencerate}), we obtain $Z_{\rm D}/ {c_{\rm A}}_z=354 GM_{\rm BH} c^{-3}$, ignoring the factor $\eta$, which is of order unity. This value is close to our typical $\bar \tau_2=100 GM_{\rm BH} c^{-3}$. The case of $R=20{\rm R_g}=10{\rm R_s}$ is also listed in Table~\ref{comparisontable}. We can re-evaluate the strength parameter $a_0$ as \begin{eqnarray} a_0={eE\over m_e \omega c}=1.4\times 10^{11} \left({M_{\rm BH}\over 10^8 M_{\odot}}\right)^{1/2} \left({\dot M_{\rm av}c^2 \over 0.1 L_{\rm Ed}}\right)^{1/2}. \end{eqnarray} Here the electric field is estimated as $E=({\langle{c_{\rm A}}_{\rm D}\rangle/ c})^{1/2}\langle B_{\rm D}\rangle $, and the angular frequency of the pulsed electromagnetic wave originating from the Alfv\'en shock (see \citet{Ebisuzaki14}) is ${\omega}_{\rm D}=2\pi {c_{\rm A}}_{\rm D}/{\lambda_{\rm A}}_{\rm D}$, where ${\lambda_{\rm A}}_{\rm D}$ is assumed to be 0.14 $R_{\rm g}$. We use the values at $t=7900GM_{\rm BH} c^{-3}$, and the time averaged mass accretion rate at the event horizon over $7750 GM_{\rm BH} c^{-3}\leq t \leq 8300 GM_{\rm BH} c^{-3}$, $\dot M_{\rm av}=55.8$, is used as a normalization corresponding to 10\% of the Eddington luminosity for $M_{\rm BH}=10^8$ solar masses. The estimated strength parameter greatly exceeds unity, as discussed in \citet{Ebisuzaki14}. This suggests that efficient particle acceleration via the wakefield mechanism can occur in the jet. Since both electrons and protons are accelerated in the large amplitude electromagnetic flares via ponderomotive forces and these particles move with the waves, high energy non-thermal electrons are concentrated at these waves. 
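This scaling is straightforward to evaluate; the following minimal sketch (with illustrative masses and accretion powers) reproduces the quoted magnitude:
\begin{verbatim}
# Sketch: strength parameter a_0 from the scaling above, where
# m8 = M_BH / (1e8 M_sun) and p = (Mdot_av c^2) / (0.1 L_Edd).
def a0(m8, p):
    return 1.4e11 * m8**0.5 * p**0.5

print(a0(1.0, 1.0))  # 1.4e11 for M_BH = 1e8 M_sun at 10% Eddington
print(a0(5.0, 1.0))  # ~3.1e11 for a 3C454.3-like mass of 5e8 M_sun
\end{verbatim}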
If we apply the radius of the magnetic field amplification $R=6.4{\rm R_g}=3.2{\rm R_s}$ instead of $R=20{\rm R_g}=10{\rm R_s}$ in Eq.~(\ref{recurrencerate}), the estimated values in \citet{Ebisuzaki14}, such as the angular frequency of the Alfv\'en wave in the jet (${\omega_{J}}$), the recurrence rate of the burst ($1/\nu _{\rm A}$), the acceleration time $D_3/c$, the maximum energy of the accelerated particles $W_{\rm max}$, the total accretion power $L_{\rm tot}$, the Alfv\'en luminosity $L_{\rm A}$ in the jet, the gamma-ray luminosity $L_{\rm \gamma}$, and the UHECR luminosity $L_{\rm UHECR}$, are revised. Table \ref{comparisontable2} is a revised version of Table 1 in \citet{Ebisuzaki14}. Figure~\ref{MBH_Ltot} is the revised version of Fig.~4 in \citet{Ebisuzaki14}, which shows the maximum energy of the accelerated particles $W_{\rm max}$ as a function of the mass of the central black hole and the accretion power $L_{\rm tot}=\dot M c^2$. Since we do not consider any radiative processes, i.e., RIAF ($L_{\rm tot} \le 10\%$ Eddington luminosity), the gray shadowed region shows the objects that can act as UHECR ($W_{\rm max}\ge 10^{20}$ eV) accelerators. \begin{table*} \caption{Comparison of the timescales (in units of $GM_{\rm BH} c^{-3}$) of flare rises and flare repeat cycles between our numerical results, blazar observations, and \citet{Ebisuzaki14} (ET14). Black hole masses $M_{\rm BH(3C454.3)}=5\times 10^8M_{\odot}$ \citep{Bonnoli11} and $M_{\rm BH(AO0235+164)}=5.85 \times 10^8 M_{\odot}$ \citep{Liu06} are used. For the \citet{Ebisuzaki14} model, two different radii ($R=6.4{\rm R_g}$ and $R=20{\rm R_g}$ (their original assumption)) at which the magnetic field amplification occurs are considered. \label{comparisontable}} \begin{tabular}{cc|cc|cc} & our results & 3C454.3 & AO0235+164 & ET14 & ET14 \\ & & & & $R=6.4{\rm R_g}$ & $R=20{\rm R_g}$ \\ \hline \hline rising timescale of flares ($\bar \tau_1$) & 30 & $57^{a}$ & $325^{b}$ & 128 & $5.1\times10^2$ \\ repeat cycle of flares ($\bar \tau_2$) & 100 & $132^{a}$ & $433^{b}$ & 354 & $2.0\times 10^3$ \\ \end{tabular}\\ a.~\citet{Abdo11}\\ b.~\citet{Abdo10a} \end{table*} \begin{table} \begin{center} \caption{Revised version of Table 1 in \citet{Ebisuzaki14}, where ${\omega_A}$ is the angular frequency in the jet, $\nu_{\rm A}$ is the recurrence rate of the burst, $D_3$ is the acceleration length, $W_{\rm max}$ is the maximum energy of the accelerated particles, $q$ is the charge of the particle, $\Gamma$ is the bulk Lorentz factor of the jet, $L_{\rm tot}$ is the total accretion power, $L_{\rm A}$ is the Alfv\'en luminosity in the jet, $L_{\rm \gamma}$ is the gamma-ray luminosity, and $L_{\rm UHECR}$ is the UHECR luminosity. $R=6.4 R_{\rm g}$ instead of $R=20 R_{\rm g}$ is assumed as the radius where the magnetic field amplification occurs. 
\label{comparisontable2}} \begin{tabular}{lll} & Values & Units\\ \hline \hline $2\pi/{\omega_A}_{\rm J}$& $1.2\times 10^2 (\dot{m}/0.1)(m/10^8)$ & s \\ $1/\nu_{\rm A}\equiv \tau_2$ & $1.7\times 10^5 \eta^{-1}(m/10^8)$& s\\ $D_3/c$ & $1.8\times 10^9(\dot{m}/0.1)^{5/3}(m/10^8)^{4/3}$ & s\\ $W_{\rm max}$ & $1.8\times 10^{23}q(\Gamma/20)(\dot{m}/0.1)^{4/3}(m/10^8)^{2/3}$ & eV \\ $L_{\rm tot}$ & $1.2\times 10^{45} (\dot{m}/0.1)(m/10^8)$ & erg s$^{-1}$\\ $L_{\rm A}$ & $1.2\times 10^{42} \eta(\dot{m}/0.1)(m/10^8)$ & erg s$^{-1}$\\ $L_{\rm \gamma}$ & $ 1.2\times 10^{41}(\eta\kappa/0.1) (\dot{m}/0.1)(m/10^8)$ & erg s$^{-1}$\\ $L_{\rm UHECR}$ & $1.2\times 10^{40}(\eta\kappa\zeta/0.01) (\dot{m}/0.1)(m/10^8)$ & erg s$^{-1}$\\ $L_{\rm UHECR}/L_{\rm tot}$ & $1.0\times 10^{-5}(\eta\kappa\zeta/0.01)$ & --\\ $L_{\rm UHECR}/L_{\rm \gamma}$ & $1.0\times 10^{-1}(\zeta/0.1)$ & --\\ \hline \end{tabular} \end{center} $\zeta=L_{\rm J}/L_{\rm tot}$, $\eta=\nu_A Z_{\rm D}/{c_{\rm A}}_{\rm D}$, $\kappa=E_{\rm CR}/E_{\rm A}$, and $\zeta=\ln (W_{\rm max}/10^{20}{\rm eV})/\ln (W_{\rm max}/W_{\rm min})$. \end{table} \begin{figure} \begin{center} \rotatebox{0}{\includegraphics[angle=270,scale=0.4]{fig07.eps}} \caption{Revised version of Fig.~4 in \citet{Ebisuzaki14}, applying the radius $R=6.4{\rm R_g}=3.2{\rm R_s}$ instead of $R=20{\rm R_g}=10{\rm R_s}$ as the magnetic field amplification site for the standard disk model \citep{Shakura73}. Solid lines represent the maximum energy of the accelerated particles, $W_{\rm max}=10^{18}, 10^{20}, 10^{22}$, and $10^{24}$ eV, via ponderomotive acceleration on the plane of the central black hole mass and the accretion power $L_{\rm tot}=\dot M c^2$, assuming a charge of the accelerated particle $q=1$ and a bulk Lorentz factor of the jet $\Gamma=20$. Dashed lines show accretion powers of 10\%, 0.1\% and 0.001\% of the Eddington luminosity. Since we do not consider any radiative processes, i.e., RIAF ($L_{\rm tot} \le 10\%$ Eddington luminosity), the gray shadowed region shows the objects that can act as UHECR ($W_{\rm max}\ge 10^{20}$ eV) accelerators in our model. \label{MBH_Ltot}} \end{center} \end{figure} \subsection{Gamma-rays} Blazars are a subclass of AGNs whose jets point close to our line of sight. A blazar jet is very bright due to the relativistic beaming effect, up to high energy gamma-ray bands. Blazars are observed across wavelengths from radio to TeV gamma rays. Short time variabilities and polarization are observed, including recent {\it AGILE} and {\it Fermi} observations in the high energy gamma-ray bands \citep{Abdo09a,Abdo10a,Abdo10b,Abdo10c,Ackermann10,Striani10,Abdo11,Bonnoli11, Ackermann12,Chen13,Ackermann16,Britto16}. These non-thermal emissions are usually explained by the internal shock model \citep{Rees78}, i.e., two-shell collisions \citep{Spada01,Kino04,Mimica10,Peer16}, in which a fast shell catches up with a slow shell, forming relativistic shocks. At the shocks, particle acceleration generating non-thermal particles is expected via the Fermi acceleration mechanism. Finally, non-thermal emission is produced via synchrotron emission and inverse Compton emission. Other gamma-ray emission mechanisms from accretion disks and related processes have been suggested by \citet{Holcomb91,Haswell92}. Our model can naturally explain the properties of the observed active gamma-ray flares, i.e., the spectrum and the timescales of the flares. 
For electrons, energy loss via synchrotron radiation causes a cutoff around the PeV regime \citep{Ebisuzaki14}, whereas heavier particles, such as protons and heavier nuclei, are accelerated up to the ultra high energy cosmic ray regime ($\sim 10^{20}$ eV) and beyond. The accelerated electrons emit radiation from radio to high energy gamma-rays via the synchrotron and inverse Compton emission mechanisms. The distribution of the accelerated non-thermal particles becomes a power law with an index of $\sim -2$ \citep{Mima91}, which is consistent with the observed blazar spectra with power law indices close to $-2$. The photon index becomes close to $-2$ when the gamma-ray light curve enters an active phase \citep{Abdo10a,Abdo11,Britto16}. This anti-correlation between the gamma-ray light curves and the photon index also supports our results. From our numerical simulations, the rising timescale of the electromagnetic flares in the jet is the same as the rising timescale of the magnetic field amplification in the disk, i.e., typically $\bar \tau_1 \sim 30 GM_{\rm BH} c^{-3}$, and the peak-to-peak timescales of the flares are the same as the repeat cycle of the magnetic field amplification in the disk, i.e., typically $\bar \tau_2 \sim 100 GM_{\rm BH} c^{-3}$. For comparison with the observed gamma-ray flares of blazars, the flare rising timescales and flare cycle timescales are normalized by $(1+z)GM_{\rm BH} c^{-3}$, where $z$ is the cosmological redshift of the object. The redshifts of the two objects are $z_{\rm 3C454.3}=0.86$ \citep{Lynds67} and $z_{\rm AO0235+164}=0.94$. From observations of the line widths of ${\rm H}\beta$ in the broad line region (BLR), the mass of the central black hole in 3C454.3 is estimated to lie between $5\times 10^8M_{\odot}$ \citep{Bonnoli11} and $4\times 10^9M_{\odot}$ \citep{Gu01}. In this paper we adopt $5\times 10^8 M_{\odot}$ as the mass of the central black hole in 3C454.3, since this estimate was obtained using the C$_{\rm IV}$ line, which suffers less contamination from the non-thermal continuum \citep{Bonnoli11}. The mass of the central black hole in AO0235+164 is derived to be $M_{\rm BH(AO0235+164)}=5.85 \times 10^8 M_{\odot}$ \citep{Liu06} from the line width of ${\rm H}\beta$ in the BLR. We adopt 7 days as the repeat time and 3 days as the rise time for 3C454.3, although flares with various timescales are observed from 3C454.3. Among them, the seven-day flare observed by \citet{Abdo11} is the most energetic. The estimated apparent isotropic gamma-ray energy of the seven-day flare is 4 or more times higher than those of the flares reported in \citet{Abdo10a,Britto16}. Sub-energetic and shorter timescale flares reported in \citet{Striani10,Ackermann12,Britto16} can be explained as the result of magnetic eruption via reconnection in smaller regions of the accretion flow. For AO0235+164 we use the flare reported in \citet{Abdo10a}, since it is the most energetic one in apparent isotropic energy compared with other flares of AO0235+164, for example \citet{Ackermann12}. The rising timescale is three weeks and the repeat timescale is four weeks. Table \ref{comparisontable} summarizes the comparison between our results, the theoretical model by \citet{Ebisuzaki14}, and the observations. Both timescales for 3C454.3 are in good agreement with our results. For AO0235+164 the timescales are longer than ours, which suggests that the magnetic field amplification may occur farther out, where the timescale of the MRI growth becomes longer. 
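The normalization used for Table~\ref{comparisontable} can be reproduced directly; the following is a minimal sketch using the black hole masses and redshifts quoted above:
\begin{verbatim}
# Sketch: observed flare timescales in units of G M_BH c^-3,
# corrected for cosmological time dilation, t_src = t_obs / (1 + z).
GMSUN_C3 = 4.925e-6  # G M_sun / c^3 in seconds

def in_geometric_units(t_obs_days, m_bh_msun, z):
    return t_obs_days * 86400.0 / (1.0 + z) / (m_bh_msun * GMSUN_C3)

# 3C454.3 (M_BH = 5e8 M_sun, z = 0.86): rise ~3 d, repeat ~7 d
print(in_geometric_units(3.0, 5e8, 0.86))      # ~57
print(in_geometric_units(7.0, 5e8, 0.86))      # ~132
# AO0235+164 (M_BH = 5.85e8 M_sun, z = 0.94): rise ~3 wk, repeat ~4 wk
print(in_geometric_units(21.0, 5.85e8, 0.94))  # ~325
print(in_geometric_units(28.0, 5.85e8, 0.94))  # ~433
\end{verbatim}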
\section{DISCUSSIONS and SUMMARY} \label{sec:conclusions} We have performed a 3D GRMHD simulation of accretion flows around a spinning black hole ($a=0.9$) in order to study AGN jets from the system of a supermassive black hole and its surrounding accretion disk as an ultra high energy cosmic ray accelerator via the wakefield acceleration mechanism. We start our simulation from a hydrostatic disk, i.e., the Fishbone-Moncrief solution, with a weak magnetic field and random perturbations in the thermal pressure which break the hydrostatic state and the axisymmetry. We follow the time evolution of the system until $8300 GM_{\rm BH} c^{-3}$. The initially imposed magnetic field is strongly amplified due to the differential rotation of the disk. A non-axisymmetric mode, i.e., a bar mode near the disk inner edge, grows. For the highest resolution case, the typical timescale of the magnetic field growth near the disk inner edge is $\bar\tau_1 = 30 GM_{\rm BH} c^{-3}$, which corresponds to the inverse of the growth rate of the MRI for nearly the fastest growing mode. For the lower resolution cases, the timescale of the magnetic field growth near the disk inner edge becomes longer, and the thickness of the filamentary structures in the magnetic pressure around the equator increases. The amplified magnetic field drops once and then grows again. The typical repeat timescale is $\bar\tau_2 = 100 GM_{\rm BH} c^{-3}$, which is consistent with the analyses by high resolution local shearing box simulations. The transition between the low $\beta$ state and the high $\beta$ state repeats. This short time variability in the growth of the poloidal magnetic field near the disk inner edge can also be seen in the mass accretion rate at the event horizon, which means that the mass accretion seen in our numerical simulation is triggered by the angular momentum transfer due to the growth of the magnetic fields. We find two types of outflows, as shown in Fig.~\ref{plasmabeta_TEST600}. One is the low density, magnetized, and collimated outflow, i.e., the jet. The other is the disk wind, a dense gas flow between the disk surface and the jet. In the jet, short time variabilities of the electromagnetic luminosity are observed. The timescales are similar to those seen in the mass accretion rate at the event horizon and in the poloidal Alfv\'en energy flux in the disk near the ISCO, i.e., the typical rising timescales of the flares and the typical repeat cycle are the same as the rising timescale and the repeat cycle of the magnetic field amplification, $\bar \tau_1 =30{\rm GM_{BH}}c^{-3}$ and $\bar \tau_2 =100{\rm GM_{BH}}c^{-3}$, respectively. Thus short pulses of relativistic Alfv\'en waves are emitted from the accretion disk when a part of the stored magnetic field energy is released. Since the strength parameter of these waves is extremely high, $\sim 10^{11}$, for a $10^8$ solar mass central black hole and a 10\% Eddington accretion rate, the wakefield acceleration proposed by \citet{Tajima79} can operate in the jet after mode conversion from Alfv\'en waves to electromagnetic waves. This mechanism has some advantages over the Fermi acceleration mechanism \citep{Fermi54}. When we apply this mechanism to cosmic ray acceleration, the highest cosmic ray energy reaches $10^{22}$ eV, which is high enough to explain the ultra high energy cosmic rays. We observe that the magnetic field amplification occurs at smaller radii than assumed by \citet{Ebisuzaki14}, i.e., $R=20{\rm R_g}$. 
If we apply the model by \citet{Ebisuzaki14} assuming that the magnetic field amplification occurs much deeper inside the disk, the two timescales are consistent with our numerical results. Both protons and electrons are accelerated via the ponderomotive force in the jet. High energy gamma-ray emission is observed if we see the jet almost on-axis, i.e., in blazars. The observed gamma-ray flare timescales, such as the rising timescales of the flares and the repeat cycle of the flares for 3C454.3 observed by the {\it Fermi} Gamma-ray Observatory \citep{Abdo10a}, are well explained by our bow wake acceleration model. The Telescope Array experiment reported a hotspot for cosmic rays with energies higher than 57 EeV \citep{Abbasi14}. This observation supports the idea that AGN jets are the origin of such cosmic rays. Lastly, the consequences of the present work include the following implication for gravitational wave observations. Since a non-axisymmetric mode grows in the disk, mass accretion onto the black hole causes the emission of gravitational waves \citep{Kiuchi11}. We estimate the signal levels of these gravitational waves for the black hole and accretion disk system. The dimensionless amplitude of the gravitational wave at the coalescing phase can be estimated following \citet{Matsubayashi04}: \begin{eqnarray} h_{\rm coal}=5.45\times 10^{-21} \left( \epsilon_{\rm GW} \over 0.01 \right) \left( 4 {\rm Gpc} \over R \right) \left(\mu \over \sqrt {2} \times 10^3 M_{\odot} \right), \end{eqnarray} where $\epsilon_{\rm GW}$ is the efficiency, which we here assume to be $1\%$, and $\mu$ is the reduced mass (in units of the solar mass) of the black hole and the blob, approximately equal to the mass of the blob. The mass of the blob is estimated by $\dot M \bar \tau_1 $. Figure~\ref{GW}~(a) shows the time evolution of the estimated amplitude of the gravitational waves from our mass accretion rate for the blazar 3C454.3. The mass accretion history from $t=6500 GM_{\rm BH} c^{-3}$ to $t=8300 GM_{\rm BH} c^{-3}$ is used for this plot. \begin{figure} \begin{center} \rotatebox{0}{\includegraphics[angle=270,scale=0.28]{fig08_a.eps}} \rotatebox{0}{\includegraphics[angle=270,scale=0.28]{fig08_b.eps}} \caption{Top: Time evolution of the amplitude of the gravitational waves for 3C454.3 ($z=0.86$) derived from the mass accretion rate shown in Fig.~\ref{magden}. Bottom: Estimated gravitational wave signal when gas blobs accrete onto the black holes. Sensitivity curves of a ground-based gravitational wave detector (KAGRA) and proposed space-based gravitational wave detectors (eLISA, preDECIGO, DECIGO, and BBO) are also presented. \label{GW}} \end{center} \end{figure} Figure~\ref{GW}~(b) shows the estimated amplitude of the gravitational waves for several objects, such as the gamma-ray active blazars AO0235+164 ($z=0.94$) and 3C454.3 ($z=0.86$), the nearby AGN jet M87, and the famous stellar mass black hole Cygnus X-1. We assume a 1\% Eddington mass accretion rate, and the typical frequency of the gravitational wave signal is $\bar \tau_1^{-1}/(1+z)$, where $z$ is the redshift of the object. Approximate sensitivity curves of KAGRA \citep{Nakano15} and of the proposed space-based gravitational wave detectors eLISA \citep{Klein16}, preDECIGO \citep{Nakamura16}, DECIGO \citep{Kawamura06} and BBO \citep{Yagi11} are also presented. The signal level is, so far, small compared to the sensitivity limits of the presently operating or proposed gravitational wave antennas. 
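For concreteness, the amplitude estimate above can be evaluated with a short script; in this minimal sketch the efficiency, distance, and blob mass are illustrative inputs rather than simulation outputs:
\begin{verbatim}
# Sketch: evaluate h_coal from the expression above; the blob mass
# entering mu is estimated in the text as Mdot * tau_1-bar.
import math

def h_coal(eps_gw, r_gpc, mu_msun):
    return (5.45e-21 * (eps_gw / 0.01) * (4.0 / r_gpc)
            * (mu_msun / (math.sqrt(2) * 1e3)))

# e.g. a blob of ~1e3 M_sun at 4 Gpc with 1% efficiency:
print(h_coal(0.01, 4.0, 1.0e3))  # ~3.9e-21
\end{verbatim}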
Our model can be applied to magnetized accretion flows onto central objects arising from other events, such as black hole or neutron star collisions. For example, the gravitational waves from a neutron star merger have recently been detected by the LIGO and Virgo gravitational wave detectors \citep{Abbott17a}. A short gamma-ray burst followed this event just 1.7 s after the merger of the two neutron stars \citep{Abbott17b}. If the jets which emit the gamma-rays are powered by magnetized accretion flows onto the merged object, strong relativistic pulses of Alfv\'en waves would be emitted, as in our analysis of accretion disks, and the charged particles in the jet would then be accelerated by the electromagnetic waves, as discussed by \citet{Takahashi00}. We thus expect the present acceleration mechanism and its gamma-ray burst signature in a ubiquitous range of phenomena. \section*{Acknowledgements} The authors wish to acknowledge the anonymous referee for their detailed and helpful comments on the manuscript. We thank H. Shinkai for pointing us to references for the fitting formulae of the sensitivities of gravitational wave detectors. We appreciate useful comments by K. Abazajian and B. Barish. This work was carried out on the Hokusai-greatwave system at RIKEN and on the XC30 at CFCA at NAOJ. This work was supported in part by the Grants-in-Aid of the Ministry of Education, Science, Culture, and Sport 15K17670 (AM), 26287056 (AM \& SN), Grant-in-Aid for Scientific Research on Innovative Areas Grant Number 26106006 (ET), the Mitsubishi Foundation (SN), the Associate Chief Scientist Program of RIKEN (SN), the RIKEN pioneering project `Interdisciplinary Theoretical Science (iTHES)' (SN), and the `Interdisciplinary Theoretical \& Mathematical Science Program (iTHEMS)' of RIKEN (SN).
\section{Introduction} Character recognition, popularly referred to as optical character recognition (OCR), has been one of the most interesting, fascinating and challenging fields of research in pattern recognition, artificial intelligence and machine vision in recent years \cite{MoSuYa, ImOtOc}. It has multiple applications in real life, such as verifying signatures, recognizing bank check account numbers and amounts, or automating the mail sorting process in postal services; thus, much research has been focused on designing accurate handwritten character recognition systems \cite{BaHaBu, Ho, NaLeSu}. As both industry and academia have paid attention to this attractive field, there have been numerous attempts at recognizing handwritten characters, such as the methods for English handwritten characters included in \cite{KuTa, MoSuYa, NaLeSu, ShaGhShaTh}. One can find information about recent trends and tools in OCR in \cite{ShaGhShaTh}. The generic handwriting recognition process includes several phases: preprocessing the handwritten text, segmenting the writing into isolated characters, extracting feature vectors from the individual characters, and finally classifying each character using the features previously extracted, so that it can be assigned to the most likely letter \cite{Ho}. In this work, we focus on the feature extraction step, as it is vital for obtaining good results in the recognition process in terms of accuracy. Its main goal is to extract and select a collection of features that maximizes the recognition rate using as few elements as possible \cite{ArYa}. Moreover, the feature extraction method should be robust, that is, it should obtain similar feature sets from a collection of instances of the same symbol. This property makes the subsequent classification step less difficult \cite{FiGeKe}. According to Govindan and Shivaprasad \cite{GoShr}, we can classify features into different categories: global transformations and series expansions, statistical features, and geometrical and topological features. This last approach allows encoding some knowledge about the structure of the object and the components that make up that object \cite{ArYa}. In addition, it is closer to the human way of recognition \cite{BuSa}. Structural features consider different properties of the characters, such as extreme points, maxima and minima, reference lines, ascenders, descenders, cusps above and below a threshold, isolated dots, cross points, branch points, direction of strokes, inflection between two points, etc. \cite{ArYa}. Some of these structural features were already proposed by cognitive psychologists \cite{LiNo} when studying the visual and cognitive mechanisms involved in visual object recognition. There are many works in OCR using structural feature extraction models. Rocha and Pavlidis \cite{RoPa} proposed a method for the recognition of multi-font printed characters giving special emphasis to structural features. The structural description of the shape of each character considered convex arcs and strokes, singular points and their spatial interrelations. Kahan et al.~\cite{KaPaBa} also developed a structural feature set for the recognition of printed text. They included different information for a character, such as the location and number of its holes, the concavities in its skeletal structure, and characteristics of its bounding box, among others. 
Kuroda, Harada, and Hagiwara implemented a recognition system for the on-line identification of handwritten Chinese characters based on structural patterns \cite{KuHaHa}. Lee and Gomes \cite{LeGo} proposed a method for recognizing numeral characters, also based on structural features. Their technique considered topological characteristics like the number of cavities, the crossing sequences, the intersections with the principal and secondary axes, and the distribution of pixels. Chan and Yeung \cite{ChYe} proposed a structural approach for the analysis of handwritten mathematical expressions. This problem is even more complicated than recognizing individual characters or symbols, as the components of a mathematical expression are normally arranged as a complex two-dimensional structure and have different sizes. Amin \cite{Am} focused on printed Arabic text, which obtains lower recognition rates than those of disconnected characters such as printed English. He used seven types of global structural features, such as the number of subwords, the number of peaks of each word, the number of loops of each peak, the number and position of complementary characters, or the height and width of each peak. Kavallieratou, Fakotakis and Kokkinakis \cite{KaFaKo_2} proposed an integrated analysis system for unconstrained handwriting. The last module of this system includes a handwritten character recognition technique that uses a structural approach \cite{KaFaKo}. More concretely, it extracts a 280-dimensional feature vector for each character, consisting of the horizontal, vertical and radial histograms and the out-in and in-out radial profiles, and uses the $k$-means algorithm for the classification. Yang, Lijia and Chen \cite{YaLiChe} proposed the combination of structural and statistical features, in addition to BP networks for the classification step, to mitigate the interference of external noise.\\ In this paper, we propose a new algorithm for isolated English handwritten character recognition based on structural features, using eight new histograms and four new profiles. Thus, we extract a $256$-dimensional integer-valued feature vector for each character and then employ the $k$-means clustering algorithm for the classification step. We compare our results to those given in \cite{KaFaKo}, as their methods for feature extraction and classification are the most similar to ours. Our illustrative tables show that we reduce the dimension of the feature vectors and improve the accuracy of recognition. The rest of the paper is organized as follows. Section~\ref{S:algo} describes the proposed technique in detail. Section~\ref{S:exp} includes the experimental evaluation, and finally Section~\ref{S:concl} contains the conclusions and future work. \section{Proposed algorithm}\label{S:algo} \subsection{Preliminaries} A handwritten character recognition system usually requires a preprocessing phase before the feature extraction and classification steps \cite{KuTa}. The main goal of this preprocessing phase is to obtain isolated characters and represent them conveniently for the following steps. In most cases, this includes a segmentation stage and a binarization stage to get the isolated characters in the form of $m \times n$ binary matrices. These matrices are then generally normalized by reducing the size and removing redundant information from the image without losing any important information. Then, the feature extraction is applied over these matrices. 
This step can be considered the heart of the system, as the feature selection is usually the most important factor in achieving high accuracy in the recognition process. After the normalization of the character images, the objective of the feature extraction is to represent the isolated characters as unique feature vectors. The key is to maximize the recognition rate using as few features as possible. Finally, the classification stage is the main decision-making stage of the system, and it uses the feature vectors to identify the text segment according to preset rules. In this stage, the basic task is to design a decision rule that is easy to compute and that minimizes the probability of misclassification relative to the power of the feature extraction scheme employed. \subsection{Preprocessing} For the experimental evaluation, we use the NIST database \cite{NIST}, which contains $128 \times 128$ BMP files of isolated handwritten English characters. Thus, before extracting the features, the preprocessing step must binarize and then normalize each original image data file to obtain a $32 \times 32$ matrix with entries in $\{0,1\}$, such that $0$s stand for white pixels and $1$s for black pixels. \subsection{Feature extraction} As we have already mentioned, in this paper we focus on structural characteristics for feature extraction. Instead of the well-known horizontal and vertical histograms, we introduce new horizontal left and right histograms and vertical upper and lower histograms. We also employ new orthodiagonal and orthoantidiagonal histograms and profiles. All these features are used for the first time in optical character recognition research. We will study whether these new features improve the accuracy of the handwritten character recognition algorithm in comparison with \cite{KaFaKo}.\\ Now, we give the formal definition of these features. We need a map \[ f \colon [32] \times [32] \longrightarrow \{0, 1\} \] defined as follows: $f(l,m)$ is the value of the element in the $l$-th row and $m$-th column of the character matrix, and $[32] = \{1, \dots , 32\}$. The horizontal left and right histograms, $H_{\hl}$ and $H_{\hr}$, of the character matrix count the number of black pixels in the even rows of the left half of the matrix and the odd rows of the right half of the matrix, respectively (i.e. 32 features): \[ H_{\hl}(n)= \displaystyle \sum^{16}_{m=1} f(2n,m) \quad \text{for all} \quad 1\leq n \leq 16, \] and \[ H_{\hr}(n)= \displaystyle \sum^{32}_{m=16} f(2n-1,m) \quad \text{for all} \quad 1\leq n \leq 16. \] The vertical upper and lower histograms, $H_{\vu}$ and $H_{\vl}$, of the character matrix count the number of black pixels in the even columns of the upper half of the matrix and the odd columns of the lower half of the matrix, respectively (i.e. 32 features): \[ H_{\vu}(n)= \displaystyle \sum^{16}_{m=1} f(m,2n) \quad \text{for all} \quad 1\leq n \leq 16, \] and \[ H_{\vl}(n)= \displaystyle \sum^{32}_{m=16} f(m,2n-1) \quad \text{for all} \quad 1\leq n \leq 16. \] Besides the above histograms, we introduce several other new histograms. We start with the upper and lower diagonal histograms, $H_{\ud}$ and $H_{\ld}$, given by the number of black pixels along the odd and even orthogonal lines to the diagonal of the character matrix in the upper and lower triangles, respectively (i.e. 
32 features in total): \[ H_{\ud}(n)= \displaystyle \sum_{\substack{k\geq 0\\ 2n-1-k\geq 1 \\ 2n-1+k\leq 32}} f(2n-1-k,2n-1+k) \quad \text{for all} \quad 1\leq n \leq 16, \] and \[ H_{\ld}(n)= \displaystyle \sum_{\substack{k\geq 0\\ 2n-k\geq 1 \\ 2n+k\leq 32}} f(2n+k,2n-k) \quad \text{for all} \quad 1\leq n \leq 16. \] Symmetrically, the upper and lower antidiagonal histograms, $H_{\uad}$ and $H_{\lad}$, are defined as the number of black pixels along the even and odd orthogonal lines to the antidiagonal of the character matrix in the upper and lower triangles, respectively (i.e. again 32 features in total): \[ H_{\uad}(n)= \displaystyle \sum_{\substack{k\geq 0\\ 2n-k\geq 1 \\ 33-2n-k\geq 1}} f(2n-k,33-2n-k) \quad \text{for all} \quad 1\leq n \leq 16, \] and \[ H_{\lad}(n)= \displaystyle \sum_{\substack{k\geq 0\\ 2n+k-1\leq 32 \\ 34-2n+k\leq 32}} f(2n-1+k,34-2n+k) \quad \text{for all} \quad 1\leq n \leq 16. \] Additionally, we introduce the out-in and in-out diagonal and antidiagonal profiles for each normalized character. Namely, the \emph{out-in upper diagonal profile} $P_{\oiud}$ and the \emph{out-in lower diagonal profile} $P_{\oild}$ are defined at the index $1\leq n \leq 16$ as the position of the first black pixel found in the $(2n-1)$-th orthogonal line to the diagonal of the character matrix, starting from the periphery in the upper triangle and going down, and in the $2n$-th orthogonal line to the diagonal of the character matrix, starting from the periphery in the lower triangle and going up, respectively (i.e. 32 features in total): \[ P_{\oiud}(n) = \left\lbrace I \ \left| \begin{array}{c} \displaystyle \sum_{\substack{k\geq I+1\\ 2n-1-k\geq 1 \\ 2n-1+k\leq 32}} f(2n-1-k,2n-1+k) = 0 \\ \qquad \qquad f(2n-1-I, 2n-1+I)=1 \\ \end{array} \right. \right. \] for all $1\leq n \leq 16$, and \[ P_{\oild}(n) = \left\lbrace J \ \left| \begin{array}{c} \displaystyle \sum_{\substack{k\geq J+1\\ 2n-k\geq 1 \\ 2n+k\leq 32}} f(2n+k,2n-k) = 0 \\ \qquad \qquad f(2n+J, 2n-J)=1 \\ \end{array} \right. \right. \] for all $1\leq n \leq 16$. Symmetrically, we introduce the \emph{out-in upper antidiagonal profile} $P_{\oiuad}$ and the \emph{out-in lower antidiagonal profile} $P_{\oilad}$, which are defined at the index $1\leq n \leq 16$ as the position of the first black pixel found in the $2n$-th orthogonal line to the antidiagonal of the character matrix, starting from the periphery in the upper triangle and going down, and in the $(2n-1)$-th orthogonal line to the antidiagonal of the character matrix, starting from the periphery in the lower triangle and going up, respectively (i.e. 32 features in total): \[ P_{\oiuad}(n) = \left\lbrace I \ \left| \begin{array}{c} \displaystyle \sum_{\substack{k\geq I+1\\ 2n-k\geq 1 \\ 33-2n-k\geq 1}} f(2n-k,33-2n-k) = 0 \\ \qquad \qquad f(2n-I, 33-2n-I)=1 \\ \end{array} \right. \right. \] for all $1\leq n \leq 16$, and \[ P_{\oilad}(n) = \left\lbrace J \ \left| \begin{array}{c} \displaystyle \sum_{\substack{k\geq J+1\\ 2n-1+k\leq 32 \\ 34-2n+k\leq 32}} f(2n-1+k,34-2n+k) = 0 \\ \qquad \qquad \ f(2n-1+J, 34-2n+J)=1 \\ \end{array} \right. \right. \] for all $1\leq n \leq 16$. \; Moreover, the \emph{in-out upper diagonal profile} $P_{\ioud}$ and the \emph{in-out lower diagonal profile} $P_{\iold}$ are defined at the index $1\leq n \leq 16$ as the position of the first black pixel found in the $(2n-1)$-th and in the $2n$-th orthogonal lines to the diagonal of the character matrix, starting from the diagonal and going to the periphery in the upper and lower triangles, respectively (i.e. 
32 features in total): \[ P_{\ioud}(n) = \left\lbrace I \ \left| \begin{array}{c} \displaystyle \sum^{I-1}_{\substack{k\geq 0\\ 2n-1-k\geq 1 \\ 2n-1+k\leq 32}} f(2n-1-k,2n-1+k) = 0 \\ \qquad \qquad f(2n-1-I, 2n-1+I)=1 \\ \end{array} \right. \right. \] for all $1\leq n \leq 16$, and \[ P_{\iold}(n) = \left\lbrace J \ \left| \begin{array}{c} \displaystyle \sum^{J-1}_{\substack{k\geq 0\\ 2n-k \geq 1 \\ 2n+k\leq 32}} f(2n+k,2n-k) = 0 \\ \qquad \quad f(2n+J, 2n-J)=1 \\ \end{array} \right. \right. \] for all $1\leq n \leq 16$. Symmetrically, we introduce the \emph{in-out upper antidiagonal profile} $P_{\iouad}$ and the \emph{in-out lower antidiagonal profile} $P_{\iolad}$, which are defined at the index $1\leq n \leq 16$ as the position of the first black pixel found in the $2n$-th and in the $(2n-1)$-th orthogonal lines to the antidiagonal of the character matrix, starting from the antidiagonal and going to the periphery in the upper and lower triangles, respectively (i.e. 32 features in total): \begin{table*}[!th] \centering \caption{\label{Ta:TrSet}Training set from NIST database} \begin{tabular}{|l|c|c|} \hline NIST database & Partition & Handwriting Sample Forms \\ \hline Digits & $HSF_0$ & $F0000 \ldots F0099$ \\ \hline Uppercase characters & $HSF_0$, $HSF_1$ & $F0000 \ldots F0999$ \\ \hline Lowercase characters & $HSF_0$, $HSF_1$ & $F0000 \ldots F0999$\\ \hline \end{tabular} \end{table*} \; \begin{table*}[!th] \centering \caption{\label{Ta:TstSet}Test set from NIST database} \begin{tabular}{|c|c|c|} \hline NIST database & Partition & Handwriting Sample Forms \\ \hline Digits & $HSF_0$ & $F0100 \ldots F0149$ \\ \hline Uppercase characters & $HSF_3$ & $F1000 \ldots F1499$ \\ \hline Lowercase characters & $HSF_3$ & $F1000 \ldots F1499$\\ \hline \end{tabular} \end{table*} \begin{table*}[!th] \centering \caption{\label{Ta:Ka}Results of Algorithm~\cite{KaFaKo}} \begin{tabular}{|c|c|c|c|} \hline & 1\textsuperscript{st} Choice & 2\textsuperscript{nd} Choice & 3\textsuperscript{rd} Choice \\ \hline Digits & 92.48\% & 96.02\% & 97.60\% \\ \hline Uppercase characters & 87.08\% & 92.95\% & 95.26\% \\ \hline Lowercase characters & 79.71\% & 88.62\% & 92.12\% \\ \hline \end{tabular} \end{table*} \; \begin{table*}[!th] \centering \caption{\label{Ta:Niko}Results of our Algorithm} \begin{tabular}{|c|c|c|c|} \hline & 1\textsuperscript{st} Choice & 2\textsuperscript{nd} Choice & 3\textsuperscript{rd} Choice \\ \hline Digits & 93.75\% & 97.02\% & 97.90\% \\ \hline Uppercase characters & 88.58\% & 94.09\% & 95.79\% \\ \hline Lowercase characters & 81.74\% & 90.13\% & 92.89\% \\ \hline \end{tabular} \end{table*} \[ P_{\iouad}(n) = \left\lbrace I \ \left| \begin{array}{c} \displaystyle \sum^{I-1}_{\substack{k\geq 0\\ 2n-k \geq 1 \\ 33-2n-k \geq 1}} f(2n-k,33-2n-k)= 0 \\ \qquad \qquad f(2n-I, 33-2n-I)=1 \\ \end{array} \right. \right. \] for all $1\leq n \leq 16$, and \[ P_{\iolad}(n) = \left\lbrace J \ \left| \begin{array}{c} \displaystyle \sum^{J-1}_{\substack{k\geq 0\\ 2n-1+k\leq 32 \\ 34-2n+k\leq 32 }} f(2n-1+k,34-2n+k) = 0 \\ \qquad \qquad \ f(2n-1+J, 34-2n+J)=1 \\ \end{array} \right. \right. \] for all $1\leq n \leq 16$. \subsection{Classification} In the previous step, a $256$-dimensional feature vector has been extracted from each isolated handwritten character image. These feature vectors are then used in the classification step, where we use the $k$-means clustering algorithm to train and create a classification model. 
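For illustration, the feature extraction can be sketched in a few lines of Python. This is an illustrative sketch, not the implementation evaluated in Section~\ref{S:exp}; the character is assumed to be given as a $32 \times 32$ binary NumPy array, and only four of the eight histograms and one of the profiles are shown, since the remaining features follow the same pattern:
\begin{verbatim}
import numpy as np

# f is a 32x32 array with entries in {0,1}; the 1-based indices (l,m)
# used in the text correspond to f[l-1, m-1] below.

def horizontal_left_hist(f):
    # H_hl(n) = sum_{m=1..16} f(2n, m), n = 1..16
    return np.array([f[2*n - 1, 0:16].sum() for n in range(1, 17)])

def horizontal_right_hist(f):
    # H_hr(n) = sum_{m=16..32} f(2n-1, m), n = 1..16
    return np.array([f[2*n - 2, 15:32].sum() for n in range(1, 17)])

def vertical_upper_hist(f):
    # H_vu(n) = sum_{m=1..16} f(m, 2n), n = 1..16
    return np.array([f[0:16, 2*n - 1].sum() for n in range(1, 17)])

def vertical_lower_hist(f):
    # H_vl(n) = sum_{m=16..32} f(m, 2n-1), n = 1..16
    return np.array([f[15:32, 2*n - 2].sum() for n in range(1, 17)])

def out_in_upper_diag_profile(f):
    # P_oiud(n): offset of the first black pixel on the (2n-1)-th line
    # orthogonal to the diagonal, scanned from the periphery inward
    # (0 is returned when the line contains no black pixel).
    prof = np.zeros(16, dtype=int)
    for n in range(1, 17):
        d = 2*n - 1
        for k in range(min(d - 1, 32 - d), -1, -1):
            if f[d - 1 - k, d - 1 + k]:
                prof[n - 1] = k
                break
    return prof

# Concatenating the 8 histograms and 8 profiles (16 values each)
# yields the 256-dimensional feature vector used for k-means.
\end{verbatim}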
\section{Experimental evaluation} \label{S:exp} We run experiments using the NIST database of handwritten English characters \cite{NIST}. The experiments were carried out separately for each one of the following categories: digits, uppercase characters and lowercase characters. In more detail, using programs written in Python, our recognition algorithm was trained on about 1000 samples over 64 classes and tested on 500 samples for each isolated handwritten character from the NIST database, in comparison with the algorithm given in \cite{KaFaKo}. Namely, Table~\ref{Ta:TrSet} and Table~\ref{Ta:TstSet} show the exact input data for our experiments. Thus, the training and the test sets of our experiments were completely disjoint, which means that the writers used in testing were completely different from the ones used for training. We show the accuracy rates obtained by \cite{KaFaKo} for each character category in Table~\ref{Ta:Ka}, whereas Table~\ref{Ta:Niko} shows the accuracy rates obtained by our method. Analogously to \cite{KaFaKo}, we also show the recognition accuracy rates when the second and third choices are taken into account, as in a real system one could use lexicons to improve the output of the proposed character recognizer. These results show that our technique outperforms the algorithm in \cite{KaFaKo} for all categories, in some cases by a margin of more than 2 percentage points. \section{Conclusion}\label{S:concl} In this paper, we present a technique for English handwritten character recognition based on the extraction of new structural features. More concretely, we introduce eight new histograms and four new profiles, which have been proven to successfully represent the handwritten characters. We have tested our approach using the NIST database and obtained recognition accuracies varying from 81.74\% to 93.75\%, depending on the difficulty of the character category. These results outperform previous attempts that use just structural features, in addition to being fast and simple to compute. The results are promising and usable in several kinds of applications. In the near future they will be implemented in a mobile (iOS) application. We also plan to apply the technique to characters from other languages, e.g. Georgian characters. Moreover, due to the nature of our method, it is possible to reduce the current number of features, 256 in this paper, according to the needs of the application where the technique will be used. \section*{Acknowledgments} This work was supported by the Agencia Estatal de Investigaci\'on, Spain (European ERDF support included, UE) [grant numbers MTM2016-79661-P, TIN2016-77158-C4-3-R]; and by the Conseller\'ia de Cultura, Educaci\'on e Ordenaci\'on Universitaria and the European Regional Development Fund (ERDF) [grant number ED431G/01]. \bibliographystyle{model2-names}
\section{Introduction} Active matter consists of a large number of self-driven agents converting chemical energy, usually stored in the surrounding environment, into mechanical motion \cite{Ram2010,MarJoaRamLivProRaoSim2013,ElgWinGom2015}. In the last decade various realizations of active matter have been studied, including living self-propelled particles as well as synthetically manufactured ones. Living agents are, for example, bacteria \cite{DomCisChaGolKes2004,sokolov2007concentration}, microtubules in biological cells \cite{SurNedLeiKar2001,SanCheDeCHeyDog2012}, spermatozoa \cite{2005Riedel_Science,Woolley,2008Friedrich_NJP} and animals \cite{CavComGiaParSanSteVia2010,CouKraJamRuxFra2002,VisZaf2012}. Such systems are out of equilibrium and show a variety of collective effects, from clustering \cite{BocquetPRL12,Bialke_PRL2013,Baskaran_PRL2013,Palacci_science} to swarming, swirling and turbulent type motions \cite{ElgWinGom2015, DomCisChaGolKes2004,sokolov2007concentration,VisZaf2012,wensink2012meso,SokAra2012,SaiShe08,RyaSokBerAra13}, reduction of the effective viscosity \cite{sokolov2009reduction,GacMinBerLinRouCle2013,LopGacDouAurCle2015,HaiAraBerKar08,HaiSokAraBer09,HaiAroBerKar10,RyaHaiBerZieAra11}, extraction of useful energy \cite{sokolov2010swimming,di2010bacterial,kaiser2014transport}, and enhanced mixing \cite{WuLib2000,SokGolFelAra2009,pushkin2014stirring}. Besides the behavior of microswimmers in the bulk, the influence of confinement has been studied intensively in experiments \cite{DenissenkoPNAS,Chaikin2007} and numerical simulations \cite{ElgetiGompper13,Lee13Wall,Ghosh,Wensink2008}. There are two distinguishing features of swimmers confined by walls and exposed to an external flow: accumulation at the walls and upstream motion (rheotaxis). Microorganisms such as bacteria \cite{BerTur1990,RamTulPha1993,FryForBerCum1995,VigForWagTam2002,BerTurBerLau2008} and sperm cells \cite{Rot1963} are typically attracted by no-slip surfaces. Such accumulation was also observed for larger organisms such as worms \cite{YuaRaiBau2015} and for synthetic particles \cite{DasGarCam2015}. The propensity of active particles to turn themselves against the flow (rheotaxis) is also typically observed. \textcolor{black}{While for larger organisms, such as fish, rheotaxis is caused by a deliberate response to a stream to hold their position \cite{JiaTorPeiBol2015}, for micron sized swimmers rheotaxis has a purely mechanical origin \cite{HilKalMcMKos2007,fu2012bacterial,YuaRaiBau2015rheo,TouKirBerAra14,PalSacAbrBarHanGroPinCha2015}.} These phenomena observed in living active matter can also be achieved using synthetic swimmers, such as self-thermophoretic \cite{Sano_PRL2010} and self-diffusiophoretic \cite{paxton2004catalytic,HowsePRL2007,Bechinger_JPCM,Baraban_SM2012} micron sized particles, as well as particles set into active motion under the influence of an external field \cite{bricard2013emergence,bricard2015emergent,KaiserSciAdv2017}. Using simple models, we describe the extrusion of a dilute active suspension through a trapezoid nozzle. We analyze the qualitative behavior of the trajectories of an individual active particle in the nozzle and study the statistical properties of the particles in the nozzle. Accumulation at the walls and rheotaxis are important for understanding how an active suspension is extruded through a nozzle. Wall accumulation may eliminate all possible benefits caused by the activity of the particles in the bulk. 
\textcolor{black}{Due to rheotaxis, active particles may never reach the outlet and may instead leave the nozzle through the inlet, so that the properties of the suspension coming out through the outlet will not differ from those of the background fluid.} The specific geometry of the nozzle is also important for our study. The nozzle is a finite domain with two open ends (the inlet and the outlet), and the walls of the nozzle are not parallel but convergent, that is, the distance between the walls decreases from the inlet to the outlet. The statistical properties of an active suspension (e.g., the concentration of active particles) extruded in an infinite channel with parallel straight or periodic walls are well established, see, e.g., \cite{EzhSai2015} and \cite{MalSta2017}, respectively. The finite nozzle size leads to a ``proximity effect", i.e., the equilibrium distribution of active particles changes significantly in the proximity of both the inlet and the outlet. The fact that the walls are convergent results in a ``focusing effect", i.e., compared to the pressure driven flow in a straight channel (the Poiseuille flow), the background flow has an additional convergent component that turns a particle toward the centerline. \textcolor{black}{Specifically, in this work it is shown that due to this convergent component of the background flow both up- and downstream swimming at the centerline are stable. The stability of upstream swimming at the centerline is somewhat surprising, since from observations in the Poiseuille flow it is expected that an active particle turns against the flow only while swimming towards the walls, where the shear rate is higher. This means that we find rheotaxis in the bulk of an active suspension.} \section{Model} \label{sec:model} \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{nozzle_sketch_title.jpg} \caption{Sketch of a trapezoid nozzle filled with a dilute suspension of rodlike active particles in the presence of a converging background flow.} \label{fig:nozzle-sketch} \end{figure} To study the dynamics of active particles in a converging flow, two modeling approaches are exploited. In both, an active particle is represented by a rigid rod of length $\ell$ swimming in the $xy$-plane. In the first, simpler approach, the rod is a one-dimensional segment which cannot penetrate a wall, whereas in the second, more sophisticated approach we use the Yukawa segment model \cite{Kirchhoff1996} to take into account both the finite length and width of the rod, as well as a more accurate description of the particle-wall steric interaction. The active particle's center location and its unit orientation vector are denoted by ${\bf r}=(x,y)$ and ${\bf p}=(\cos \varphi,\sin\varphi)$, respectively. The active particles are self-propelled with a velocity $v_{0}{\bf p}$ directed along their orientation. \textcolor{black}{The active particles are confined by a nozzle, see Fig.~\ref{fig:nozzle-sketch}, which is an isosceles trapezoid $\Omega$, placed in the $xy$-plane so that the inlet $x=x_{\text{in}}$ and the outlet $x=x_{\text{out}}$ are its bases and the $y$-axis is the line of symmetry: \begin{equation} \Omega=\left\{x_{\text{in}}<x<x_{\text{out}}, \; \alpha^2 x^2 -y^2>0\right\}. \end{equation} The nozzle length, the distance between the inlet and the outlet, is denoted by $L$, i.e., $L=|x_{\text{out}}-x_{\text{in}}|$. 
The width of the outlet and the inlet are denoted by $w_{\text{out}}$ and $w_{\text{in}}$, respectively, and their ratio is denoted by $k={w_{\text{out}}}/{w_{\text{in}}}$.} \textcolor{black}{Furthermore, the active particles are exposed to an external background flow. We approximate the resulting converging background flow due to the trapezoid geometry of the nozzle by \begin{equation}\label{convergent_flow} {\bf u}_{\text{BG}}({\bf r})=(u_x(x,y),u_y(x,y))=(-u_0 (\alpha^2 x^2-y^2)/x^3, -u_0 y (\alpha^2x^2-y^2)/x^4), \end{equation} where $u_0$ is a constant coefficient related to the flow rate and $\alpha$ is the slope of the walls of the nozzle. Equation \eqref{convergent_flow} is an extension of the Poiseuille flow to channels with convergent walls\footnote{In order to recover the Poiseuille flow (for channels of width $2H$) from Eq.~\eqref{convergent_flow}, take $x=-H/\alpha$, $u_0=H^3/\alpha^3$ and pass to the limit $\alpha\to 0$. Note that the walls of the nozzle are placed so that they intersect at the origin, so in the limit of parallel walls, $\alpha \to 0$, both the inlet and the outlet locations, $x_{\text{in}}$ and $x_{\text{out}}$, go to $-\infty$.}}. Active particles swim in the low Reynolds-number regime. The corresponding overdamped equations of motion for the locations ${\bf r}$ and orientations ${\bf p}$ are given by: \begin{equation} \label{orig-location} \dfrac{\text{d}\bf r}{\text{d}t}={\bf u}_{\text{BG}}({\bf r})+v_{0}{\bf p}, \end{equation} \begin{equation} \label{orig-orientation} \dfrac{\text{d}\bf p}{\text{d}t} =(\text{I}-{\bf p}{\bf p}^{\text{T}})\nabla_{\bf r}{\bf u}_{\text{BG}}({\bf r}){\bf p}\,+\sqrt{2D_r}\,\zeta \,{\bf e}_{\varphi}. \end{equation} Here \eqref{orig-orientation} is Jeffery's equation \cite{SaiShe08,Jef1922,KimKar13} for rods with an additional term due to random re-orientation with rotational diffusion coefficient $D_r$; $\zeta$ is an uncorrelated noise with intensity $\langle \zeta(t)\zeta(t')\rangle=\delta(t-t')$, and ${\bf e}_{\varphi}=(-\sin \varphi, \cos \varphi)$. Equation \eqref{orig-orientation} can also be rewritten for the orientation angle $\varphi$: \begin{equation} \label{orig-orientation-angle} \dfrac{\text{d}\varphi}{\text{d}t}=\omega+ \nu\, \sin 2\varphi + \gamma \,\cos 2\varphi+\sqrt{2D_r}\,\zeta. \end{equation} Here $\omega=\dfrac{1}{2}\left(\dfrac{\partial u_y}{\partial x}-\dfrac{\partial u_x}{\partial y}\right)$, $\nu=\dfrac{1}{2}\left(\dfrac{\partial u_y}{\partial y}-\dfrac{\partial u_x}{\partial x}\right)=\dfrac{\partial u_y}{\partial y}=-\dfrac{\partial u_x}{\partial x}$, and $\gamma=\dfrac{1}{2}\left(\dfrac{\partial u_y}{\partial x}+\dfrac{\partial u_x}{\partial y}\right)$ are the local vorticity, vertical expansion (or, equivalently, horizontal compression; similar to Poisson's effect in elasticity) and shear, respectively. The strength of the background flow is \textcolor{black}{quantified by the inverse Stokes number, which is the ratio between the background flow at the center of the inlet and the self-propulsion velocity $v_0$. Specifically,} \begin{equation} \sigma = \dfrac{u_x(x_{\text{in}},0)}{v_{0}}=\dfrac{u_0\alpha^2}{v_{0}|x_{\text{in}}|}, \end{equation} where $(x_{\text{in}},0)$ denotes the location at the center of the inlet. In the first modeling approach we include the particle-wall interaction in the following way: an active particle is not allowed to penetrate the walls of the nozzle. To enforce this, we require that both the front and the back of the particle, ${\bf r}(t)\pm(\ell/2) {\bf p}$, are located inside the nozzle.
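For illustration, the dynamics \eqref{orig-location}-\eqref{orig-orientation} can be integrated with a simple explicit Euler--Maruyama scheme. The following minimal sketch (an illustration only, not the production code used for the simulations below; the velocity gradient is evaluated by finite differences for brevity, and all function names are ours) implements the flow field \eqref{convergent_flow} and a single time step:
\begin{verbatim}
import numpy as np

def u_bg(x, y, u0, alpha):
    # Converging background flow of Eq. (2); the nozzle lies at x < 0,
    # since the wall lines y = +/- alpha*x intersect at the origin.
    c = alpha**2 * x**2 - y**2
    return np.array([-u0 * c / x**3, -u0 * y * c / x**4])

def grad_u_bg(x, y, u0, alpha, h=1e-6):
    # Finite-difference velocity gradient J[i, j] = d u_i / d x_j.
    dx = (u_bg(x + h, y, u0, alpha) - u_bg(x - h, y, u0, alpha)) / (2 * h)
    dy = (u_bg(x, y + h, u0, alpha) - u_bg(x, y - h, u0, alpha)) / (2 * h)
    return np.column_stack([dx, dy])

def step(r, phi, dt, v0, Dr, u0, alpha, rng):
    # One Euler-Maruyama step of Eqs. (3)-(4): advect the center along the
    # flow plus self-propulsion, rotate by the Jeffery torque plus noise.
    p = np.array([np.cos(phi), np.sin(phi)])
    r_new = r + dt * (u_bg(r[0], r[1], u0, alpha) + v0 * p)
    J = grad_u_bg(r[0], r[1], u0, alpha)
    e_phi = np.array([-np.sin(phi), np.cos(phi)])
    dphi = e_phi @ ((np.eye(2) - np.outer(p, p)) @ J @ p)
    phi_new = phi + dt * dphi + np.sqrt(2 * Dr * dt) * rng.standard_normal()
    return r_new, phi_new
\end{verbatim}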
In numerical simulations of the system \eqref{orig-location}-\eqref{orig-orientation-angle}, this non-penetration requirement translates into the following rule: if during numerical integration of \eqref{orig-location}-\eqref{orig-orientation-angle} a particle penetrates one of the two walls, then this particle is instantaneously shifted back along the inward normal by the minimal distance such that its front and back are again located inside the nozzle, while its orientation is kept fixed. \textcolor{black}{Unless mentioned otherwise, in this modeling approach we consider a nozzle whose inlet width $w_{\text{in}}=0.2$ mm and outlet width $w_{\text{out}}=0.1$ mm are fixed. The following nozzle lengths are considered: $L=0.2$ mm, $L=0.5$ mm and $L=1.0$ mm. The length of the active particles is $\ell = 20$ $\mu$m, they swim with a self-propulsion velocity $v_{0}=10$ $\mu$m $\text{s}^{-1}$ and their rotational diffusion coefficient is given by $D_r=0.1$ $\text{s}^{-1}$.} All active particles are initially placed at the inlet, $x(0)=x_{\text{in}}$, with a random $y$-component $y(0)$ and orientation angle $\varphi(0)$. The probability distribution function for the initial conditions $y(0)$ and $\varphi(0)$ is uniform, $\Psi\propto 1$. The trajectory of an active particle is followed until it leaves the nozzle either through the inlet or the outlet. To gather statistics we use 96,000 trajectories. \begin{figure}[ht!] \centering \includegraphics[width=0.6\textwidth]{nozzleSketchAK2.jpg} \caption{Sketch of a discretized active rod (red) of length $\ell$ and width $\lambda$ which is propelled with a velocity $v_0$ along its orientation ${\bf p}$ and is exposed to a converging background flow ${\bf u}_{\text{BG}}$ in the presence of a trapezoid nozzle confinement (blue) of length $L$, with an inlet of size $w_{\text{in}}$ and an outlet of size $w_{\text{out}}$. To study a system with a packing fraction $\rho=0.1$, a channel with a non-converging background flow is attached to the inlet.} \label{fig:nozzle-sketchAK} \end{figure} We use the second approach to describe the particle-wall interactions and the torque induced by the flow more accurately. \textcolor{black}{For this purpose each rod, representing an active particle, of length $\ell$, width $\lambda$ and corresponding aspect ratio $a=\ell/\lambda$ is discretized into $n_r$ spherical segments with $n_r = \lfloor 9 a /8 \rceil$ ($\lfloor x \rceil$ denotes the nearest integer function).} The resulting segment distance is also used to discretize the walls of the nozzle into $n_w$ segments in the same way. Between the segments of different objects a repulsive Yukawa potential is imposed. The resulting total pair potential is given by $U = U_0\sum_{i=1}^{n_r}\sum_{j=1}^{n_w} \exp [-r_{ij} / \lambda]/r_{ij}$, where $\lambda$ is the screening length defining the particle diameter, $U_0$ is the prefactor of the Yukawa potential and $r_{ij} = |{\bf r}_{i} - {\bf r}_{j}|$ is the distance between segment $i$ of a rod and segment $j$ of the wall of the nozzle, see Fig.~\ref{fig:nozzle-sketchAK}. The equations of motion (\ref{orig-location}) and (\ref{orig-orientation}) are complemented by the respective derivative of the total potential energy of a rod, along with the one-body translational and rotational friction tensors for the rods, ${\bf f}_{\cal T}$ and ${\bf f}_{\cal R}$, which can be decomposed into parallel $f_\parallel$, perpendicular $f_\perp$ and rotational $f_{\cal R}$ contributions which depend solely on the aspect ratio $a$~\cite{tirado}.
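A minimal sketch of this discretization and of the rod-wall potential $U$ (again an illustration only; the array layout and function names are our assumptions) reads:
\begin{verbatim}
import numpy as np

def n_segments(a):
    # n_r = nearest integer to 9a/8, used for rods and walls alike.
    return int(round(9 * a / 8))

def yukawa_rod_wall(rod_xy, wall_xy, U0, lam):
    # Total pair potential U = U0 * sum_ij exp(-r_ij/lam) / r_ij between
    # the n_r segments of one rod, rod_xy of shape (n_r, 2), and the n_w
    # wall segments, wall_xy of shape (n_w, 2).
    diff = rod_xy[:, None, :] - wall_xy[None, :, :]   # (n_r, n_w, 2)
    r = np.linalg.norm(diff, axis=-1)                 # pairwise distances
    return U0 * np.sum(np.exp(-r / lam) / r)
\end{verbatim}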
For this second approach we measure distances in units of $\lambda$, velocities in units of $v_0=F_0/f_\parallel$ (here $F_0$ is an effective self-propulsion force), and time in units of $\tau = \lambda f_\parallel / F_0$. While the width of the outlet $w_{\text{out}}$ is varied, the width of the inlet $w_{\text{in}}$ as well as the length of the nozzle $L$ are fixed to $100\lambda$. \textcolor{black}{The initial conditions are the same as in the first approach. To avoid a rod and a wall initially intersecting each other, the rod is allowed to reorient itself during an equilibration time $t_e = 10 \tau$ while its center of mass is fixed.} \textcolor{black}{Furthermore, we use the second approach to study the impact of a finite density of swimmers. For this purpose we initialize $N$ active rods in a channel confinement which is connected to the inlet of the nozzle, see Fig.~\ref{fig:nozzle-sketchAK}. Inside the channel we assume a regular (non-converging) Poiseuille flow~\cite{zottl2013periodic}. We restrict our study to a dilute active suspension with a two-dimensional packing fraction $\rho=0.1$. To maintain this fraction, particles which leave the simulation domain are randomly placed at the inlet of the channel confinement.} \section{Results} \label{sec:results} \subsection{Focusing of the outlet distribution} \label{sec:focusing} Here we characterize the properties of the particles leaving the nozzle at either the outlet or the inlet. Specifically, our objective is to determine whether particles accumulate at the center or at the walls when they pass through the outlet or the inlet. \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{dependence_on_sigma_joint.jpg} \caption{Histograms of the outlet distribution for $y|_{\text{out}}$ for given \textcolor{black}{inverse Stokes number} $\sigma$ and length $L$ of the nozzle. The histograms are obtained from numerical integration of \eqref{orig-location}-\eqref{orig-orientation-angle}. } \label{fig:dependence-on-sigma} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{y_phi_diagram.jpg} \caption{Outlet distribution histograms for $(y,\varphi)|_{\text{out}}$ computed for given inverse Stokes number $\sigma$ and nozzle length $L=0.2$ mm. } \label{fig:y_phi_diagram} \end{figure} We start with the first modeling approach. Figure~\ref{fig:dependence-on-sigma} shows the spatial distribution of active particles leaving the nozzle at the outlet for various \textcolor{black}{inverse Stokes numbers} $\sigma$ and three different lengths $L$ of the nozzle, while the widths of the inlet and the outlet are fixed. For a small inverse Stokes number $\sigma$, the background flow is negligible compared to the self-propulsion velocity. Active particles swim close to the walls, and peaks at the walls are still clearly visible for $\sigma=0.5$ for all nozzle lengths $L$, see Fig.~\ref{fig:dependence-on-sigma}(a). For $\sigma=1$, the self-propulsion velocity and the background flow are comparable; in this case the histogram shows a single peak at the center of the outlet, see Fig.~\ref{fig:dependence-on-sigma}(b). Further increasing the inverse Stokes number from $\sigma=1$ to $\sigma=9$ leads to a broadening of the central peak and then to the formation of two peaks with a well in the center of the outlet, see Fig.~\ref{fig:dependence-on-sigma}(c)-(e).
Finally, for an even larger inverse Stokes number $\sigma$, the self-propulsion velocity is negligible and the histogram becomes close to the one in the passive (no self-propulsion, $v_0 = 0$) case, see Fig.~\ref{fig:dependence-on-sigma}(f). Here the histogram for a nozzle length $L=0.2$ mm is uniform except at the edges, where it has local peaks due to accumulation at the walls caused by steric interactions. Histograms for both the $y$-component and the orientation angle $\varphi$ of the active particles reaching the outlet are depicted in Fig.~\ref{fig:y_phi_diagram}(a)-(c). While active particles leave the nozzle with orientations away from the centerline for a small inverse Stokes number, $\sigma = 0.5$, they are mostly oriented towards the centerline for larger values of the inverse Stokes number. \textcolor{black}{In Fig.~\ref{fig:y_phi_diagram}(c), one can observe that the histogram is concentrated largely at downstream orientations $\varphi \approx 0$ and slightly at upstream orientations $\varphi \approx \pm \pi$. These local peaks at $\varphi \approx \pm \pi$ away from the walls are evidence of rheotaxis in the bulk. These peaks are visible for large inverse Stokes numbers only, and the corresponding active particles are flushed out of the nozzle with upstream orientations.} \begin{figure}[ht!] \centering \includegraphics[width=1.0\textwidth]{share_of_particles_edited.jpg} \caption{(a) Probability of active particles to reach the outlet for various \textcolor{black}{inverse Stokes numbers} $\sigma$ (horizontal axis) and given lengths of the nozzle $L$. Insets: Trajectories for the case of $L=0.2$ mm. (b-d) Distribution histograms for particles leaving the nozzle through the inlet, $y|_{\text{in}}$, computed for given inverse Stokes numbers $\sigma$ and nozzle lengths $L$. } \label{fig:share-of-particles} \end{figure} \textcolor{black}{Due to rotational diffusion and rheotaxis it is possible that an active particle leaves the nozzle through the inlet. We compute the probability of active particles to reach the outlet.} This probability, as a function of the inverse Stokes number $\sigma$ for the three considered nozzle lengths $L$, is shown in Fig.~\ref{fig:share-of-particles}(a), together with selected trajectories, see the insets in Fig.~\ref{fig:share-of-particles}(a). The figure shows that the probability that an active particle eventually reaches the outlet grows monotonically with \textcolor{black}{the inverse Stokes number} $\sigma$. Note that a passive particle always leaves the nozzle through the outlet. By comparing the probabilities for different nozzle lengths $L$, it becomes obvious that an active particle is less likely to leave the nozzle through the outlet for longer nozzles. Due to the larger distance $L$ between the inlet and the outlet, an active particle spends more time within the nozzle, which makes it more likely to swim upstream by either rotational diffusion or rheotaxis. In Fig.~\ref{fig:share-of-particles}(b)-(d), histograms for active particles leaving the nozzle through the inlet are shown. In the case of a small inverse Stokes number, $\sigma=0.5$, the majority of active particles leave the nozzle at the inlet. Specifically, most of them swim upstream due to rheotaxis close to the walls, but some active particles leave the nozzle at the inlet close to the center. These active particles are oriented upstream due to random reorientation.
For inverse Stokes numbers $\sigma \geq 1$, active particles are no longer able to leave the nozzle at the inlet close to the center. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{two_representative_trajectories2.jpg} \caption{Examples of two trajectories for $L=1$ mm and $\sigma=1.0$. The red trajectory starts and ends at the inlet (the endpoint is near the lower wall). The blue trajectory has a zigzag shape with loops close to the walls; the particle that corresponds to the blue trajectory manages to reach the outlet.} \label{fig:two_representative} \end{figure} Let us now consider specific examples of active particles' trajectories, see Fig.~\ref{fig:two_representative}. The first trajectory (red) starts and ends at the inlet. Initially the active particle swims downstream and collides with the upper wall due to the torque induced by the background flow. Close to the wall it exhibits rheotactic behavior, but before it reaches the inlet it is expelled towards the center of the nozzle by rotational diffusion, similar to bacteria that may escape from surfaces due to tumbling \cite{DreDunCisGanGol2011}. Eventually, the active particle leaves the nozzle at the inlet. As for the other depicted trajectory (blue), the active particle manages to reach the outlet. Along its course through the nozzle it swims upstream several times, but in the end the active particle is washed out through the outlet by the background flow. For larger flow rates the trajectories of active particles are less curly, since the flow becomes more dominant, see the insets of Fig.~\ref{fig:share-of-particles}(a). \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth]{AKratio.jpg} \caption{(a) Probability for an active particle to reach the outlet of the nozzle, $P_{\text{out}}$, as a function of the \textcolor{black}{inverse Stokes number} $\sigma$ for three given aspect ratios $a$ of self-propelled rods and (b) for a fixed aspect ratio $a$ and three given nozzle ratios $k$. \textcolor{black}{Insets show close-ups.}} \label{fig:AKratio} \end{figure} Next we present results of the second modeling approach, which is based on the Yukawa-segment model. So far we have concentrated on fixed widths of the inlet and outlet. Here we consider nozzles with fixed length $L$ and inlet width $w_{\text{in}}$ and vary the nozzle ratio $k$. We study the behavior of active rods with varied aspect ratio $a$. As shown in Fig.~\ref{fig:AKratio}, neither the aspect ratio $a$, see Fig.~\ref{fig:AKratio}(a), nor the nozzle ratio $k$, see Fig.~\ref{fig:AKratio}(b), has a significant impact on the probability $P_{\text{out}}$, which measures how many active rods leave the nozzle at the outlet. However, the aspect ratio $a$ is important for the locations where the active rods leave the nozzle at the inlet and the outlet, see Fig.~\ref{fig:AK1d}. For short rods $(a=2)$ and small inverse Stokes numbers $(\sigma \leq 1)$ the distribution of active particles shows just a single peak located at the center. This peak broadens if the inverse Stokes number increases, which is in perfect agreement with the results obtained by the first approach, cf. Fig.~\ref{fig:dependence-on-sigma}. It is more likely for short rods than for long ones to be expelled towards the center due to rotational diffusion. Hence the distribution of particles at the outlet for long rods $(a=10)$ shows additional peaks close to the walls. These peaks become smaller if the inverse Stokes number increases.
The distribution of particles leaving the nozzle at the inlet is similar to that of our first approach. While the distribution is almost flat for small inverse Stokes numbers, increasing this number makes it impossible for particles to leave the nozzle close to the center of the inlet. Similar to the outlet, the wall accumulation at the inlet is more pronounced for longer rods. \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{AK1d.jpg} \caption{ Comparison of the spatial distribution of active particles at (top row) the outlet and (bottom row) the inlet of the nozzle for given inverse Stokes numbers $\sigma$ and aspect ratios $a$, an outlet width $w_{\text{out}}=50\lambda$ and an inlet width $w_{\text{in}}=100\lambda$.} \label{fig:AK1d} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{AK2d.jpg} \caption{Outlet distribution histograms for $(y,\varphi)|_{\text{out}}$ computed for given \textcolor{black}{inverse Stokes numbers} $\sigma$ and a nozzle with an outlet width of $w_{\text{out}}=50\lambda$ for active rods with an aspect ratio of (top row) $a=2$ and (bottom row) $a=10$.} \label{fig:AK2d} \end{figure} Comparing the orientations of the particles at the outlet reveals the influence of the actual length of the rods, see Fig.~\ref{fig:AK2d}. As seen before, for short rods, $a=2$, and small inverse Stokes numbers $\sigma$ there is no wall accumulation. Hence most particles leave the nozzle close to the center and are oriented in the direction of the outlet. This profile smears out if the inverse Stokes number is increased to $\sigma = 1$. For larger inverse Stokes numbers the figures are qualitatively similar to the one obtained by the first approach, cf. Fig.~\ref{fig:y_phi_diagram}(c). Particles in the bottom half of the nozzle tend to point upwards and particles in the top half tend to point downwards. The same tendency is seen for long rods, $a=10$, and small inverse Stokes numbers; however, for long active rods this is because they slide along the walls. The bright spots close to the walls for long rods and large inverse Stokes numbers indicate that particles close to the walls are flushed through the outlet by the large background flow even if they are oriented upstream. \textcolor{black}{In addition, there are blurred peaks away from the walls for large inverse Stokes numbers $\sigma$. The corresponding particles crossed the outlet with mostly upstream orientations. This is similar to Fig.~\ref{fig:y_phi_diagram}(c), where particles exhibiting in-bulk rheotactic characteristics were observed at the outlet of the nozzle.} \textcolor{black}{By comparing the results for individual active rods, see again Fig.~\ref{fig:AK2d}, with those for interacting active rods at a finite packing fraction $\rho = 0.1$, see Fig.~\ref{fig:AK2dint}, we find that wall accumulation becomes more pronounced. Mutual collisions of the rods lead to a broader distribution of particles. For long rods, $a=10$, the peaks at $\varphi \approx 0$ and $\varphi \approx \pm \pi$ remain close to the walls and the blurred peaks at the center vanish. } \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{AK2dint.jpg} \caption{Outlet distribution histograms for $(y,\varphi)|_{\text{out}}$ computed for given \textcolor{black}{inverse Stokes numbers} $\sigma$ and a nozzle with an outlet width $w_{\text{out}}=50\lambda$ for active rods with an aspect ratio of (top row) $a=2$ and (bottom row) $a=10$ for a packing fraction $\rho = 0.1$.
} \label{fig:AK2dint} \end{figure} \subsection{Optimization of focusing} \label{sec:optimization} Here we study the properties of the active particles in more detail and provide insight into which nozzle geometry, background flow and swimmer size should be used in order to optimize the focusing at the outlet of the nozzle. For this purpose we study three distinct quantities: the averaged dwell time $\langle T\rangle$, i.e., the time it takes for an active particle to reach the outlet; the mean alignment of the particles, measured by $\langle \cos \varphi_{\text{out}}\rangle$; and the mean deviation from the center $y=0$ at the outlet, $\langle |y_{\text{out}}|\rangle$. As depicted in Fig.~\ref{fig:share-of-particles}, for increasing inverse Stokes number the probability for active particles to reach the outlet increases. However, they are spread all over the outlet. This is quantified by $\langle |y_{\text{out}}|\rangle$; small values of $\langle |y_{\text{out}}|\rangle$ correspond to better focusing. If particles leave the nozzle with no preferred orientation, their mean orientation vanishes, $\langle \cos \varphi_{\text{out}}\rangle = 0$; if they are oriented upstream we obtain $\langle \cos \varphi_{\text{out}}\rangle = -1$; and finally $\langle \cos \varphi_{\text{out}}\rangle = 1$ if the particles point in the direction of the outlet. Obviously, in an experimental realization a fast focusing process, and hence small dwell times $T$, would be preferable. The numerical results obtained by the first modeling approach are depicted in Fig.~\ref{fig:optimization}. While the dwell time hardly depends on the size ratio $k$ of the nozzle, the strength of the background flow has a huge impact on it: large inverse Stokes numbers $\sigma$ lead to a faster passage of the active particles through the nozzle, see Fig.~\ref{fig:optimization}(a). The alignment of the active particles, $\langle \cos \varphi_{\text{out}}\rangle$, becomes better if the nozzle ratio $k$ is large and the flow is slow, see Fig.~\ref{fig:optimization}(b). The averaged deviation from the centerline $\langle |y_{\text{out}}|\rangle$ increases with increasing nozzle ratio $k$, since the width of the outlet becomes larger. As could already be seen in Fig.~\ref{fig:dependence-on-sigma}, the averaged deviation from the centerline is non-monotonic as a function of the inverse Stokes number and, for all nozzle ratios, shows the smallest distance from the centerline if the strength of the flow is comparable to the self-propulsion velocity of the swimmers, $\sigma=1$. \begin{figure}[ht] \centering \includegraphics[width=1.0\textwidth]{optimization.jpg} \caption{(a) Dwell time $\langle T\rangle$; (b) mean alignment at the outlet, $\langle \cos \varphi_{\text{out}}\rangle$; (c) mean deviation from the center $y=0$ at the outlet, $\langle |y_{\text{out}}|\rangle$.
} \label{fig:optimization} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{AKopti.jpg} \caption{(a,d) Dwell time $\langle T\rangle$, (b,e) mean alignment at the outlet, $\langle \cos \varphi_{\text{out}}\rangle$, and (c,f) mean deviation from the center $y=0$ at the outlet, $\langle |y_{\text{out}}|\rangle$, for (top row) a fixed outlet width of $w_{\text{out}}=50\lambda$ and given aspect ratios $a$ of the swimmers and (bottom row) a fixed aspect ratio $a=2$ and varied nozzle ratio $k$, whereby the width of the outlet changes.} \label{fig:AKopti} \end{figure} Let us now study how these three quantities depend on the aspect ratio of the swimmer. To this end, we use the second modeling approach. We consider all three parameters as functions of the \textcolor{black}{inverse Stokes number} $\sigma$. Longer rods have a shorter dwell time, so that they reach the outlet faster, see Fig.~\ref{fig:AKopti}(a). Increasing the flow velocity obviously leads to a decreasing dwell time. The same holds for the mean alignment: it decreases for increasing inverse Stokes number, see Fig.~\ref{fig:AKopti}(b). Moreover, for small inverse Stokes numbers, $\sigma \leq 2$, the mean alignment is better for long rods. For large inverse Stokes numbers, long rods, $a=10$, are washed out with almost random orientation, whereas short rods, $a=2$, are slightly aligned with the flow. Short rods are focused better for small inverse Stokes numbers, $\sigma \leq 2$, see Fig.~\ref{fig:AKopti}(c), due to the wall alignment and wall accumulation of longer rods. For larger inverse Stokes numbers it is the other way around: long rods are better focused. Comparing various nozzle ratios $k$ at a fixed swimmers' aspect ratio $a$, we find that smaller ratios $k$ lead to smaller dwell times [Fig.~\ref{fig:AKopti}(d)] and better alignment [Fig.~\ref{fig:AKopti}(e)]. For narrow outlets (small $k$) the active particles leave the outlet closer to the center, see Fig.~\ref{fig:AKopti}(f). \textcolor{black}{ \section{Discussion} } \textcolor{black}{We discuss the stability of particle trajectories around the centerline $y=0$ in the presence of a background flow and confining walls that converge with a non-zero slope $\alpha$. This stability is in contrast to a channel with parallel walls, where an active particle swims away from the centerline provided that its orientation angle $\varphi$ is different from $n\pi$, $n=0,\pm1,\pm2,\ldots$.} Indeed, in the case of a straight channel, $\alpha = 0$, the background flow is defined as $u_x=u_0 (H^2-y^2)$, $u_y=0$ (Poiseuille flow; $u_0$ is the strength of the flow, $2H$ is the distance between the walls). Then the system \eqref{orig-location}-\eqref{orig-orientation-angle} reduces to \begin{eqnarray} \dot{\varphi} &=& u_0 y (1-\cos 2\varphi) \label{varphi_poiseuille}\\ \dot{y} &=& v_{0}\sin \varphi. \label{y_poiseuille} \end{eqnarray} Here we omit the equation for $x(t)$ due to the invariance of the infinite channel with respect to $x$ and neglect orientation fluctuations, that is, $D_r=0$. The phase portrait for this system is depicted in Fig.~\ref{fig:stability}(a). The dashed vertical lines $\varphi=n\pi$, $n=0,\pm1,\pm2,\ldots$ consist of stationary solutions: if an active particle is initially oriented parallel to the walls, it keeps swimming parallel to them. If initially $\varphi$ is different from $n\pi$, then the active particle swims away from the centerline, $y(t)\to\pm\infty$ as $t \to \infty$.
When the walls are converging, $\alpha > 0$, the $y$-component of the background flow is non-zero and directed towards the centerline. For the sake of simplicity we take $u_y=-\alpha y$, $\alpha>0$, and $u_x$ as in the Poiseuille flow, $u_x=u_0 (H^2-y^2)$. In this case, the system \eqref{orig-location}-\eqref{orig-orientation-angle} reduces to \begin{eqnarray} \dot{\varphi} &=& -(\alpha/2)\sin 2\varphi + u_0 y (1-\cos 2\varphi) \label{varphi_convergent_simple}\\ \dot{y} &=& -\alpha y+ v_{0}\sin \varphi. \label{y_convergent_simple} \end{eqnarray} The corresponding phase portrait for this system is depicted in Fig.~\ref{fig:stability}(b). Orientations $\varphi=n\pi$ represent stationary solutions only if $y=0$. In contrast to the Poiseuille flow in a straight channel, see Eqs. (\ref{varphi_poiseuille}) and (\ref{y_poiseuille}), these stationary solutions $(\varphi=\pi n, y=0)$ are asymptotically stable with a decay rate $\alpha$ (recall that $\alpha$ is the slope of the walls). In addition to these stable stationary points there are pairs of unstable (saddle) points with non-zero $y$ (provided that $v_{0}>0$). At these saddle points, the distance from the centerline $|y|$ does not change: the particle is oriented away from the centerline, so the propulsion force moves it away from the centerline, and this force is balanced by the convergent component of the background flow, $u_y$, which moves the particle toward the centerline. The orientation angle $\varphi$ does not change, since the torque from the Poiseuille component of the background flow, $u_x$, is balanced by the torque from the convergent component, $u_y$. \begin{figure}[ht] \begin{center} \includegraphics[width=\textwidth]{phase_portraits_new.jpg} \caption{\footnotesize \textcolor{black}{Phase portraits $(\varphi,y)$ for $v_0=0.2$, $H=1.0$ and $u_0=0.6$. (a) System \eqref{varphi_poiseuille}-\eqref{y_poiseuille}, describing Poiseuille flow in a straight channel; the dashed lines consist of stationary points. (b) System \eqref{varphi_convergent_simple}-\eqref{y_convergent_simple}, describing a simplified convergent flow with $\alpha=0.25$; stationary points: stable $(\pi n,0)$ (in red) and pairs of saddles with non-zero $y$ (in blue). Trajectories near the centerline converge to a stationary solution on the centerline. (c) System \eqref{orig-location}-\eqref{orig-orientation-angle} with the convergent flow ${\bf u}_{\text{BG}}=(u_x,u_y)$ used in Section~\ref{sec:focusing} with $x=-H/\alpha=-4.0$.}} \label{fig:stability} \end{center} \end{figure} We also draw the phase portrait for the converging flow ${\bf u}_{\text{BG}}=(u_x,u_y)$ introduced in Section~\ref{sec:model}, Fig.~\ref{fig:stability}(c). One can compare the phase portraits in Fig.~\ref{fig:stability}(b) and Fig.~\ref{fig:stability}(c) around the stationary point $(\varphi=0, y=0)$ to see that the qualitative picture is the same: this stationary point is stable and it neighbors two saddle points. The asymptotic stability of $(\varphi=0, y=0)$ means that if a particle is close to the centerline and its orientation angle is close to $0$ (the particle is oriented towards the outlet), it will keep swimming along the centerline pointing toward the outlet, whereas in Poiseuille flow the particle would swim away. The asymptotic stability of $(\varphi=\pm \pi, y=0)$ is evidence that in the converging flow rheotaxis occurs not only at the walls but also in the bulk, specifically at the centerline.
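The reduced system \eqref{varphi_convergent_simple}-\eqref{y_convergent_simple} is simple enough to be explored directly. The following minimal sketch (our illustration only; the parameter values mirror those of Fig.~\ref{fig:stability}) integrates the deterministic dynamics and demonstrates the attraction towards the stable stationary point at the centerline:
\begin{verbatim}
import numpy as np

# Parameters as in the phase portraits: v0 = 0.2, u0 = 0.6, alpha = 0.25.
v0, u0, alpha = 0.2, 0.6, 0.25

def rhs(phi, y):
    # Right-hand side of the simplified convergent-flow system.
    dphi = -(alpha / 2) * np.sin(2 * phi) + u0 * y * (1 - np.cos(2 * phi))
    dy = -alpha * y + v0 * np.sin(phi)
    return dphi, dy

# Explicit Euler integration of a trajectory started near the centerline.
phi, y, dt = 0.5, 0.1, 1e-3
for _ in range(100_000):
    dphi, dy = rhs(phi, y)
    phi, y = phi + dt * dphi, y + dt * dy
print(phi, y)  # approaches the stable stationary point (0, 0)

# The saddle points satisfy y = (v0/alpha)*sin(phi) together with dphi = 0;
# setting alpha = 0 instead reproduces the escape from the centerline
# observed for the straight channel.
\end{verbatim}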
Another consequence of this stability is the reduction of the effective rotational diffusion of an active particle in the region around the centerline, that is, the mean square angular displacement $\langle\Delta \varphi^2\rangle$ is bounded in time due to the presence of a restoring force coming from the converging component of the background flow (cf. diffusion quenching for Janus particles in \cite{DasGarCam2015}). \textcolor{black}{Finally, we note that the nozzle has a finite length $L$ and thus the conclusions of the stability analysis are valid if the stability relaxation time, $1/\alpha$, does not exceed the average dwell time $\langle T \rangle$. We introduce a lower bound $\tilde{T}$ for the dwell time $\langle T\rangle$ as the dwell time of an active particle swimming along the centerline oriented forward, $\varphi=0$:} \textcolor{black}{\begin{equation*}\tilde{T}=Lk/(\sigma v_0 (1-k))\ln |1+\sigma (1-k)/(k(\sigma+1))|.\end{equation*} } \textcolor{black}{Our numerical simulations show that $\tilde{T}$ underestimates the average dwell time by a factor larger than two. Using this lower bound, we obtain the following sufficient condition for stability: $\dfrac{k w_{\text{in}}}{\sigma v_0}\ln\left|1+\dfrac{\sigma(1-k)}{k(\sigma+1)}\right|\geq 1$. } \medskip \medskip \section{Conclusion} In this work we study a dilute suspension of active rods in a viscous fluid extruded through a trapezoid nozzle. Using numerical simulations, we examined the probability that a particle leaves the nozzle through the outlet, which is the result of two counteracting phenomena. On the one hand, swimming downstream together with being focused by the converging flow increases the probability that an active rod leaves the nozzle at the outlet. On the other hand, rheotaxis results in a tendency of active rods to swim upstream. Theoretical approaches introduced in this paper can be used to design experimental setups for the extrusion of active suspensions through a nozzle. The optimal focusing is the result of a compromise. While for large flow rates it is very likely for active rods to leave the nozzle through the outlet very fast, their orientation is rather random and they pass through the outlet close to the walls. The particles are much better aligned with the flow for small flow rates and focused closer to the centerline of the nozzle; however, the dwell time of the particles then becomes quite large. Based on our findings, the focusing is optimal if the velocity of the background flow and the self-propulsion velocity of the active rods are comparable. To reduce wall accumulation, the rods should have a small aspect ratio. \textcolor{black}{We find that rheotaxis in the bulk is possible for simple rigid rodlike active particles.} We also established analytically the local stability of active particle trajectories in the vicinity of the centerline. This stability leads to a decrease of the effective rotational diffusion of the active particles in this region \textcolor{black}{as well as to the emergence of rheotaxis away from walls.} Our findings can be experimentally verified using biological or artificial swimmers in a converging flow. \section*{Acknowledgements} The work was supported by NSF DMREF grant DMS-1628411. A.K. gratefully acknowledges financial support through a Postdoctoral Research Fellowship (KA 4255/1-2) from the Deutsche Forschungsgemeinschaft (DFG). \section*{Author contributions statement} Simulations were performed by M.P. and A.K., the research was conceived by L.B. and I.S.A.,
and all authors wrote the manuscript.
\section{Introduction} \label{Intro} Single image super resolution (SR) aims to reconstruct a high-resolution (HR) image from its low-resolution (LR) counterpart \cite{ref1}. It has received much attention due to its value in many applications, such as surveillance imaging \cite{ref3} and thumbnail image enlargement \cite{ref4}. In almost all practical applications, limited by storage capacity and transmission bandwidth, images are not only down-sampled but also compressed, especially on social media. If the compression is lossy, images are inevitably contaminated by annoying artefacts, such as blocking, ringing, fake edges, etc. Enhancing the resolution of such images will exacerbate the artefacts. Thus, in comparison to clean images, it is more challenging to super-resolve compressed images. Despite its practical application value, compressed image super resolution (CISR) is not well studied in the literature, and many problems remain unsolved. This work focuses on the single image SR (SISR) problem for compressed images. A traditional SISR model trained with compressed data will have difficulty in producing high-quality super-resolved images (SRIs), since the resolution enhancement process will inevitably amplify high-frequency artefacts \cite{ref31}. In previous works, several specific models have been designed for CISR. They fall into two categories: joint models \cite{ref32, ref33} and series (or cascaded) models \cite{ref34, ref35}. Specifically, the joint model is a parallel architecture, where the input image or part of it streams to two independent modules simultaneously, and the results from the two modules are then fused to obtain the output, as shown in Fig. \ref{fig:res1}(a). However, the two independent modules cannot benefit from each other, limiting the performance of SISR for compressed images. In the cascaded model, the output of one module streams to the other module. Generally, two modules are involved: one for compression artefacts removal and the other for SR. If the SR module is applied first, the artefacts would be amplified, and it is much more difficult to suppress the amplified artefacts than the original ones. Therefore, in existing methods \cite{ref31, ref34, ref35}, the LR input image is first restored by reducing compression artefacts and then rescaled to a higher resolution, as shown in Fig. \ref{fig:res1}(b). However, some image details are inevitably lost during the artefacts removal process, and those lost details are difficult for subsequent modules to retrieve. \begin{figure}[!htb] \begin{minipage}{0.4\linewidth} \centerline{\includegraphics[scale=0.45]{fig1_a}} \centerline{(a)} \end{minipage} \hfill \begin{minipage}{.6\linewidth} \centerline{\includegraphics[scale=0.45]{fig1_b}} \centerline{(b)} \end{minipage} \vfill \begin{minipage}{1\linewidth} \centerline{\includegraphics[scale=0.45]{fig1_c}} \centerline{(c)} \end{minipage} \caption{Illustrations of different frameworks for CISR. (a) Parallel framework. (b) Series or cascaded framework. (c) Our framework.} \label{fig:res1} \end{figure} Different from previous works, our framework is illustrated in Fig. \ref{fig:res1}(c). Our system is designed based on a mathematical inference for estimating a clean LR image and a clean HR image from a down-sampled and compressed observation. A formal derivation of our model design will be presented in Section \ref{overall}.
A unique feature of our method is that it exploits the parallel and series connections between the ARM and the REM, together with recursive optimization, to reduce the model's dependency on specific types of degradation, thus making it possible to train a single model to super-resolve images compressed by different methods to different qualities. This is important, as in many real-world applications such as photo sharing on social media platforms, images would have always undergone scaling and compression by unknown algorithms. As shown in Fig. \ref{fig:res1}(c), our method consists of two modules, the ARM (Module I) and the REM (Module II). However, these two modules are not simply parallel or cascaded. On one hand, the compressed LR input streams to the two modules in parallel. On the other hand, the output of one module is fed back to the other module, resulting in two series flows. Here, we regard the output of one module as the auxiliary input of the other one. Essentially, both parallel and series flows are involved in our framework. Its advantages are threefold: First, the original information in the input is fully available to both modules. Second, the output of Module I facilitates Module II by providing it with a relatively clean version of the LR image. Third, the output of Module II supplies Module I with an image containing high-frequency details, which are frequently lost during the artefacts removal process. In this work, both modules are implemented by deep neural networks, and both training and testing are carried out by a recursive process sometimes referred to as unfolding \cite{ref36}. In addition, at the input end of each module, we include a modified non-local operator to capture long-range dependencies in images. This modified non-local operator produces a relatively clean but blurry image, named the non-locally filtered image. In an SISR deep network, a long skip connection has been demonstrated to be highly effective in forcing the network to learn residuals, \textit{i.e.}, high-frequency image details \cite{ref15}. It not only allows low-frequency information to take a shortcut, but also alleviates the problem of vanishing or exploding gradients \cite{ref37}. Hence, a long skip connection is also adopted in each module of our framework. Since three images, \textit{i.e.}, the input, the auxiliary input, and the non-locally filtered image, are available, we propose to adaptively combine them for the skip connection by learning their respective weights during training. That is, all three images are connected to the output with learnable contributions. To demonstrate the effectiveness of the proposed framework, we collect a photograph dataset and a \textit{WeChat} avatar image dataset. The photograph dataset is used to generate compressed images by different compression methods with various quality factors. The \textit{WeChat} avatar image dataset contains LR images that have undergone compression and scaling by \textit{WeChat}'s internal algorithms (unknown to users). More details about these two datasets are provided in Section \ref{dataset}. The main contributions of this work are as follows: \begin{itemize} \item We have developed a new framework for compressed image super resolution (CISR). Our method consists of an artefacts removal module (ARM) and a resolution enhancement module (REM), which are connected in parallel and in series.
A unique feature of our CISR system is that it exploits the parallel and series connections between the ARM and the REM, together with recursive optimization, to reduce the model's dependency on specific types of degradation, thus making it possible to train a single model to super-resolve images compressed by different methods to different qualities. \item We present two datasets which would benefit research in super-resolving compressed images. One dataset contains photography images compressed by the two most widely-used compression methods, JPEG and WebP, and the other dataset contains real-world images that have undergone compression and scaling by unknown algorithms from one of the world's largest social media platforms, \textit{WeChat}. \item We present extensive experimental results to demonstrate that our new method outperforms the state of the art based on quantitative measures and visual comparison. \end{itemize} \section{Related Work} \label{relatedwork} \subsection{Super Resolution} \label{SISR} Early models in SISR are example-based \cite{ref5}, where a search for the nearest neighbor is performed with compatibility constraints. Subsequent well-known methods include the models based on locally linear embedding \cite{ref7}, sparse coding \cite{ref8}, and neighborhood regression \cite{ref9}. All of these techniques, called traditional methods, are limited by their shallow architectures. One can refer to \cite{ref1} for a detailed review of traditional SISR methods. After the first successful attempts to adopt deep networks in SISR tasks \cite{ref13}, many powerful techniques emerged to make deep networks more effective in the SISR task. For example, residual learning is introduced to achieve very deep networks. Successful applications of residual learning in SR include global residual learning in \cite{ref15} and local residual learning in the enhanced deep super-resolution network (EDSR) \cite{ref16}. Attention mechanisms are also considered, in the residual channel attention network (RCAN) \cite{ref19} and the second-order attention network (SAN) \cite{ref20}. In the feedback network \cite{ref24}, deep networks are unfolded to make their training feasible. The success of the unfolding technique in deep SISR models motivates us to employ it to train our model in Fig. \ref{fig:res1}(c), where the auxiliary inputs can be treated as feedback. Moreover, non-local means has been successfully adopted in both traditional and deep SISR methods. A non-local total variation prior \cite{ref25} is applied in the traditional SISR framework. In deep SISR models \cite{ref26}, non-local means is known as spatial attention, which is essentially a non-local convolution process. One can refer to \cite{ref11} for comprehensive surveys of deep SISR models. Despite the great success of the above methods on clean images, they fail to super-resolve images with multiple degradations. To reduce the simulated-to-real gap \cite{ref28}, some SISR methods have been developed for images with various degradations. To super-resolve noisy LR images, the method in \cite{ref29} combines a noisy SR image and a de-noised SR image. In \cite{ref39}, the noise level of LR images is estimated to determine the value of the regularization parameter. Recently, the noise-robust iterative back-projection (NRIBP) was presented in \cite{ref40} for noisy image SR. In addition to noise, some models further consider the impact of various blur kernels in SISR.
In \cite{ref41}, the blur kernel and noise level are taken as additional inputs, making it possible for deep networks to handle blurring and noise. In \cite{ref42}, an auxiliary variable is introduced to separate the problem of blurred image SR into two iterative sub-problems, \textit{i.e.}, an image restoration problem and an SR problem. Its extension, named the unfolding super-resolution network (USRNet) \cite{ref43}, introduces a trainable prior module. With regard to unknown blur kernels, a kernel prediction method \cite{ref30} and generative adversarial networks \cite{ref45} have been used in SR frameworks. However, most of the methods mentioned above, \textit{e.g.}, \cite{ref41, ref42, ref43, ref44}, require the degradation parameters to be available. In this literature, Gaussian noise and blurring are the degradations most often considered. However, compression-induced artefacts are totally different from Gaussian noise and blurring. First, Gaussian noise is independent of image content, and the noise distribution generally remains the same over the whole image. In contrast, compression artefacts are highly related to image content and spatially variant. Second, blurring kernels can be spatially variant but produce no high-frequency artefacts. Different from blurring, compression can give rise to high-frequency artefacts, \textit{e.g.}, blocking. Therefore, the models designed for noisy and blurred images are not appropriate for compressed images, as we will demonstrate in Section \ref{compare}. In comparison with noisy and blurred image SR, studies on the SISR problem for compressed images are relatively rare. The method in \cite{ref31} performs an iterative regularization and SR procedure. In \cite{ref32}, image patches are classified into two sets, blocking and non-blocking, to super-resolve them separately. Regarding compression artefacts as noise, the method in \cite{ref33} adopts a strategy similar to that in \cite{ref29}. However, the methods in \cite{ref31}–\cite{ref33} ignore the information exchange between different modules. The model of iterative cascaded SR and de-blocking (ICSD) \cite{ref34} is the first attempt to use the information exchange between de-blocking and SR. In \cite{ref35}, CISR is implemented by deep convolutional neural networks (CISRDCNN), which consist of three cascaded modules to obtain SR images. However, the details lost in earlier modules can hardly be retrieved by the subsequent ones. Moreover, the information exchange among different modules is not fully exploited. \subsection{Compression Artefacts Removal} \label{car} Traditional methods of compression artefacts removal can operate in either the spatial domain \cite{ref46, ref47} or the transform domain \cite{ref48, ref49}. The method of shape-adaptive DCT (SA-DCT) \cite{ref49} defines the shape of the transform support in a point-wise adaptive way to produce clean edges. Similar to the development of SISR, deep learning has also achieved success in the field of compression artefacts removal. The pioneering work in \cite{ref50} first introduced convolutional networks for compression artefacts removal. Subsequently, more deep models were presented, such as residual learning in the de-noising convolutional neural network (DnCNN) \cite{ref53} and the deep convolutional sparse coding (DCSC) model \cite{ref55}. One can refer to \cite{ref56} for a detailed review of studies on compression artefacts removal.
Interestingly, many techniques and models have been successfully applied to both the SISR and the compression artefacts removal problems. For example, the trainable nonlinear reaction diffusion (TNRD) model can be trained with different reaction terms to solve the two problems \cite{ref57}. Likewise, well-known techniques in SISR, such as sparse coding \cite{ref55, ref58}, non-local means \cite{ref59} and attention networks \cite{ref60}, are also employed to reduce compression artefacts. This phenomenon can be attributed to the common properties of the reconstructed images and the learning models. \section{Proposed SISR Model for Compressed Images} \subsection{Overall framework} \label{overall} Before discussing the problem of super-resolving compressed images, we review the SR of clean images. Let us consider a clean LR image \textbf{y} and its corresponding HR counterpart \textbf{x}. Their relation can be formulated as \begin{equation} \mathbf{y} = \mathbf{Dx}, \end{equation} where \textbf{D} is a sub-sampling operator. SR seeks to reverse the sub-sampling procedure in Eq. (1) and find a mapping $\mathcal{F}: \mathbf{y} \rightarrow \mathbf{x}$. Recently, many researchers model this mapping as a deep convolutional neural network (DCNN), such as \cite{ref13}, \cite{ref16}, \cite{ref18}, \cite{ref80}, etc. Therefore, these models can be represented as \begin{equation} \hat{\mathbf{x}} = \mathcal{F}(\mathbf{y};\Theta_{\mathcal{F}}), \end{equation} where $\Theta_{\mathcal{F}}$ is the parameter set of the DCNN. They learn directly from a training set of degraded and ground-truth image pairs by end-to-end training. Back to the problem of super-resolving compressed images, the biggest difference from the traditional SR problem is that compression is involved. The compression procedure can be formulated as \begin{equation} \mathbf{z} = \mathbf{T}^{-1}\mathbf{QTy}, \end{equation} where \textbf{z} denotes a compressed LR image, \textbf{T} is a linear transform used in compression, $ \mathbf{T}^{-1} $ is the corresponding inverse transform, and \textbf{Q} is a quantization operator. In this work, we do not assume any specific transform, although the discrete cosine transform (DCT) is widely used, \textit{e.g.}, in the JPEG standard \cite{ref61}. The relation between \textbf{z} and \textbf{x} can be given by \begin{equation} \mathbf{z} = \mathbf{Cx}, \end{equation} where $\mathbf{C} = \mathbf{T}^{-1}\mathbf{QTD}$. From Eqs. (1) and (4), the clean HR image \textbf{x} can be reconstructed from either its clean LR version \textbf{y} or its compressed LR version \textbf{z}. Therefore, a mapping function that uses both \textbf{y} and \textbf{z} to restore \textbf{x} is desirable, \textit{i.e.}, we would like a mapping $\mathcal{R}: (\mathbf{y}, \mathbf{z}) \rightarrow \mathbf{x}$. Assuming both \textbf{y} and \textbf{z} are available, similar to the SR of clean images in Eq. (2), we can model the mapping function $\mathcal{R}$ as \begin{equation} \hat{\mathbf{x}} = \mathcal{R}(\mathbf{y}, \mathbf{z};\Theta_{\mathcal{R}}), \end{equation} where $\Theta_{\mathcal{R}}$ represents the parameter set of $\mathcal{R}$. Notice that Eq. (5) has two inputs, \textit{i.e.}, \textbf{y} and \textbf{z}, but only \textbf{z} is available in our problem. Therefore, to use Eq. (5) to estimate \textbf{x}, it is necessary to recover \textbf{y}. In Eq. (1), the clean LR image \textbf{y} can be obtained by the sub-sampling procedure from the HR image \textbf{x}, while in Eq.
(3), \textbf{y} can be generated by the compressed image restoration procedure from the compressed LR image \textbf{z}. Hence, a mapping that takes both \textbf{x} and \textbf{z} as input to estimate \textbf{y} is desirable, \textit{i.e.}, we require $\mathcal{P}: (\mathbf{x}, \mathbf{z}) \rightarrow \mathbf{y}$. Similarly, assuming both \textbf{x} and \textbf{z} are available, we can model the mapping function $\mathcal{P}$ as \begin{equation} \hat{\mathbf{y}} = \mathcal{P}(\mathbf{x}, \mathbf{z};\Theta_{\mathcal{P}}), \end{equation} where $\Theta_{\mathcal{P}}$ represents the parameter set of $\mathcal{P}$. Based on Eqs. (5) and (6), we propose a novel framework that integrates both the parallel model and the series model, as shown in Fig. \ref{fig:res1}(c). The artefacts removal module (ARM), \textit{i.e.}, Module I, is derived from Eq. (6), while the resolution enhancement module (REM), \textit{i.e.}, Module II, is associated with Eq. (5). Specifically, of the two inputs in Eq. (6), one is the compressed LR input image \textbf{z}, while the other (which is called the auxiliary input) is the clean HR image \textbf{x}. Note that \textbf{x} is available in the training stage as the training target but not in the testing stage; therefore, we have to use the output of Module II to replace \textbf{x} in Eq. (6). Similarly, of the two inputs in Eq. (5), one is the compressed LR input image \textbf{z}, while the other is the clean LR image \textbf{y}. Again, \textbf{y} is available in the training stage but not in the testing stage, where it therefore has to come from the output of Module I instead, \textit{i.e.}, an estimated clean version of \textbf{z} replaces \textbf{y} in Eq. (5). To solve Eqs. (5) and (6), we utilize the strategy of recursive optimization. Since the estimation of \textbf{y} is easier than that of \textbf{x}, we perform the estimation of \textbf{y} first. That is, we generate the output of the ARM using $\hat{\mathbf{x}}$, then obtain the output of the REM using $\hat{\mathbf{y}}$, recursively. Hence, the whole recursive procedure can be written as \begin{equation} \begin{aligned} &\hat{\mathbf{y}}_{j} = \mathcal{P}(\hat{\mathbf{x}}_{j-1}, \mathbf{z}; \Theta_{\mathcal{P}}), \\ &\hat{\mathbf{x}}_{j} = \mathcal{R}(\hat{\mathbf{y}}_{j}, \mathbf{z}; \Theta_{\mathcal{R}}), \\ \end{aligned} \end{equation} where $j \leq J $ indicates the index of iteration and $J$ is a preset maximum iteration number. $\hat{\mathbf{y}}_{j}$ and $\hat{\mathbf{x}}_{j}$ are the estimates of \textbf{y} and \textbf{x} at the \textit{j}-th iteration, respectively. This strategy is also known as deep unfolding or unrolling in the field of deep learning \cite{ref36}. \begin{figure*}[!htb] \centerline{\includegraphics[scale=0.3]{fig2}} \caption{Overall framework of the proposed model.} \label{fig:res2} \end{figure*} The unfolding of our parallel and series integration framework is illustrated in Fig. \ref{fig:res2}. \textbf{From Eq. (7), one can see that there are only two mapping functions in the recursive optimization. That is, the parameters $\Theta_{\mathcal{P}}$ in the ARM are shared in each iteration, and the parameters $\Theta_{\mathcal{R}}$ in the REM are also shared across different iteration indexes.} Each dashed box in Fig. \ref{fig:res2} represents an iteration, and there are two outputs in each iteration, \textit{i.e.}, $\hat{\mathbf{y}}$ and $\hat{\mathbf{x}}$. The subscript $j$ in $\hat{\mathbf{x}}_{j}$ is the same as the $j$ in Eq. (7).
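For concreteness, the testing-stage recursion of Eq. (7) can be sketched as follows (a minimal illustration only, not the actual network code; \texttt{arm} and \texttt{rem} stand for trained instances of $\mathcal{P}$ and $\mathcal{R}$, and the function name is ours):
\begin{verbatim}
import torch.nn.functional as F

def unfold_inference(z, arm, rem, scale, J):
    # Recursive CISR inference following Eq. (7); the weights of arm
    # and rem are shared across all J iterations.
    x_hat = F.interpolate(z, scale_factor=scale, mode="bicubic",
                          align_corners=False)  # bicubic initialization
    for _ in range(J):
        y_hat = arm(x_hat, z)   # Module I: estimate the clean LR image
        x_hat = rem(y_hat, z)   # Module II: estimate the clean HR image
    return x_hat                # final super-resolved estimate
\end{verbatim}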
The two outputs $\hat{\mathbf{y}}_{j}$ and $\hat{\mathbf{x}}_{j}$ of each iteration are required in the loss function for model training, and we calculate the loss for every iteration. Given $N$ training samples $\{ \mathbf{z}^{(i)}, \mathbf{y}^{(i)}, \mathbf{x}^{(i)}\}_{i=1}^{N}$, the loss function $\mathcal{L}$ can be written as \begin{equation} \begin{aligned} \mathcal{L}(\Theta_{\mathcal{P}}, \Theta_{\mathcal{R}})=& \frac{1}{NJ}\sum_{i=1}^{N}\sum_{j=1}^{J}\rho_{j}({||\mathcal{P}(\hat{\mathbf{x}}_{j-1}^{(i)},\mathbf{z}^{(i)};\Theta_{\mathcal{P}})-\mathbf{y}^{(i)}||}_{1} \\ &+ \gamma {||\mathcal{R}(\hat{\mathbf{y}}_{j}^{(i)},\mathbf{z}^{(i)};\Theta_{\mathcal{R}})-\mathbf{x}^{(i)}||}_{1}), \\ \end{aligned} \end{equation} where $\rho_{j}$ controls the loss weight of each iteration, and $\gamma$ balances the impacts of the ARM and the REM. Following the curriculum learning strategy \cite{ref68}, we regard the training in the first few iterations as easy tasks by setting a smaller loss weight $\rho_{j}$ for smaller $j$. Using the unfolding technique and the loss defined in Eq. (8), an end-to-end training is performed to determine the parameters $\Theta_{\mathcal{P}}$ and $\Theta_{\mathcal{R}}$ in our model. After training the model, for a given LR compressed image, the proposed model can produce $J$ results for $\hat{\mathbf{x}}$. It is important to note that, unlike conventional memory-less feedforward neural network architectures, our system is recursive in both the training and testing stages. In the training stage, training samples have to go through $J$ iterations; each iteration produces two outputs, which are compared with the ground truths for training. Similarly, in the testing stage, an input image also has to go through $J$ iterations, each of which produces a version of the super-resolved image with increasing accuracy, and the version from the final iteration, which should contain the most details, is normally used as the final output. In order to set up the whole recursive optimization procedure, the initial estimate of \textbf{x}, \textit{i.e.}, $\hat{\mathbf{x}}_{0}$, is obtained by bicubically up-sampling \textbf{z}. Details of the training and testing stages are shown in pseudo code in Algorithm 1 and Algorithm 2. \begin{algorithm} \caption{Training stage} \label{alg:1} \begin{algorithmic}[1] \REQUIRE Distribution of HR images $p(X)$. \STATE Initialize the parameters of the ARM and the REM, $\Theta_{\mathcal{P}}$ and $\Theta_{\mathcal{R}}$, and set the maximum iteration number as $J$ and the sampling size of images as $N$. \REPEAT \STATE Sample a batch of images $\{\mathbf{x}^{(i)} \}_{i=1}^{N} \sim p(X)$. \STATE Generate clean LR images $\{\mathbf{y}^{(i)}\}_{i=1}^{N}$ and compressed LR images $\{\mathbf{z}^{(i)}\}_{i=1}^{N}$ from $\{\mathbf{x}^{(i)}\}_{i=1}^{N}$. \STATE Set $\{\hat{\mathbf{x}}_{0}^{(i)}\}_{i=1}^{N}$ as bicubically up-sampled $\{\mathbf{z}^{(i)}\}_{i=1}^{N}$. \FOR {$i=1$ to $N$} \FOR {$j=1$ to $J$} \STATE $\hat{\mathbf{y}}_{j}^{(i)} = \mathcal{P}(\hat{\mathbf{x}}_{j-1}^{(i)}, \mathbf{z}^{(i)}; \Theta_{\mathcal{P}})$, \STATE $\hat{\mathbf{x}}_{j}^{(i)} = \mathcal{R}(\hat{\mathbf{y}}_{j}^{(i)}, \mathbf{z}^{(i)}; \Theta_{\mathcal{R}})$, \ENDFOR \ENDFOR \STATE Compute the gradients $\nabla_{\Theta_{\mathcal{P}}}\mathcal{L}$ and $\nabla_{\Theta_{\mathcal{R}}}\mathcal{L}$ using $\mathcal{L}$ in Eq. (8). \STATE Update the parameters $\Theta_{\mathcal{P}}$ and $\Theta_{\mathcal{R}}$ using $\nabla_{\Theta_{\mathcal{P}}}\mathcal{L}$ and $\nabla_{\Theta_{\mathcal{R}}}\mathcal{L}$, respectively.
\UNTIL{convergence} \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Testing stage} \label{alg:2} \begin{algorithmic}[1] \REQUIRE Compressed LR image $\mathbf{z}$. \REQUIRE Parameters of trained model $\Theta_{\mathcal{P}}$, $\Theta_{\mathcal{R}}$ and the maximum iteration number $J$. \STATE Set $\hat{\mathbf{x}}_{0}$ as bicubically up-sampled \textbf{z}. \FOR {$j=1$ to $J$} \STATE $\hat{\mathbf{y}}_{j} = \mathcal{P}(\hat{\mathbf{x}}_{j-1}, \mathbf{z}; \Theta_{\mathcal{P}})$ \STATE $\hat{\mathbf{x}}_{j} = \mathcal{R}(\hat{\mathbf{y}}_{j}, \mathbf{z}; \Theta_{\mathcal{R}})$ \ENDFOR \STATE Take $\hat{\mathbf{x}}_{J}$ as the final estimation of the HR image. \end{algorithmic} \end{algorithm} Obviously, the estimates of both \textbf{x} and \textbf{y} will not be perfect. False patterns are very likely to appear and will be further amplified by the sequential processing. These false patterns would cause the results to diverge from the real image contents. From this perspective, it is essential to use the original compressed LR image \textbf{z} as an input of each module during the unfolding process. The input \textbf{z} helps reduce the impact of accumulated errors on the two modules by continuously providing each module with the original signal. As we will show in the experiments (Section \ref{diff_compress}), a distinctive advantage of our model is that a single trained model can handle input images compressed to different qualities better than either the series or the parallel architecture. It is not difficult to understand this property from the design of the model. From Eq. (5), we can see that the super-resolved output is produced from a clean input image \textbf{y}. Although \textbf{y} is not available in practice, we use a version of \textbf{y} restored from the compressed input \textbf{z}, \textit{i.e.}, $\hat{\mathbf{y}}$ in Eq. (6), as a substitute for the clean \textbf{y}. In this way, we reduce the impact of compression on the final super-resolved results. It is also worth noting that Eq. (6) takes both \textbf{x} and \textbf{z} as input (although in practice an estimated version of \textbf{x}, \textit{i.e.}, $\hat{\mathbf{x}}$ from Eq. (5), is used); this input provides the ARM with high-frequency details, which prevents $\hat{\mathbf{y}}$ from becoming excessively blurred and thus improves the quality of $\hat{\mathbf{y}}$. \subsection{Architectures of Modules I and II} \label{arch_module} In this sub-section, we detail the mappings $\mathcal{P}(\cdot)$ and $\mathcal{R}(\cdot)$, which are implemented via DCNNs. The mapping $\mathcal{P}(\cdot)$ in Eq. (7) essentially achieves artefacts removal, while the mapping $\mathcal{R}(\cdot)$ in Eq. (7) is used for the resolution enhancement. As mentioned in Section \ref{car}, the solutions to these two problems share many common techniques and models. Therefore, we employ similar architectures for both mappings, as illustrated in Fig. \ref{fig:res3}. Most recently-developed architectures and related techniques can be used to embody the ARM and the REM. In this work, we adopt several residual groups of RCAN \cite{ref19} as the backbone of these two modules. Moreover, in each module, we include a non-local operator, highlighted in cyan in Fig. \ref{fig:res3}. The non-local operator can benefit the mappings in Eq. (7) by capturing long-range dependencies over the whole image.
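Before detailing the modifications, the non-local average at the core of this operator can be sketched as follows. The sketch previews Eqs. (9) and (10) in Section \ref{nonlocal}, but works pixel-wise rather than patch-wise and omits the blocking-patch detector introduced later; the function and variable names are our own illustration.

\begin{verbatim}
import torch

def nonlocal_average(z, g, h):
    """Non-local average: each output position m is a weighted sum
    over all positions n of z, with Gaussian weights measured on the
    auxiliary image g and a pixel-wise bandwidth map h.
    z, g: (B, C, H, W); h: (B, 1, H, W).  The (HW)^2 similarity
    matrix restricts this sketch to small image sizes."""
    B, C, H, W = z.shape
    zf = z.flatten(2)                             # (B, C, HW)
    gf = g.flatten(2).transpose(1, 2)             # (B, HW, C)
    d2 = torch.cdist(gf, gf) ** 2                 # ||g(m) - g(n)||^2
    # Row-wise softmax realizes the normalization by S in Eq. (10).
    w = torch.softmax(-d2 / h.flatten(2).transpose(1, 2) ** 2, dim=-1)
    u = torch.bmm(w, zf.transpose(1, 2))          # weighted sum over n
    return u.transpose(1, 2).view(B, C, H, W)
\end{verbatim}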
Different from previous non-local operators that only operate on the input image, we make use of both the original input and the auxiliary input to weaken the influence of blocking artefacts. After the non-local operator, its output, the original input, and the auxiliary input are concatenated for subsequent processing. A long skip connection, marked in blue in Fig. \ref{fig:res3}, is used to pass the concatenated images to the output by a shortcut. In particular, an adaptive combination of the concatenated images is exploited to fuse the pass-by information from different sources for the long skip connection. \begin{figure*}[ht] \begin{minipage}{1\linewidth} \centerline{\includegraphics[scale=0.4]{fig2_a}} \centerline{(a)} \end{minipage} \begin{minipage}{1\linewidth} \centerline{\includegraphics[scale=0.4]{fig2_b}} \centerline{(b)} \end{minipage} \caption{Architectures of the two modules in our framework. (a) Module I: ARM. (b) Module II: REM.} \label{fig:res3} \end{figure*} It is worth noting that the architectures of the ARM and the REM are not exactly the same. The reason is that the resolutions of the auxiliary inputs and the outputs are different for the two modules. In Module II, the resolution of the auxiliary input $\hat{\mathbf{y}}$ is the same as that of the input \textbf{z}. In contrast, in Module I, the width and height of the auxiliary input $\hat{\mathbf{x}}$ are \textit{s} times larger than those of \textbf{z}, where \textit{s} is the upscale factor. A straightforward idea is to down-scale $\hat{\mathbf{x}}$ to the same resolution as \textbf{z}, just like the strategy in \cite{ref34}. However, many important image details would be lost from $\hat{\mathbf{x}}$ after down-scaling. Hence, to keep the full information of $\hat{\mathbf{x}}$, we rearrange it into $\mathit{s}^{2}$ copies by using the space-to-depth transformation in \cite{ref63}, which can be regarded as the inverse process of pixel shuffle. All the copies, denoted as $\hat{\mathbf{x}}_{<1>}, \hat{\mathbf{x}}_{<2>}, \ldots, \hat{\mathbf{x}}_{<\mathit{s}^{2}>}$, have similar image contents and the same resolution as the input \textbf{z}. Among those copies, there exist sub-pixel displacements whose values are multiples of $1/\mathit{s}$. According to the space-to-depth operator, the copy $\hat{\mathbf{x}}_{<\mathit{k}>}$ is well registered with \textbf{z}, where $\mathit{k}$ is the integer obtained by rounding off $(\mathit{s}^{2}+1)/2$. Thus, this copy, highlighted in red in Fig. \ref{fig:res3}(a), is fed to the non-local operator and the adaptive combination for the long skip connection. The remaining copies, marked in green in Fig. \ref{fig:res3}(a), are also concatenated with the pass-by information and then fed into the backbone. Another difference between the ARM and the REM lies in the output end. In the ARM, the output has the same resolution as the input \textbf{z}, while the output of the REM contains $\mathit{s}^{2}$ times more pixels than the input. Thus, after the backbone in the REM, we deploy an up-sampling layer, which consists of $\mathit{s}^{2}$ convolutional filters followed by a pixel shuffle operation. Correspondingly, a simple up-sampling operator, such as bicubic interpolation, is adopted in the long skip connection. In the following, we provide the details of the modified non-local operator and the adaptive combination, respectively. \subsubsection{Modified non-local operator} \label{nonlocal} The idea of non-local means is to utilize the self-similarity of images \cite{ref64, ref65}.
Similar patches or features from long-range positions are selected as candidates to recover local signals which may be lost due to artefacts or down-sampling. The non-local operator performed on the input \textbf{z} can be defined as \begin{equation} \mathbf{u}_{\mathit{a}}=\sum_{n} \mathbf{w}(m,n)\cdot\mathbf{z}(n), \end{equation} where $\mathbf{u}_{\mathit{a}}$ is the output of the non-local operator, \textit{a} is 1 for Module I and 2 for Module II, $m$ is the index of the local patch to be recovered, $n$ is the index of candidate patches over the whole image, and \textbf{w} is the similarity matrix of \textbf{z}. In Eq. (9), all the candidates are directly taken from \textbf{z} to prevent any processing or manipulation of the original signal. For a non-local operator, the measure of similarity is very important. When measuring the similarity in compressed images, we should be aware of the high similarity among the patches containing blocking artefacts. The horizontal or vertical signal patterns of blocking patches are highly similar to each other and may repeatedly appear in compressed images. Given a blocking patch, we wish to employ the patches that are similar in image content, instead of the blocking pattern. Therefore, we resort to the auxiliary input to measure the similarity matrix \textbf{w}. Specifically, the similarity between the \textit{m-th} patch and the \textit{n-th} one is calculated as \begin{equation} \mathbf{w}(m,n) =\frac{1}{S} \cdot \exp\Big(-\frac{||\mathbf{g}(m)-\mathbf{g}(n)||^{2}}{\mathbf{h}(m)^{2}}\Big) \cdot d(\mathbf{z}(n)), \end{equation} where $S$ is the normalization factor that makes each row of \textbf{w} sum to 1, \textbf{h} is an adaptive parameter map which will be explained later, and \textbf{g} denotes an image from the auxiliary input. In Module I, \textbf{g} is selected as $\hat{\mathbf{x}}_{<k>}$, and it is the auxiliary input $\hat{\mathbf{y}}$ in Module II. Thus, \textbf{g} has exactly the same size as the input \textbf{z} in both modules. In Eq. (10), we further include a binary function $d(\cdot)$ to detect the blocking edges in \textbf{z}. This function returns $0$ for blocking patches and $1$ otherwise. In this work, we adopt the method in \cite{ref66} to implement the detection function $d(\cdot)$. Both \textbf{g} and $d(\cdot)$ are essential for calculating the similarity matrix \textbf{w}. By using \textbf{g} instead of \textbf{z} to measure the patch similarity, we can pay attention to the candidate patches with similar contents rather than similar blocking patterns. By using the binary function $d(\cdot)$, we can discard the blocking patches of \textbf{z} in the calculation of Eq. (9). It means that some candidates with similar contents will be excluded if they are severely contaminated by blocking artefacts in \textbf{z}. In the similarity measure, the role of the parameter map \textbf{h} is to control the sparsity of the similarity matrix. Generally, a large value of the parameter results in a smooth output, while a small one may retain artefacts and noise in $\mathbf{u}_{a}$. It has been demonstrated that the selection of \textbf{h} has a great impact on the results of the non-local operator \cite{ref59}. Moreover, its selection should depend on the image content. For smooth regions, a large value of $\mathbf{h}$ is preferred. For textural regions, the reverse applies. Therefore, in this work, we employ a simple DCNN to adaptively estimate \textbf{h} from \textbf{g}.
This network only consists of two convolutional layers with a rectified linear unit (ReLU) layer between them. Since the parameter map \textbf{h} has the same size as \textbf{g}, we obtain pixel-wise control of the sparsity of the similarity matrix. The flowchart of our modified non-local operator is illustrated in Fig. \ref{fig:res4}(a). The modified non-local operators in the ARM and the REM utilize the same formulation and flowchart, but their learnable parameters are not shared during training. \begin{figure}[!htb] \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[scale=0.25]{fig3}} \centerline{(a)} \end{minipage} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[scale=0.3]{fig4}} \centerline{(b)} \end{minipage} \caption{(a) Flowchart of our non-local operator. (b) Flowchart of our adaptive combination for long skip connection.} \label{fig:res4} \end{figure} \subsubsection{Adaptive combination for long skip connection} \label{residual} In the architectures of the ARM and the REM, we include a long skip connection, although there may already exist multiple short or long skip connections in the backbone. Instead of delivering learned features, the purpose of this skip connection is to pass the signals from the input end to the output end. Such an input-to-output pass-by has been demonstrated to be effective and necessary in solving the SISR problem \cite{ref15}. In our architecture, after the non-local operator, three images, \textit{i.e.}, \textbf{z}, \textbf{g}, and $\mathbf{u}_{a}$, are available for the long skip connection. Their properties are different. The input \textbf{z} is the unprocessed signal, but it may suffer from severe artefacts. The image \textbf{g} is from the auxiliary input, which is assumed to be a clean signal. However, since the auxiliary input of one module is the output of the other one, some fake signal patterns could be introduced into \textbf{g} when we train the other module with external samples. The image $\mathbf{u}_{a}$ is from the non-local operator and much cleaner than the input \textbf{z}. Although $\mathbf{u}_{a}$ is also a processed image, all the signal patterns in $\mathbf{u}_{a}$ are from \textbf{z} itself. Therefore, all three images should be included in the long skip connection. In this work, we use a convex combination with adaptive weights to fuse them into one image. That is \begin{equation} \begin{aligned} \mathbf{v}_{a}=\mathbf{t}_{1} \odot \mathbf{z}+\mathbf{t}_{2} \odot \mathbf{g} + \mathbf{t}_{3} \odot \mathbf{u}_{a} \\ \text{s.t.} \ \mathbf{t}_{1} + \mathbf{t}_{2} +\mathbf{t}_{3}=\mathbf{1}, \end{aligned} \end{equation} where $\mathbf{v}_{a}$ is the output of the adaptive combination, the subscript \textit{a} indicates Module I or Module II, $\mathbf{t}_{1}$, $\mathbf{t}_{2}$, and $\mathbf{t}_{3}$ are the weight maps for \textbf{z}, \textbf{g}, and $\mathbf{u}_{a}$ respectively, $\odot$ represents the pixel-wise multiplication, and \textbf{1} denotes the map of all ones. Note that $\mathbf{t}_{1}$, $\mathbf{t}_{2}$, and $\mathbf{t}_{3}$ are three maps instead of three scalar values, which further improves the flexibility. As demonstrated in \cite{ref67}, the identity mapping is the best option for residual learning. Thus, the constraint on the summation of $\mathbf{t}_{1}$, $\mathbf{t}_{2}$, and $\mathbf{t}_{3}$ is essential to make our long skip connection approximate the identity mapping. Obviously, the weight maps in Eq. (11) depend on the image contents and qualities of \textbf{z}, \textbf{g}, and $\mathbf{u}_{a}$.
Thus, similar to the adaptive parameter \textbf{h} in Eq. (10), we employ a light network to estimate $\mathbf{t}_{1}$, $\mathbf{t}_{2}$, and $\mathbf{t}_{3}$ from \textbf{z}, \textbf{g}, and $\mathbf{u}_{a}$. This network consists of two convolutional layers, which are followed by a ReLU layer and a SoftMax operation, respectively. The SoftMax operation is performed at each pixel position to satisfy the constraint in Eq. (11). The flowchart of our adaptive combination is summarized in Fig. \ref{fig:res4}(b). In Section \ref{abla}, we also assess the contributions of the individual images against their adaptive combination. \section{Experiments} \label{exper} In this section, we first provide the implementation details and the datasets used in our experiments. Subsequently, we compare the proposed method with state-of-the-art SR methods. Then, ablation studies are conducted to demonstrate the effectiveness of the proposed method. Finally, we apply our method to a real-world problem: the restoration of \textit{WeChat} avatar images that have undergone unknown scaling and compression. \subsection{Implementation Details and Datasets} \label{dataset} The proposed method is implemented in PyTorch on a machine with an NVIDIA GeForce 1080Ti GPU. Two versions of our model are included in the following comparisons. One is named the tiny model, and the other the full model. The number of learnable parameters in the backbone of the tiny model is much smaller than that of the full model. Specifically, we adopt 5 residual groups from RCAN as the backbone for each module in our full model. In our tiny model, only 2 residual groups are employed for each module. Moreover, each residual group in our tiny or full model contains 12 channel attention blocks, instead of the 20 attention blocks used in the original RCAN model. Thus, our full model is still much lighter than RCAN. In addition to the convolutional layers in the backbone, there are two convolutional layers in the modified non-local operator, as shown in Fig. \ref{fig:res4}(a), and two convolutional layers are used in the adaptive combination, as shown in Fig. \ref{fig:res4}(b). Among these four layers, the kernel size of the first convolutional layer in Fig. \ref{fig:res4}(a) is $3\times 3$, while the kernel sizes of the rest are $1\times 1$. The numbers of their output channels are 64, 1, 64, and 3, respectively. The maximum iteration number $J$ is empirically set to 3 for our full model and 5 for our tiny one. Correspondingly, we set $\rho_{j}$ in Eq. (8) as $\{0.3, 0.6, 1\}$ for the full model and $\{0.2, 0.4, 0.6, 0.8, 1\}$ for the tiny one. The parameter $\gamma$ in the loss function is simply set to 1. Two widely used compression types, JPEG \cite{ref61} and WebP \cite{ref70}, are involved in our experiments. For each type of compression, we further involve 5 compression levels or quality factors (QFs). Specifically, for the JPEG compression, we have QFs of 10, 20, 30, 40, and 50. For the WebP compression, QFs of 5, 10, 20, 30, and 40 are utilized. Hence, to compress an LR image, we have a total of 10 compression configurations, viz., 2 compression types multiplied by 5 QFs. Furthermore, three scaling factors ($2\times$, $3\times$, and $4\times$) are involved in the experiments.
For each up-scaling factor, we train a tiny model and a full model over all the compression configurations, which leads to a total of 6 models in our experiments, \textit{i.e.}, tiny model ($2\times$), full model ($2\times$), tiny model ($3\times$), full model ($3\times$), tiny model ($4\times$), and full model ($4\times$). In other words, LR images with various compression configurations are mixed together to train and test these 6 models. Since our focus is on the compression of LR images rather than on down-sampling kernels, bicubic down-sampling is employed to resize images for simplicity. To prepare the training and validation data, compressed LR images are generated by first down-sampling and then compressing the samples in DIV2K \cite{ref71}. Data augmentation is also performed on the training pairs by random rotation and flipping. In each training batch, we randomly crop 32 patches with the size of $48\times 48$ as LR inputs. Our models are trained by the Adam optimizer \cite{ref69} with an initial learning rate of $10^{-4}$. The training is terminated when the performance of the model decreases on the validation set. In addition to the training and validation data, we also require a testing dataset with ground truths to facilitate quantitative comparisons. Ground truths should be uncompressed HR images. However, most publicly available image datasets suffer from compression to some extent. As an exception, the Kodak24 dataset \cite{ref72} contains 24 lossless images, which are used to produce our testing dataset. Moreover, we capture another 76 images in various scenarios by ourselves. These images are lossless as well. For more details about these captured images, one can refer to our online supplementary materials in \cite{ref73}. In total, 100 lossless HR images are used as ground truths in our quantitative tests. By using these 100 images and the above-mentioned compression configurations, we produce 1000 testing LR images for each up-scaling factor. Finally, to demonstrate the value of our method in real-world applications, we have collected a dataset of 50 images from a social media platform. These images have undergone severe scaling and compression by unknown algorithms hidden from the users. We apply the trained models directly to these real-world images, as will be shown in Section \ref{real_app}.
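The generation of a training triplet $(\mathbf{x}, \mathbf{y}, \mathbf{z})$ described above can be sketched with Pillow as follows. This is a minimal illustration of the JPEG branch only; the function name and structure are ours and do not reproduce the actual data pipeline verbatim.

\begin{verbatim}
import io
import random
from PIL import Image

JPEG_QFS = [10, 20, 30, 40, 50]   # JPEG quality factors used above

def make_triplet(hr_path, scale):
    """Build (x, y, z): clean HR, clean LR, and compressed LR image."""
    x = Image.open(hr_path).convert('RGB')                 # clean HR
    w, h = x.size
    y = x.resize((w // scale, h // scale), Image.BICUBIC)  # bicubic LR
    # Compress the clean LR image in memory with a random quality
    # factor; the WebP branch is analogous (format='WEBP').
    buf = io.BytesIO()
    y.save(buf, format='JPEG', quality=random.choice(JPEG_QFS))
    buf.seek(0)
    z = Image.open(buf).convert('RGB')                     # compressed LR
    return x, y, z
\end{verbatim}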
\subsection{Comparisons} \label{compare} \begin{table}[ht] \caption{Quantitative Comparisons on PSNR, SSIM, IFC, and SIS.} \setlength{\tabcolsep}{1.8mm}{ \begin{tabular}{|c|l|c|c|c|c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Up-scaling \\ factor\end{tabular}} & \multicolumn{1}{c|}{\multirow{2}{*}{SR Models}} & \multicolumn{4}{c|}{Criteria} \\ \cline{3-6} & \multicolumn{1}{c|}{} & PSNR & SSIM & IFC & SIS \\ \hline \multirow{9}{*}{2} & ICSD & 28.86 & 0.7903 & 1.7968 & 0.6662 \\ & CISRDCNN & 29.31 & 0.8019 & 1.8964 & 0.7094 \\ & DnCNN + SAN & 29.48 & 0.8027 & 1.8815 & 0.7223 \\ & DnCNN + RCAN & 28.67 & 0.7927 & 1.7316 & \textit{0.7540} \\ & DnCNN + USRNet & 29.49 & 0.8028 & 1.8850 & 0.7212 \\ & USRNet & 28.59 & 0.7682 & 1.6711 & 0.6381 \\ & USRNet + NRIBP & 29.47 & 0.8020 & 1.8619 & 0.7216 \\ & Our tiny model & \textit{29.97} & \textit{0.8149} & \textit{2.0978} & 0.7495 \\ & Our full model & \textbf{30.10} & \textbf{0.8181} & \textbf{2.1521} & \textbf{0.7576} \\ \hline \multirow{9}{*}{3} & ICSD & 26.82 & 0.7259 & 1.1362 & 0.4956 \\ & CISRDCNN & 27.37 & 0.7391 & 1.2406 & 0.5287 \\ & DnCNN + SAN & 27.40 & 0.7384 & 1.2151 & 0.5374 \\ & DnCNN + RCAN & 26.75 & 0.7245 & 1.0905 & 0.5845 \\ & DnCNN + USRNet & 27.43 & 0.7392 & 1.2208 & 0.5399 \\ & USRNet & 26.86 & 0.7159 & 1.1387 & 0.4637 \\ & USRNet + NRIBP & 27.40 & 0.7378 & 1.2022 & 0.5397 \\ & Our tiny model & \textit{27.84} & \textit{0.7536} & \textit{1.3947} & \textit{0.5864} \\ & Our full model & \textbf{27.94} & \textbf{0.7564} & \textbf{1.4355} & \textbf{0.5893} \\ \hline \multirow{9}{*}{4} & ICSD & 25.54 & 0.6820 & 0.7974 & 0.3702 \\ & CISRDCNN & 25.99 & 0.6959 & 0.8995 & 0.3691 \\ & DnCNN + SAN & 26.16 & 0.6954 & 0.8767 & 0.3843 \\ & DnCNN + RCAN & 25.65 & 0.6808 & 0.7799 & 0.4167 \\ & DnCNN + USRNet & 26.18 & 0.6964 & 0.8811 & 0.3863 \\ & USRNet & 25.65 & 0.6759 & 0.8175 & 0.3251 \\ & USRNet + NRIBP & 26.16 & 0.6950 & 0.8670 & 0.3883 \\ & Our tiny model & \textit{26.55} & \textit{0.7117} & \textit{1.0267} & \textit{0.4404} \\ & Our full model & \textbf{26.62} & \textbf{0.7138} & \textbf{1.0504} & \textbf{0.4418} \\ \hline \end{tabular}} \label{tab1} \end{table} To show the effectiveness of the proposed method, we compare it with several state-of-the-art SISR models, including ICSD \cite{ref34}, CISRDCNN \cite{ref35}, SAN \cite{ref20}, RCAN \cite{ref19}, and USRNet \cite{ref43}. Among these competitors, ICSD is a traditional method, while all the others are based on DCNNs. The models of ICSD and CISRDCNN are specifically designed for CISR. The models of SAN and RCAN are designed for clean LR images. The USRNet model is applicable to either clean images or noisy and blurred images. For fair comparisons, we add a pre-processing step before SAN, RCAN, and USRNet to reduce compression artefacts. Here, the pre-processing is achieved by DnCNN \cite{ref53}, which is a widely-used model trained to reduce compression artefacts for a wide range of QFs. The USRNet model can further incorporate noise levels and blur kernels of LR images. It would be interesting to see whether the performance of SISR is satisfactory when compression artefacts are treated as noise and blurring. Thus, we also provide the SR results that are obtained by applying USRNet alone. To estimate the noise levels and blur kernels for USRNet, the methods in \cite{ref74} and \cite{ref75} are adopted in our experiments, respectively. Moreover, the method of NRIBP \cite{ref40} can be combined with SR models to suppress the noise and artefacts in SR results, \textit{e.g.}, USRNet + NRIBP.
The code for ICSD, CISRDCNN, and NRIBP is implemented by ourselves, while the code for the other models is provided by their authors. Here, we provide visual and quantitative comparisons in Fig. \ref{fig:res5} and Table \ref{tab1}. One can refer to our online materials in \cite{ref73} for more results in comparison with more SISR methods, including A+ \cite{ref8} and EDSR \cite{ref16}. In Fig. \ref{fig:res5}(a), we show three examples of testing LR images, which are heavily compressed by JPEG or WebP. In each image, we highlight an image region, whose results from different SISR models are exhibited in Fig. \ref{fig:res5}(b). From the results on the highlighted regions, we can see that the results of some competitors suffer from conspicuous artefacts, while the results from the other compared SISR models are over-smoothed. In contrast, our models can successfully retrieve sharp edges as well as remove artefacts. Moreover, the results from our tiny model are visually comparable with those from our full model, although the former is much lighter. \begin{figure*}[ht] \centerline{\includegraphics[scale=0.6]{statical_significance1.png}} \caption{Statistical significance testing on PSNR in Table \ref{tab1} for three scale factors. (a) $2\times$ (b) $3\times$ (c) $4\times$.} \label{fig:sign1} \end{figure*} Four criteria are used to quantitatively measure the performance of the different SR methods. They are the peak signal to noise ratio (PSNR), the structural similarity (SSIM) index \cite{ref76}, the information fidelity criterion (IFC) \cite{ref77}, and the structure-texture decomposition for image quality assessment of SRIs (SIS) \cite{ref78}. PSNR and SSIM are widely adopted in the evaluation of SRIs, and it has been demonstrated in \cite{ref1} and \cite{ref78} that IFC and SIS have relatively high correlations with the perceptual quality of SRIs. Therefore, it is appropriate to include these four criteria in the quantitative comparisons. For all these criteria, larger values imply better performance. Quantitative results of the competitors and our models are provided in Table \ref{tab1}, which covers three up-scaling factors. For each up-scaling factor, the listed values are the average results over 1000 testing images, \textit{i.e.}, 100 images multiplied by 10 compression configurations. The best performance is highlighted in bold, and the second-best results are distinguished by italics. From Table \ref{tab1}, we can see that our full model achieves an improvement of 0.5--0.6 dB in PSNR. On the other three criteria, its superiority over the competitors is also significant. Notably, even our tiny model achieves very good performance on all the criteria, although it is much lighter than our full model. \begin{figure*}[] \begin{minipage}{1\linewidth} \centerline{\includegraphics[height=17cm,width=17cm]{fig5}} \end{minipage} \caption{Visual comparisons. (a) Several compressed LR images for testing. (b) Ground truths and results of different SISR methods.} \label{fig:res5} \end{figure*} To further demonstrate the significant improvement of our models, we conduct statistical testing on the results in Table \ref{tab1}, which checks whether our method is statistically distinguishable from the competitors. Specifically, the paired-samples T-test is conducted for each up-scaling factor. For each factor, there are 1000 samples, which approximately follow normal distributions. Due to space limitations, we only perform the statistical significance test on PSNR. The results of this statistical significance test are shown in Fig. \ref{fig:sign1}, where an array element of ``1'' indicates that the p-value is less than 0.05, implying a significant performance difference between the two SR models; otherwise, the element is set to ``0''.
From Fig. \ref{fig:sign1}, we can find that the performance differences between our model (either the tiny one or the full one) and all the competitors are statistically significant. Besides, as shown in Table \ref{tab1}, our model achieves the highest PSNR. Thus, we can conclude that our model is significantly better than the compared ones. To show the performance of the proposed models across different compression configurations, we list detailed results based on PSNR in Table \ref{tab2}. There are 100 testing images for each compression configuration. Thus, each PSNR value in Table \ref{tab2} is the average result over 100 images. Detailed results based on SSIM, IFC, and SIS can be found in \cite{ref73}. In conclusion, the visual and quantitative comparisons in Fig. \ref{fig:res5} and Tables \ref{tab1}-\ref{tab2} demonstrate the effectiveness of our models. \begin{table}[ht] \centering \caption{Quantitative Comparisons for Different Compression Configurations Based on PSNR.} \renewcommand\arraystretch{0.8} \centerline{ \setlength{\tabcolsep}{1mm}{ \begin{tabular}{|c|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Up-scaling\\ factor\end{tabular}} & \multicolumn{1}{c|}{\multirow{2}{*}{SR Models}} & \multicolumn{10}{c|}{Compression Configurations (Compression type \& QF)} \\ \cline{3-12} & \multicolumn{1}{c|}{} & \begin{tabular}[c]{@{}c@{}}JPEG \\ \& 10\end{tabular} & \begin{tabular}[c]{@{}c@{}}JPEG \\ \& 20\end{tabular} & \begin{tabular}[c]{@{}c@{}}JPEG \\ \& 30\end{tabular} & \begin{tabular}[c]{@{}c@{}}JPEG \\ \& 40\end{tabular} & \begin{tabular}[c]{@{}c@{}}JPEG \\ \& 50\end{tabular} & \begin{tabular}[c]{@{}c@{}}WebP \\ \& 5\end{tabular} & \begin{tabular}[c]{@{}c@{}}WebP \\ \& 10\end{tabular} & \begin{tabular}[c]{@{}c@{}}WebP \\ \& 20\end{tabular} & \begin{tabular}[c]{@{}c@{}}WebP \\ \& 30\end{tabular} & \begin{tabular}[c]{@{}c@{}}WebP \\ \& 40\end{tabular} \\ \hline \multirow{9}{*}{2} & ICSD & 26.96 & 28.31 & 29.02 & 29.52 & 29.90 & 27.47 & 28.19 & 29.11 & 29.76 & 30.30 \\ & CISRDCNN & 27.51 & 28.91 & 29.62 & 30.07 & 30.41 & 27.88 & 28.57 & 29.45 & 30.07 & 30.54 \\ & DnCNN + SAN & 27.68 & 29.14 & 29.91 & 30.41 & 30.82 & 27.72 & 28.46 & 29.48 & 30.27 & 30.92 \\ & DnCNN + RCAN & 27.35 & 28.49 & 29.04 & 29.38 & 29.63 & 27.36 & 27.93 & 28.67 & 29.21 & 29.64 \\ & DnCNN + USRNet & 27.68 & 29.12 & 29.90 & 30.42 & 30.82 & 27.74 & 28.48 & 29.51 & 30.28 & 30.93 \\ & USRNet & 27.23 & 28.41 & 28.93 & 29.20 & 29.38 & 27.60 & 28.09 & 28.69 & 29.05 & 29.28 \\ & USRNet + NRIBP & 27.57 & 29.05 & 29.87 & 30.40 & 30.81 & 27.73 & 28.47 & 29.52 & 30.30 & 30.96 \\ & Our tiny model & \textit{28.01} & \textit{29.51} & \textit{30.31} & \textit{30.84} & \textit{31.25} & \textit{28.41} & \textit{29.11} & \textit{30.07} & \textit{30.81} & \textit{31.41} \\ & Our full model & \textbf{28.11} & \textbf{29.61} & \textbf{30.43} & \textbf{30.95} & \textbf{31.37} & \textbf{28.56} & \textbf{29.26} & \textbf{30.20} & \textbf{30.93} & \textbf{31.53} \\ \hline \multirow{9}{*}{3} & ICSD & 25.35 & 26.41 & 26.94 & 27.31 & 27.58 & 25.82 & 26.41 & 27.02 & 27.49 & 27.88 \\ & CISRDCNN & 25.90 & 27.05 & 27.62 & 27.98 & 28.22 & 26.27 & 26.84 & 27.53 & 28.00 & 28.36 \\ & DnCNN + SAN & 25.97 & 27.09 & 27.67 & 28.06 & 28.34 & 26.09 & 26.69 & 27.49 & 28.08 & 28.55 \\ & DnCNN + RCAN & 25.70 &
26.59 & 27.02 & 27.26 & 27.45 & 25.74 & 26.20 & 26.78 & 27.19 & 27.51 \\ & DnCNN + USRNet & 25.96 & 27.09 & 27.68 & 28.08 & 28.37 & 26.09 & 26.72 & 27.53 & 28.13 & 28.61 \\ & USRNet & 25.59 & 26.63 & 27.09 & 27.35 & 27.52 & 26.03 & 26.48 & 27.02 & 27.33 & 27.55 \\ & USRNet + NRIBP & 25.87 & 27.03 & 27.65 & 28.05 & 28.36 & 26.08 & 26.70 & 27.53 & 28.13 & 28.62 \\ & Our tiny model & \textit{26.22} & \textit{27.43} & \textit{28.08} & \textit{28.49} & \textit{28.78} & \textit{26.65} & \textit{27.23} & \textit{27.98} & \textit{28.54} & \textit{28.99} \\ & Our full model & \textbf{26.31} & \textbf{27.53} & \textbf{28.18} & \textbf{28.59} & \textbf{28.88} & \textbf{26.78} & \textbf{27.35} & \textbf{28.09} & \textbf{28.63} & \textbf{29.07} \\ \hline \multirow{9}{*}{4} & ICSD & 24.29 & 25.20 & 25.65 & 25.97 & 26.19 & 24.75 & 25.12 & 25.73 & 26.11 & 26.45 \\ & CISRDCNN & 24.73 & 25.70 & 26.17 & 26.46 & 26.66 & 25.09 & 25.56 & 26.14 & 26.53 & 26.81 \\ & DnCNN + SAN & 24.88 & 25.87 & 26.37 & 26.70 & 26.94 & 25.04 & 25.56 & 26.27 & 26.78 & 27.17 \\ & DnCNN + RCAN & 24.68 & 25.50 & 25.87 & 26.10 & 26.27 & 24.75 & 25.17 & 25.71 & 26.06 & 26.34 \\ & DnCNN + USRNet & 24.87 & 25.87 & 26.39 & 26.72 & 26.97 & 25.05 & 25.59 & 26.31 & 26.83 & 27.23 \\ & USRNet & 24.48 & 25.40 & 25.81 & 26.05 & 26.21 & 24.95 & 25.35 & 25.82 & 26.09 & 26.28 \\ & USRNet + NRIBP & 24.81 & 25.83 & 26.36 & 26.70 & 26.96 & 25.04 & 25.58 & 26.30 & 26.83 & 27.23 \\ & Our tiny model & \textit{25.12} & \textit{26.21} & \textit{26.76} & \textit{27.11} & \textit{27.36} & \textit{25.53} & \textit{26.03} & \textit{26.71} & \textit{27.19} & \textit{27.56} \\ & Our full model & \textbf{25.17} & \textbf{26.25} & \textbf{26.81} & \textbf{27.17} & \textbf{27.44} & \textbf{25.62} & \textbf{26.13} & \textbf{26.78} & \textbf{27.25} & \textbf{27.62} \\ \hline \end{tabular}}} \label{tab2} \end{table} In addition to the SISR results, we would like to compare our results of compression artefacts removal, \textit{i.e.}, the output of Module I, with several classical and state-of-the-art methods, including SA-DCT \cite{ref49}, TNRD \cite{ref57}, DnCNN \cite{ref53}, and DCSC \cite{ref55}. Quantitative and visual comparisons can be accessed in our online materials \cite{ref73}. It can be seen from these results that our technique has excellent performance in reducing artefacts and recovering details. Moreover, we compare the numbers of model parameters and the runtime with other models. The comparisons are shown in Table \ref{runtime}, where the runtime is measured on an RGB image with the size of $3\times128\times128$. All models are run on the device mentioned in Section \ref{dataset}. From the comparison, it can be seen that our models, both the tiny and the full one, use far fewer parameters to achieve the best performance, thanks to the iterative optimization. Naturally, the iterative optimization procedure costs more running time; the reason why the runtime of our tiny model is longer than that of our full model is that the tiny model performs five iteration steps while the full model performs three.
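The paired-samples T-tests behind Fig. \ref{fig:sign1} are straightforward to reproduce. The following is a minimal SciPy sketch, where the per-image PSNR arrays and all names are placeholders rather than our recorded data.

\begin{verbatim}
import numpy as np
from scipy import stats

def significance_matrix(psnr_per_model, alpha=0.05):
    """Pairwise paired-samples T-tests between SR models.
    psnr_per_model: dict mapping a model name to the array of
    per-image PSNR values (1000 images of one up-scaling factor).
    Returns a 0/1 matrix; 1 means p < alpha, i.e., the difference
    between the two models is statistically significant."""
    names = list(psnr_per_model)
    n = len(names)
    sig = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            _, p = stats.ttest_rel(psnr_per_model[names[i]],
                                   psnr_per_model[names[j]])
            sig[i, j] = int(p < alpha)
    return names, sig
\end{verbatim}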
\begin{table}[] \centering \caption{Comparison of model parameters and runtime.} \label{runtime} \begin{tabular}{|c|c|c|} \hline & params(M) & runtime(ms) \\ \hline CISRDCNN & 1.28 & 18 \\ \hline DnCNN+SAN & 16.38 & 287 \\ \hline DnCNN+RCAN & 16.08 & 66 \\ \hline DnCNN+USRNet & 17.69 & 56 \\ \hline Our tiny model & 3.97 & 211 \\ \hline Our full model & 9.56 & 179 \\ \hline \end{tabular} \end{table} \subsection{Ablation Studies} \subsubsection{Outputs in different iterations} \label{iter_output} \begin{figure*}[ht] \centerline{\includegraphics[scale=0.8]{fig7}} \caption{HR estimations in different iterations of the proposed framework.} \label{fig:res6} \end{figure*} Since the proposed framework produces its results in a recursive manner, it is interesting to investigate the estimated HR output $\hat{\mathbf{x}}$ in different iterations. Two examples are shown in Fig. \ref{fig:res6}, which are the outputs of our tiny model for LR images compressed by JPEG with QF = 10 and a scaling factor of 2. From the visual results, we can see that the initial estimate $\hat{\mathbf{x}}_{0}$ is very rough. The quality of the estimate gets better with each iteration. After a few iterations, the outputs of successive iterations become indistinguishable, indicating that the recursive algorithm has converged. \subsubsection{Comparison of parallel model, series model, and our framework} \label{compare2} In order to demonstrate the effectiveness of the proposed parallel and series integration model, we make a comparison with a parallel model and two series models. For a fair comparison, the architectures of the two modules in all the parallel and series models are the same as in our tiny model. For the parallel model, we bicubically up-sample the output of the ARM, and then fuse it with the output of the REM by a convolutional layer to reconstruct an RGB output. The kernel size of this convolutional layer is $3\times 3$. There are two kinds of series models, \textit{i.e.}, the ARM followed by the REM, and the REM followed by the ARM. For the sake of simplicity, all models are only trained with LR images compressed by JPEG (QF = 30) and the corresponding $2\times$ HR images. The quantitative results are reported in Table \ref{tab4}, and visual results are shown in Fig. \ref{fig:res7}. ``ARM\underline{~~}REM'' represents the series model in which the ARM is followed by the REM, while ``REM\underline{~~}ARM'' represents the transposition of the two modules. ``ARM\underline{~~}REM\underline{~~}fusion'' represents the parallel model. As we can see, the proposed parallel and series integration model outperforms the other two kinds of models in both the quantitative and the visual comparisons.
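For clarity, the wiring of the three baselines can be sketched schematically as follows; the function names are ours, and \texttt{arm}, \texttt{rem}, and \texttt{fuse} stand for the respective trained components (here taking a single image input each, unlike our integrated model).

\begin{verbatim}
import torch
import torch.nn.functional as F

def series_arm_rem(arm, rem, z):                 # "ARM__REM"
    return rem(arm(z))                           # restore, then upscale

def series_rem_arm(arm, rem, z):                 # "REM__ARM"
    return arm(rem(z))                           # upscale, then restore

def parallel_fusion(arm, rem, fuse, z, scale):   # "ARM__REM__fusion"
    up = F.interpolate(arm(z), scale_factor=scale, mode='bicubic')
    return fuse(torch.cat([up, rem(z)], dim=1))  # 3x3 conv fusion layer
\end{verbatim}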
\begin{figure*}[ht] \centerline{\includegraphics[scale=0.2]{fig8}} \caption{Visual comparisons of two images with JPEG compression QF = 30.} \label{fig:res7} \end{figure*} \begin{table}[ht] \centering \caption{Average PSNR (dB) Results of a Parallel Model, Two Series Models, and Our Model.} \label{tab4} \begin{tabular}{|l|l|l|l|l|} \hline & ARM\underline{~~}REM & REM\underline{~~}ARM & ARM\underline{~~}REM\underline{~~}fusion & Our tiny model \\ \hline PSNR & \multicolumn{1}{c|}{30.04} & \multicolumn{1}{c|}{30.00} & \multicolumn{1}{c|}{29.95} & \multicolumn{1}{c|}{30.34} \\ \hline \end{tabular} \end{table} \begin{table}[] \centering \caption{Comparison between models with and without parameter sharing of the two modules.} \label{shared} \begin{tabular}{|c|c|c|c|} \hline & PSNR & SSIM & params(M) \\ \hline Shared (Ours) & 27.38 & 0.7787 & 9.56 \\ \hline Not shared (Variant) & 27.39 & 0.7790 & 28.68 \\ \hline \end{tabular} \end{table} \begin{figure}[ht] \centerline{\includegraphics[scale=0.7]{fig9}} \caption{Average PSNR (dB) curves of different models for different compression QFs. Only a scale factor of 2 is considered in this experiment.} \label{fig:res8} \end{figure} \subsubsection{Shared weights vs.\ not shared weights} From the formulation in Section \ref{overall}, our model only consists of two modules, \textit{i.e.}, the ARM and the REM, in the recursive optimization. Therefore, the parameters of the ARM are shared in each iteration, and so are those of the REM. Our models achieve the best performance, as shown in Section \ref{compare}. Moreover, we conduct an ablation study to investigate the effect of parameter sharing. Specifically, our full model with scale factor 2 is compared with a variant of it. The variant has exactly the same architecture as our full model but does not share parameters across iterations, thus resulting in 3 times as many parameters. This study is conducted on all ten compression configurations, and the average PSNR and SSIM over these configurations on the DIV2K validation set are recorded in Table \ref{shared}. Besides, we also include the numbers of model parameters in this table. From Table \ref{shared}, we see that the variant (without parameter sharing) achieves only a slight performance improvement at the expense of a much larger number of parameters. These results demonstrate that sharing parameters among iterations is beneficial in the proposed method. \subsubsection{Analysis for different compression configurations} \label{diff_compress} A benefit of the proposed model is its capacity to handle different compression QFs. The recursive optimization with feedback helps to reduce the dependency on a specific compression QF. To demonstrate this benefit, we compare the performance of our model and 6 series models on the testing data with different compression QFs. The reasons for selecting series models as competitors are that series models are more common than parallel models and that the series model beats the parallel model in Section \ref{compare2}. The configurations of these models are the same as the ones mentioned in Section \ref{compare2}. Fig. \ref{fig:res8} shows the performance curves of the different models. Specifically, the training data with 5 JPEG compression QFs (QF = 10, 20, 30, 40, 50) are adopted to train our tiny model, which is represented by ``Ours-overall'', as well as a series model called ``ARM\underline{~~}REM-overall''.
We also consider 5 series models trained on data with a single compression QF; \textit{e.g.}, ``ARM\underline{~~}REM-10'' represents the series model trained on data with the fixed compression QF = 10. The compression QFs of the testing data range from 10 to 55. Some of these QFs are not included in the QFs of the training data, \textit{e.g.}, QF = 15, 25, 35, etc. From the performance curves, we have the following observations. First, the series model trained on data with a specific QF achieves the best performance among the series models when the compression QF of the testing data matches that of its training data. To be specific, ``ARM\underline{~~}REM-10'' beats the other series models when the testing data are compressed by JPEG (QF = 10), and the same holds for the others. However, the performance of these models deteriorates when the compression QF of the testing data does not match that of the training data. Second, the series model ``ARM\underline{~~}REM-overall'' achieves comparable performance on the testing data for every compression QF, whereas it fails to beat the other series models on the testing data with their corresponding compression QFs. The above observations imply that, for the series models, pursuing the best performance means sacrificing generalization. In contrast, our model outperforms the series models on the testing data for every compression QF, achieving the best performance without sacrificing generalization. Even when the compression QF of the testing data does not match that of the training data, our model still attains the best performance, \textit{e.g.}, for QF = 15, 25, 35, etc. We attribute our model's good generalization performance to the feedback routes between the ARM and the REM. \subsubsection{Effectiveness of the modified non-local operator and the adaptive combination for residual learning} \label{abla} In this sub-section, we conduct ablation studies to show the effectiveness of the modified non-local operator and the adaptive combination in our model. For simplicity, only our tiny model is investigated here, and only PSNR values are recorded. Moreover, these experiments are only performed for the case in which LR images compressed by JPEG with QF = 10 are super-resolved by a factor of 2. \begin{table}[ht] \centering \caption{Ablation Studies of the Modified Non-local Operator and the Adaptive Combination Based on PSNR.} \label{tab5} \begin{threeparttable} \begin{tabular}{|c|l|c|c|l|} \hline \multicolumn{2}{|c|}{w/o\tnote{\dag} non-local operator} & \multicolumn{2}{c|}{traditional non-local operator} & \multicolumn{1}{c|}{Ours} \\ \hline \multicolumn{2}{|c|}{27.94} & \multicolumn{2}{c|}{27.95} & \multirow{3}{*}{28.01} \\ \cline{1-4} w/o residual & z as residual & g as residual & $\mathbf{u}_{a}$ as residual & \\ \cline{1-4} 27.90 & \multicolumn{1}{c|}{27.95} & 27.96 & 27.97 & \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item[\dag] ``w/o'' means without. \end{tablenotes} \end{threeparttable} \end{table} \begin{figure}[!ht] \begin{minipage}{1\linewidth} \centerline{\includegraphics[scale=0.5]{statical_significance.png}} \end{minipage} \centering \caption{Statistical significance testing on PSNR in Table \ref{tab5} for two ablation studies. (a) Test for different configurations of non-local operators. (b) Test for different configurations of adaptive residual learning.} \label{fig:ss} \end{figure} We investigate the impact of the non-local operator by either removing it or replacing it with a traditional non-local operator.
For the traditional non-local operator, we empirically set all the elements in \textbf{h} to 30. Similarly, we investigate the effect of the adaptive combination for the skip connection by using only one of \textbf{z}, \textbf{g}, and $\mathbf{u}_{a}$ for the global residual learning. Besides, the result from the model without such a long skip connection is also recorded. All these results are provided in Table \ref{tab5}, and they demonstrate that the non-local operator and the adaptive combination are beneficial. Similar to Section \ref{compare}, we further measure the statistical significance of the results in Table \ref{tab5}, \textit{i.e.}, we check the significance of our modified non-local operator and the proposed adaptive residual learning. There are 100 samples for each configuration, and the results are provided in Fig. \ref{fig:ss}. From Fig. \ref{fig:ss}(a), we can see that the performance differences between the model with our non-local operator and the models with the other two configurations are significant. The results in Fig. \ref{fig:ss}(b) demonstrate that the adaptive combination of the three images achieves significantly different performance compared with the individual skip connections. Considering the results in Table \ref{tab5} and Fig. \ref{fig:ss} together, it can be concluded that the modified non-local operator and the adaptive combination for residual learning make nontrivial contributions. \begin{figure*}[] \begin{minipage}{1\linewidth} \centerline{\includegraphics[height=16cm,width=16cm]{fig6_1}} \end{minipage} \centering \caption{Visual comparisons on the collected avatar image dataset.} \label{fig:real} \end{figure*} \begin{table}[t] \centering \caption{Quantitative Comparisons Based on CaHDC.} \setlength{\tabcolsep}{5.6mm}{ \begin{tabular}{|l|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{SR Models}} & \multicolumn{3}{c|}{Up-scaling factor} \\ \cline{2-4} \multicolumn{1}{|c|}{} & 2 & 3 & 4 \\ \hline ICSD & 43.20 & 38.03 & 31.89 \\ \hline CISRDCNN & 46.11 & 42.05 & 34.76 \\ \hline DnCNN + SAN & 45.22 & 39.58 & 32.73 \\ \hline DnCNN + RCAN & 41.25 & 35.30 & 27.40 \\ \hline DnCNN + USRNet & 43.16 & 37.12 & 30.61 \\ \hline USRNet & 43.32 & 37.43 & 30.63 \\ \hline USRNet + NRIBP & 41.92 & 36.34 & 29.54 \\ \hline Our tiny model & \textit{46.91} & \textit{42.93} & \textit{36.71} \\ \hline Our full model & \textbf{46.98} & \textbf{43.33} & \textbf{37.54} \\ \hline \end{tabular}} \label{quan_real} \end{table} \begin{figure*}[ht] \begin{minipage}{1\linewidth} \centerline{\includegraphics[scale=0.6]{failure.png}} \end{minipage} \centering \caption{Two failure cases on the collected avatar image dataset. All images are rescaled for better comparison.} \label{fig:fail} \end{figure*} \subsection{Super-resolving Images from Social Media} \label{real_app} Social media platforms such as \textit{WeChat} have become popular for internet users to share photos. However, images uploaded by users are downscaled and compressed by the service due to the limitations of storage capacity and transmission bandwidth. For example, users' avatar images in \textit{WeChat} are downscaled and compressed, and their degradation histories are unknown to the users. That is, the down-sampling kernels and compression parameters are not available. As a real-world application, we apply our trained models directly to super-resolve these severely scaled and compressed avatar images to demonstrate the generalization ability of our model. We have constructed a dataset containing 50 \textit{WeChat} avatar images.
Specifically, 5 \textit{WeChat} users (3 females and 2 males) were randomly selected, and each user volunteered to provide 10 avatar images from their friend lists after obtaining approval from the content owners. We evaluate the performance of the proposed method on this dataset and compare it with the competitors in Section \ref{compare}. In this real-world testing, all the SISR models, including our tiny and full models, remain the same as the ones described in Sections \ref{dataset} and \ref{compare}, without any retraining or fine-tuning to fit the dataset. We simply up-scale these 50 images by factors of 2, 3, and 4, respectively. Due to the lack of ground truths, a state-of-the-art blind quality assessment method, abbreviated as CaHDC \cite{ref79}, is utilized for the quantitative evaluations. For this assessment method, a higher score means better image quality. The average values of CaHDC are shown in Table \ref{quan_real}, in which the best performance is highlighted in bold, and the second-best results are distinguished by italics. Three visual examples are further provided in Fig. \ref{fig:real}. From Table \ref{quan_real} and Fig. \ref{fig:real}, we can see that our method has a much better generalization ability for real-world applications. More details about the collected avatar dataset, as well as more visual examples, can be found in our online materials \cite{ref73}. However, super-resolving real-world images whose formation remains unknown is very challenging. Thus, there are also some failure cases when applying our model to these real-world images. As shown in Fig. \ref{fig:fail}, some artefacts might be enlarged when we super-resolve the input with a scale factor of 4, although our results for the up-scaling factor of 2 are satisfactory. This may be attributed to the fact that the model trained on data with a scale factor of 4 struggles to enhance weak edges. One possible solution to this limitation is to use the up-scaling factor as another input to provide auxiliary information for the adaptive combination in our framework. \section{Concluding Remarks} In this paper, we propose a parallel and series integration framework to super-resolve compressed LR images. This framework includes two modules: the ARM and the REM. Both modules are based on deep neural networks and share similar network architectures. Between the ARM and the REM, both parallel and series flows are included. On the one hand, the compressed LR image is received and processed by the ARM and the REM in a parallel way. As a result, the original information in the input is fully available to both modules without any loss or change. On the other hand, two series flows are formed by regarding the output of one module as the auxiliary input of the other one. These series streams enable the information exchange between the ARM and the REM so that the two modules can facilitate each other. In this way, our framework is capable of super-resolving compressed images without a priori assumptions on the compression. Furthermore, to make better use of the auxiliary inputs, a modified non-local operator and an adaptive combination module, both with learnable parameters, are introduced, which help improve the performance. Experiments are conducted on LR images with various compression configurations. Extensive comparisons on both the benchmark dataset and the social media dataset demonstrate the advantage of the proposed model over state-of-the-art SISR models.
\section{Acknowledgements} \label{sec:Acknowledgements} This work was supported in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2021A1515011584, in part by the Shenzhen Research and Development Program under Grants JCYJ20220531102408020 and JCYJ20200109105008228, and in part by the National Natural Science Foundation of China under Grant 62271323.
\section{Introduction} \label{LS:sec:Introduction} Let us consider the following space-time tracking optimal control problem: For a given target function $y_d \in L_2(Q)$ (desired state) and for some appropriately chosen regularization parameter $\varrho>0$, find the state $y \in Y_0 = \{ v \in L^2(0,T; H_0^1(\Omega)) : \partial_t v \in L^2(0,T;H^{-1}(\Omega)), \, v = 0 \mbox{ on } \Sigma_0\}$ and the control $u \in U = L_2(0,T;L_2(\Omega)) = L_2(Q)$ minimizing the cost functional \begin{equation} \label{LS:eqn:CostFunctional} J(y,u) = \frac{1}{2}\int_Q\!|y-y_d|^2\;\mathrm{d}Q + \frac{\varrho}{2} \|u\|_{L_2(Q)}^2 \end{equation} subject to the linear parabolic initial-boundary value problem (IBVP) \begin{equation} \label{LS:eqn:IBVP} \partial_t y - \mbox{div}_x(\nu \nabla_x y) = u \mbox{ in } Q,\quad y = 0 \mbox{ on } \Sigma,\quad y = 0 \mbox{ on } \Sigma_0, \end{equation} where $Q := \Omega \times (0,T)$, $\Sigma:= \partial \Omega \times (0,T)$, $\Sigma_0:=\Omega\times\{0\}$, $T > 0$ is the final time, $\partial_t$ denotes the partial time derivative, $\mbox{div}_x$ is the spatial divergence operator, $\nabla_x$ is the spatial gradient, and the source term $u$ on the right-hand side of the parabolic PDE serves as control. The spatial domain $\Omega \subset \mathbb{R}^d$, $d=1,2,3$, is supposed to be bounded and Lipschitz. We assume that $0 < \nu_1 \le \nu(x,t) \le \nu_2$ for almost all $(x,t) \in Q$ with positive constants $\nu_1$ and $\nu_2$. This standard setting was already investigated in the famous book by J.L.~Lions \cite{LSSC2021:LS:Lions:1971}. Since the state equation~\eqref{LS:eqn:IBVP} has a unique solution $y \in Y_0$, one can conclude the existence of a unique control $u \in U$ minimizing the quadratic cost functional $J(S(u),u)$, where $S$ is the solution operator mapping $u \in U$ to the unique solution $y \in Y_0$ of \eqref{LS:eqn:IBVP}; see, e.g., \cite{LSSC2021:LS:Lions:1971} and \cite{LSSC2021:LS:Troeltzsch:2010a}. There is a huge number of publications devoted to the numerical solution of the optimal control problem \eqref{LS:eqn:CostFunctional}--\eqref{LS:eqn:IBVP} with the standard $L_2(Q)$ regularization; see, e.g., \cite{LSSC2021:LS:Troeltzsch:2010a}. The overwhelming majority of the publications uses some time-stepping or discontinuous Galerkin method for the time discretization in combination with some space-discretization method like the finite element method; see, e.g., \cite{LSSC2021:LS:Troeltzsch:2010a}. The unique solvability of the optimal control problem can also be established by showing that the optimality system has a unique solution. In \cite{LSSC2021:LS:LangerSteinbachTroeltzschYang:2020b}, the Banach-Ne\u{c}as-Babu\u{s}ka theorem was applied to the optimality system to show its well-posedness. Furthermore, the discrete inf-sup condition, which does not follow from the inf-sup condition in the infinite-dimensional setting, was established for continuous space-time finite element discretizations on fully unstructured simplicial space-time meshes. The discrete inf-sup condition implies stability of the discretization and a priori discretization error estimates. Distributed controls $u$ from the space $U = L_2(0,T;H^{-1}(\Omega))$ together with energy regularization were investigated in \cite{LSSC2021:LS:LangerSteinbachTroeltzschYang:2020c}, where one can also find a comparison of the energy regularization with the $L_2(Q)$ and the sparse regularizations.
In this paper, we make use of the maximal parabolic regularity of the reduced optimality system in the case of the $L_2(Q)$ regularization and under additional assumptions imposed on the coefficient $\nu$. Then we can derive a stabilized finite element discretization of the reduced optimality system in the same way as was done for the state equation in our preceding papers \cite{LSSC2021:LangerNeumuellerSchafelner:2019a}. The properties of the finite element scheme lead to a priori discretization error estimates that are confirmed by the numerical experiments. \section{Space-Time Finite Element Discretization} \label{LS:sec:SpaceTimeFEDiscretization} Eliminating the control $u$ from the optimality system by means of the gradient equation $p + \varrho u = 0$, we arrive at the reduced optimality system, the weak form of which reads as follows: Find the state $y \in Y_0$ and the adjoint state $p \in P_T$ such that, for $v, q \in V = L_2(0,T; H^1_0(\Omega))$, it holds \begin{equation} \label{LS:eqn:OptimalitySystemWeakForm} \begin{array}{rcl} \displaystyle \varrho \int_Q \Big[\partial_t y \, v + \nu \,\nabla_x y \cdot \nabla_x v \Big] dQ + \int_Q p \, v \, dQ & = & 0, \\ \displaystyle - \int_Q y \, q \, dQ + \int_Q \Big[ - \partial_t p \, q + \nu \, \nabla_x p \cdot \nabla_x q \Big] dQ & = & \displaystyle - \int_Q y_d \, q \, dQ, \end{array} \end{equation} where $ P_T:= \{p \in L^2(0,T;H^1_0(\Omega)):\, \partial_t p \in L^2(0,T;H^{-1}(\Omega)), p = 0 \; \mbox{on} \; \Sigma_T \}. $ The variational reduced optimality system \eqref{LS:eqn:OptimalitySystemWeakForm} is well-posed; see \cite[Theorem 3.3]{LSSC2021:LS:LangerSteinbachTroeltzschYang:2020b}. Moreover, we additionally assume that the coefficient $\nu(x,t)$ is of bounded variation in $t$ for almost all $x \in \Omega$. Then $\partial_t y$ and $Ly := - \mathrm{div}_x(\nu\, \nabla_{x} y )$ as well as $\partial_t p$ and $Lp := - \mathrm{div}_x(\nu\, \nabla_{x} p )$ belong to $L_2(Q)$; see \cite{LSSC2021:LS:Dier:2015a}. This property is called maximal parabolic regularity. In this case, the parabolic partial differential equations involved in the reduced optimality system \eqref{LS:eqn:OptimalitySystemWeakForm} hold in $L_2(Q)$. Therefore, the solution of the reduced optimality system \eqref{LS:eqn:OptimalitySystemWeakForm} is equivalent to the solution of the following coupled system of forward and backward parabolic PDEs: Find $y \in Y_0 \cap H^{L,1}(Q)$ and $p \in P_T \cap H^{L,1}(Q)$ such that the coupled PDE optimality system \begin{equation} \label{LS:eqn:OptimalitySystemStrongForm} \begin{array}{rcl} \displaystyle \varrho \Big[\partial_t y - \mbox{div}_x(\nu \nabla_x y) \Big] & = & - p\quad \mbox{in}\; L_2(Q), \\ \displaystyle - \partial_t p - \mbox{div}_x(\nu \, \nabla_x p) & = & y - y_d\quad \mbox{in}\; L_2(Q) \end{array} \end{equation} holds, where $H^{L,1}(Q) = \{v \in H^1(Q): Lv := -\mathrm{div}_x (\nu \nabla_{x} v) \in L_2(Q)\}$. The coupled PDE optimality system \eqref{LS:eqn:OptimalitySystemStrongForm} is now the starting point for the construction of the coercive finite element scheme. Let $\mathcal{T}_h$ be a regular decomposition of the space-time cylinder $Q$ into simplicial elements, i.e., $\overline{Q} = \bigcup_{K\in\mathcal{T}_h} \overline{K}$, and $K\cap K'=\emptyset$ for all $K$ and $K'$ from $ \mathcal{T}_h $ with $K \neq K'$; see, e.g., \cite{LS:Ciarlet:1978a} for more details.
On the basis of the triangulation $\mathcal{T}_h$, we define the space-time finite element spaces \begin{eqnarray} \label{LS:eqn:Y0h} Y_{0h} & = & \{ y_h\in C(\overline{Q}) : y_h(x_K(\cdot)) \in\mathbb{P}_k(\hat{K}),\, \forall K \in \mathcal{T}_h,\, y_h=0\;\mbox{on}\; {\overline \Sigma} \cup {\overline \Sigma}_0 \}, \\ \label{LS:eqn:PTh} P_{Th} & = & \{ p_h\in C(\overline{Q}) : p_h(x_K(\cdot)) \in\mathbb{P}_k(\hat{K}),\, \forall K \in \mathcal{T}_h,\, p_h=0\;\mbox{on}\; {\overline \Sigma} \cup {\overline \Sigma}_T \}, \end{eqnarray} where $x_K(\cdot)$ denotes the map from the reference element $\hat{K}$ to the finite element $K \in \mathcal{T}_h$, and $\mathbb{P}_k(\hat{K})$ is the space of polynomials of degree $k$ on the reference element $\hat{K}$. For brevity of the presentation, we set $\nu$ to $1$. The same derivation can be done for coefficients $\nu$ that fulfill the condition $\mbox{div}_x(\nu \, \nabla_x w_h)|_K \in L_2(K)$ for all $w_h$ from $Y_{0h}$ or $P_{Th}$ and for all $K \in \mathcal{T}_h$ (i.e., piecewise smooth coefficients) in addition to the conditions imposed above. Multiplying the first PDE in \eqref{LS:eqn:OptimalitySystemStrongForm} by $v_h + \lambda \partial_t v_h$ with $v_h \in Y_{0h}$, and the second one by $q_h - \lambda \partial_t q_h$ with $q_h \in P_{Th}$, integrating over $K$, integrating by parts in the elliptic parts where the scaling parameter $\lambda$ does not appear, and summing over all $K \in \mathcal{T}_h$, we arrive at the variational consistency identity \begin{equation} \label{LS:eqn:Consistency} a_h(y,p;v_h,q_h) = \ell_h(v_h,q_h) \quad \forall (v_h,q_h) \in Y_{0h} \times P_{Th}, \end{equation} with the combined bilinear and linear forms \begin{eqnarray} \label{LS:eqn:a_h} \nonumber a_h(y,p;v,q) & = & \sum_{K\in\mathcal{T}_h} \int_K\Big[\varrho\bigl(\partial_{t}y\,v+\lambda\partial_{t}y\partial_{t}v + \nabla_{x}y\cdot\nabla_{x}v - \lambda \Delta_{x}y\,\partial_{t}v\bigr) \\ \nonumber &&\quad\quad + p(v + \lambda \partial_{t}v) - \partial_{t}p\,q+\lambda \partial_{t}p\partial_{t}q + \nabla_{x}p\cdot\nabla_{x}q\\ &&\quad\quad +\lambda \Delta_{x}p\,\partial_{t}q - y(q - \lambda \partial_{t}q)\Big]\,\mathrm{d}K \quad\mbox{and}\\ \label{LS:eqn:l_h} \ell_h(v,q) & = & - \sum_{K\in\mathcal{T}_h} \int_K y_d (q - \lambda \partial_{t}q)\,\mathrm{d}K, \end{eqnarray} respectively. Now, the corresponding consistent finite element scheme reads as follows: Find $(y_h,p_h) \in Y_{0h} \times P_{Th}$ such that \begin{equation} \label{LS:eqn:FEM} a_h(y_h,p_h;v_h,q_h) = \ell_h(v_h,q_h) \quad \forall (v_h,q_h) \in Y_{0h} \times P_{Th}. \end{equation} Subtracting \eqref{LS:eqn:FEM} from \eqref{LS:eqn:Consistency}, we immediately get the Galerkin orthogonality relation \begin{equation} \label{LS:eqn:GO} a_h(y-y_h,p-p_h;v_h,q_h) = 0 \quad \forall \, (v_h,q_h) \in Y_{0h} \times P_{Th}, \end{equation} which is crucial for deriving discretization error estimates. \section{Discretization Error Estimates} \label{LS:sec:DiscretizationErrorEstimates} We first show that the bilinear form $a_h$ is coercive on $Y_{0h} \times P_{Th}$ with respect to the norm \begin{eqnarray*} \|(v,q)\|_{h}^2 &=& \varrho\, \|v\|_{h,T}^2 + \|q\|_{h,0}^2 = \varrho\,\bigl( \|v(\cdot,T)\|_{L_2(\Omega)}^2 + \|\nabla_{x}v\|_{L_2(Q)}^2 + \lambda \|\partial_t v\|_{L_2(Q)}^2 \bigr)\\ && \hspace*{30mm} + \|q(\cdot,0)\|_{L_2(\Omega)}^2 + \|\nabla_{x}q\|_{L_2(Q)}^2 + \lambda \|\partial_t q\|_{L_2(Q)}^2.
\end{eqnarray*} Indeed, for all $(v_h,q_h) \in Y_{0h} \times P_{Th}$, we get the estimate \begin{eqnarray} \label{LS:eqn:Coercivity} \nonumber a_h(v_h,q_h;v_h,q_h) &=& \sum_{K\in\mathcal{T}_h} \int_K\Big[\varrho\bigl(\partial_{t}v_h\,v_h+\lambda |\partial_{t}v_h|^2 + |\nabla_{x}v_h|^2 - \lambda \Delta_{x}v_h\,\partial_{t}v_h\bigr)\\ \nonumber &&\quad\quad + q_h(v_h + \lambda \partial_{t}v_h) - \partial_{t}q_h\,q_h+\lambda |\partial_{t}q_h|^2 + |\nabla_{x}q_h|^2\\ \nonumber &&\quad\quad +\lambda \Delta_{x}q_h\,\partial_{t}q_h - v_h(q_h - \lambda \partial_{t}q_h)\Big]\,\mathrm{d}K\\ &\ge& \mu_c \, \|(v_h,q_h)\|_h^2, \end{eqnarray} with $\mu_c = 1/2$ provided that $\lambda \le c_{inv}^{-2} h^2$, where $c_{inv}$ denotes the constant in the inverse inequality $\|\mbox{div}_x(\nabla_x w_h)\|_{L_2(K)}\le c_{inv} h^{-1} \|\nabla_x w_h\|_{L_2(K)}$ that holds for all $w_h \in Y_{0h}$ or $w_h \in P_{Th}$. Here we used the identities $\int_Q \partial_t v_h \, v_h \,\mathrm{d}Q = \frac{1}{2}\|v_h(\cdot,T)\|_{L_2(\Omega)}^2$ and $-\int_Q \partial_t q_h \, q_h \,\mathrm{d}Q = \frac{1}{2}\|q_h(\cdot,0)\|_{L_2(\Omega)}^2$, which follow from the initial and terminal conditions built into $Y_{0h}$ and $P_{Th}$, the observation that the coupling terms sum up to $\lambda\,\partial_t(v_h q_h)$ and hence integrate to zero, and Cauchy's inequality combined with the inverse inequality for the terms containing $\Delta_x$. For $k=1$, the terms $\Delta_{x}v_h$ and $\Delta_{x}q_h$ are zero, and we do not need the inverse inequality, but $\lambda$ should also be of order $O(h^2)$ in order to get an optimal convergence rate estimate. The coercivity of the bilinear form $a_h$ immediately implies uniqueness and existence of the finite element solution $(y_h,p_h) \in Y_{0h} \times P_{Th}$ of \eqref{LS:eqn:FEM}. In order to prove discretization error estimates, we need the boundedness of the bilinear form \begin{equation} \label{LS:eqn:Boundedness} |a_h(y,p;v_h,q_h)| \le \mu_b \|(y,p)\|_{h,*}\|(v_h,q_h)\|_{h} \quad \forall (v_h,q_h) \in Y_{0h} \times P_{Th}, \end{equation} and for all $y \in Y_{0h} + Y_0 \cap H^{L,1}(Q)$ and $p \in P_{Th} + P_T \cap H^{L,1}(Q)$, where \begin{eqnarray*} \|(y,p)\|_{h,*}^2 &=& \|(y,p)\|_{h}^2 + \varrho \sum_{K\in\mathcal{T}_h}\lambda\,\|\Delta_x y\|_{L_2(K)}^2 + [(\varrho + 1)\lambda^{-1} +\lambda]\, \|y\|_{L_2(Q)}^2\\ && + \sum_{K\in\mathcal{T}_h}\lambda\,\|\Delta_x p\|_{L_2(K)}^2 + [2\lambda^{-1} +\lambda]\, \|p\|_{L_2(Q)}^2. \end{eqnarray*} Indeed, using Cauchy's inequalities and the Friedrichs inequality $\|w\|_{L_2(Q)} \le c_{F\Omega} \|\nabla_x w\|_{L_2(Q)}$ that holds for all $w \in Y_{0}$ or $w \in P_{T}$, we can easily prove \eqref{LS:eqn:Boundedness} with $\mu_b = (\max\{4, 1 + \lambda c_{F\Omega}^2, 3 + \varrho^{-1}, 1 + \lambda c_{F\Omega}^2 \varrho^{-1}\})^{1/2}$. Now, \eqref{LS:eqn:GO}, \eqref{LS:eqn:Coercivity}, and \eqref{LS:eqn:Boundedness} immediately lead to the following C\'{e}a-like estimate of the discretization error by some best-approximation error. \begin{theorem} \label{LS:the:CEA} Let $y_d \in L_2(Q)$ be a given target, and let $\nu \in L_\infty(Q)$ fulfill the assumptions imposed above. Furthermore, we assume that the regularization (cost) parameter $\varrho \in \mathbb{R}_+$ is fixed. Then the C\'{e}a-like estimate \begin{equation*} \|(y-y_h,p-p_h)\|_h \le \inf_{v_h \in Y_{0h}, q_h \in P_{Th}} \Bigl( \|(y-v_h,p-q_h)\|_h + \frac{\mu_b}{\mu_c} \|(y-v_h,p-q_h)\|_{h,*}\Bigr) \end{equation*} holds, where $(y,p)$ and $(y_h,p_h)$ are the solutions of \eqref{LS:eqn:OptimalitySystemWeakForm} and \eqref{LS:eqn:FEM}, respectively.
\end{theorem} This C\'{e}a-like estimate immediately yields convergence rate estimates of the form \begin{equation} \|(y-y_h,p-p_h)\|_h \le c(y,p)\, h^s \end{equation} with $s = \min\{k,l\}$ provided that $y \in Y_0 \cap H^{L,1}(Q) \cap H^{l+1}(Q)$ and $p \in P_T \cap H^{L,1}(Q) \cap H^{l+1}(Q)$, where $l$ is some positive real number defining the regularity of the solution; see \cite{LSSC2021:LangerNeumuellerSchafelner:2019a} for corresponding convergence rate estimates for the state equation only. \section{Numerical Results} \label{LS:sec:NumericalResults} Let $ \{ \phi^{(j)} : j=1,\dots,N_h \} $ be a nodal finite element basis for $ Y_{0h} $, and let $ \{ \psi^{(m)} : m=1,\dots,M_h \} $ be a nodal finite element basis for $ P_{Th} $. Then we can express each finite element function $ y_h \in Y_{0h} $ and $ p_h \in P_{Th}$ via the finite element basis, i.e., $ y_h = \sum_{j=1}^{N_h} y_{j} \phi^{(j)} $ and $ p_h = \sum_{m=1}^{M_h} p_{m} \psi^{(m)} $, respectively. We insert this ansatz into \eqref{LS:eqn:FEM}, test with the basis functions $ \phi^{(i)} $ and $\psi^{(n)}$, and obtain the system \[ \mathbf{K}_h \begin{pmatrix} \mathbf{y}_h \\\mathbf{p}_h \end{pmatrix} =\begin{pmatrix} \mathbf{0} \\ \mathbf{f}_h \end{pmatrix} \] with $ \mathbf{K}_{h} = (a_h(\phi^{(j)},\psi^{(m)};\phi^{(i)},\psi^{(n)}))_{i,j=1,\dots,N_h}^{m,n=1,\dots,M_h} $, $ \mathbf{f}_h = (\ell_h(0,\psi^{(n)}))_{n=1,\dots,M_h} $, $\mathbf{y}_h = (y_j)_{j=1,\dots,N_h}$ and $\mathbf{p}_h = (p_m)_{m=1,\dots,M_h}$. The (block-)matrix $ \mathbf{K}_h$ is non-symmetric, but positive definite due to \eqref{LS:eqn:Coercivity}. Hence the linear system is solved by means of the flexible Generalized Minimal Residual (GMRES) method, preconditioned by a block-diagonal algebraic multigrid (AMG) method, i.e., we apply an AMG preconditioner to each of the diagonal blocks of $ \mathbf{K}_h$. Note that we need to solve this system only once in order to obtain a numerical solution of the space-time tracking optimal control problem \eqref{LS:eqn:CostFunctional}--\eqref{LS:eqn:IBVP}, consisting of the state and the adjoint state. The control can then be recovered from the gradient equation $ p + \varrho u = 0 $. The space-time finite element method is implemented by means of the \texttt{C++} library MFEM \cite{LS:mfem-library}. We use \emph{BoomerAMG}, provided by the linear solver library \emph{hypre}, to realize the preconditioner. The linear solver is stopped once the initial residual is reduced by a factor of $10^{-8}$. We are interested in convergence rates with respect to the mesh size $h$ for a fixed regularization (cost) parameter $\varrho$. \subsection{Smooth Target} For our first example, we consider the space-time cylinder $ Q = (0,1)^3$, i.e., $d=2$, the manufactured state \[ y(x,t) = \sin(x_1\,\pi)\sin(x_2\,\pi) \left(a\,t^2+b\,t\right), \] as well as the corresponding adjoint state \[ p(x,t) = -\varrho \sin(x_1\,\pi)\sin(x_2\,\pi) \left(2\,\pi^2\,a\,t^2 + (2\,\pi^2\,b + 2\,a)t + b\right), \] with $ a=\frac{2\,\pi^2 + 1}{2\,\pi^2 + 2}\ \text{and}\ b=1. $ The desired state $ y_d $ and the optimal control $ u $ are then computed accordingly, and we fix the regularization parameter $ \varrho = 0.01 $. This problem is very smooth and devoid of any local features or singularities, hence we expect optimal convergence rates. Indeed, as we can observe in Fig.~\ref{LS:fig:conv}, the error in the $ \|(\cdot,\cdot)\|_{Y_0\times P_T} $-norm decreases with a rate of $ \mathcal{O}(h^{k}) $, where $ k $ is the polynomial degree of the finite element basis functions.
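For completeness, one can verify directly that the manufactured pair satisfies the gradient-eliminated state equation $\varrho(\partial_t y - \Delta_x y) = -p$ from \eqref{LS:eqn:OptimalitySystemStrongForm}: \begin{equation*} \partial_t y - \Delta_x y = \sin(x_1\,\pi)\sin(x_2\,\pi)\left(2\,\pi^2\,a\,t^2 + (2\,\pi^2\,b + 2\,a)\,t + b\right) = -\frac{p}{\varrho}. \end{equation*}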
\begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{ms_fig1} \caption{Convergence rates for different polynomial degrees $ k = 1,2,3 $.}\label{LS:fig:conv} \end{figure} \subsection{Discontinuous Target} For the second example, we consider once more the space-time cylinder $ Q = (0,1)^3$, and specify the target state \[ y_d(x,t) = \begin{cases} 1, & \sqrt{(x_1 - 0.5)^2 + (x_2 - 0.5)^2 + (t-0.5)^2} \le 0.25, \\ 0, & \text{else}, \end{cases} \] as an expanding and shrinking circle that is nothing but a \emph{fixed} ball in the space-time cylinder $Q$. We use the fixed regularization parameter $ \varrho = 10^{-6} $. Here, we do not know the exact solutions for the state or the optimal control, thus we cannot consider any convergence rates for the discretization error. However, the discontinuous target state may introduce local features at the (hyper-)surface of discontinuity. Hence it might be beneficial to use adaptive mesh refinements driven by an a posteriori error indicator. In particular, we use the residual-based indicator proposed by Steinbach and Yang \cite{LS:SteinbachYang:2018a}, applied to the residuals of the reduced optimality system \eqref{LS:eqn:OptimalitySystemStrongForm}. The final indicator is then the sum of the squares of both parts. In Fig.~\ref{LS:fig:meshes}, we present the finite element functions $y_h$, $p_h$, and $u_h$, plotted over cuts of the space-time mesh $ \mathcal{T}_h $ at different times $t$. We can observe that the mesh refinements are mostly concentrated in annuli centered at $ (0.5, 0.5) $, e.g., for $ t = 0.5$, the outer and inner radii are $ \sim\frac{7}{36}\pm \frac{1}{36} $, respectively; see Fig.~\ref{LS:fig:meshes} (middle row). \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{ms_fig2} \caption{Finite element solutions, with $ J(y_h,u_h) = 3.5095\times10^{-3} $, plotted over the space-time mesh, obtained after 20 adaptive refinements, and cut at $t = 0.3125$ (upper row), $t=0.5$ (middle row), and $t=0.6875$ (lower row).} \label{LS:fig:meshes} \end{figure} \section{Conclusions} \label{LS:sec:Conclusions} We proposed a stable, fully unstructured, space-time simplicial finite element discretization of the reduced optimality system of the standard space-time tracking parabolic optimal control problem with $L_2$-regularization. We derived a priori discretization error estimates. We presented numerical results for two benchmarks. We observed optimal rates for the example with smooth solutions, as predicted by the a priori estimates. In the case of a discontinuous target, we used full space-time adaptivity. In order to get the full space-time solution $(y_h,p_h,u_h)$, one has to solve only {\it one} system of algebraic equations. In this paper, we used flexible GMRES preconditioned by AMG. \bibliographystyle{acm}
\section{Introduction} Seasonal Climate Prediction (SCP) has gained momentum in the last decade \cite{doblas2013seasonal}, becoming an important field of study, with applications in very different areas such as agriculture, risk management, long-term energy planning or climate change and extreme events modelling \cite{pepler2015ability,salcedo2022analysis}, among others. SCP problems are especially interesting in the current context of climate change, since they may have important consequences in the future \cite{masson2021climate}. One such effect of climate change is the constantly rising long-term average temperature, known as global warming, together with the associated greenhouse gases \cite{seager2019strengthening,change2018global}. However, changing weather conditions not only affect long-term temperature averages, but also give rise to temporally much shorter periods with drastically large deviations from steady levels, producing extreme phenomena such as heatwaves and severe droughts. Evidence shows that these extreme weather events can cause worldwide consequences and impacts on natural resources (agriculture, construction, energy) \cite{bergmann2016natural}, the financial sector \cite{wolf2010social} and, of course, human health \cite{diaz2002effects,diaz2002heat}. Also, one of the effects of climate change is to produce warmer summers \cite{pena2015multidecadal}, which can be further studied by predicting average summer-month temperatures on a long-term basis. SCP problems related to air temperature are therefore extremely important and challenging, due to the long prediction time-horizons involved. There are many previous works on problems related to the long-term prediction of air temperature, many of them involving Machine Learning (ML) or Artificial Intelligence (AI) methods. For example, there have been several previous works discussing the application of neural networks to long-term air temperature prediction problems, such as \cite{ustaoglu2008forecast}, where three different types of neural networks were applied to the prediction of daily mean, maximum and minimum temperature time series in Turkey. In \cite{abdel1995modeling}, different artificial neural networks were applied to a problem of daily maximum temperature prediction in Dhahran, Saudi Arabia. Data for 18 weather parameters were considered as input variables, and the objective was to predict the maximum temperature on a given day, with different prediction time-horizons of up to 3 days in advance. In \cite{de2009artificial}, a multi-layer perceptron neural network was applied to the prediction of the maximum air temperature in the summer monsoon season in India. The mean temperatures of previous months in the period of analysis were considered as inputs to the system. Other ML approaches have also been applied to the long-term prediction of air temperature. For example, in \cite{paniagua2011prediction} a Support Vector Regression (SVR) algorithm was applied to a problem of daily maximum air temperature prediction, with a 24h prediction time-horizon. Input variables such as previous air temperature, precipitation, relative humidity, air pressure and the synoptic situation were considered. Results in different European measurement stations were reported. In \cite{mellit2013least}, a least squares SVR algorithm was applied to the prediction of temperature time series in Saudi Arabia.
In \cite{ahmed2020multi}, different ML approaches are proposed to develop multi-model ensembles from global climate models. The objective is to obtain annual predictions of monsoon maximum and minimum temperatures, among other variables, over Pakistan. In \cite{peng2020prediction}, two ML algorithms (MLP and natural gradient boosting (NGBoost)) are applied to improve the prediction skill of the 2-m maximum air temperature, with prediction time-horizons of 1 to 35 days. In \cite{Oettli2022}, a number of ML algorithms such as neural networks, SVMs, RF, Gradient Boosting or regression trees have been applied to the prediction of surface air temperature two months in advance, with input data two months in advance from SINTEX-F2, a dynamical prediction system. Results on data from Tokyo (Japan) have confirmed the good skill of the prediction. In the last years, Deep Learning (DL) algorithms have been successfully applied to long-term air temperature prediction problems, such as in \cite{karevan2020transductive}, where a type of LSTM network (Transductive LSTM) is applied to a problem of temperature prediction in Belgium and the Netherlands, or \cite{vos2021long}, where a coupling of CNN and LSTM (ConvLSTM) is proposed for a long-range air temperature prediction problem. Note that such DL models quickly become complex, and in many cases the training sample size needs to be extraordinarily large. However, one important issue is that the training size is usually severely constrained, due to the availability of the historic data. There are several public meteorological databases from measurements or Reanalysis \cite{salcedo2020machine}, but many of them, such as Reanalysis data, are limited to data from 1950 or 1979 onwards. This means that in many cases only 72 years of meteorological data are effectively available for a given geographical location and, if severe extreme events occur every 10--15 years, only a handful of extreme events are incorporated within the data. The application of complex DL models to SCP problems therefore implies a trade-off with data availability, in which improvements can be expected by means of information fusion \cite{rasp2018deep}. For example, in \cite{rasp2020weatherbench} (WeatherBench) an example of an image-to-image translation using a CNN (among other methods) for medium-range weather predictions of up to 5 days has been shown. In that paper, the inputs are organised as images, where each pixel represents a geographical location. Similarly, the outputs are organised as images, hence the image-to-image translation. An improved WeatherBench approach with a pre-trained ResNet was proposed soon after in \cite{rasp2021data}. In \cite{jin2022deep}, the application of CNNs to a case study of climate prediction over China was shown. The so-called capsule neural networks (CapsNets) were proposed for DL analog predictions in \cite{chattopadhyay2020analog}, where they exhibited significant statistical benefits compared to usual DL practices. In \cite{taylor2022deep}, an integrated framework for predicting the sea surface temperature was proposed. The proposed method, the so-called Unet-LSTM, based on the LSTM, showed mixed prediction skills for two past extreme events, again on an image-to-image basis to emphasise the ``big-picture'' phenomena.
Based on the excellent performance previously shown by ML and DL approaches in air temperature prediction, in this paper we propose and analyze different ML and DL approaches with data fusion and data reduction techniques for a long-term air temperature prediction problem. Specifically, the objective of the research is to predict the average temperature of the first and second August fortnights, using meteorological data from previous months. This problem has different climatological and energy-related applications, such as detection and attribution of heatwaves or prediction of energy consumption, among others. In order to achieve this objective, we propose the following procedure, based on artificial intelligence techniques: we start with a first correlation analysis between predictive variables (meteorological variables) and the target variable (air temperature of the first and second August fortnights). This correlation analysis defines a Geographic Area Selection (GAS), a reduced area of study with the highest correlation between predictive and target variables. Next, we apply an Exhaustive Feature Search (EFS) to reduce the number of predictive (input) variables in the modelling methods. Then, three different computational frameworks for prediction are defined: First, we analyze the performance of a Convolutional Neural Network (CNN) with video-to-image translation. In this case, a video stands for a sequence of half-monthly climate data, and 3D CNN filters are exploited to reduce the input dimension to an output image. Here, pixels represent geographical coordinates and the 3-channelled RGB dimensions are replaced by $n$-channelled climate data. This CNN with video-to-image translation has been applied to the whole GAS-defined data area. The second computational framework, independent of the first one, analyzes the performance of several ML approaches (multi-linear regression (LR), Lasso regression, polynomial regression (Poly), AdaBoost, regression trees (DT) and Random Forest (RF)). In this case we have selected a single node of the GAS (the most correlated node), and the inputs to the ML approaches are time series of climate variables (not images). The last computational framework also considers the CNN as a central processing element, but instead of processing the raw data, it relies on a pre-processing step with Recurrence Plots (RPs) \cite{eckmann_recurrence_1987,thiel2004much}, which convert time series into images. In this case, RPs share the same initial data as the ML methods, i.e., a time series of length $t$ for a given geographic coordinate with the highest correlation with the target. Two different methodologies, i.e., analogue and binarised RPs, are applied and compared. After the application of the RP, the resulting image is fed to a CNN in order to obtain a final air temperature prediction within this computational framework. The proposed methodology, with its three computational frameworks, is trained and validated on Reanalysis data (ERA5 Reanalysis), considering two different geographical locations in Europe: Paris (France, northern Europe) and Córdoba (Spain, Iberian Peninsula), where episodes of extreme summer temperatures have occurred in the last decades. For example, the extreme temperature episode of August 2003 severely impacted the south of Spain, including the city of Córdoba, and also severely affected the north of France \cite{garcia2010review}.
The rest of the paper is structured in the following way: the next section discusses the different data handling and fusion techniques used in this paper. We discuss here the Reanalysis data used, the processing of the data into geographic area selection, and a process of feature selection to obtain the best set of input data for the DL approaches. Section \ref{sec:methods} presents the proposed CNN-based methods for accurate long-term air temperature prediction. Section \ref{sec:Experiments} shows the performance of the proposed DL approaches based on CNN in the two geographical areas considered (Paris and Córdoba). A comparison with alternative ML algorithms and a discussion of the findings are also given in this section. Section \ref{sec:Conclusions} closes the paper with some final remarks and conclusions on the research work carried out. \section{Data handling and data reduction techniques} \label{data} Original meteorological data were obtained from a single source, the ERA5 Reanalysis \cite{hersbach2020era5}, compiled and maintained by the European Centre for Medium-Range Weather Forecasts (ECMWF) \cite{ECMWF}, in a GRIB file format. The considered input and output variables are listed in Table \ref{tab:vars}, together with the corresponding notation used throughout the paper. Each data variable was initially obtained on an hourly basis, ranging from 1st January 1950 to 31st December 2021, between latitude and longitude coordinates ranging $[70^\circ \textmd{N},~20^\circ \textmd{N}]$ and $[30^\circ \textmd{W},~30^\circ \textmd{E}]$, with a coordinate resolution of 0.25 degrees. \begin{table}[] \centering \begin{tabular}{llr} \toprule No. & Variable & Notation \\ \toprule 1. & air temperature* (at 2m) & $x_{ijt}^{(t2m)}$ \\ 2. & sea surface temperature & $x_{ijt}^{(sst)}$ \\ 3. & 10 metre u-component of wind & $x_{ijt}^{(u10)}$ \\ 4. & 10 metre v-component of wind & $x_{ijt}^{(v10)}$ \\ 5. & 100 metre u-component of wind & $x_{ijt}^{(u100)}$ \\ 6. & 100 metre v-component of wind & $x_{ijt}^{(v100)}$ \\ 7. & mean sea level pressure & $x_{ijt}^{(msl)}$ \\ 8. & volumetric soil water layer 1 & $x_{ijt}^{(swvl1)}$ \\ 9. & geopotential pressure level on 500 hPa & $x_{ijt}^{(geo500)}$ \\ \bottomrule \end{tabular} \caption{Meteorological variables (data) used in the study. *~=~used not only as an input variable but also as the output variable. A whole dataset is denoted by $x_{ijt}^{(k)}$, where $k$ represents an arbitrary data variable. The true output is represented as $y_t$, the predicted output as $\hat{y}_t$.} \label{tab:vars} \end{table} These meteorological data were then further treated for data reduction by temporal averaging. Downsampling was performed for each meteorological data variable separately, on a fortnight (semi-monthly) basis. This means that the original hourly data were transformed into averaged fortnight data. Hence, two data samples were created for each month, the first sample describing the observations in the first fortnight of a given month, and the second sample those in the second one. This way, 24 downsampled climate data samples per year were generated. A custom notation for the semi-monthly data is used throughout this paper: $\tau_1$ represents the first fortnight of a given month, and $\tau_2$ represents the second fortnight.
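This temporal downsampling step can be sketched as follows; this is a minimal sketch, assuming the hourly ERA5 fields are loaded with the \texttt{xarray} library and the \texttt{cfgrib} engine, and the file name \texttt{era5.grib} is a placeholder:
\begin{verbatim}
import pandas as pd
import xarray as xr

# Open the hourly ERA5 fields (file name is a placeholder).
ds = xr.open_dataset("era5.grib", engine="cfgrib")

# Label every hourly time stamp with its year, month and fortnight:
# tau1 for days 1-15, tau2 for the rest of the month.
times = pd.to_datetime(ds.time.values)
labels = [f"{t.year}-{t.month:02d}-tau{1 if t.day <= 15 else 2}"
          for t in times]

# Average all hourly samples sharing a label: 24 samples per year.
semi_monthly = (ds.assign_coords(fortnight=("time", labels))
                  .groupby("fortnight")
                  .mean("time"))
\end{verbatim}
The zero-padded labels sort chronologically, so the resulting semi-monthly samples remain in temporal order.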
The spatial treatment of the data was carried out as follows: first, the 9 different meteorological variables were considered and visualised using coordinate (geographical) plots. The ERA5 variables were obtained on a regular grid, consisting of a very large area incorporating almost the whole of Europe including Iceland, part of northern Africa and almost half of the Atlantic towards the USA. Three specific problems arose with the incorporation of such an amount of data: (1) it was extremely difficult to process the complete available area with all available meteorological variables due to computational limitations; (2) it seemed intuitive that filtered and concrete subsets of data should lead to better DL performance than large amounts of unfiltered data; (3) specific predictor variables, such as $x_{ijt}^{(sst)}$, were only available in certain areas, i.e., the sea, while for land areas these values were not defined, which could be problematic for the prediction stage. These problems suggest that subsets of data need to be selected before further modelling. We call these subsets ``geographic area selection'' (GAS) regions, and their purpose is to obtain relevant geographic areas for predicting the outputs ($\hat{y}_t$, compared to the true outputs $y_t$) in a given area of study, i.e., Paris and Córdoba in this case. GAS regions were obtained for Paris and Córdoba by calculating the Pearson correlation coefficients of each meteorological variable for each geographic coordinate available. Then, rectangular regions covering the most relevant areas were selected to form images, one per predictor variable (the geographic areas were allowed to differ between predictor variables). Image sizes of $33 \times 33$ were empirically recognised as a compromise between the geographical area coverage on the one hand and the homogeneity of the correlated areas on the other (larger image sizes would expose areas with less homogeneous values of correlation coefficients, smaller images would omit relevant geographical information). Pearson correlation coefficients between each predictor variable and the temperature in Paris or Córdoba ($y_t$) were calculated as follows: \begin{equation} corr_{x_{ij}^{(k)}} = \rho \left (x_{ijt}^{(k)}, y_{t'} \right ), \end{equation} where $\rho$ denotes the Pearson correlation coefficient, $x$ denotes one of the 9 available data variables, the indices $i,j$ denote the pair of location coordinates (latitude, longitude) and $t$ is a time index. The index $t'$ represents the (possibly delayed) time index of the target: the target temperature $y$ is paired with predictors taken at the same time (coincident) or one ($\tau_1$) or two ($\tau_2$) fortnights earlier. The variable $k$ represents the given predictor (explanatory) variable. It must hold that $lat_{min}^{(k)}<i<lat_{max}^{(k)}$ and $long_{min}^{(k)}<j<long_{max}^{(k)}$, where $lat_{min}^{(k)}, lat_{max}^{(k)}, long_{min}^{(k)}, long_{max}^{(k)}$ define the GAS area. In the next subsections, the two GAS procedures carried out, one for Paris and the other for Córdoba, are presented in detail. \subsection{Geographic Area Selection for Paris} In order to obtain the GAS, correlation analyses for each variable are performed between the averaged climate data predictors and the averaged temperature in a given study area (city), considering a possible synoptic relation between predictive and target variables. They are performed specifically for each predictor to obtain the areas most correlated with the temperature in a given city. Since forecasts are always made in advance, both the coincident (present) and time-delayed (past) scenarios of correlation analyses are considered, and a compromise between the two is taken when selecting the GAS.
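The per-pixel correlation maps for the coincident and delayed scenarios described next can be sketched as follows; this is a minimal sketch, assuming a predictor stored as a \texttt{numpy} array of shape (time, lat, lon), the target as a one-dimensional series of the same length, and \texttt{delay} equal to 0 (coincident), 1 ($\tau_1$) or 2 ($\tau_2$) fortnights:
\begin{verbatim}
import numpy as np

def correlation_map(x, y, delay=0):
    # Pair each pixel series of the predictor x with the target y,
    # taking y `delay` fortnights after the predictor.
    if delay > 0:
        x, y = x[:-delay], y[delay:]
    xc = x - x.mean(axis=0)   # centre the predictor in time
    yc = y - y.mean()         # centre the target
    num = (xc * yc[:, None, None]).sum(axis=0)
    den = np.sqrt((xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return num / den          # Pearson coefficient per (lat, lon)
\end{verbatim}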
The coincident scenario considers time-coincident pairs of the temperature in Paris (target) and a given predictor variable for each geographic coordinate, e.g., the series of a predictor variable for a given geographic coordinate $x_{ij}$ from Jan-$\tau_1$'1950 to Dec-$\tau_2$'2021 and the $y_{t}$'s in Paris from Jan-$\tau_1$'1950 to Dec-$\tau_2$'2021. The $\tau_1$ time delay scenario depicts the Pearson correlation analysis with the $y_{t}$ in Paris delayed by $\tau_1$, e.g., the series of a given predictor variable for a given geographic coordinate $x_{ij}$ from Jan-$\tau_1$'1950 to Dec-$\tau_1$'2021 and the $y_{t}$'s in Paris from Jan-$\tau_2$'1950 to Dec-$\tau_2$'2021 (note that the sample size decreases by 1 instance in this case). In turn, the $\tau_2$ time delay scenario depicts the Pearson correlation analysis with the $y_{t}$ in Paris delayed by $\tau_2$, e.g., the series of a given predictor variable for a given geographic coordinate $x_{ij}$ from Jan-$\tau_1$'1950 to Nov-$\tau_2$'2021 and the $y_{t}$'s in Paris from Feb-$\tau_1$'1950 to Dec-$\tau_2$'2021 (note that the sample size decreases by 2 instances in this case). Results are visualised on a geographic map and are interpreted with the help of a colour-bar, where darker red or blue colours indicate larger positive or negative correlations, respectively. Figure \ref{fig:paris_1_corr} depicts the three Pearson correlation analyses for Paris. \begin{figure} \centering \includegraphics[width=\textwidth]{fig/Correlation_Paris_first.png} \caption{Correlation analysis (Paris), first part of the variables. The three columns represent the Pearson correlation analyses between the $y_{t}$ in Paris and each geographic coordinate for each variable $x_{ijt}^{(k)}$. "Coincident"=Pearson correlation coefficients between the coincident pairs; "$\tau_1$"=Pearson correlation coefficients between pairs delayed by $\tau_1$, "$\tau_2$"=Pearson correlation coefficients between pairs delayed by $\tau_2$. The red rectangles inside the figures denote the regions with the highest or lowest correlation coefficients.} \label{fig:paris_1_corr} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig/Correlation_Paris_second.png} \caption{Correlation analysis (Paris), second part of the variables. The three columns represent the Pearson correlation analyses between the $y_{t}$ in Paris and each geographic coordinate for each variable $x_{ijt}^{(k)}$. "Coincident"=Pearson correlation coefficients between the coincident pairs; "$\tau_1$"=Pearson correlation coefficients between pairs delayed by $\tau_1$, "$\tau_2$"=Pearson correlation coefficients between pairs delayed by $\tau_2$. The red rectangles inside the figures denote the regions with the highest or lowest correlation coefficients.} \label{fig:paris_2_corr} \end{figure} As expected, the $y_{t}$ data variable is the most correlated to itself among all predictor variables. Regions near Paris score correlation coefficients near +1; the further away we go, the lower the correlation coefficients. Land is more correlated than the sea: over the Atlantic, correlation coefficients on average score values around +0.5. The further we go to the north or south, the lower the correlation coefficient, whereas similar latitudes maintain similar correlation coefficient values. The $\tau_1$ delay is, as expected, a bit less correlated, and the $\tau_2$ delay even less.
For the latter, latitudes below $30^\circ$N score correlation coefficients near zero, therefore they cannot deliver much information value for predicting the $y_{t}$ in Paris. In this case, the GAS region is centred in Paris and extends symmetrically to the north, south, east and west. Next, we analyze the pair of wind components at 10 metres. The $(x_{ijt}^{(v10)}, y_{t})$ pair obtains higher levels of correlation coefficients than the $(x_{ijt}^{(u10)}, y_{t})$ pair. The $(x_{ijt}^{(v10)}, y_{t})$ pair is similarly correlated for the $\tau_1$ and $\tau_2$ delays as for the coincident scenario, and can thus be treated as a stable (or even leading) indicator, although with lower correlation magnitudes, approximately $-0.25$. The south Mediterranean area and the north-west (as well as north-east) of Africa score much higher correlation coefficients, and the relation even gets stronger when prolonging the delay. Some parts of eastern and western Europe seem to be highly positively correlated, but the effect is not very homogeneous, and the information value towards predicting the $y_{t}$ is questionable. No significant correlation is found in the central European area for the pair $(x_{ijt}^{(u10)}, y_{t})$, but a strong negative correlation is found between the $y_{t}$ in Paris and the $x_{ijt}^{(u10)}$ in the Mediterranean sea, meaning that the stronger the u-component of wind (westerlies), the lower the Paris temperature. According to the dark blue colour, wind in the Mediterranean should be a good predictor. As expected, the 100 metre wind is more homogeneous than the 10 metre wind. Also, the magnitudes of the correlation coefficients are maintained even when increasing the delay. The Mediterranean and the north of Africa again play an important role, just like the north of Norway. Both of the GAS regions were selected in the north of Africa, $x_{ijt}^{(u10)}$ at longitudes closer to the Greenwich meridian, $x_{ijt}^{(v10)}$ further away. Both the $x_{ijt}^{(v10)}$ and $x_{ijt}^{(v100)}$ components, which represent the north-south component, exhibit a semi-homogeneous tunnel located in the north of Africa which, we suppose, symbolises the Sirocco wind pattern. The western and eastern parts of the north of Africa show strong negative correlations, but the longitudes close to the Greenwich meridian tend more to the red. Further, the $x_{ijt}^{(u10)}$ and $x_{ijt}^{(u100)}$ components in central Africa are positively correlated, as here the easterlies prevail. The GAS regions for $x_{ijt}^{(u100)}$ and $x_{ijt}^{(v100)}$ are likewise set in the north of Africa. The $x_{ijt}^{(msl)}$ data variable is among the more important variables. It is utterly homogeneous, with two different zones, north-west and south-east. The former (Atlantic) is positively correlated, while the latter (the north of Africa) is negatively correlated, meaning that the higher the $x_{ijt}^{(msl)}$ on the Atlantic, the higher the Paris temperature, and the higher the $x_{ijt}^{(msl)}$ in Africa, the lower the Paris temperature. Both the positive and negative correlations grow when extending the delay, thus the $x_{ijt}^{(msl)}$ should be treated as an excellent leading indicator. The GAS is set at the extreme western part of the available Atlantic, at high latitudes, close to Iceland. The $x_{ijt}^{(sst)}$ data variable is available only for the sea areas, e.g., the Atlantic and the Mediterranean. The relation is highly robust for the coincident scenario, but drops substantially with the introduction of delay. Most correlated seem to be areas of similar geographic latitudes.
On the other hand, the $( x_{ijt}^{(geo500)}, y_{t} )$ pair exhibits much higher relation strengths. Except for a central area of the Atlantic, which seems to exhibit lower correlations, the correlations are above +0.5 and persist under the delays. The higher the $x_{ijt}^{(sst)}$ or $x_{ijt}^{(geo500)}$ values, the higher the temperature in Paris. Finally, the $(x_{ijt}^{(swvl1)}, y_{t})$ pair is addressed; here, prolonging the delay reduces the correlation fit. The strongest negative fit is found on land across the whole of Europe, meaning that the higher the volumetric soil water layer, the lower the Paris temperature. No correlation is found for the $( x_{ijt}^{(swvl1)}, y_{t} )$ pair on the sea (either the Atlantic or the Mediterranean). The correlation fit worsens when considering northern or southern latitudes, such as the north of Africa or Norway, Sweden and Iceland. An outlier is spotted, namely the Alps, which are positively correlated. The GAS region is selected so as to cover as much of France as possible. \subsection{Geographic Area Selection for Córdoba} The results show that in this case the $y_{t}$ data variable is again among the most correlated variables, especially for regions of central Europe and the north-east of Africa. Unfortunately, the correlation coefficients quickly decrease when introducing the $\tau_1$ and $\tau_2$ delays. Again, the $y_{t}$ on land is more correlated than the $y_{t}$ on the sea, e.g., the Atlantic; low correlations are found for $y_{t}$ on the sea below latitudes $30^\circ$N. The GAS region is centred at Córdoba. Correlations for the pairs $(x_{ijt}^{(u10)}, y_{t})$ and $(x_{ijt}^{(u100)}, y_{t})$ of wind components are similar to each other. The Mediterranean sea is highly negatively correlated with the $y_{t}$ in Córdoba, and the south of Europe is highly positively correlated, but the fit is not as homogeneous as in the Mediterranean. Additionally, the north of the Sahara seems to be positively correlated with Córdoba's $y_{t}$, but, due to the limiting coordinates, the fit is not as homogeneous as desired. Both v-components exhibit similar positive correlation behaviour to the Paris case, with the significant but leaky tunnel between the north of Africa and the south of France. Again, the south-east of Europe is most negatively correlated. The GAS regions for $x_{ijt}^{(u10)}, x_{ijt}^{(u100)}, x_{ijt}^{(v10)}, x_{ijt}^{(v100)}$ are centred identically to the Paris case. The $x_{ijt}^{(msl)}$ shows a similar structure to the Paris case, with a very interesting realisation -- the larger the time delay, the higher the correlation. The Atlantic and the north-east of Europe are positively correlated with Córdoba's $y_{t}$, and the south-east of Europe and the north of Africa are again negatively correlated. A similar realisation holds for the $x_{ijt}^{(sst)}$ -- the higher the latitude, the higher the correlation coefficient -- although the sea surface temperature correlation coefficient weakens when extending the time delay. The $(x_{ijt}^{(geo500)}, y_{t})$ pair again reveals a central Atlantic area less related to the Córdoba $y_{t}$, but for the rest of the regions, especially northern Africa, it exhibits a strong positive relation. The $(x_{ijt}^{(swvl1)}, y_{t})$ pair shows a strong negative correlation fit over land, but a weak fit or none at all over the sea. The strongest negative fit is spotted for latitudes similar to those of Córdoba itself.
The GAS regions for the $x_{ijt}^{(sst)}$ and $x_{ijt}^{(geo500)}$ predictor variables are centred identically to the Paris case, while the $x_{ijt}^{(swvl1)}$ region is centred so as to cover the Iberian Peninsula. \begin{figure} \centering \includegraphics[width=\textwidth]{fig/Correlation_Cordoba_first.png} \caption{Correlation analysis (Córdoba), first part of the variables. The three columns represent the Pearson correlation analyses between the $y_{t}$ in Córdoba and each geographic coordinate for each variable $x_{ijt}^{(k)}$. "Coincident"=Pearson correlation coefficients between the coincident pairs; "$\tau_1$"=Pearson correlation coefficients between pairs delayed by $\tau_1$, "$\tau_2$"=Pearson correlation coefficients between pairs delayed by $\tau_2$. The red rectangles inside the figures denote the regions with the highest or lowest correlation coefficients.} \label{fig:cordoba_1_corr} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig/Correlation_Cordoba_second.png} \caption{Correlation analysis (Córdoba), second part of the variables. The three columns represent the Pearson correlation analyses between the $y_{t}$ in Córdoba and each geographic coordinate for each variable $x_{ijt}^{(k)}$. "Coincident"=Pearson correlation coefficients between the coincident pairs; "$\tau_1$"=Pearson correlation coefficients between pairs delayed by $\tau_1$, "$\tau_2$"=Pearson correlation coefficients between pairs delayed by $\tau_2$. The red rectangles inside the figures denote the regions with the highest or lowest correlation coefficients.} \label{fig:cordoba_2_corr} \end{figure} The predictive climate variables have then been limited to the GAS regions in order to carry out the air temperature prediction with the ML and DL approaches. Since we are particularly interested in forecasting the summer temperatures, the GAS regions were further downsampled to include months from April to August only. Of these, 8 time samples from April to July were considered as input data, and 2 time samples as model output data, i.e., August $\tau_1$ or $\tau_2$. The next subsection describes the further data adjustment procedure. \subsection{Data adjustment procedure} The GAS procedure left us with the data in their original units, thus some data adjustments were needed before employing the proposed modelling with AI techniques. Input and output data were normalised separately. First, the input data ($x_{ijt}^{(k)}$) were normalised to the range $[0,1]$ using the following transformation: \begin{equation} x_{ijt}^{(k)'} = \frac{x_{ijt}^{(k)} - \min_t x_{ijt}^{(k)}}{\max_t x_{ijt}^{(k)} - \min_t x_{ijt}^{(k)}}, \end{equation} where $x_{ijt}^{(k)}$ represents the original input set of data from April--July, $x_{ijt}^{(k)'}$ the normalised input climate variables, $i$ and $j$ represent the coordinates (longitude, latitude), $k$ represents each of the nine climate variables considered, and $t \in \{1950,1951,\ldots,2021\}$ represents the time. As can be seen from the index $t$ in the transformation, input data normalisation was performed specifically for each year (still, all the time samples from April--July within a given year were normalised using the same factors). The output data ($y$), which effectively represent the given area temperature data in August, either the first or the second fortnight, are adjusted twofold.
First, they are adjusted using the input data $x$ normalisation factors, as follows: \begin{equation} y_{ijt}' = \frac{y_{ijt} - \min_t x_{ijt}^{(t2m)}}{\max_t x_{ijt}^{(t2m)} - \min_t x_{ijt}^{(t2m)}}, \end{equation} where $y_{ijt}$ denotes the original regional temperature output and $y_{ijt}'$ the scaled regional temperature output data. However, note that this adjustment does not ensure normalised data within the range $[0,1]$, but rather produces values close to 1 with a very small variance. Afterwards, the adjusted $y_{ijt}'$ is normalised to ensure the $[0,1]$ range as follows (and hence maximise the output variance): \begin{equation} y_{ijt}'' = \frac{y_{ijt}' - \min_t y_{ijt}'}{\max_t y_{ijt}' - \min_t y_{ijt}'}. \end{equation} Also, the output data are given as an image (with appropriate $i$ and $j$ coordinates); by selecting a single pixel (e.g., $i=1,j=1$), a specific geographical location can be extracted, i.e., $y_{t}''$. \subsection{Exhaustive feature search} There are nine different predictors (input variables) included in the analysis carried out. Some of them may be redundant (especially regarding the wind) and are thus suspected of lowering the forecasting skill of the prediction model. Exhaustive Feature Search (EFS) is therefore employed; due to the low number of predictors, all possible combinations of predictors ($2^9=512$) can be tested. The best obtained combination is then taken for forecasting. In addition, note that we consider nine different modelling methods in this paper (LR, Lasso, Poly, AdaBoost, DT, RF, CNN, RP+CNN and RP+CNN+BIN). Different modelling methods incorporate different training skills, so in order to maximise the training skill (and consequently the forecasting skill), the most suitable combination of predictors is sought for each modelling method specifically. The best combination of predictors may thus differ between methods (no universal solution may work equally well for all methods). EFS is conducted by first generating a list of all possible combinations of predictors. Next, each possible combination of predictors is used to train each model, and forecasts are run. The mean squared error ($mse$) of the forecasts is then calculated against the true values. Finally, the combination of predictors with the lowest $mse$ (for each modelling method specifically) is taken as the best combination of predictors. The best set of features for each prediction model will be shown in the results section.
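A minimal sketch of this search, assuming a scikit-learn-style estimator and, for simplicity, one feature column per predictor (in the actual setting each predictor contributes several fortnightly columns, which would be selected together):
\begin{verbatim}
from itertools import combinations

import numpy as np
from sklearn.metrics import mean_squared_error

def exhaustive_feature_search(model, X_tr, y_tr, X_te, y_te, n_pred=9):
    # Try every non-empty subset of the predictors (2^9 - 1 subsets).
    best_mse, best_subset = np.inf, None
    for r in range(1, n_pred + 1):
        for subset in combinations(range(n_pred), r):
            cols = list(subset)
            model.fit(X_tr[:, cols], y_tr)
            mse = mean_squared_error(y_te, model.predict(X_te[:, cols]))
            if mse < best_mse:
                best_mse, best_subset = mse, subset
    return best_subset, best_mse
\end{verbatim}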
\section{Proposed computational frameworks based on AI for long-term temperature prediction}\label{sec:methods} This section presents the three proposed computational frameworks for long-term air temperature prediction. All the proposed computational frameworks exploit the same data as described above. However, slight further modifications and adjustments were applied to adapt the data to each of the methods unambiguously. In particular, the data sequencing procedure, which is explained for each method separately, leads to large differences in the data exploitation among the three frameworks. \subsection{Computational framework 1: Convolutional Neural Networks} CNNs are universal deep learning networks for processing images and videos, either for regression, classification, segmentation or identification purposes \cite{dhillon2020convolutional}. The heart of a CNN is the CNN kernel, a matrix or tensor with trainable weights. The weights are typically randomly initialised and are adjusted during the CNN training. The CNN kernel performs the mathematical operation of convolution and produces the CNN's hidden layers, the so-called feature maps. Within a single hidden layer, many feature maps are typically produced by many distinct CNN kernels. Feature maps are usually of lowered dimensionality compared to the inputs. Such dimensionality reduction depends on the CNN kernel size and is usually minimal. Rather, the dimensionality of the feature maps is controlled by the pooling operation. Pooling only adjusts the dimensionality, but does not provide any trainable weights. Several pooling strategies, such as maximum or average pooling, exist. Most often, pooling is used in conjunction with a convolution layer, in a stacked architecture where convolution is applied first and pooling follows. In a multi-layered CNN, such stacking combinations are applied several times, meaning that the dimensions and number of feature maps may change several times between the CNN input and output. The CNN output is also a feature map. It can either be treated as an image or alternatively be flattened into a single regression value or classification probability using a dense layer. For complex regression or classification problems, several dense layers can be applied. Figure \ref{fig:gen_cnn} shows a general example of the CNN convolution. The figure is divided into two schemes representing the same concept, the left scheme being more abstract and the right one more detailed. The input to the demonstrated CNN is an image of dimensions $d_1 \times d_2$ with $d_3$ channels (images typically incorporate three channels: red, green and blue). The input image is convolved with the demonstrated CNN kernel of dimensions $k_1 \times k_2$, where $k_1=k_2=3$. The convolution procedure is repeated $f_3$ times, each time with distinctly initialised kernel weights. In this way, $f_3$ feature maps of dimensions $f_1 \times f_2$ are generated, stacked into a tensor of dimensions $f_1 \times f_2 \times f_3$. The detailed scheme represents the extraction of the dark grey coloured subimage to be convolved with the light grey coloured kernel. A typical CNN convolution multiplies the values of the subimage with the kernel weights element-wise and sums them. Afterwards, the bias $b$ is added. Finally, the result is saved as a single component of the feature map (represented in the figure by the single dark grey pixel). This process is repeated by gradually moving the dark grey coloured subimage over the rest of the image, a process controlled by the kernel stride parameters. Both overlapping and non-overlapping scenarios of subimages may be applied. After all suitable combinations, given by the image size, kernel size and kernel stride, have been processed, a complete feature map is built. \begin{figure} \centering \includegraphics[width=.8\textwidth]{fig/General_CNN_scheme.pdf} \caption{Demonstration of a CNN convolution.} \label{fig:gen_cnn} \end{figure}
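The single convolution step depicted in Figure \ref{fig:gen_cnn} can be sketched as follows; this is a minimal sketch with \texttt{numpy} of a stride-one, valid (no-padding) 2D convolution producing one feature map:
\begin{verbatim}
import numpy as np

def conv2d(image, kernel, bias=0.0):
    # image: (d1, d2, d3), kernel: (k1, k2, d3) -> feature map (f1, f2)
    d1, d2, _ = image.shape
    k1, k2, _ = kernel.shape
    out = np.zeros((d1 - k1 + 1, d2 - k2 + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # element-wise product of subimage and kernel, summed, plus bias
            out[i, j] = np.sum(image[i:i+k1, j:j+k2, :] * kernel) + bias
    return out
\end{verbatim}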
Following the introductory demonstration of CNNs, the CNN computational framework as used in this study is presented. The use-case diagram of the originally proposed CNN computational framework is visualised in Figure \ref{fig:cnn_use_case}. The figure is organised as a flowchart and addresses three important steps of the CNN exploitation (each of these steps is indicated by a grey coloured rectangular box). First, correlation analyses of the fused data are run, as shown in Section \ref{data}. The correlation analyses provide the GAS regions, one per predictor, which are shown in the figure as red symmetrical rectangles. The positions of the GAS regions are fixed for a given variable but may differ between variables. Next, the data sequencing step follows. The purpose of the data sequencing is to build a multivariate data structure similar to moving images (video). There are 9 predictors, each of them forming a single channel. Predictor values are taken from the GAS regions for each of the two monthly fortnights ($\tau_1,\tau_2$). Months from April to July are covered, meaning that 8 different images, forming a motion with sequence length 8, are introduced. The processing of the images is always from the oldest to the latest, as depicted in Figure \ref{fig:cnn_use_case}. First comes April's $\tau_1$, followed by April's $\tau_2$, and so on; the last image is July's $\tau_2$. The whole motion of images is called an instance. Each instance represents an individual year; there are as many instances as there are years of data available. However, not only the input variables undergo slight data modifications, but also the output variable does. The CNN output $\hat{y}_{ijt}''$ is organised as an image and needs to be compared with an image during the CNN training to derive the weight corrections. Namely, since the dimension of the CNN output is lower than that of the CNN input (due to the CNN convolutions), the dimensionality of the true output image $y_{ijt}''$ needs to be lowered as well. A symmetric lowering of dimensions is employed as $y_{ijt}''' = y_{ijt}'' [l:n-l, l:n-l]$, where $l$ controls the level of lowering. Due to the symmetry, the centre of the reduced true image is maintained. Finally, the CNN training and forecasting procedures are run. The instances (72 of them) are divided into two strictly non-overlapping sets, the training and forecasting sets. As output, either August's $\tau_1$ or $\tau_2$ is used. Figure \ref{fig:time_horizon} further describes the outline of a single instance and the forecast output. \begin{figure} \centering \begin{subfigure}{1.0\textwidth} \includegraphics[width=\textwidth]{fig/prediction_1_2m.pdf} \caption{Temperature forecasts for the earlier fortnight ($\tau_1$).} \end{subfigure} \hfill \begin{subfigure}{1.0\textwidth} \includegraphics[width=\textwidth]{fig/prediction_2_2m.pdf} \caption{Temperature forecasts for the later fortnight ($\tau_2$).} \end{subfigure} \caption{Forecast diagram. One CNN input instance consists of 4 consecutive months from April to July, two fortnights per month. The output is organised as either Aug $\tau_1$ or Aug $\tau_2$.} \label{fig:time_horizon} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{fig/cnn_use_case.pdf} \caption{The three grey rectangles represent the proposed workflow. The correlation analysis is used to derive the GAS regions. Next, the data sequencing follows to build the instances. Finally, the supervised training with out-of-sample forecasts is employed.} \label{fig:cnn_use_case} \end{figure} A detailed architecture of the proposed CNN computational framework is visualised in Figure \ref{fig:cnn}. The figure is adjusted to exhibit a single instance only. The input size of an instance is $8 \times 33 \times 33 \times 9$ (time samples $\times$ x-axis size $\times$ y-axis size $\times$ number of channels, respectively). A 3D CNN kernel of $3 \times 3 \times 3$ is proposed to create the first layer of feature maps.
Thirty-two feature maps are generated, each of size $31 \times 31$, and the sequence length is lowered from 8 to 6. The next CNN hidden layer is created by a CNN kernel of $3 \times 3 \times 3$. The number of feature maps is increased to 64 and the sequence length decreased to 4. The final CNN kernel is of customised dimensions $4 \times 3 \times 3$ to ensure a single-channelled output image. The CNN output image dimension is equal to $27 \times 27$, which is also the dimension of the true output image, hence $l=(33-27)/2=3$. \begin{figure} \centering \includegraphics[width=\textwidth]{fig/cnn_arch.pdf} \caption{CNN architecture. The input consists of a sequence of images (or a motion). Each image consists of $9^*$ different channels, 2 images per month. Months from April to July are covered within the input data, and either August $\tau_1$ or August $\tau_2$ in the output. The CNN processes the input data using 3 separate 3D kernels, hence 2 sets of feature maps are generated (the third set of feature maps is the output). The output is organised as a 2D image, with adjusted dimension size.} \label{fig:cnn} \end{figure}
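A minimal sketch of this architecture, assuming the Keras API; the activation functions and the optimiser are our assumptions, since the text does not specify them:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

# Input: sequence of 8 fortnightly images, 33 x 33 pixels, 9 channels.
inputs = tf.keras.Input(shape=(8, 33, 33, 9))
x = layers.Conv3D(32, (3, 3, 3), activation="relu")(inputs)  # (6, 31, 31, 32)
x = layers.Conv3D(64, (3, 3, 3), activation="relu")(x)       # (4, 29, 29, 64)
x = layers.Conv3D(1, (4, 3, 3), activation="linear")(x)      # (1, 27, 27, 1)
outputs = layers.Reshape((27, 27))(x)                        # output image
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
\end{verbatim}
The valid (no-padding) convolutions reproduce the dimension reductions described above: $33 \to 31 \to 29 \to 27$ in space and $8 \to 6 \to 4 \to 1$ along the sequence.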
DTs are popular shallow estimators originally designed for classification tasks, but they are also capable of solving regression tasks. Pioneered by Quinlan \cite{quinlan1986induction} and Breiman et al. \cite{breiman2017classification}, DTs became robust and reliable ML estimators and have since their inception been applied to a large number of prediction problems, including meteorological applications and climate prediction tasks \cite{geetha2014data,wei2020decision,ngo2021novel}. AdaBoost (Adaptive Boosting) was not proposed as a self-standing modelling method \cite{freund1996experiments}. Instead, it builds on another underlying estimator, typically DTs. The purpose of AdaBoost is to build an ensemble of DTs trained on different subsamples \cite{schapire2013explaining}. There have been recent successful applications of AdaBoost to prediction problems in climate and related tasks, such as \cite{xiao2019short,asadollah2022prediction}. RFs are state-of-the-art ensemble classification and regression methods that, similarly to AdaBoost, use DTs as underlying estimators \cite{ho1995random, breiman2001random}. RFs also exploit repetitive subsampling to build many weak learners, which are then combined into a strong learner using a voting mechanism. Some recent climate applications of RFs are \cite{grazzini2020extreme,grazzini2021extreme,park2016drought}. All the adopted ML methods are trained and tested using the same data samples. However, some further modifications of the data are required for the ML methods compared to the CNN data, since the adopted ML methods can process neither images nor motions of images. A simple remedy to adjust the data for the ML methods is employed. The CNN data structure is taken as a baseline, from which the maximum value (a single pixel) of each channel is extracted. Initially, we also tested other extractions, such as the minimum, average or median, but the maximum value empirically turned out to perform best. The process of extracting the maximum value is repeated for each fortnight and the temporal data are stacked horizontally as individual instances. Formally, the extraction of maximums is denoted in Equation \eqref{eq:max}. \begin{equation} \label{eq:max} x_{t}^{(k)''} = \max_{i \in 1,2,\ldots,n} \left( \max_{j \in 1,2,\ldots,n} \left( x_{ijt}^{(k)''} \right) \right), \end{equation} where $x_{t}^{(k)''}$ denotes the ML adjusted data. The equation says that the spatial dependencies are removed by picking the maximum point within each channel of the image. Only two indices, $t$ and $k$, which represent time and the type of predictor variable (channel), remain in the ML adjusted data. In this way, a lot of the data is lost. This can be positive, due to a significant filtration of data redundancy, since we assume that climate data close together are similar; but it can also be negative, due to losing many of the details. The use-case diagram of the ML methods is shown in Figure \ref{fig:ml}. The ML adjusted data follows the identical correlation analyses as the CNN data to obtain the GAS regions. Adjusting the ML data by maximising each channel is thus seen as an additional adjustment procedure. The maximum values are demonstrated on the use-case diagram with a small but visible red rectangular point, out of which red arrows are driven. The process is repeated for each predictor, and the individual maximum values are collated to form a vector of $1 \times 9^*$. The process is further repeated for each fortnight (8 in total), and the individual vectors of predictors are stacked into an instance to form a vector of $1 \times 9^* \times 8$, i.e., $1 \times 72^*$, as sketched below.
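A minimal NumPy sketch of this adjustment, assuming one instance is stored as an array of shape $(8, 33, 33, 9)$ (our notation, not the original code):
\begin{verbatim}
import numpy as np

def to_ml_instance(x_cnn):
    # Collapse each GAS image to its maximum pixel, per fortnight and channel,
    # implementing the max extraction of the equation above.
    x_max = x_cnn.max(axis=(1, 2))    # -> (8, 9): fortnight x channel
    return x_max.reshape(-1)          # -> (72,): horizontally stacked features

x_cnn = np.random.rand(8, 33, 33, 9)  # placeholder instance
print(to_ml_instance(x_cnn).shape)    # (72,)
\end{verbatim}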
\begin{figure} \centering \includegraphics[width=.8\textwidth]{fig/ML_scheme.pdf} \caption{The three grey rectangles represent the proposed workflow. The correlation analysis is used to derive the GAS regions. Next, the data sequencing follows to make the adjustments for the ML data. Finally, the supervised training with out-of-sample forecasts is employed.} \label{fig:ml} \end{figure} The architecture of the ML data is visualised in Figure \ref{fig:ml_arch}. Each fortnight is represented by the $9^*$ predictors. The output is organised as a single $1 \times 1$ value. \begin{figure} \centering \includegraphics[width=.7\textwidth]{fig/ml_arch.pdf} \caption{Architecture of the ML data. The input consists of a $72^*$ featured vector which represents the sequence of monthly variables from April to July. The output is organised as a single value for the regression task and represents August's $y_{t}$ in either the $\tau_1$ or $\tau_2$ prediction horizon.} \label{fig:ml_arch} \end{figure} Due to the ML data adjustments, the ML methods are fed with significantly less data. Theoretically, this is a drawback, since less data carry less information. However, the performed tests revealed that the model performance is not hurt much by incorporating less data. Therefore, we propose to build a variant of the CNN that operates on the ML adjusted data. The ML adjusted data are seen by CNNs as 1D. We transform the 1D data into images by using the RP, to make them more amenable to CNNs. In this way, the same data are exploited in a transformed way. The next subsection describes the combination of the RP and the CNN. \subsection{Computational framework 3: Recurrence plot with Convolutional neural network} This subsection presents the theoretical outline of the RP transformation and the associated transformation process. Two types of RPs are used, the classic and the binarised one. Then, the use-case diagram and the RP+CNN(+BIN) architecture are demonstrated. The RP transformation is, in general, a mathematical process of subtracting two displaced time series elements and deriving a norm of the difference; the result is graphically visualised as an image. Only a single image with $9^*$ channels is created from the ML adjusted data, where the shape (ornament) of the image represents the time series of each channel. Formally, the input vector $\mathbf{g}_a$ is represented as follows (the mathematical expressions are summarised from the pyts library \cite{faouzi2020pyts}; please note that the variable names and indices are customised): \begin{equation} \mathbf{g}_{a} = (g_a, g_{a + \tau}, \ldots, g_{a + (b - 1)\tau}), \quad \forall a \in \{1, \ldots, c - (b - 1)\tau \}, \end{equation} where $a$ runs from 1 to $c - (b - 1)\tau$. In case $\tau=1$ and $b=1$, one can state a simplified representation: \begin{equation} \mathbf{g}_a = (g_1, \ldots, g_c), \end{equation} where $c$ represents the number of timestamps. The RP calculation is derived by accounting for two iterative variables, $a$ and $d$. The output of the RP is a 2D image which is symmetric over the diagonal; formally, \begin{equation} R_{a, d} = \Theta(\varepsilon - \| \mathbf{g}_a - \mathbf{g}_d \|), \quad \forall a,d \in \{1, \ldots, c - (b - 1)\tau \}. \end{equation} The mathematical operator $\| \cdot \|$ represents the Euclidean 2D norm between the two timestamps $a$ and $d$, and $\varepsilon$ represents the so-called threshold. The threshold is optional: if used, the RP image is binarised; if not, the RP image remains analogue. Both options have been tested in this computational framework; the analogue variant we denote as RP+CNN, the binarised one as RP+CNN+BIN. By nature, the threshold is one of the tuning parameters. If applied, the RP image undergoes the Heaviside step function, denoted as $\Theta$, which delivers the binarisation. Three different scenarios of binarising the RPs exist. The first sets the given percentage $1-p$ of pixels with the lowest values to 0 and the remaining percentage $p$ of pixels to 1. The second seeks the maximum value of the RP and sets the individual pixels whose values are less than the given percentage $1-p$ of the maximum value to 0; the others are set to 1. The third option is the manual specification of the threshold.
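These options map onto the pyts implementation cited above; a small sketch, with an illustrative series and an assumed percentage value, could read:
\begin{verbatim}
import numpy as np
from pyts.image import RecurrencePlot

x = np.random.rand(1, 8)                     # (n_samples, n_timestamps)

rp = RecurrencePlot(threshold=None)          # analogue RP (RP+CNN)
R = rp.fit_transform(x)                      # -> (1, 8, 8)

# Binarised RP (RP+CNN+BIN), first scenario: a fixed share of pixels -> 1
rp_bin = RecurrencePlot(threshold="point", percentage=20)
R_bin = rp_bin.fit_transform(x)              # -> (1, 8, 8), values in {0, 1}
\end{verbatim}
No downstream architectural changes are needed, since both variants produce images of the same dimensions.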
Figure \ref{fig:rp_cnn_use_case} shows the use-case diagram of the RP+CNN(+BIN) methods. The correlation analyses are identical to those of the CNN and ML computational frameworks. Again, the maximal values are derived from the coordinate data, and the RPs are generated from these adjusted data. The analogue (originally obtained) RP is processed as-is. For the binarised RP, the first option with a percentage $1-p$ of pixels is applied. \begin{figure} \centering \includegraphics[width=.8\textwidth]{fig/rp_scheme.pdf} \caption{The three grey rectangles represent the proposed workflow. The correlation analysis is used to derive the GAS regions. Next, the data sequencing follows to make the adjustments for the ML data. Then, the RPs are built. Finally, the supervised training with out-of-sample forecasts is employed.} \label{fig:rp_cnn_use_case} \end{figure} The RP+CNN(+BIN) architecture is shown in Figure \ref{fig:rp_cnn_arch}. Each input instance is organised tabularly, with dimensions $9^* \times 8$. These are transformed by the RP into images with dimensions $8 \times 8 \times 9^*$. For the CNN, a reduced $2 \times 2$ kernel is employed to process the input images. There are 32 feature maps in the first hidden layer, imitating the setting of the CNN computational framework. The generated first layer of feature maps is reduced from $8 \times 8$ to a dimension of $7 \times 7$. Another hidden layer of 64 feature maps follows, again imitating the CNN framework. The final CNN layer is the third layer of 128 feature maps with dimensions of $5 \times 5$. The feature maps are collected by a flattening layer followed by a dense layer that outputs a single regression value.
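In Keras notation, and under the same assumptions as the sketch of the first framework (the mse loss and the variable names are ours), the RP+CNN(+BIN) network of Table \ref{tab:rp_cnn_bin_arch} could be written as:
\begin{verbatim}
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Input(shape=(8, 8, 9)),                   # RP image, 9 channels
    layers.Conv2D(32, (2, 2), activation="relu"),    # -> (7, 7, 32)
    layers.Conv2D(64, (2, 2), activation="relu"),    # -> (6, 6, 64)
    layers.Conv2D(128, (2, 2), activation="relu"),   # -> (5, 5, 128)
    layers.Flatten(),
    layers.Dense(1, activation="linear")             # single regression value
])
model.compile(optimizer=optimizers.Adam(learning_rate=0.001), loss="mse")
\end{verbatim}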
\begin{figure} \centering \includegraphics[width=\textwidth]{fig/rp_arch.pdf} \caption{CNN architecture. The input consists of RPs of dimensions $8 \times 8 \times 9^*$. Months from April to July are depicted on the RPs. The CNN processes the input data using 3 separate 2D kernels, hence 3 sets of feature maps are generated. The output is organised as a combination of a flattened and a dense layer and represents $y_{t}$ in either the $\tau_1$ or $\tau_2$ prediction horizon. No architectural differences exist between RP+CNN and RP+CNN+BIN. Notes: A=April, M=May, Jn=June, Jl=July.} \label{fig:rp_cnn_arch} \end{figure} \section{Experiments and Results}\label{sec:Experiments} This experimental section is divided into two subsections, dealing with long-term air temperature forecasts in Paris (France) and Córdoba (Spain), respectively. Two different experiments are conducted for each area, the first one for the shorter ($\tau_1$) prediction time-horizon, and the second one for the prolonged ($\tau_2$) prediction time-horizon. The objective is to forecast the air temperature $\hat{y}_{t}$ in the considered study areas (cities) for the given prediction time-horizon with the minimum possible errors (deviations). The methodology carried out is the following. First, the climate data are obtained, treated, fused, and further adjusted to comply with the specifics of each method (Table \ref{tab:data} shows the input and output data for each employed family of methods). The period April--July is adopted to represent the sequence of input variables, and August as the target month (forecast). In total, 72 years from 1950--2021 are considered in the study, of which 52 instances during 1950--2001 are used for training, and the remaining 20 instances during 2002--2021 as out-of-sample forecasts (test). For each study area and each prediction time-horizon, 9 algorithms are tested in total. The first 3 of them belong to the family of deterministic shallow ML methods, the next 3 to the family of stochastic shallow ML methods, and the last 3 are the stochastic CNN methods (for the stochastic methods, $N=10$ independent runs are considered instead of a single one, to avoid stochastic bias). In total, $9^*$ predictor variables and a single $y_{t}$ output are supplied to each model. For each method, the EFS procedure is run. The results are interpreted by a combination of performance graphics and a set of performance metrics. The performance graphics indicate in detail (1) how consistently each method forecasts $\hat{y}_{t}$ with minimum deviation from the actual $y_{t}$; (2) how well each method adjusts to the trend of slight $y_{t}$ increase within the forecasting period; and (3) how well each method forecasts the $y_{t}$ outliers, i.e., observations far away from the long-term average, possibly indicating a heatwave or coolwave signal appearing in the August summer air temperature. The performance metrics are given as numerical values and indicate how good the forecasts are as a whole. For each method, the following metrics are considered: (1) the numeric rank according to the mean squared error; (2) the two most common statistical indicators, i.e., the mean squared error ($mse$) and the mean absolute error ($mae$); (3) two correlation coefficients, i.e., the Pearson and the Spearman coefficient, with the appropriate statistical significance; and (4) the optimal subset of predictor variables obtained by the exhaustive search. \begin{table}[ht] \centering \begin{tabular}{lcc} \toprule Method & Input & Output \\ \midrule ML methods & $x_{tk}''$ & $y_{t}''$ \\ CNN & $x_{ijt}^{(k)''}$ & $y_{ijt}'''$ \\ RP+CNN(+BIN) & $x_{tk}''$ & $y_{t}''$ \\ \bottomrule \end{tabular} \caption{Input and output data as required by each family of methods.} \label{tab:data} \end{table} The evaluation function for the EFS is defined as $z_{m}=mse$. The evaluation function is adjusted for the stochastic models as in Equation \eqref{eq:fitness}, effectively averaging the $mse_{h}$ performance over the $N=10$ runs. Here, $mse_{h}$ denotes the mean squared error of the $h$-th run; the lower the error, the better the model. Only the best model, according to the best evaluation function value for each method, is shown in the results. The parameter settings outlined in Table \ref{tab:exp_setup} were used for the modelling methods. Finally, the CNN and RP+CNN(+BIN) architecture settings are listed in Tables \ref{tab:cnn_arch} and \ref{tab:rp_cnn_bin_arch}. \begin{equation} z_{m}=\frac{1}{N}\sum_{h=1}^{N} mse_{h}. \label{eq:fitness} \end{equation}
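For concreteness, the EFS loop can be sketched as follows; this is a simplified illustration with our own helper names, not the original implementation.
\begin{verbatim}
import numpy as np
from itertools import combinations
from sklearn.metrics import mean_squared_error

def efs(make_model, X, y, n_vars=9, n_runs=10):
    # Exhaustive feature search: score every non-empty predictor subset.
    best_z, best_subset = np.inf, None
    for r in range(1, n_vars + 1):
        for subset in combinations(range(n_vars), r):
            # pick the chosen channels in each of the 8 fortnights
            cols = [t * n_vars + k for t in range(8) for k in subset]
            scores = []
            for _ in range(n_runs):              # N = 10 for stochastic models
                m = make_model()
                m.fit(X[:52, cols], y[:52])      # 1950--2001 training instances
                scores.append(mean_squared_error(
                    y[52:], m.predict(X[52:, cols])))  # 2002--2021 forecasts
            z = np.mean(scores)                  # evaluation function value
            if z < best_z:
                best_z, best_subset = z, subset
    return best_z, best_subset
\end{verbatim}
With 9 candidate predictors, this amounts to $2^9-1=511$ subsets per method and prediction time-horizon.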
\begin{table}[ht] \centering \footnotesize \begin{tabular}{lr} \toprule Variable & Setting \\ \midrule Paris geographical coordinates* & 48.75$^\circ$N, 2.25$^\circ$E \\ Córdoba geographical coordinates* & 37.75$^\circ$N, 4.75$^\circ$W \\ \midrule LR's learning algorithm & OLS \\ Lasso's $\lambda$ param & 0.0005 \\ No. of polynomial degrees & 4 \\ AdaBoost's no. of estimators & 100 \\ DT's max. depth & 10 \\ RF's max. depth & 10 \\ \midrule Learning algorithm of the CNN, RP+CNN(+BIN) & Adam \cite{kingma2014adam} \\ Learning rate of the CNN, RP+CNN(+BIN) & 0.001 \\ \bottomrule \end{tabular} \caption{Parameter settings for the modelling methods. *=rounded to the nearest quarter. The first box exhibits basic information, the second box shows the ML experimental setup, and the third box shows the CNN and RP+CNN(+BIN) settings.} \label{tab:exp_setup} \end{table} \begin{table}[ht] \centering \begin{tabular}{lrrr} \toprule Block type & Ingredients & Kernel size & Size of feature maps \\ \midrule input & ~ & ~ & $8 \times 33 \times 33 \times 9^*$ \\ down 1 & Conv3D/relu & $3 \times 3 \times 3$ & $6 \times 31 \times 31 \times 32$ \\ down 2 & Conv3D/relu & $3 \times 3 \times 3$ & $4 \times 29 \times 29 \times 64$ \\ down 3 (output) & Conv3D/sigmoid & $4 \times 3 \times 3$ & $1 \times 27 \times 27 \times 1$ \\ \bottomrule \end{tabular} \caption{CNN architecture. The number of channels ($9^*$) is subject to change due to the exhaustive search.} \label{tab:cnn_arch} \end{table} The kernel sizes were set to the smallest practicable values, as suggested by \cite{simonyan2014very}, who realised that a very small kernel size, e.g., $3 \times 3$, delivers significant improvements and increases the CNN effectiveness. Additionally, a small kernel size has been used because of the small input image dimensions, which were selected due to geographical constraints, namely a homogeneous area with relatively uniform correlation coefficients. Furthermore, the kernel size has been further reduced to $2 \times 2$ in the case of RP+CNN(+BIN), due to its very small input image dimensions ($8 \times 8$). Since the frameworks operate with very small input image dimensions on the one hand, but a higher number of channels on the other, no pooling layers to reduce the dimensionality have been introduced in any framework. The introduction of the Experiments and results section is finalised by Algorithm \ref{pseudocode}, representing the pseudocode of the $\hat{y}_{t}$ forecasts. \begin{table}[ht] \centering \begin{tabular}{lrrr} \toprule Block type & Ingredients & Kernel size & Size of feature maps \\ \midrule input & ~ & ~ & $8 \times 8 \times 9^*$ \\ down 1 & Conv2D/relu & $2 \times 2$ & $7 \times 7 \times 32$ \\ down 2 & Conv2D/relu & $2 \times 2$ & $6 \times 6 \times 64$ \\ down 3 & Conv2D/relu & $2 \times 2$ & $5 \times 5 \times 128$ \\ output & Flatten \& Dense/linear & 1 & 1 \\ \bottomrule \end{tabular} \caption{RP+CNN(+BIN) architecture.
The number of channels ($9^*$) is subject to change due to the exhaustive feature search.} \label{tab:rp_cnn_bin_arch} \end{table} \begin{algorithm}[ht] \begin{algorithmic}[1] \Procedure{Forecasting the $\hat{y}_{t}$ using the ML, CNN, RP+CNN(+BIN)}{} \State INITIALISE city and time horizon; \State $x_{t}^{(k)''}, x_{ijt}^{(k)''} \gets$ FUSE and ADJUST the input data; \State $y_{t}'', y_{ijt}'' \gets$ FUSE and ADJUST the output data; \State $u \gets$ GENERATE all possible combinations of predictor variables; \For{all possible combinations $\textbf{u}$} \For{all modelling methods $\textbf{g}$} \State TRAIN MODEL on subset of predictors $u_n$ for model $g_m$; \State MAKE FORECASTS $\hat{y}_t$ on the trained model $g_m$; \State $z_{n,m} \gets$ CALCULATE $mse$ for the subset $u_n$ for model $g_m$; \EndFor \EndFor \EndProcedure \end{algorithmic} \caption{The pseudocode of the temperature forecasts in a given study area.} \label{pseudocode} \end{algorithm} This pseudocode shows the workflow of the $\hat{y}_{t}$ forecasts for a given study area and a given prediction time-horizon. After the study area and the prediction time-horizon are defined, the input and output data are fused and adjusted to comply with the requirements of each specific method. Then, the EFS is run for each modelling method: each possible combination of predictors is sequentially trained and a forecast is obtained. The deterministic models run each trial solution a single time, the others $N=10$ times. For the latter, the average of $mse_{h}$ is calculated to evaluate the quality of the forecasts. Finally, a vector of new trial solutions $\textbf{u}_{m}$ is generated, and the iterative procedure is run until the stopping criterion is met, i.e., the number of function evaluations hits $nFEs\_max$. The next subsection reports the results on forecasting $\hat{y}_{t}$ in Paris. The code was written exclusively in the Python programming language. Data fusion, adjustment and handling were done with the following Python libraries: Pandas \cite{reback2020pandas,mckinney-proc-scipy-2010}, Numpy \cite{harris2020array} and Xarray \cite{xarray}. The RPs were created with Pyts \cite{JMLR:v21:19-763}. For the implementation of the ML methods, the sklearn \cite{scikit-learn} library was chosen. The CNN architectures were implemented with the Keras \cite{chollet2015keras} and Tensorflow \cite{tensorflow2015-whitepaper} libraries. \subsection{Results for long-term temperature forecasts in Paris} This subsection starts with a comment on the $y_{t}$ dynamics for Paris in the years 2002--2021, and continues with the performance graphics. The results of the nine different modelling methods are visualised in the shape of a $3 \times 3$ grid. Afterwards, the performance metrics with a set of five statistical indicators and the best EFS combination follow. The daily mean air temperature ($y_{t}$) in Paris in August ranges from $17.43^\circ$ to $26.61^\circ$ Celsius, with a mean of $19.94^\circ$C and a variance of $4.96$ $(^\circ$C$)^2$. The period 2002--2009 shows a very unsteady and difficult-to-predict $y_{t}$ behaviour, associated with an extreme event in the year 2003. The temperature rises rapidly during one year, which is then followed by approximately 3 years of $y_{t}$ lower than usual. Since 2009, the time series is more stable, quasi first-order negatively autocorrelated. Therefore, we expect a worse performance in the first part of the time series and a better performance in the second.
Figure \ref{fig:paris_short} shows the performance graphics of forecasting $\hat{y}_{t}$ on the shorter prediction time-horizon (first fortnight of August), and Figure \ref{fig:paris_long} on the prolonged prediction time-horizon (second fortnight of August). \begin{figure} \centering \begin{subfigure}{1.0\textwidth} \includegraphics[width=\textwidth]{fig/Paris_1.png} \caption{Temperature forecasts $\hat{y}_{t}$ in Paris, $\tau_1$.} \label{fig:paris_short} \end{subfigure} \hfill \begin{subfigure}{1.0\textwidth} \includegraphics[width=\textwidth]{fig/Paris_2.png} \caption{Temperature forecasts $\hat{y}_{t}$ in Paris, $\tau_2$.} \label{fig:paris_long} \end{subfigure} \caption{Forecast of the average daily mean temperature in August ($\hat{y}_{t}$) in Paris. The solid black line represents the true $y_{t}$ in Paris, the red dotted lines represent individual runs of $\hat{y}_{t}$, and the solid green line represents the average of the individual runs (not applicable for the deterministic models in the first row). The first row represents the deterministic ML methods: LR, Lasso and Polynomial regression. The second row shows the results for the more complex ML methods: AdaBoost, DT and RF. The third row shows the results of the proposed methodologies: CNN, RP+CNN and RP+CNN+BIN.} \label{fig:paris} \end{figure} The interpretation of the modelling methods is as follows. For the shorter time-horizon $\tau_1$, all the included methods underestimate the extreme weather event in the year 2003. All of them also underestimate the temperature drop during the 2006 cool event. In contrast, all methods indicate the temperature increase in 2020 well. Visually, Poly is the best fit among the modelling methods for the horizon $\tau_1$, since it best forecasts the 2003 heatwave and the associated temperature drop afterwards. It delivers the best compromise between forecasts during non-extreme (regular, typical, casual) events and forecasts during extreme events. The ML methods show a lower level of variability than the CNN-based methods. Among the latter, RP+CNN+BIN is the most promising by visual means, since it delivers the best compromise between variability and non-extreme event forecasting. Visually, for the prolonged horizon $\tau_2$, RP+CNN and RP+CNN+BIN seem to be the best fit. The predictions are less variable than for the horizon $\tau_1$. This is positive, but lower variability inherently implies a lower skill in forecasting extremes. The deterministic and ML methods lack forecast skill in the years 2005 and 2016. The ML techniques also lack forecast skill in the years 2011 and 2014. The CNN-based methods are far from perfect, but capture the trend and magnitudes to the best degree among all methods analysed. We deduce that the more complex the modelling method, the better the forecast for the prolonged time-horizon. \begin{table}[!ht] \centering \begin{tabular}{rrrrrrr} \toprule \multicolumn{7}{c}{Paris $\tau_1$} \\ \toprule ~ & rank & $mse$ & $mae$ & Pearson & Spearman & Vars.
\\ LR & 3(9) & 2.973 & 1.264 & **0.695 & 0.409 & 011000100 \\ Lasso & 7(6) & 3.316 & 1.519 & *0.559 & 0.347 & 111100000 \\ Poly & 1(7) & 1.928 & 1.226 & **0.772 & *0.535 & 101100100 \\ AdaBoost & 5(1) & 3.092 & 1.236 & **0.645 & *0.519 & 111111100 \\ DT & 9(4) & 3.651 & 1.508 & **0.665 & 0.433 & 010100010 \\ RF & 8(2) & 3.375 & 1.247 & *0.559 & 0.424 & 110100100 \\ CNN & 6(5) & 3.232 & 1.446 & **0.606 & 0.292 & 100000001 \\ RP+CNN & 2(8) & 2.970 & 1.332 & **0.63 & *0.517 & 110000100 \\ RP+CNN+BIN & 4(3) & 3.007 & 1.295 & **0.625 & *0.462 & 000001010 \\ \toprule \multicolumn{7}{c}{Paris $\tau_2$} \\ \toprule ~ & rank & $mse$ & $mae$ & Pearson & Spearman & Vars. \\ LR & 2(9) & 2.704 & 1.159 & 0.401 & *0.526 & 100010010 \\ Lasso & 3(6) & 2.722 & 1.333 & 0.121 & 0.177 & 100100010 \\ Poly & 9(8) & 3.299 & 1.475 & 0.253 & 0.25 & 101100010 \\ AdaBoost & 6(2) & 3.062 & 1.457 & 0.091 & 0.116 & 100001011 \\ DT & 5(7) & 2.938 & 1.362 & 0.343 & 0.365 & 010101100 \\ RF & 7(3) & 3.064 & 1.478 & 0.081 & 0.123 & 100001011 \\ CNN & 8(4) & 3.072 & 1.348 & -0.01 & 0.041 & 111110000 \\ RP+CNN & 4(5) & 2.931 & 1.351 & 0.246 & 0.286 & 010010101 \\ RP+CNN+BIN & 1(1) & 2.117 & 1.261 & *0.516 & *0.487 & 100000001 \\ \bottomrule \end{tabular} \caption{Statistical indicators of the Paris $\tau_1$ and $\tau_2$ forecasts. Ranks in brackets represent the non-EFS ranks (all predictor variables included). "Pearson/Spearman"=Pearson's and Spearman's rank correlation coefficients, *=$p$-value less than 0.05, **=$p$-value less than 0.01, "Vars."=variables ordered as \{t2m, u10, u100, v10, v100, msl, sst, geo500, swvl1\}. Ranks are calculated on the basis of the $mse$ value.} \label{tab:paris} \end{table} Table \ref{tab:paris} presents the performance metrics of the forecasts in Paris, for both $\tau_1$ and $\tau_2$. Poly is the best modelling method according to the performance metrics for the shorter prediction horizon, and RP+CNN+BIN for the prolonged horizon $\tau_2$. Both of them are significantly better than the rest of the methods regarding the $mse$ statistical indicator. The correlation coefficients are significant for all methods for the shorter prediction horizon, and significant only for LR and RP+CNN+BIN for the prolonged horizon (for either the Pearson or the Spearman coefficient). The use of EFS drastically lowers the number of predictors; e.g., RP+CNN+BIN only includes two variables. As expected, the air temperature predictor seems to be among the more important ones. \subsection{Results for long-term temperature forecasts in Córdoba} The average daily mean August air temperature in Córdoba ranges from $25.78^\circ$ to $29.87^\circ$ Celsius, with a mean of $27.66^\circ$C and a variance of $1.02$ $(^\circ$C$)^2$. Two extreme temperature events are spotted in the considered test period, one in the famous 2003 summer, the other in the years 2017--2018. A significant cool event is spotted in the year 2014. Córdoba experienced a gradual increase in temperatures in the years 2002--2021, which further intensifies the forecasting challenge. The performance graphics are visualised in Figure \ref{fig:cordoba}. A first impression is that the Córdoba area is more forecastable than the Paris area. By far the best forecasts are in this case provided by RP+CNN+BIN. With the exception of the years 2010--2013 and 2017--2018, the forecasts are very similar to the actual temperatures, at both extreme and non-extreme events. Among the deterministic methods, Lasso is the best compromise.
The CNN variability is much lower compared to the Paris case, which is again a sign that different study areas have different forecastabilities. Nevertheless, all the methods are prone to erroneous forecasts in the years 2017--2018. \begin{figure} \centering \begin{subfigure}{1.0\textwidth} \includegraphics[width=\textwidth]{fig/Cordoba_1.png} \caption{Temperature forecasts $\hat{y}_{t}$ in Córdoba, $\tau_1$.} \end{subfigure} \hfill \begin{subfigure}{1.0\textwidth} \includegraphics[width=\textwidth]{fig/Cordoba_2.png} \caption{Temperature forecasts $\hat{y}_{t}$ in Córdoba, $\tau_2$.} \end{subfigure} \caption{Forecasting the $y_{t}$ in Córdoba. The solid black line represents the true $y_{t}$ in Córdoba, the red dotted lines represent individual runs of $\hat{y}_{t}$, and the solid green line represents the average of the individual runs (not applicable for the models in the first row). The first row represents the deterministic ML methods: LR, Lasso and Polynomial regression. The second row shows the results of the more complex ML methods: AdaBoost, DT and RF. The third row shows the results of the proposed methodologies: CNN, RP+CNN and RP+CNN+BIN. Years on the x-axis, 2 metre temperature in $^\circ$ Celsius on the y-axis.} \label{fig:cordoba} \end{figure} The performance metrics can be found in Table \ref{tab:cordoba}. RP+CNN+BIN is found to be the best method for the shorter, and RP+CNN for the prolonged forecast horizon. Compared to Paris, the $mse$ values of both horizons are much lower and the correlation coefficients are higher. Three of the Pearson coefficients are significant for the prolonged forecast horizon. It is realised that Poly does not deliver a stable performance, since its ranks are inverted compared to Paris and its correlation coefficients are insignificant. The EFS again considerably reduces the sets of the most suitable predictors. \begin{table}[!ht] \centering \begin{tabular}{rrrrrrr} \toprule \multicolumn{7}{c}{Córdoba $\tau_1$} \\ \toprule ~ & rank & $mse$ & $mae$ & Pearson & Spearman & Vars. \\ LR & 6(9) & 0.903 & 0.740 & *0.548 & *0.531 & 100000000 \\ Lasso & 2(7) & 0.778 & 0.720 & **0.639 & *0.538 & 100000100 \\ Poly & 9(8) & 1.149 & 0.770 & 0.423 & 0.332 & 100111010 \\ AdaBoost & 7(3) & 0.911 & 0.705 & **0.577 & 0.441 & 101010100 \\ DT & 4(2) & 0.833 & 0.745 & **0.636 & 0.432 & 111010100 \\ RF & 5(1) & 0.872 & 0.712 & **0.604 & *0.483 & 101010100 \\ CNN & 3(4) & 0.814 & 0.741 & **0.614 &**0.568 & 111000010 \\ RP+CNN & 8(6) & 1.029 & 0.808 & **0.609 & *0.486 & 000100111 \\ RP+CNN+BIN & 1(5) & 0.718 & 0.696 & **0.789& **0.651& 000110100 \\ \toprule \multicolumn{7}{c}{Córdoba $\tau_2$} \\ \toprule ~ & rank & $mse$ & $mae$ & Pearson & Spearman & Vars. \\ LR & 9(9) & 1.398 & 1.001 & 0.230 & 0.180 & 000100000 \\ Lasso & 5(7) & 1.093 & 0.906 & 0.364 & 0.331 & 000001110 \\ Poly & 6(8) & 1.094 & 0.791 & *0.454 & 0.257 & 101001010 \\ AdaBoost & 2(2) & 0.976 & 0.841 & 0.249 & 0.165 & 111100000 \\ DT & 8(6) & 1.196 & 0.914 & **0.678 & 0.441 & 000000010 \\ RF & 3(1) & 1.012 & 0.822 & 0.313 & 0.164 & 100101000 \\ CNN & 7(4) & 1.145 & 0.755 & 0.396 & 0.322 & 010100100 \\ RP+CNN & 1(5) & 0.971 & 0.857 & 0.373 & 0.314 & 010101000 \\ RP+CNN+BIN & 4(3) & 1.028 & 0.829 & *0.538 & 0.380 & 111011111 \\ \bottomrule \end{tabular} \caption{Statistical indicators of the Córdoba $\tau_1$ and $\tau_2$ forecasts. Ranks in brackets represent the non-EFS ranks (all predictor variables included).
"Pearson/Spearman"=Pearson's and Spearman's rank correlation coefficients, *=$p$-value less than 0.05, **=$p$-value less than 0.01, "Vars."=variables ordered as \{t2m, u10, u100, v10, v100, msl, sst, geo500, swvl1\}. Ranks calculated on the basis of $mse$ value.} \label{tab:cordoba} \end{table} \section{Conclusions}\label{sec:Conclusions} Seasonal climate prediction problems involve uncertain and demanding tasks related to forecasting the long-term steady-levels of different climate variables, such as air temperature. In this long-term behaviour of variables, it is possible to spot short-term extreme events signals, such as heatwaves. Some geographical areas are in fact more exposed to weather extremes than others, and hence, these extreme signals should appear in the long-term prediction of climate variables at these zones. In line with this, no universal model can fit forecasts for all the geographical areas well, which means that not only the span of coordinates of input data may be different to forecast in a specific area, but also the set of the best input data features (data variables) may be different. Following this idea, in this paper we have tackled a problem of long-term air temperature prediction in summer (August), using different computational frameworks based on AI techniques. Specifically we first propose a novel approach based on CNN combined with different process for data fusion and dimensionality reduction. In the second computational framework, different ML approaches are proposed, including Lasso, regression trees and Random Forest. The third computational framework also considers a CNN, with pre-processing steps via RPs as data reduction technique. RPs have been assimilated as a compromise to exploit the temporal structure of the data. Since the RP is a transformation of a time-series into an image, the CNN has been further exploited in this case with the RPs output as a processing medium. The performance of the different proposed AI-based computational frameworks have been evaluated in two problems of long-term air temperature prediction at Paris (France) and Cordoba (Spain), considering the prediction in the first and second August fortnights using predictive variables from the previous months. The results obtained seem to indicate a superior performance by the RP+CNN-based approaches, albeit no unique model is the best approach for both prediction time-horizons considered. The proposed RP+CNN-based approaches were able to accurately detect some maximums in the summer temperature better than classical CNN and ML techniques. These maximum values can be associated with heatwaves signals occurring in August in the areas studied (Paris and Córdoba), such as that of 2003, whose signal is detectable in the August mean temperature when comparing with other years. As future research lines, we propose that the original CNN model could be reworked to output not only a single channel (like the 2 meter air temperature in this paper), but rather a set of multiple channels, including the wind information and/or volumetric soil water layers. Increased complexity due to multi outputs could be compensated by data augmentation techniques to achieve identical stability of the models. Different architectures, including auto-encoders, could be employed to exploit the benefit of converging the images into a single-size and diverging it back to the original size. 
In general, a larger amount of climate data could be exploited for training the models, by including the climate data from January--April and from September--December, to better capture the climate trend. Also, a universal model that would fit the forecasts for all geographical areas could be built and verified against the ML and DL methods. \section*{Open Research} All the data used in this paper are from the ERA5 Reanalysis, available on request from the European Centre for Medium-Range Weather Forecasts \cite{ECMWF}. \section*{Acknowledgement} This research has been partially supported by the European Union, through the H2020 Project ``CLIMATE INTELLIGENCE Extreme events detection, attribution and adaptation design using machine learning (CLINT)'', Ref: 101003876-CLINT. This research has also been partially supported by the project PID2020-115454GB-C21 of the Spanish Ministry of Science and Innovation (MICINN). Javier Del Ser is supported by the Basque Government through the ELKARTEK program and the consolidated research group MATHMODE (IT1456-22).
\section{Introduction} The characterization of magnetic nanoparticles (MNPs) is a crucial part of the process towards their safe and efficient usage in biomedical applications\cite{Pankhurst2003,Tong2019,CoeneLeliaert2022}. Not only single-particle properties such as size, shape and composition influence their magnetic behaviour, but also the state of the particles with respect to their environment. Clustering, aggregation, and immobilization of MNPs are processes of special interest in the context of biomedical applications\cite{Etheridge2014}. Upon arrival at the targeted body tissue, local particle concentrations get high and interparticle distances small. Therefore, magnetic exchange and dipolar interactions between particles may become important. Their mobility also changes during cellular uptake or molecular binding. These processes affect their magnetic properties\cite{DEberbeckFWiekhorst2006, Gutierrez2019} and, as a consequence, their diagnostic\cite{Loewa2013, Paysen2020} and therapeutic\cite{Bender2018b,Cabrera2018,Ortega-Julia2022} effect. To optimize the biomedical performance of the particles, it is thus necessary to determine the behaviour of the particles within the body\cite{CoeneLeliaert2022} and to map the clustering, aggregation and immobilization of the particles in their biomedical environment. Magnetic properties and magnetization dynamics of MNPs are often determined by measuring their response to an external magnetic field excitation\cite{Wiekhorst2012,Ludwig2013,Bui2022}. This external perturbation can potentially change the magnetic state of the sample, thereby influencing the outcome of the method. However, the magnetic moments of the MNPs fluctuate at non-zero temperatures, and probing the corresponding induced magnetic noise allows one to obtain similar information about the inherent properties of the sample. The analysis of the fluctuation dynamics of MNPs is the idea behind a recently developed characterization technique \cite{Leliaert2015}, known as Thermal Noise Magnetometry (TNM). It is a unique method in that the sample is characterized while in an equilibrium state. Compared to other characterization techniques, the signals in TNM are rather small (down to a few femtotesla) and thus require sensitive magnetic field sensors. Superconducting Quantum Interference Devices (SQUIDs), which were used in previous TNM studies\cite{Leliaert2015,Leliaert2017,Everaert2021}, have a well-established reputation in magnetometry and biomedical applications. Their success is attributed to their excellent sensitivity, broad bandwidth, and durability\cite{Cohen1972,Burghoff2009,Korber2016}. However, they have the disadvantage of requiring a liquid He infrastructure and a rather large sample-probe distance in the centimeter range, caused by the mandatory thermal insulation between the sensor and a sample at ambient temperature. To increase the accessibility, and consequently the adoption, of TNM, thus broadening its application field, more flexible measurement setups are needed. For this purpose, Optically Pumped Magnetometers (OPMs) form an attractive sensor system\cite{Budker2007}. In this work, we present a tabletop TNM setup based on commercially available OPMs operating in a laboratory magnetic shield. The operational bandwidth of the used OPMs is limited by a phase shift above 100 Hz.
By accounting for the frequency response profile of the magnetometers, quantitatively correct measurements are ensured and the bandwidth of the sensors is increased to 550 Hz. We compare the TNM results of two MNP systems measured with the OPM setup with those obtained with an in-house developed SQUID-based system. OPM sensors offer an ideal measurement approach to monitor the clustering and aggregation of MNPs by TNM over time. First, TNM is particularly sensitive to particle clustering because the amplitude of the signal increases with the square of the volume of the individual fluctuators\cite{Everaert2021}, which means that an aggregate of two particles results in a signal that is twice as large as the sum of their individual signals. Second, no external excitation is required during the TNM measurement procedure, which could influence the clustering process itself or falsely influence the outcome of the measurement. Third, the excellent low-frequency performance of OPMs favors particle systems with slow dynamics, or processes which tend to slow down the dynamics of the magnetic entities in the sample, such as clustering and immobilization processes. Finally, the setup can be operated anywhere with conventional wall-socket AC power due to the OPM flexibility, and can thus also be used to track processes that require environmental and experimental freedom. As a proof of concept, we designed and performed three different experiments in the tabletop setup, which concern the clustering, aggregation, and immobilization of Perimag particles. \section{Methods} \subsection{TNM model} Two mechanisms are responsible for the thermal fluctuations measured in TNM. In liquid samples, the particles are subject to Brownian motion. The MNPs, and thus their magnetic moments, rotate at time scales\cite{Debye1929} \begin{equation} \tau_{\mathrm{B}} = \frac{3\eta V_{h}}{k_{B}T} \label{eq:brown} \end{equation} with $\eta$ the viscosity of the fluid, $V_h$ the hydrodynamic volume of the particle, and $k_BT$ the thermal energy in the system. Additionally, the magnetization can also change within the frame of the particle itself, which is the only mechanism present if the particles are immobilized. The N\'{e}el fluctuation time depends Arrhenius-wise on the energy barrier $KV_c$ set by the anisotropy of the particle, \begin{equation} \tau_{\mathrm{N}} = \tau_0 \exp\left(\frac{KV_c}{k_{B}T}\right) \label{eq:neel} \end{equation} where $K$ is the anisotropy constant, $V_c$ the magnetic core volume, and $\tau_0$ the characteristic attempt time\cite{Brown1963}. In samples where both mechanisms are present, the effective fluctuation time naturally combines to \begin{equation} \tau_{\mathrm{eff}}=\frac{\tau_{\mathrm{N}} \tau_{\mathrm{B}}}{\tau_{\mathrm{N}}+\tau_{\mathrm{B}}}. \end{equation} Depending on the size of the particles, $\tau_N$ or $\tau_B$ is dominant and defines the value of $\tau_{\mathrm{eff}}$.\\ The magnetic time signal $B$ measured in TNM is stochastic in nature, with an autocorrelation function \begin{equation} G_B(t)=\langle B(0)B(t)\rangle = \langle B^2\rangle \exp(-\vert t\vert/\tau_{\mathrm{eff}}). \label{eq: autocorrelation} \end{equation}
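To give a feeling for the time scales involved, Eqs. \eqref{eq:brown}--\eqref{eq:neel} can be evaluated numerically. The following sketch uses assumed, merely illustrative parameter values (water-like viscosity, typical magnetite anisotropy), not measured properties of the samples studied here.
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23           # J/K
T = 316.0                    # K, roughly the 43 degC sample temperature
eta = 1.0e-3                 # Pa s, water (assumed)
d_h, d_c = 130e-9, 25e-9     # hydrodynamic / core diameter in m
K, tau0 = 1.0e4, 1.0e-9      # J/m^3 and s, assumed typical values

V_h = np.pi / 6 * d_h**3
V_c = np.pi / 6 * d_c**3
tau_B = 3 * eta * V_h / (k_B * T)             # Brownian rotation time
tau_N = tau0 * np.exp(K * V_c / (k_B * T))    # Neel relaxation time
tau_eff = tau_N * tau_B / (tau_N + tau_B)     # effective fluctuation time
print(tau_B, tau_N, tau_eff, 1 / (2 * tau_eff))  # cutoff frequency in Hz
\end{verbatim}
With these assumed values, the N\'{e}el time exceeds the Brownian time by orders of magnitude, so the mobile particles are dominated by the Brownian mechanism.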
The Power Spectral Density (PSD) is then obtained from the Wiener-Khintchine theorem as the Fourier transform of the autocorrelation function\cite{Wiener1930,Khintchine1934}: \begin{equation} S_B(f)=2\left(\frac{\mu_0 M V_c}{4\pi d^3}\right)^2 \cdot \frac{ (4\tau_{\mathrm{eff}})^{-1} }{(\pi f)^2+(2\tau_{\mathrm{eff}})^{-2}} \label{eq: lorentzian} \end{equation} The amplitude of the fluctuations depends on the total magnetic moment of the sample $M V_c$ and on the distance $d$ from the sample at which the magnetic field is measured. At typical distances of a few mm to a few cm, the TNM signal of a nanoparticle ensemble ranges from pT to fT. The dynamics of the particles is quantified by the fluctuation time $\tau_\mathrm{eff}$, which therefore also defines the width of the PSD. The cutoff frequency $\nu_\mathrm{cutoff}=\frac{1}{2 \tau_\mathrm{eff}}$ divides the PSD into two regimes: a low-frequency regime where $f < \nu_\mathrm{cutoff}$ and a high-frequency regime where $f > \nu_\mathrm{cutoff}$. The low-frequency regime corresponds to the region in Fig. \ref{Fig:Cartoon}.1b where the PSD is flat. In the high-frequency regime, the PSD drops with $1/f^2$. Direct parameters influencing the cutoff frequency are those entering equations (\ref{eq:brown}) and (\ref{eq:neel}), such as the suspension viscosity, the anisotropy constant, and the local temperature. However, a change in the aggregation state or mobility of the particles, or an increase in the interparticle interaction, affects their magnetization dynamics, and thus the noise spectrum, as well. Fig. \ref{Fig:Cartoon} shows the theoretical expression of the PSD for different MNP configurations to illustrate the changes in the noise spectrum. Particle clustering (Fig. \ref{Fig:Cartoon}.1a) increases the hydrodynamic volume of the individual fluctuators, with an increase in the Brownian fluctuation time as a result. The cutoff frequency shifts towards smaller frequencies, and the $1/f^2$ behavior becomes more pronounced. The N\'{e}el mechanism is the slower mechanism in most MNP systems (for the common case of iron oxide MNPs with rather large core diameters $d_c>10$ nm), i.e., the N\'{e}el fluctuations are often orders of magnitude slower than the Brownian rotations. This means that the cutoff frequency also shifts towards lower values upon the elimination of the Brownian rotations during immobilization (Fig. \ref{Fig:Cartoon}.1c). \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Cartoon.JPG} \caption{Theoretical noise spectra of different samples. The monodisperse PSDs are displayed for particles with a hydrodynamic diameter of $d_h=130$ nm and a core diameter of $d_c$ = 25 nm. The PSD is flat up to the characteristic cutoff frequency, after which it falls off with $1/f^2$. The diameters of the polydisperse particles follow a lognormal size distribution with parameters $d_h\sim$ ln N($\mu$ = 124 nm, $\sigma$=0.35) and $d_c\sim$ ln N($\mu =$25 nm, $\sigma$=0.45). In the case of polydisperse cluster formation (2a), the PSD still prominently has a $1/f^2$ shape. In the case of the immobilization of the polydisperse particles, however, a typical $1/f$ shape is distinguished due to the extremely broad fluctuation time distribution. The clusters were taken to have 3 times the size of the single particles.} \label{Fig:Cartoon} \end{figure}
For a polydisperse sample, the volumes $V_h$ and $V_c$, and therefore also the fluctuation time $\tau_\mathrm{eff}$, are distributed according to $P(\tau_\mathrm{eff})$. The PSD can then be written as a superposition of independent fluctuators (\ref{eq: lorentzian}): \begin{align} S_B^\mathrm{poly}(f)&=\int_0^{\infty} P(\tau_\mathrm{eff})\cdot S_B^{\tau_\mathrm{eff}}(f)d\tau_\mathrm{eff}\\ &=\int_0^{\infty}\int_0^{\infty}P(V_h)P(V_c)\cdot S_B^{V_h,V_c}(f)dV_{h}dV_{c} \label{Eq:size_dist} \end{align} This typically results in a stretching of the cutoff frequency $\nu_\mathrm{cutoff}$ over a certain frequency range, and in a less distinct shape of the PSD, as visible in Fig. \ref{Fig:Cartoon}.2b. For a fairly broad size or fluctuation time distribution, the PSD can finally be approximated by a $1/f$ curve\cite{Weissman1988, Leliaert2015}. Similarly to the clustering of monodisperse particles, the hydrodynamic size of a polydisperse cluster also increases, and its related cutoff frequency shifts towards lower frequency values. The $1/f^2$ falloff dominates in the considered bandwidth, as shown in Fig. \ref{Fig:Cartoon}.2a. The effect of immobilization of a polydisperse sample is even more pronounced, since the N\'{e}el fluctuation time depends exponentially on the core volume. The size distribution is stretched into a broad fluctuation time distribution, and the PSD gets the distinct $1/f$ shape displayed in Fig. \ref{Fig:Cartoon}.2c. We would like to point out that the clustering, aggregation, and immobilization of MNPs are generally not uncorrelated and often occur at the same time. The broad size distributions $P(V_h)$ and $P(V_c)$ also make the quantitative interpretation of the fluctuation dynamics of the magnetic moments less straightforward than for the model curves displayed in Fig. \ref{Fig:Cartoon}.
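Model curves such as those in Fig. \ref{Fig:Cartoon} can be generated by discretising Eq. \eqref{Eq:size_dist}. The following sketch, with an assumed lognormal fluctuation-time distribution and the amplitude prefactor set to one, is purely illustrative:
\begin{verbatim}
import numpy as np

f = np.logspace(-1, 3, 400)        # frequency axis in Hz
tau = np.logspace(-6, 1, 2000)     # fluctuation times in s
# assumed lognormal weights: median 1 ms, log-std 0.8
p = np.exp(-np.log(tau / 1e-3)**2 / (2 * 0.8**2)) / tau
p /= np.trapz(p, tau)

def lorentzian(f, tau):            # single-fluctuator PSD, prefactor omitted
    return (1 / (4 * tau)) / ((np.pi * f)**2 + 1 / (2 * tau)**2)

# superposition of Lorentzians weighted by the fluctuation-time distribution
S_poly = np.trapz(p * lorentzian(f[:, None], tau), tau, axis=1)
\end{verbatim}
The broader the assumed distribution, the more the sharp Lorentzian knee is smeared out towards the $1/f$-like shape discussed above.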
\subsection{Experimental setups} \subsubsection{Magnetic Nanoparticles.} Two commercially available MNP systems have been used for the comparison of the two setups: Resovist particles (an MRI liver contrast agent provided by Meito Sanyo, Japan) with an iron concentration of $c$(Fe)=429.1 mmol/L, and Perimag particles (Micromod Partikeltechnologie GmbH, Rostock, Germany) with a plain surface and an iron concentration of $c$(Fe)=644.4 mmol/L. For the proof-of-concept experiments, Perimag particles with a COOH group coating were chosen to enable cellular uptake. \subsubsection{OPM tabletop setup for Thermal Noise Magnetometry.} Although the concept of magnetic sensing by use of optical pumping dates back to 1950-1960, the field of Optically Pumped Magnetometers is still developing steadily. In this technique, an alkali metal gas vapor (often Rb or Cs) is polarized by pumping with a polarized light beam. Once fully polarized, the gas becomes transparent. A magnetic field changes the polarization state of the vapor atoms, which is quantified by measuring the polarity or intensity of a second probing light beam through the gas vapor. Today, there are many different OPM configurations, covering a broad range of applications\cite{Sander2020, Bason2022, Deans2021}. Comparisons with SQUID systems have been made\cite{Knappe2010, Marhl2022}, and the magnetometers have also found their way into applications such as MNP characterization and imaging\cite{Johnson2012,Dolgovskiy2015,Baffa2019,Jaufenthaler2020a,Jaufenthaler2021}. The developed tabletop setup consists of two Gen-2 QuSpin Zero-Field Magnetometers (QZFMs) (QuSpin Inc., CO, USA)\cite{Shah2013} that are operated in single-axis mode. One is placed near the MNP probe (QZFM1), the other is used for reference measurements (QZFM2). The sample and the QZFMs are placed inside a laboratory MS-2 magnetic shield (Twinleaf LLC, NJ, USA) to minimize the effect of external fields on the MNP dynamics and to ensure the proper working of the QZFMs. The shield consists of four metal layers and has a shielding factor of $10^6$, as specified by the manufacturer\cite{twinleaf}. The controlling QZFM electronics is placed outside the shield and driven via the program QuSpin ZFM on a laptop, from which the data are also collected with a U6 LabJack (LabJack Corporation, CO, USA) in stream mode. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{{BG_both}.pdf} \caption{(a) Background power spectral density in the tabletop QZFM setup. The 50 Hz contamination from the power line and its related 150 Hz peak are visible amongst other environmental disturbances, as are the 923 Hz signal from the QZFM modulation and its aliasing peak around 77 Hz. (b) Background of the in-house developed SQUID setup. The 50 Hz and 150 Hz peaks from power line residuals are visible, as well as two peaks around 24 kHz.} \label{Fig:BG_both} \end{figure} The tabletop setup was operated in a conventional lab environment at the Physikalisch-Technische Bundesanstalt in Berlin. The 50 Hz contamination from the power line and its related 150 Hz peak are visible in the measured background spectrum on a noise floor of 200-2000 $\frac{\textrm{fT}^2}{\textrm{Hz}}$ (approx. 10-30 $\frac{\textrm{fT}}{\sqrt{\textrm{Hz}}}$). Among other environmental disturbances, the 923 Hz signal from the QZFM modulation and its aliasing peak around 77 Hz are visible\footnote[1]{These disturbances can be minimized further by placing the shield in an aluminum cage, and grounding both the shield and the cage to the USB ground of the laptop.}. The manufacturer does not specify any product information of the QZFM above 100 Hz, because of the phase shift in the signal above this frequency. However, since our spectral measure is phase-insensitive, we are able to extend the bandwidth of the QZFMs beyond their usual 100 Hz frequency range, as explained in Sec. \ref{Sec:freq_resp}. Time signals of up to 20 minutes were recorded at a sample rate of 2 kHz, and the PSDs were calculated and averaged as explained in the method section of Ref. \cite{Everaert2021}. The final displayed spectra are calculated by subtracting the background spectrum PSD$_\mathrm{BG}$ from the MNP spectrum PSD$_\mathrm{MNP}$ measured in the tabletop setup. To achieve an optimal Rb density for increased sensitivity, the QZFM vapor cell is heated to a temperature of about $160 ^{\circ}$C\cite{Shah2013,Borna2017}. This has an immediate effect on the temperature of the sample, and overheating of the MNPs is prevented by placing a 2 mm thick insulation material between the housing of the QZFM and the sample. A sample temperature of $43.2 \pm 0.5 ^{\circ}$C was measured after a stabilization time of 20 min after placement in the setup. With the insulation material included, the minimal distance between the centre of the vapor cell and the sample is estimated at 8.5 mm. With a sample height of approximately 10 mm, the average distance between the centre of the vapor cell and the particles is 13.5 mm.
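As an illustration of the spectral estimation, the background-subtracted PSD could be computed with the Welch method; this is our sketch with placeholder data, while the actual averaging scheme follows Ref. \cite{Everaert2021}.
\begin{verbatim}
import numpy as np
from scipy.signal import welch

fs = 2000                                  # sample rate in Hz
b_mnp = np.random.randn(20 * 60 * fs)      # placeholder MNP trace (T)
b_bg = np.random.randn(20 * 60 * fs)       # placeholder background trace

f, psd_mnp = welch(b_mnp, fs=fs, nperseg=4 * fs)   # averaged periodograms
_, psd_bg = welch(b_bg, fs=fs, nperseg=4 * fs)
psd = psd_mnp - psd_bg                     # displayed MNP spectrum, T^2/Hz
\end{verbatim}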
\subsubsection{SQUID setup for Thermal Noise Magnetometry.} The in-house developed SQUID setup consists of a superconducting niobium shield\cite{Ackermann2007} in which 6 SQUID sensors with rectangular pickup coils are operated. The sample can be placed inside a warm bore at an average distance of 23.5 mm from the pickup coils. Only one sensor is used for the TNM experiment. The background spectrum of the SQUID setup shows a relatively flat profile with values between 2-3.5 $\frac{fT^2}{Hz}$ (1.5-1.8 $\frac{fT}{\sqrt{Hz}}$). The 50 Hz and 150 Hz peaks from power line residuals are visible, as well as two peaks around 24 kHz. \begin{figure} \centering \subfloat[\centering ]{{\includegraphics[width=0.45\textwidth]{setup_numbers.JPG} }}% \qquad \subfloat[\centering ]{{\includegraphics[width=0.45\textwidth]{6channel_picture.jpg} }}% \caption{(a) Tabletop TNM setup based on OPMs. The sample is placed inside the Twinleaf MS-2 shield (1) together with the two QZFMs. The QZFM electronics (2) control the sensors, and a laptop (4) serves for driving the DAQ LabJack U6 (3) in stream mode, running the QZFM electronics and collecting the data. The sample is at an average distance of 13.5 mm from the sensitive volume.\\(b) In-house developed SQUID setup for reference measurements. Inside a superconducting Nb magnetic shield, 6 SQUID sensors are kept at LHe temperature; the sample is placed in a warm bore and kept at an average temperature of $43.0 \pm 0.5 ^{\circ}$C to match the sample temperature in the tabletop setup. An average distance of 23.5 mm is measured between the pickup coils of the SQUID sensors and the sample. Only one sensor is used for the TNM measurement.} \label{fig:setup} \end{figure} As is clear from Eq. (\ref{eq:brown}) and Eq. (\ref{eq:neel}), the dynamics of the particles is strongly dependent on the temperature. To compare both measurement systems under equal conditions, the sample has been kept at a constant temperature of $43.0 \pm 0.5 ^{\circ}$C in the SQUID setup to match the sample temperature in the QZFM setup. This was achieved by use of a stable airflow through the warm bore. 13-minute time signals were acquired at a sample rate of 100 kHz, and the PSDs were subsequently calculated and averaged. \subsection{Frequency response of the sensors} \label{Sec:freq_resp} \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Freq_resp2.pdf} \caption{Measured frequency responses $H(f)$ of the different sensors. The data were acquired by applying an AC magnetic field with swept frequency.} \label{Fig:freq_resp} \end{figure} To increase the QZFM frequency range above 100 Hz, a frequency response profile of the QZFMs has been measured to ensure a quantitatively correct measurement of the power spectra of the MNPs. To this end, the sensors were placed inside a magnetically shielded room at the Physikalisch-Technische Bundesanstalt (named the “Zuse-MSR” in Ref. \cite{Voigt2015}), and a homogeneous AC field with an amplitude of 772 pT was applied by use of a square Helmholtz coil. Its frequency was swept in the range of $[2-600]$ Hz. The amplitude of the QZFM output signal was monitored and analysed in the time domain, after which the response values were averaged and normalized. From this data set, a frequency response profile was calculated for both QZFMs, as shown in Fig. \ref{Fig:freq_resp}. The uncertainty on the response values was calculated from the standard deviation of the different peaks of the excitation and the response at one frequency.
Note that the frequency response varies strongly, and differently for the two QZFMs. It is also not constant within the low-frequency measurement range of typical OPM applications. For comparison, a similar procedure was performed with the SQUID sensor, with a field of 751 nT applied by inserting a small coil into the warm bore. No frequency dependence of the amplitude was detected, as can be seen in Fig. \ref{Fig:freq_resp}. Having determined the frequency response of the QZFMs, the Power Spectral Density $S_B(f)$ of an MNP ensemble was calculated as \begin{equation} S_B(f)=\frac{S_{QZFM}(f)}{H(f)^2} \end{equation} with $S_{QZFM}(f)$ the Power Spectral Densities measured by the QZFMs and $H(f)$ their relative frequency responses, as displayed in Fig. \ref{Fig:freq_resp}. \section{Results and discussion} \subsection{Comparison of Power Spectra} Fig. \ref{Fig:Comparison} (a) shows the Power Spectral Densities of both MNP systems measured in the SQUID setup. The displayed spectra are the noise spectra measured with MNPs, PSD$_\mathrm{MNP}$, with the background spectra PSD$_\mathrm{BG}$ subtracted. The raw spectra can be found in the Supplementary Material. The PSD of the Resovist system is relatively flat up to a cutoff frequency of about 90 Hz, after which the PSD decreases continuously due to the size distribution of the particles. The Perimag system, on the other hand, shows higher power at lower frequencies, with a cutoff frequency located below the displayed bandwidth. This is a result of the slower magnetization dynamics due to the large hydrodynamic size of the Perimag particles with a broad size distribution, as explained by Eq. (\ref{Eq:size_dist}). If the PSDs are normalized to the iron amount in the samples (see the Supplementary Material), a crossing occurs around 200 Hz. \begin{figure*}[h!] \centering \includegraphics[width=1\textwidth]{Comparison_and_SNR.pdf} \caption{Measured SQUID profiles of the two MNP systems (a), and measured and compensated QZFM profiles of the two MNP systems (b). For clarity, the points in the QZFM spectra at the unstable background frequencies (30, 50, 330, 375 Hz) have been plotted separately. These correspond to the peaks in the background spectrum of the tabletop setup in Fig. \ref{Fig:BG_both}. Signal-to-noise ratios of both MNP systems in both setups (c). Qualitative comparison of the MNP spectra measured in both setups (d): the SQUID spectra of (a) have been rescaled to match the power of the OPM spectra S$_B$ in (b) at 80 Hz. Apart from the unstable background peaks in the OPM spectra, a very good agreement between the measurements obtained in both setups is visible.} \label{Fig:Comparison} \end{figure*} The same MNP systems were measured in the tabletop setup, where a bandwidth of up to 550 Hz can be reached. The spectra before ($S_{QZFM}(f)$) and after ($S_B(f)$) the frequency response correction are displayed in Fig. \ref{Fig:Comparison} (b). Due to the reduced sample-sensor distance, the MNP signal is higher in the tabletop setup than in the SQUID system. However, the tabletop setup is less sensitive than the SQUID setup. This is clear from Fig. \ref{Fig:Comparison} (c), which shows the signal-to-noise ratio (SNR) as a function of frequency for both setups and both MNP systems: \begin{equation} SNR(f)=\frac{PSD_{MNP}(f)-PSD_{BG}(f)}{PSD_{BG}(f)} \label{Eq:SNR} \end{equation} The tabletop setup has a steeper SNR loss than the SQUID setup above 100 Hz: not only the signal, but also the noise is amplified as a result of the frequency response compensation.
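A sketch of this correction and of the SNR of Eq. \eqref{Eq:SNR} is given below; the response profile and the spectra are invented stand-ins for the calibration data of Fig. \ref{Fig:freq_resp} and the measured PSDs.
\begin{verbatim}
import numpy as np

f = np.arange(1.0, 551.0)                    # frequency axis in Hz
# assumed calibration points; the real H(f) is measured per sensor
f_cal = np.array([2, 10, 50, 100, 200, 400, 600])
H_cal = np.array([1.0, 1.0, 0.98, 0.90, 0.70, 0.45, 0.30])
H = np.interp(f, f_cal, H_cal)

psd_bg = np.full_like(f, 0.01)               # synthetic background level
psd_qzfm = H**2 / (1 + (f / 100.0)**2) + psd_bg  # synthetic measured spectrum

S_B = (psd_qzfm - psd_bg) / H**2             # response-corrected MNP PSD
snr = (psd_qzfm - psd_bg) / psd_bg           # SNR as defined in the text
\end{verbatim}
Dividing by $H(f)^2$ restores the signal power, but any background remaining after subtraction is scaled up by the same factor, which explains the steeper SNR loss above 100 Hz.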
The excellent state of the SQUID setup becomes visible in the SNR plots. Both MNP systems have a SNR up to one order of magnitude higher in the SQUID setup compared to the tabletop OPM setup, even if the signal is two orders of magnitude lower. SNR = 1 marks the limit where environmental and sensor noise (that is, unwanted noise) has the same amplitude as MNP noise (that is, wanted noise). Although this is not a strict limit of acceptance, it can still serve as a mark to validate the SNR of the MNP systems and the setup. Only Perimag will be used for the proof-of-concept experiments, since the SNR of Resovist is relatively low in the lower frequency range in the tabletop setup. Moreover, the SNR of Perimag above 400 Hz also crosses the SNR=1 limit. For lower concentrated samples, such as those used in the proof-of-concept experiments, this crossing will occur even at lower frequencies. The measurements are directly compared in Fig. \ref{Fig:Comparison} (d) by rescaling the SQUID spectra to match the OPM spectra at 80 Hz. As the curves of the two particle systems measured in two setups overlap nicely, we conclude that the compensation for the frequency response profile of the QZFM is a valid approach. Despite their loss in sensitivity above 100 Hz, the QZFMs recover a quantitatively correct spectrum. For MNP samples with high power in the lower frequency range, such as the Perimag and Resovist samples used as example MNP systems here, both measurement systems are thus equally suitable. For smaller MNP systems with dynamics in the higher frequency range, the tabletop setup might not be sufficient, both in bandwidth and sensitivity. However, these particle systems could still be tuned to fall within the QZFM bandwidth by increasing the viscosity of the suspension, as proposed in Ref. \cite{Leliaert2017}. To further increase the bandwidth, both setups still have some potential towards lower frequencies. Since both magnetic shields (i.e. the superconducting shield integrated into the dewar of the SQUID system and the mu-metal Twinleaf shield of the tabletop setup) are relatively small, the low-frequency shielding is very effective. Longer measurement times then lead to larger time intervals for the Fourier transformation and averaging procedure, reaching lower frequencies in the spectra. For an increase towards higher frequencies, the SQUID setup could measure at higher sample rates. However, the SNR also gets small at 50 kHz, with relatively little information gain for these MNP systems. The current tabletop setup is limited to 550 Hz due to the frequency of the modulation signal of the QZFMs. \subsection{Monitoring of clustering processes} Since the TNM signal scales quadratically with the volume of the noise sources \cite{Everaert2021}, this technique is particularly suited to monitor clustering processes of magnetic nanoparticles. The absence of any driving field during the measurement also excludes any undesired effects induced by an external excitation. Moreover, the good performance of the QZFM at lower frequencies favours processes which tend to slow down the magnetization dynamics of the sample. An OPM based TNM setup thus offers a broadly applicable tool to monitor the clustering of MNPs. 
As a proof of concept, we report on the monitoring of three such processes, measured with TNM in the described tabletop setup: \begin{enumerate} \item Enforced aggregation of Perimag particles by addition of ethanol \item Formation of photopolymer structures in a Perimag sample by exposure to UV light \item Cellular uptake of Perimag particles by THP-1 cells \end{enumerate} \subsubsection{Enforced aggregation of Perimag particles by addition of ethanol} In a first example, the aggregation of Perimag particles is enforced by adding ethanol to the sample. A 200 $\mu$l Perimag plain solution with an iron concentration of c(Fe)=466.4 mmol/L was diluted with 200 $\mu$l ethanol. The stabilizing dextran surfaces of the particles are dissolved, the attractive forces between the magnetic cores prevail, and the system aggregates. The sedimentation of the aggregates due to gravity was visually detectable after several seconds. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Ethanol_photopolymer2.pdf} \caption{(Left) Power Spectral Densities of Perimag particles before and after addition of ethanol. A lognormal size distribution was fitted to the MNP sample before aggregation. After the addition of ethanol, the magnetic cores aggregate and sediment due to gravity. Due to the extremely broad distribution of the N\'{e}el fluctuation times, the PSD has the distinct $1/f$ shape as shown in Fig \ref{Fig:Cartoon}.2c. (Right) Power Spectral Densities of Perimag particles before and after polymer formation. A gradual immobilization of the particles is induced during UV curing and full immobilization is reached after 15 minutes exposure time.} \label{Fig:ethanol_polymer} \end{figure} The influence of aggregation on the noise spectrum of Perimag is visible in Fig. \ref{Fig:ethanol_polymer} (a), where a spectrum before and after aggregation is displayed. Since the geometry of the sample is not conserved due to a change in spatial distribution of magnetic material, the spectra show only qualitative effects. A lognormal size distribution logN($\mu$=72.6 $\pm$5 nm, $\sigma$=0.82 $\pm$ 0.3) with an average hydrodynamic diameter of 101 $\pm$ 26.0 nm was fitted to the curve before addition of ethanol. Given the limited bandwidth of 400 Hz, these parameters match the average diameter of 130 nm of the manufacturer reasonably well. After the addition of ethanol, the aggregates sediment and are only submissive to the N\'{e}el mechanism. Their noise curve is dominated by $1/f$ noise. The slow magnetization dynamics of the aggregated cores and the broad size distribution of their fluctuation times are a distinctive signature of this process which was cartoonized in Fig \ref{Fig:Cartoon}.2c. \subsubsection{Formation of photopolymer structures in a Perimag sample by exposure to UV light} Photopolymer resins are popular materials used in additive manufacturing. They form a highly controllable system to gradually solidify suspensions. In combination with magnetic nanoparticles, they are of particular interest for precise phantom design and fabrication\cite{Loewa2021,Nordhoff2022} to evaluate Magnetic Particle Imaging scanners\cite{Loewa2019,Dutz2019}. In our experiment, a photopolymer was mixed with Perimag particles to mimic the gradual change in mobility of the particles when being embedded in the target tissue. 
First, the Perimag-photopolymer mixture was prepared by adding 100 $\mu$L Perimag plain with an iron concentration of 644.4 mmol/L to a 100 $\mu$l photopolymer base material \footnote[2]{Perfactory acrylic R5 red from EnvisionTEC Inc., composed of acrylic acid esters and a photoinitiator (0.1 $-$ 5 \text{\%}). The Perfactory Acryl R5 resin has a density of 1.12 $-$ 1.13 g/cm$^3$.} in a 2 ml Eppendorf tube. A homogeneous spatial distribution of the particles in the base material was ensured by sonication with an ultrasound sonifier (UP200Ht, Hielscher Electronics, Germany). 120 $\mu$l of the mixture was used as a sample and a first spectrum was measured before UV exposure. The sample was then exposed to UV light in a UVACUBE 2000 for 5 and 10 min subsequently. Fig. \ref{Fig:ethanol_polymer} (b) shows the measured spectra of the particles in the base material before exposure, and after 5 and 15 min total exposure time. Since only 60 $\mu$l magnetic material has been used in this experiment, the TNM signal is lower than in the spectra shown previously. Therefore, we argue that the falloff above 200 Hz of the two exposure spectra is an artificial effect due to insufficient SNR and not the physical shape of the spectra, which we expect to decrease linearly on the log-log scale. Before UV exposure, the particles rotate freely in the highly viscous base material. Compared to the water suspended particles in Fig. \ref{Fig:ethanol_polymer} (a), their Brownian rotations are slower. The related cutoff frequency is shifted to lower frequencies outside the window, and only the straight tail of the PSD is visible. Brownian movement of the particles is further excluded due to crosslink formation as the MNPs get enclosed in small polymer cavities during UV curing. The effective viscosity increases towards an eventual full immobilization. The Brownian fluctuations gradually slow down, the related Brownian cutoff frequency moves closer towards DC values and N\'{e}el fluctuations become dominant. After 15 minutes of exposure time, the PSD reaches the limiting $1/f$ shape on Fig. \ref{Fig:ethanol_polymer} (b). All particles are immobilized, as the PSD is directly comparable with the PSD of the aggregates in Fig. \ref{Fig:ethanol_polymer} (a). During UV curing, volume and geometry of the sample are conserved and the spectra can be compared quantitatively. This allows us to define an effective immobilization degree based on the PSD value at a stable low frequency after each exposure step. A normalization of the PSD values at 1.6 Hz to the fully immobilized state gives an immobilization of 50\% before UV exposure and 72\% after five minutes exposure time. This experiment therefore shows the potential of TNM to be used for continuous monitoring during MNP clustering and immobilization processes. \subsubsection{Cellular uptake of Perimag particles by THP-1 cells} MNPs are known to form clusters during cellular uptake, which impacts their magnetization dynamics\cite{Loewa2013,Etheridge2014,DiCorato2014,Poller2016,Teeman2019}. For their usage in biomedical application as Magnetic Particle Imaging (MPI) and hyperthermia treatment, the change in their magnetic state can heavily influence their performance\cite{Bender2018b,Cabrera2018,Paysen2020,Remmo2022}. However, the change in the thermal noise of the MNPs due to cellular uptake is unknown so far. 
Especially the absence of an external magnetic excitation during a TNM experiment can be seen as an advantage in the determination of the precise clustering mechanism, since cluster formation and aggregation due to an external perturbation are eliminated in this technique. In a third experiment, the noise profile of COOH coated Perimag particles is measured after cellular uptake by THP-1 cells in the tabletop setup and compared with the pre-uptake water suspended system. \begin{figure} \centering \subfloat[\centering ]{{\includegraphics[width=0.45\textwidth]{PBS_without.png} }}% \qquad \subfloat[\centering ]{{\includegraphics[width=0.45\textwidth]{PBS_with.png} }}% \caption{THP-1 cells without (a) and with (b) addition of Perimag particles after 24 hours of incubation time. Iron in the sample is visualized by Prussian Blue staining. The particles are to a great extent taken up by the cells. Redundant particles outside the cells still form aggregates, being attached to the outer walls of the cells.} \label{Fig:PBS} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Cells.pdf} \caption{Power Spectral Densities of COOH coated Perimag particles before and after cellular uptake by THP-1 cells. The influence of the cell medium on the dynamics of the particles is very limited. Due to cluster formation and partial immobilization of the particles after cellular uptake, the power in the lower frequency range increases. Full immobilization is however excluded, since the distinctive $1/f$ behaviour of Fig. \ref{Fig:ethanol_polymer} is not reached.} \label{Fig:cells} \end{figure} 200 $\mu$L COOH coated Perimag particles with a concentration of c(Fe)=244.7 mmol/L were incubated with $2\cdot 10^7$ THP-1 cells in a 800 $\mu$l RPMI +1$\%$ FCS medium for 24 hours. Fig. \ref{Fig:PBS} showes the cells before and after the incubation (undiluted sample), where iron is visualized by Prussian blue. From these pictures, it is clear that the magnetic nanoparticles are taken up by the cells to a great extent. Moreover, MNP in the surrounding solution also form aggregates. 3 different samples have been measured in the tabletop TNM setup and are displayed in Fig. \ref{Fig:cells} with their respective colours: \begin{enumerate}[label=\Alph*] \item Perimag particles in water suspension (blue) \item Perimag particles in the cell medium (green) \item Perimag particles after 24 hours of incubation time with THP-1 cells (pink). \end{enumerate} The PSD of the particles in the water suspension and the cell medium show only quantitative differences, which are due to the difference in concentration of the samples. The influence of the cell medium on the dynamics of the particles is - at least in the measured frequency range - very limited. After cellular uptake of the particles by the cells, a higher relative noise power in the lower frequency regime is measured, and the faster fluctuations are less present in the noise density. The broad distribution of cutoff frequencies clearly shifts towards lower values, which can be attributed to the formation of clusters and partial immobilization. Full immobilization can however be excluded, since the PSD does not fall off with $1/f$ as in Fig. \ref{Fig:ethanol_polymer} (a) and (b). A repetition measurement of Sample C was carried out after 10 days (orange curve). Apart from an increased SNR - which is related to the further cell sedimentation and a decreased average sample sensor distance - no qualitative differences were detectable. 
\section{Introduction} The characterization of magnetic nanoparticles (MNPs) is a crucial step towards their safe and efficient use in biomedical applications\cite{Pankhurst2003,Tong2019,CoeneLeliaert2022}. Not only do single-particle properties such as size, shape, and composition influence their magnetic behaviour, but so does the state of the particles with respect to their environment. Clustering, aggregation, and immobilization of MNPs are processes of special interest in the context of biomedical applications\cite{Etheridge2014}.
Upon arrival at the targeted body tissue, local particle concentrations become high and interparticle distances small. Therefore, magnetic exchange and dipolar interactions between particles may become important. Their mobility also changes during cellular uptake or molecular binding. These processes affect their magnetic properties\cite{DEberbeckFWiekhorst2006, Gutierrez2019} and - as a consequence - their diagnostic\cite{Loewa2013, Paysen2020} and therapeutic\cite{Bender2018b,Cabrera2018,Ortega-Julia2022} effect. To optimize the biomedical performance of the particles, it is thus necessary to determine the behavior of the particles within the body\cite{CoeneLeliaert2022} and to map the clustering, aggregation, and immobilization of the particles in their biomedical environment. Magnetic properties and magnetization dynamics of MNPs are often determined by measuring their response to an external magnetic field excitation\cite{Wiekhorst2012,Ludwig2013,Bui2022}. This external perturbation can potentially change the magnetic state of the sample, thereby influencing the outcome of the method. However, the magnetic moments of the MNPs fluctuate at non-zero temperatures, and probing the corresponding induced magnetic noise allows one to obtain similar information about the inherent properties of the sample. The analysis of the fluctuation dynamics of MNPs is the idea behind a recently developed characterization technique\cite{Leliaert2015}, known as Thermal Noise Magnetometry (TNM). It is a unique method, since the sample is characterized while in an equilibrium state. Compared to other characterization techniques, the signals in TNM are rather small (down to a few femtotesla) and thus require sensitive magnetic field sensors. Superconducting Quantum Interference Devices (SQUIDs), which were used in previous TNM studies\cite{Leliaert2015,Leliaert2017,Everaert2021}, have a well-established reputation in magnetometry and biomedical applications. Their success is attributed to their excellent sensitivity, broad bandwidth, and durability\cite{Cohen1972,Burghoff2009,Korber2016}. However, they have the disadvantage that they require a liquid He infrastructure and a rather large sample-probe distance in the centimeter range, caused by the mandatory thermal insulation between the sensor and a sample at ambient temperature. To increase the accessibility, and consequently the adoption, of TNM and thus broaden its application field, more flexible measurement setups are needed. For this purpose, Optically Pumped Magnetometers (OPMs) form an attractive sensor system\cite{Budker2007}. In this work, we present a tabletop TNM setup based on commercially available OPMs operated in a laboratory magnetic shield. The operational bandwidth of the used OPMs is limited by a phase shift above 100 Hz. However, the spectral measure used in TNM is phase insensitive. By accounting for the frequency response profile of the magnetometers, quantitatively correct measurements are ensured and the bandwidth of the sensors is increased to 550 Hz. We compare the TNM results of two MNP systems measured with the OPM setup with those obtained with an in-house developed SQUID-based system. Employing OPM sensors offers an ideal measurement approach to monitor clustering and aggregation of MNPs by TNM over time.
TNM is particularly sensitive to particle clustering because the amplitude of the signal increases with the square of the volume of the individual fluctuators\cite{Everaert2021}, which means that an aggregate of two particles results in a signal that is twice as large as the sum of their individual signals. Moreover, no external excitation is required during the TNM measurement procedure; such an excitation could influence the clustering process itself or distort the outcome of the measurement. Second, the excellent low-frequency performance of OPMs favors particle systems with slow dynamics or processes which tend to slow down the dynamics of the magnetic entities in the sample, such as clustering and immobilization processes. Finally, the setup can be operated anywhere with conventional wall-socket AC power due to the OPM flexibility and can thus also be used to track processes that require environmental and experimental freedom. As a proof-of-concept, we designed and performed three different experiments in the tabletop setup, which concern the clustering, aggregation, and immobilization of Perimag particles. \section{Methods} \subsection{TNM model} Two mechanisms are responsible for the thermal fluctuations measured in TNM. In liquid samples, the particles are subject to Brownian motion. The MNPs, and thus their magnetic moments, rotate at time scales\cite{Debye1929} \begin{equation} \quad \tau_{\mathrm{B}} = \frac{3\eta V_{h}}{k_{B}T} \label{eq:brown} \end{equation} with $\eta$ the viscosity of the fluid, $V_h$ the hydrodynamic volume of the particle, and $k_BT$ the thermal energy in the system. Additionally, the magnetization can also change within the frame of the particle itself, which is the only mechanism present if the particles are immobilized. The N\'{e}el fluctuation time depends Arrhenius-wise on the energy barrier $KV_c$ set by the anisotropy of the particle \begin{equation} \tau_{\mathrm{N}} = \tau_0 \exp\left(\frac{KV_c}{k_{B}T}\right) \label{eq:neel} \end{equation} where $K$ is the anisotropy constant, $V_c$ the magnetic core volume, and $\tau_0$ the characteristic attempt time\cite{Brown1963}. The effective fluctuation time then naturally combines to \begin{equation} \tau_{\mathrm{eff}}=\frac{\tau_{\mathrm{N}} \tau_{\mathrm{B}}}{\tau_{\mathrm{N}}+\tau_{\mathrm{B}}}, \end{equation} in samples where both mechanisms are present. Depending on the size of the particles, $\tau_N$ or $\tau_B$ is dominant and defines the value of $\tau_{\mathrm{eff}}$.\\ The magnetic time signal $B$ measured in TNM is stochastic in nature with an autocorrelation function \begin{equation} G_B(t)=\langle B(0)B(t)\rangle = \langle B^2\rangle \exp(-\vert t\vert/\tau_{\mathrm{eff}}). \label{eq: autocorrelation} \end{equation} The Power Spectral Density (PSD) is then obtained from the Wiener-Khintchine theorem as the Fourier transform of the autocorrelation function\cite{Wiener1930,Khintchine1934}: \begin{equation} S_B(f)=2\left(\frac{\mu_0 M V_c}{4\pi d^3}\right)^2 \cdot \frac{ (4\tau_{\mathrm{eff}})^{-1} }{(\pi f)^2+(2\tau_{\mathrm{eff}})^{-2}} \label{eq: lorentzian} \end{equation} The amplitude of the fluctuations depends on the total magnetic moment of the sample $M V_c$, and on the distance $d$ from the sample at which the magnetic field is measured. At typical distances of a few mm to a few cm, the TNM signal of a nanoparticle ensemble ranges from pT to fT. The dynamics of the particles is quantified by the fluctuation time $\tau_\mathrm{eff}$, which therefore also defines the width of the PSD.
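For readers who wish to explore these expressions numerically, the following Python sketch evaluates Eqs. (\ref{eq:brown})--(\ref{eq: lorentzian}) for one illustrative parameter set. All material constants, particle dimensions, and the magnetization and distance values in the sketch are assumptions chosen for illustration only; they are not values fitted to the data presented here.

\begin{verbatim}
# Illustrative evaluation of the fluctuation times and the Lorentzian
# PSD; all parameter values are assumptions for illustration only.
import numpy as np

kB = 1.380649e-23          # Boltzmann constant [J/K]
T = 316.0                  # temperature [K], ~43 degrees C
eta = 1.0e-3               # viscosity of water [Pa s] (assumed)
d_h, d_c = 130e-9, 25e-9   # hydrodynamic / core diameter [m] (assumed)
K = 1.0e4                  # anisotropy constant [J/m^3] (assumed)
tau0 = 1.0e-9              # attempt time [s] (typical order of magnitude)

V_h = np.pi / 6 * d_h**3                     # hydrodynamic volume
V_c = np.pi / 6 * d_c**3                     # magnetic core volume
tau_B = 3 * eta * V_h / (kB * T)             # Brownian time, Eq. (1)
tau_N = tau0 * np.exp(K * V_c / (kB * T))    # Neel time, Eq. (2)
tau_eff = tau_N * tau_B / (tau_N + tau_B)    # effective time

# Lorentzian PSD with an assumed magnetization M and distance d:
mu0, M, d = 4e-7 * np.pi, 3.5e5, 13.5e-3
f = np.logspace(0, 4, 400)
S_B = 2 * (mu0 * M * V_c / (4 * np.pi * d**3))**2 \
    * (4 * tau_eff)**-1 / ((np.pi * f)**2 + (2 * tau_eff)**-2)
print(f"tau_B = {tau_B:.2e} s, tau_N = {tau_N:.2e} s")
\end{verbatim}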
The cutoff frequency $\nu_\mathrm{cutoff}=\frac{1}{2\cdot \tau_\mathrm{eff}}$ divides the PSD into two regimes: a low frequency regime where $f < \nu_\mathrm{cutoff}$ and a high frequency regime where $f > \nu_\mathrm{cutoff}$. The low frequency regime corresponds to the region in Fig. \ref{Fig:Cartoon}.1b where the PSD is flat. In the high frequency regime, the PSD drops with $1/f^2$. Direct parameters influencing the cutoff frequency are those entering equations (\ref{eq:brown}) and (\ref{eq:neel}), such as the suspension viscosity, the anisotropy constant, and the local temperature. However, a change in the aggregation state or mobility of the particles, or an increase in the interparticle interaction, affects their magnetization dynamics - and thus the noise spectrum - as well. Fig. \ref{Fig:Cartoon} shows the theoretical expression of the PSD for different MNP configurations to illustrate such changes in the noise spectrum. Particle clustering (Fig \ref{Fig:Cartoon}.1a) increases the hydrodynamic volume of the individual fluctuators, with an increase in the Brownian fluctuation time as a result. The cutoff frequency shifts towards smaller frequencies, and the $1/f^2$ behavior becomes more pronounced. The N\'{e}el mechanism is the subdominant mechanism in most MNP systems (for the common case of iron oxide MNPs with rather large core diameters $d_c>10$ nm), i.e. the N\'{e}el fluctuations are often orders of magnitude slower than the Brownian rotations. This means that the cutoff frequency also shifts towards lower values upon elimination of the Brownian rotations during immobilization (Fig. \ref{Fig:Cartoon}.1c). \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Cartoon.JPG} \caption{Theoretical noise spectra of different samples. The monodisperse PSDs are displayed for particles with a hydrodynamic diameter of $d_h=130$ nm and a core diameter of $d_c$ = 25 nm. The PSD is flat up to the characteristic cutoff frequency, after which it falls off with $1/f^2$. The diameters of the polydisperse particles follow a lognormal size distribution with parameters $d_h\sim$ ln N($\mu$ = 124 nm, $\sigma$=0.35) and $d_c\sim$ ln N($\mu =$25 nm, $\sigma$=0.45). In the case of polydisperse cluster formation (2a), the PSD still prominently has a $1/f^2$ shape. In the case of the immobilization of the polydisperse particles, however, a typical $1/f$ shape is distinguished due to the extremely broad fluctuation time distribution. The clusters were taken to have 3 times the size of the single particles.} \label{Fig:Cartoon} \end{figure} For a polydisperse sample, the volumes $V_h$ and $V_c$ - and therefore also the fluctuation time $\tau_\mathrm{eff}$ - are distributed according to a distribution $P(\tau_\mathrm{eff})$. The PSD can then be written as a superposition of independent fluctuators (\ref{eq: lorentzian}) \begin{align} S_b^\mathrm{poly}(f)&=\int_0^{\infty} P(\tau_\mathrm{eff})\cdot S_b^{\tau_\mathrm{eff}}(f)d\tau_\mathrm{eff}\\ &=\int_0^{\infty}\int_0^{\infty}P(V_h)P(V_c)\cdot S_b^{V_h,V_c}(f)dV_{h}dV_{c} \label{Eq:size_dist} \end{align} This typically results in a stretching of the cutoff frequency $\nu_\mathrm{cutoff}$ over a certain frequency range, and in a less distinct shape of the PSD, as visible in Fig \ref{Fig:Cartoon}.2b. For a fairly broad size or fluctuation time distribution, the PSD can finally be approximated by a $1/f$ curve\cite{Weissman1988, Leliaert2015}.
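The transition from a single Lorentzian to the $1/f$-like limit can be made concrete with a toy calculation: the Python sketch below draws hydrodynamic diameters from the lognormal distribution used in the cartoon of Fig. \ref{Fig:Cartoon} and averages the resulting unit-amplitude Brownian Lorentzians in the spirit of Eq. (\ref{Eq:size_dist}). All parameters are illustrative assumptions.

\begin{verbatim}
# Toy superposition of Brownian Lorentzians over a lognormal size
# distribution; parameters follow the cartoon figure and are not fits.
import numpy as np

kB, T, eta = 1.380649e-23, 316.0, 1.0e-3
mu, sigma = np.log(124e-9), 0.35            # lognormal parameters
d_h = np.random.default_rng(0).lognormal(mu, sigma, 20000)
tau = 3 * eta * (np.pi / 6 * d_h**3) / (kB * T)

f = np.logspace(-1, 3, 300)
S = np.mean([(4 * t)**-1 / ((np.pi * f)**2 + (2 * t)**-2) for t in tau],
            axis=0)
# For a broad tau distribution the log-log slope approaches -1 (1/f):
slope = np.gradient(np.log(S), np.log(f))
print("log-log slope near 10 Hz:", round(slope[np.argmin(abs(f - 10))], 2))
\end{verbatim}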
Similar to the clustering of monodisperse particles, the clustering of polydisperse particles increases the hydrodynamic size, and the related cutoff frequency shifts towards lower frequency values. The $1/f^2$ falloff dominates in the considered bandwidth, as shown in Fig. \ref{Fig:Cartoon}.2a. The effect of immobilization of a polydisperse sample is even more pronounced, since the N\'{e}el fluctuation time depends exponentially on the core volume. The size distribution is stretched into a broad fluctuation time distribution, and the PSD takes on the distinct $1/f$ shape which is displayed in Fig. \ref{Fig:Cartoon}.2c. We would like to point out that the clustering, aggregation, and immobilization of MNPs are generally not uncorrelated and often occur at the same time. The broad size distributions $P(V_h)$ and $P(V_c)$ also make the quantitative interpretation of the fluctuation dynamics of the magnetic moments less straightforward than for the model curves displayed in Fig. \ref{Fig:Cartoon}. \subsection{Experimental setups} \subsubsection{Magnetic Nanoparticles.} Two commercially available MNP systems have been used for the comparison of the two setups: Resovist particles (an MRI liver contrast agent provided by Meito Sanyo, Japan) with an iron concentration of $c$(Fe)=429.1 mmol/L and Perimag particles (Micromod Partikeltechnologie GmbH, Rostock, Germany) with a plain surface and an iron concentration of $c$(Fe)=644.4 mmol/L. Perimag particles were chosen for the proof-of-concept experiments, with a COOH group coating in the case of the cellular uptake experiment. \subsubsection{OPM tabletop setup for Thermal Noise Magnetometry.} Although the concept of magnetic sensing by use of optical pumping dates back to the 1950s and 1960s, the field of Optically Pumped Magnetometers is still developing steadily. In this technique, an alkali metal gas vapor - often Rb or Cs - is polarized by pumping with a polarized light beam. Once fully polarized, the gas becomes transparent. A magnetic field changes the polarization state of the vapor atoms, which is quantified by measuring the polarization or intensity of a second probing light beam passing through the gas vapor. Today, there are many different OPM configurations, covering a broad range of applications\cite{Sander2020, Bason2022, Deans2021}. Comparisons with SQUID systems have been made\cite{Knappe2010, Marhl2022} and the magnetometers have also found their way into applications such as MNP characterization and imaging\cite{Johnson2012,Dolgovskiy2015,Baffa2019,Jaufenthaler2020a,Jaufenthaler2021}. The developed tabletop setup consists of two Gen-2 QuSpin Zero-Field Magnetometers (QZFMs) (QuSpin Inc., CO, USA)\cite{Shah2013} that are operated in single-axis mode. One is placed near the MNP probe (QZFM1), the other is used for reference measurements (QZFM2). The sample and the QZFMs are placed inside a laboratory MS-2 magnetic shield (Twinleaf LLC, NJ, USA) to minimize the effect of external fields on the MNP dynamics and to ensure the proper operation of the QZFMs. The shield consists of four metal layers and has a shielding factor of $10^6$ as specified by the manufacturer\cite{twinleaf}. The controlling QZFM electronics is placed outside the shield and driven via the program QuSpin ZFM on a laptop, from which the data is also collected with a U6 LabJack (LabJack Corporation, CO, USA) in stream mode. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{BG_both.pdf} \caption{Background power spectral density of the tabletop QZFM setup (a).
The 50 Hz contamination from the power line and its related 150 Hz peak are visible amongst other environmental disturbances, as are the 923 Hz signal from the QZFM modulation and its aliasing peak around 77 Hz. (b) Background of the in-house developed SQUID setup. The 50 Hz and 150 Hz peaks from power line residuals are visible, as well as two peaks around 24 kHz. } \label{Fig:BG_both} \end{figure} The tabletop setup was operated in a conventional lab environment at the Physikalisch-Technische Bundesanstalt in Berlin. The 50 Hz contamination from the power line and its related 150 Hz peak are visible in the measured background spectrum on a noise floor of 200-2000 $\frac{\textrm{fT}^2}{\textrm{Hz}}$ (approx. 10-30 $\frac{\textrm{fT}}{\sqrt{\textrm{Hz}}}$). Among other environmental disturbances, the 923 Hz signal from the QZFM modulation and its aliasing peak around 77 Hz are visible\footnote[1]{These disturbances can be minimized further by placing the shield in an aluminum cage, and grounding both the shield and the cage to the USB ground of the laptop.}. The manufacturer does not specify any product information of the QZFM above 100 Hz, because of the phase shift in the signal above this frequency. However, since our spectral measure is phase-insensitive, we are able to extend the bandwidth of the QZFM beyond the usual 100 Hz frequency range, as explained in Sec. \ref{Sec:freq_resp}. Time signals of up to 20 minutes were recorded at a sample rate of 2 kHz, and the PSDs were calculated and averaged as explained in the method section of Ref. \cite{Everaert2021}. The final displayed spectra are calculated by subtracting the background spectrum PSD$_\mathrm{BG}$ from the MNP spectrum PSD$_\mathrm{MNP}$, i.e. the spectrum of an MNP sample measured in the tabletop setup. To achieve an optimal Rb density for increased sensitivity, the QZFM vapor cell is heated to a temperature of about $160 ^{\circ} $C\cite{Shah2013,Borna2017}. This has an immediate effect on the temperature of the sample, and overheating of the MNPs is prevented by placing a 2 mm thick insulation material between the housing of the QZFM and the sample. A sample temperature of $43.2 \pm 0.5 ^{\circ} $C was measured after a stabilization time of 20 min after placement in the setup. With the insulation material included, the minimal distance between the centre of the vapor cell and the sample is estimated at 8.5 mm. With a sample height of approximately 10 mm, the average distance between the centre of the vapor cell and the particles is 13.5 mm.
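A minimal sketch of this estimation and subtraction procedure is given below. It is an illustration only, not the actual processing code: the time series \texttt{b\_mnp} and \texttt{b\_bg} are random placeholders, and the segment length of the Welch estimator is an assumed choice.

\begin{verbatim}
# Minimal sketch of the PSD estimation and background subtraction used
# for the tabletop data. b_mnp and b_bg stand for recorded field time
# series (in T) with and without the sample; they are placeholders.
import numpy as np
from scipy.signal import welch

fs = 2000.0                                         # sample rate [Hz]
rng = np.random.default_rng(1)
b_mnp = rng.normal(0.0, 1e-12, int(20 * 60 * fs))   # placeholder data
b_bg = rng.normal(0.0, 1e-12, int(20 * 60 * fs))    # placeholder data

# Welch averaging: the segment length sets the lowest resolved frequency.
f, psd_mnp = welch(b_mnp, fs=fs, nperseg=int(10 * fs))
_, psd_bg = welch(b_bg, fs=fs, nperseg=int(10 * fs))
psd_displayed = psd_mnp - psd_bg                    # PSD_MNP - PSD_BG
\end{verbatim}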
\subsubsection{SQUID setup for Thermal Noise Magnetometry.} The in-house developed SQUID setup consists of a superconducting niobium shield\cite{Ackermann2007} in which 6 SQUID sensors with rectangular pickup coils are operated. The sample can be placed inside a warm bore at an average distance of 23.5 mm from the pickup coils. Only one sensor is used for the TNM experiment. The background spectrum of the SQUID setup shows a relatively flat profile with values between 2-3.5 $\frac{fT^2}{Hz}$ (1.5-1.8 $\frac{fT}{\sqrt{Hz}}$). 50 Hz and 150 Hz peaks from power line residuals are visible, as well as two peaks around 24 kHz. \begin{figure} \centering \subfloat[\centering ]{{\includegraphics[width=0.45\textwidth]{setup_numbers.JPG} }}% \qquad \subfloat[\centering ]{{\includegraphics[width=0.45\textwidth]{6channel_picture.jpg} }}% \caption{(a) Tabletop TNM setup based on OPMs. The sample is placed inside the Twinleaf MS-2 shield (1) together with the two QZFMs. The QZFM electronics (2) control the sensors, and a laptop (4) serves to drive the DAQ LabJack U6 (3) in stream mode, run the QZFM electronics, and collect the data. The sample is at an average distance of 13.5 mm from the sensitive volume.\\(b) In-house developed SQUID setup for reference measurements. Inside a superconducting Nb magnetic shield, 6 SQUID sensors are kept at LHe temperature; the sample is placed in a warm bore and kept at an average temperature of $43.0 \pm 0.5 ^{\circ} C$ to match the sample temperature in the tabletop setup. An average distance of 23.5 mm is measured between the pickup coils of the SQUID sensors and the sample. Only one sensor is used for the TNM measurement.} \label{fig:setup} \end{figure} As is clear from Eq. (\ref{eq:brown}) and Eq. (\ref{eq:neel}), the dynamics of the particles is strongly dependent on the temperature. To compare both measurement systems under equal conditions, the sample has been kept at a constant temperature of $43.0 \pm 0.5 ^{\circ} C$ in the SQUID setup to match the sample temperature in the QZFM setup. This was achieved by use of a stable airflow through the warm bore. 13-minute time signals were acquired at a sample rate of 100 kHz and the PSDs were subsequently calculated and averaged. \subsection{Frequency response of the sensors} \label{Sec:freq_resp} \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Freq_resp2.pdf} \caption{Measured frequency responses $H(f)$ of the different sensors. The data was acquired by application of an AC magnetic field with sweeping frequency.} \label{Fig:freq_resp} \end{figure} To increase the QZFM frequency range above 100 Hz, a frequency response profile of the QZFMs has been measured to ensure a quantitatively correct measurement of the power spectra of the MNPs. To this end, the sensors were placed inside a magnetically shielded room at the Physikalisch-Technische Bundesanstalt (named the “Zuse-MSR” in Ref. \cite{Voigt2015}) and a homogeneous AC field with an amplitude of 772 pT was applied by use of a square Helmholtz coil. Its frequency was swept in the range of $[2-600]$ Hz. The amplitude of the QZFM output signal was monitored and analysed in the time domain, after which the response values were averaged and normalized. From this data set, a frequency response profile was calculated for both QZFMs, as shown in Fig. \ref{Fig:freq_resp}. The uncertainty on the response values was calculated from the standard deviation of the different peaks of the excitation and the response at one frequency. Note that the frequency response varies strongly and differently for both QZFMs. It is also not constant within the low frequency measurement range of OPM applications. For comparison, a similar procedure was performed with the SQUID sensor with a field of 751 nT by inserting a small coil into the warm bore. No frequency dependence of the amplitude was detected, as can be seen in Fig. \ref{Fig:freq_resp}. Having determined the frequency response of the QZFMs, the Power Spectral Density $S_B(f)$ of an MNP ensemble was calculated as \begin{equation} S_B(f)=\frac{S_{QZFM}(f)}{H(f)^2} \end{equation} with $S_{QZFM}(f)$ the Power Spectral Densities measured by the QZFMs and $H(f)$ their relative frequency responses as displayed in Fig. \ref{Fig:freq_resp}.
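The compensation step can be sketched in a few lines of Python; the sweep frequencies and response values below are invented placeholders standing in for the measured profiles of Fig. \ref{Fig:freq_resp}.

\begin{verbatim}
# Sketch of the frequency response compensation S_B = S_QZFM / H^2.
# f_meas and h_meas are placeholder values for the measured profile.
import numpy as np

f_meas = np.array([2., 10., 50., 100., 200., 400., 600.])    # [Hz]
h_meas = np.array([1.0, 1.0, 0.97, 0.85, 0.60, 0.30, 0.15])  # assumed

f = np.linspace(2, 550, 500)          # analysis band [Hz]
H = np.interp(f, f_meas, h_meas)      # interpolated response profile
S_qzfm = np.ones_like(f)              # placeholder measured PSD
S_B = S_qzfm / H**2                   # compensated PSD
\end{verbatim}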
\section{Results and discussion} \subsection{Comparison of Power Spectra} Fig. \ref{Fig:Comparison} (a) shows the Power Spectral Densities of both MNP systems measured in the SQUID setup. The spectra displayed are the noise spectra PSD$_\mathrm{MNP}$ measured with MNPs, with the background spectra PSD$_\mathrm{BG}$ subtracted. The raw spectra can be found in the Supplementary Material. The PSD of the Resovist system is relatively flat up to a cutoff frequency of about 90 Hz, after which the PSD starts to decrease continuously due to the size distribution of the particles. On the other hand, the Perimag system shows higher power at lower frequencies, with a cutoff frequency located at values lower than the displayed bandwidth. This is a result of the slower magnetization dynamics due to the large hydrodynamic size of the Perimag particles with a broad size distribution, as explained by Eq. (\ref{Eq:size_dist}). If the PSDs are normalized to the iron amount in the samples (see the Supplementary Material), a crossing occurs around 200 Hz. \begin{figure*}[h!] \centering \includegraphics[width=1\textwidth]{Comparison_and_SNR.pdf} \caption{Measured SQUID profiles of the two MNP systems (a) and measured and compensated QZFM profiles of the two MNP systems (b). For clarity, the points in the QZFM spectra at the unstable background frequencies (30, 50, 330, 375 Hz) have been plotted separately. These correspond to the peaks in the background spectrum of the tabletop setup in Fig. \ref{Fig:BG_both}. Signal-to-noise ratios of both MNP systems in both setups (c). Qualitative comparison of the MNP spectra measured in both setups (d). The SQUID spectra of (a) have been rescaled to match the power of the OPM spectra S$_B$ in (b) at 80 Hz. Apart from the unstable background peaks in the OPM spectra, a very good agreement between the measurements obtained in both setups is visible.} \label{Fig:Comparison} \end{figure*} The same MNP systems were measured in the tabletop setup, where a bandwidth up to 550 Hz can be reached. The spectra before ($S_{QZFM}(f)$) and after ($S_B(f)$) the frequency response correction are displayed in Fig. \ref{Fig:Comparison} (b). Due to the reduced sample-sensor distance, the MNP signal is higher in the tabletop setup than in the SQUID system. However, the tabletop setup is less sensitive than the SQUID setup. This is clear from Fig. \ref{Fig:Comparison} (c), which shows the signal-to-noise ratio (SNR) as a function of frequency for both setups and both MNP systems: \begin{equation} SNR(f)=\frac{PSD_{MNP}(f)-PSD_{BG}(f)}{PSD_{BG}(f)} \label{Eq:SNR} \end{equation} The tabletop setup has a steeper SNR loss than the SQUID setup above 100 Hz. Not only the signal, but also the noise is amplified as a result of the frequency response compensation. The excellent performance of the SQUID setup becomes visible in the SNR plots. Both MNP systems have an SNR up to one order of magnitude higher in the SQUID setup compared to the tabletop OPM setup, even though the signal is two orders of magnitude lower. SNR = 1 marks the limit where environmental and sensor noise (that is, unwanted noise) has the same amplitude as the MNP noise (that is, wanted noise). Although this is not a strict limit of acceptance, it can still serve as a benchmark to validate the SNR of the MNP systems and the setup. Only Perimag will be used for the proof-of-concept experiments, since the SNR of Resovist is relatively low in the lower frequency range in the tabletop setup. Moreover, the SNR of Perimag above 400 Hz also crosses the SNR=1 limit. For samples with lower concentrations, such as those used in the proof-of-concept experiments, this crossing will occur at even lower frequencies.
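As a small illustration of Eq. (\ref{Eq:SNR}) and of reading off such a crossing, consider the following sketch with invented placeholder spectra.

\begin{verbatim}
# Sketch of the SNR of Eq. (SNR) and of locating the SNR = 1 crossing;
# both spectra below are invented placeholders.
import numpy as np

f = np.logspace(0, np.log10(550), 300)
psd_bg = 500e-30 * np.ones_like(f)              # assumed flat background
psd_mnp = psd_bg + 5e-26 / (1 + (f / 30.0)**2)  # assumed MNP spectrum

snr = (psd_mnp - psd_bg) / psd_bg
crossing = f[np.argmin(np.abs(snr - 1.0))]
print(f"SNR = 1 near {crossing:.0f} Hz")
\end{verbatim}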
The measurements are directly compared in Fig. \ref{Fig:Comparison} (d) by rescaling the SQUID spectra to match the OPM spectra at 80 Hz. As the curves of the two particle systems measured in the two setups overlap nicely, we conclude that the compensation for the frequency response profile of the QZFM is a valid approach. Despite their loss in sensitivity above 100 Hz, the QZFMs recover a quantitatively correct spectrum. For MNP samples with high power in the lower frequency range, such as the Perimag and Resovist samples used as example MNP systems here, both measurement systems are thus equally suitable. For smaller MNP systems with dynamics in the higher frequency range, the tabletop setup might not be sufficient, both in bandwidth and in sensitivity. However, these particle systems could still be tuned to fall within the QZFM bandwidth by increasing the viscosity of the suspension, as proposed in Ref. \cite{Leliaert2017}. To further increase the bandwidth, both setups still have some potential towards lower frequencies. Since both magnetic shields (i.e. the superconducting shield integrated into the dewar of the SQUID system and the mu-metal Twinleaf shield of the tabletop setup) are relatively small, the low-frequency shielding is very effective. Longer measurement times then lead to larger time intervals for the Fourier transformation and averaging procedure, reaching lower frequencies in the spectra. For an increase towards higher frequencies, the SQUID setup could measure at higher sample rates. However, the SNR already gets small towards 50 kHz, with relatively little information gain for these MNP systems. The current tabletop setup is limited to 550 Hz due to the frequency of the modulation signal of the QZFMs. \subsection{Monitoring of clustering processes} Since the TNM signal scales quadratically with the volume of the noise sources\cite{Everaert2021}, this technique is particularly suited to monitor clustering processes of magnetic nanoparticles. The absence of any driving field during the measurement also excludes any undesired effects induced by an external excitation. Moreover, the good performance of the QZFM at lower frequencies favours processes which tend to slow down the magnetization dynamics of the sample. An OPM based TNM setup thus offers a broadly applicable tool to monitor the clustering of MNPs. As a proof of concept, we report on the monitoring of three such processes, measured with TNM in the described tabletop setup: \begin{enumerate} \item Enforced aggregation of Perimag particles by addition of ethanol \item Formation of photopolymer structures in a Perimag sample by exposure to UV light \item Cellular uptake of Perimag particles by THP-1 cells \end{enumerate} \subsubsection{Enforced aggregation of Perimag particles by addition of ethanol} In a first example, the aggregation of Perimag particles is enforced by adding ethanol to the sample. A 200 $\mu$l Perimag plain solution with an iron concentration of c(Fe)=466.4 mmol/L was diluted with 200 $\mu$l ethanol. The stabilizing dextran surfaces of the particles are dissolved, the attractive forces between the magnetic cores prevail, and the system aggregates. The sedimentation of the aggregates due to gravity was visually detectable after several seconds.
\begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Ethanol_photopolymer2.pdf} \caption{(a) Power Spectral Densities of Perimag particles before and after the addition of ethanol. A lognormal size distribution was fitted to the spectrum of the MNP sample before aggregation. After the addition of ethanol, the magnetic cores aggregate and sediment due to gravity. Due to the extremely broad distribution of the N\'{e}el fluctuation times, the PSD has the distinct $1/f$ shape shown in Fig \ref{Fig:Cartoon}.2c. (b) Power Spectral Densities of Perimag particles before and after polymer formation. A gradual immobilization of the particles is induced during UV curing and full immobilization is reached after 15 minutes of exposure time.} \label{Fig:ethanol_polymer} \end{figure} The influence of aggregation on the noise spectrum of Perimag is visible in Fig. \ref{Fig:ethanol_polymer} (a), where a spectrum before and after aggregation is displayed. Since the geometry of the sample is not conserved due to a change in the spatial distribution of magnetic material, the spectra can only be compared qualitatively. A lognormal size distribution logN($\mu$=72.6 $\pm$5 nm, $\sigma$=0.82 $\pm$ 0.3) with an average hydrodynamic diameter of 101 $\pm$ 26.0 nm was fitted to the curve before the addition of ethanol. Given the limited bandwidth of 400 Hz, these parameters match the manufacturer's average diameter of 130 nm reasonably well. After the addition of ethanol, the aggregates sediment and are only subject to the N\'{e}el mechanism. Their noise curve is dominated by $1/f$ noise. The slow magnetization dynamics of the aggregated cores and the broad size distribution of their fluctuation times are a distinctive signature of this process, which was sketched in Fig \ref{Fig:Cartoon}.2c.
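A fit of this kind can be reproduced in spirit by least-squares fitting the polydisperse Brownian model of Eq. (\ref{Eq:size_dist}) to a measured spectrum. The sketch below is an illustration under simplifying assumptions (Brownian-only dynamics, a simple quadrature over the size distribution) and uses synthetic placeholder data rather than the measured curves.

\begin{verbatim}
# Sketch of fitting a lognormal size distribution to a measured PSD.
# f_data / psd_data are synthetic placeholders for measured spectra.
import numpy as np
from scipy.optimize import curve_fit

kB, T, eta = 1.380649e-23, 316.0, 1.0e-3

def model(f, mu, sigma, amp):
    u = np.linspace(-3, 3, 61)                  # quadrature nodes
    d = np.exp(mu + sigma * u)                  # diameters [m]
    w = np.exp(-0.5 * u**2); w /= w.sum()       # lognormal weights
    tau = 3 * eta * (np.pi / 6 * d**3) / (kB * T)
    lor = (4 * tau[:, None])**-1 \
        / ((np.pi * f)**2 + (2 * tau[:, None])**-2)
    return amp * (w[:, None] * lor).sum(axis=0)

f_data = np.logspace(0, np.log10(400), 100)
psd_data = model(f_data, np.log(101e-9), 0.8, 1.0)       # placeholder
popt, _ = curve_fit(model, f_data, psd_data, p0=(np.log(80e-9), 0.5, 1.0))
print(f"fitted median diameter: {np.exp(popt[0]) * 1e9:.0f} nm")
\end{verbatim}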
\subsubsection{Formation of photopolymer structures in a Perimag sample by exposure to UV light} Photopolymer resins are popular materials in additive manufacturing. They form a highly controllable system to gradually solidify suspensions. In combination with magnetic nanoparticles, they are of particular interest for precise phantom design and fabrication\cite{Loewa2021,Nordhoff2022} to evaluate Magnetic Particle Imaging scanners\cite{Loewa2019,Dutz2019}. In our experiment, a photopolymer was mixed with Perimag particles to mimic the gradual change in mobility of the particles when being embedded in the target tissue. First, the Perimag-photopolymer mixture was prepared by adding 100 $\mu$L Perimag plain with an iron concentration of 644.4 mmol/L to 100 $\mu$l of a photopolymer base material \footnote[2]{Perfactory acrylic R5 red from EnvisionTEC Inc., composed of acrylic acid esters and a photoinitiator (0.1 $-$ 5 \text{\%}). The Perfactory Acryl R5 resin has a density of 1.12 $-$ 1.13 g/cm$^3$.} in a 2 ml Eppendorf tube. A homogeneous spatial distribution of the particles in the base material was ensured by sonication with an ultrasound sonifier (UP200Ht, Hielscher Electronics, Germany). 120 $\mu$l of the mixture was used as a sample and a first spectrum was measured before UV exposure. The sample was then exposed to UV light in a UVACUBE 2000, first for 5 min and subsequently for a further 10 min. Fig. \ref{Fig:ethanol_polymer} (b) shows the measured spectra of the particles in the base material before exposure, and after 5 and 15 min total exposure time. Since only 60 $\mu$l of magnetic material has been used in this experiment, the TNM signal is lower than in the spectra shown previously. Therefore, we argue that the falloff above 200 Hz in the two exposure spectra is an artefact due to insufficient SNR and not the physical shape of the spectra, which we expect to decrease linearly on the log-log scale. Before UV exposure, the particles rotate freely in the highly viscous base material. Compared to the water suspended particles in Fig. \ref{Fig:ethanol_polymer} (a), their Brownian rotations are slower. The related cutoff frequency is shifted to lower frequencies outside the measurement window, and only the straight tail of the PSD is visible. Brownian movement of the particles is further excluded by crosslink formation as the MNPs get enclosed in small polymer cavities during UV curing. The effective viscosity increases towards an eventual full immobilization. The Brownian fluctuations gradually slow down, the related Brownian cutoff frequency moves closer towards DC values, and N\'{e}el fluctuations become dominant. After 15 minutes of exposure time, the PSD reaches the limiting $1/f$ shape in Fig. \ref{Fig:ethanol_polymer} (b). All particles are immobilized, as the PSD is directly comparable with the PSD of the aggregates in Fig. \ref{Fig:ethanol_polymer} (a). During UV curing, the volume and geometry of the sample are conserved and the spectra can be compared quantitatively. This allows us to define an effective immobilization degree based on the PSD value at a stable low frequency after each exposure step. A normalization of the PSD values at 1.6 Hz to the fully immobilized state gives an immobilization of 50\% before UV exposure and 72\% after five minutes of exposure time. This experiment therefore shows the potential of TNM to be used for continuous monitoring during MNP clustering and immobilization processes.
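The immobilization degree defined above amounts to a simple normalization; the sketch below reproduces it with the relative values quoted in the text.

\begin{verbatim}
# Effective immobilization degree: PSD values at a stable low frequency
# (1.6 Hz) normalized to the fully immobilized state. The numbers are
# the relative values quoted in the text.
psd_1p6Hz = {"0 min": 0.50, "5 min": 0.72, "15 min": 1.00}
full = psd_1p6Hz["15 min"]
for step, value in psd_1p6Hz.items():
    print(f"{step}: immobilization degree {value / full:.0%}")
\end{verbatim}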
\subsubsection{Cellular uptake of Perimag particles by THP-1 cells} MNPs are known to form clusters during cellular uptake, which impacts their magnetization dynamics\cite{Loewa2013,Etheridge2014,DiCorato2014,Poller2016,Teeman2019}. For their usage in biomedical applications such as Magnetic Particle Imaging (MPI) and hyperthermia treatment, the change in their magnetic state can heavily influence their performance\cite{Bender2018b,Cabrera2018,Paysen2020,Remmo2022}. However, the change in the thermal noise of the MNPs due to cellular uptake has so far been unknown. Especially the absence of an external magnetic excitation during a TNM experiment can be seen as an advantage in the determination of the precise clustering mechanism, since cluster formation and aggregation due to an external perturbation are eliminated in this technique. In a third experiment, the noise profile of COOH coated Perimag particles is measured after cellular uptake by THP-1 cells in the tabletop setup and compared with the pre-uptake water suspended system. \begin{figure} \centering \subfloat[\centering ]{{\includegraphics[width=0.45\textwidth]{PBS_without.png} }}% \qquad \subfloat[\centering ]{{\includegraphics[width=0.45\textwidth]{PBS_with.png} }}% \caption{THP-1 cells without (a) and with (b) addition of Perimag particles after 24 hours of incubation time. Iron in the sample is visualized by Prussian Blue staining. The particles are to a great extent taken up by the cells. Excess particles outside the cells form aggregates, attached to the outer walls of the cells.} \label{Fig:PBS} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Cells.pdf} \caption{Power Spectral Densities of COOH coated Perimag particles before and after cellular uptake by THP-1 cells. The influence of the cell medium on the dynamics of the particles is very limited. Due to cluster formation and partial immobilization of the particles after cellular uptake, the power in the lower frequency range increases. Full immobilization is however excluded, since the distinctive $1/f$ behaviour of Fig. \ref{Fig:ethanol_polymer} is not reached.} \label{Fig:cells} \end{figure} 200 $\mu$L COOH coated Perimag particles with a concentration of c(Fe)=244.7 mmol/L were incubated with $2\cdot 10^7$ THP-1 cells in 800 $\mu$l of RPMI +1$\%$ FCS medium for 24 hours. Fig. \ref{Fig:PBS} shows the cells without and with particles after the incubation (undiluted sample), where iron is visualized by Prussian blue staining. From these pictures, it is clear that the magnetic nanoparticles are taken up by the cells to a great extent. Moreover, MNPs in the surrounding solution also form aggregates. Three different samples have been measured in the tabletop TNM setup and are displayed in Fig. \ref{Fig:cells} with their respective colours: \begin{enumerate}[label=\Alph*] \item Perimag particles in water suspension (blue) \item Perimag particles in the cell medium (green) \item Perimag particles after 24 hours of incubation time with THP-1 cells (pink). \end{enumerate} The PSDs of the particles in the water suspension and in the cell medium show only quantitative differences, which are due to the difference in concentration of the samples. The influence of the cell medium on the dynamics of the particles is - at least in the measured frequency range - very limited. After cellular uptake of the particles by the cells, a higher relative noise power in the lower frequency regime is measured, and the faster fluctuations are less present in the noise density. The broad distribution of cutoff frequencies clearly shifts towards lower values, which can be attributed to the formation of clusters and partial immobilization. Full immobilization can however be excluded, since the PSD does not fall off with $1/f$ as in Fig. \ref{Fig:ethanol_polymer} (a) and (b). A repetition measurement of Sample C was carried out after 10 days (orange curve). Apart from an increased SNR - which is related to further cell sedimentation and a decreased average sample-sensor distance - no qualitative differences were detectable. The magnetization dynamics of the particles in the well-aged cell sample shows no notable differences from that of the sample directly after the 24-hour incubation. Two noise curves can be compared quantitatively, namely those of the particles suspended in the cell medium and those of the particles directly after the incubation time, because the MNP concentration and sample volume were similar. A continuous probing of the noise power at e.g. 10 Hz could quantify the cellular uptake during the incubation process.
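Such a continuous probing could be implemented along the following lines; this sketch uses a random placeholder time series and an assumed one-minute window, and simply tracks the Welch PSD in a narrow band around 10 Hz over consecutive windows.

\begin{verbatim}
# Sketch of continuous probing of the noise power near 10 Hz during
# incubation; b is a placeholder time series, the window is assumed.
import numpy as np
from scipy.signal import welch

fs, window_s = 2000.0, 60.0
b = np.random.default_rng(2).normal(0, 1e-12, int(30 * 60 * fs))
n = int(window_s * fs)
for i in range(len(b) // n):
    f, psd = welch(b[i * n:(i + 1) * n], fs=fs, nperseg=int(10 * fs))
    band = psd[(f > 8) & (f < 12)].mean()      # power near 10 Hz
    if i < 3:
        print(f"window {i}: PSD(10 Hz) ~ {band:.2e} T^2/Hz")
\end{verbatim}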
\section{Conclusion and outlook} The OPM based tabletop setup offers a flexible measurement unit to track changes in the thermal noise spectra of magnetic nanosystems. This flexibility provides the potential for use in processes and industrial applications beyond biomedicine, such as 3D additive manufacturing. The power spectral densities of two commercially available MNP systems were compared with TNM measurements in a SQUID setup and a good agreement of the noise curves was found. These are the first thermal noise spectra of MNP ensembles measured with OPMs. Moreover, the proposed setup is particularly suited to monitor clustering and immobilization processes of the particles over time, due to the excellent performance of the OPMs in the lower frequency regime and the quadratic dependence of the TNM signal on the individual fluctuators' volumes. Three proof-of-concept experiments were performed to show the effect of such processes on the noise spectra of the particles. The immobilization of the particles induces a distinct $1/f$ dependency in the power spectral density, which results from the broad distribution of the N\'{e}el fluctuation times, as is visible from the gradual formation of photopolymer structures in an MNP sample. In contrast, the clustering of the particles when taken up by THP-1 cells slows down the Brownian fluctuations, due to the increased volume of the fluctuators, with a shift of the corresponding cutoff frequency towards lower frequencies as a result. Presently, for a detailed MNP characterization by TNM, the SQUID setup remains preferred because of its broad bandwidth, which allows for the mapping of a broad range of MNP systems, and its higher sensitivity, which facilitates the investigation of samples with lower concentrations. However, an OPM sensor designed specifically for TNM could be better suited than the broadly applicable commercial magnetometers used here. By tuning the modulation frequency of the sensors, an optimum for the trade-off between sensitivity and bandwidth can be defined for each different MNP system. Further work will be dedicated to this. \section*{Acknowledgments} This work was supported by the German Research Foundation (DFG) through the Project ‘‘MagNoise: Establishing Thermal Noise Magnetometry for Magnetic Nanoparticle Characterization’’ under Grant FKZ WI4230/3-1. J. L. was supported by the Fonds Wetenschappelijk Onderzoek (FWO-Vlaanderen) with senior postdoctoral research fellowship No. 12W7622N. \bibliographystyle{unsrt}
\section{Introduction} In \cite{LM}, Lanier and Margalit provided a comprehensive look at the normal closures of elements of the mapping class group of an orientable surface. This paper aims to provide analogous results for periodic elements of the mapping class group of a non-orientable surface. It is known that the mapping class group of a closed surface is generated by elements of finite order. This fact was first proved for an orientable surface by Maclachlan \cite{McL2}, who used the result in his proof of simple connectivity of the moduli space of Riemann surfaces. Since then many papers have been devoted to torsion generators of mapping class groups. For a closed orientable surface $S_g$ of genus $g\ge 3$, McCarthy-Papadopoulos \cite{MP} and Korkmaz \cite{MK2} found examples of an element $f\in\mathcal{M}(S_g)$ of finite order ($2$ and $4g+2$, respectively) whose normal closure in $\mathcal{M}(S_g)$ is the entire group. We say that such an $f$ {\it normally generates} $\mathcal{M}(S_g)$. The result of Lanier and Margalit is that any non-trivial periodic mapping class other than the hyperelliptic involution is a normal generator of $\mathcal{M}(S_g)$. In the case of a closed non-orientable surface $N_g$ it is known that $\mathcal{M}(N_g)$ is generated by involutions \cite{Sz2}. The author of this paper, together with Szepietowski \cite{LSz}, recently proved that $\mathcal{M}(N_g)$ is normally generated by one element of infinite order for $g\ge 7$. While in the orientable case the mapping class group is generated by Dehn twists, as shown by Dehn and Lickorish \cite{Lick}, in the non-orientable case Dehn twists generate a subgroup of index 2 known as the twist subgroup $\mathcal{T}(N_g)$ \cite{Lick1}. Similarly, whereas in the orientable case the commutator subgroup of the mapping class group $\mathcal{M}(S_g)$ is equal to the entire group for $g\geq 3$ \cite{Pow}, in our case the commutator subgroup of $\mathcal{M}(N_g)$ is equal to the twist subgroup for $g\geq 7$ \cite{MK}. Using methods similar to the ones employed in \cite{LM}, the question we answer is whether the normal closure of a given element contains the twist subgroup $\mathcal{T}(N_g)$. If the answer is positive, this still leaves two possibilities: the normal closure can be either the twist subgroup itself or the entire mapping class group. We learn to distinguish between these cases and indicate several finite-order normal generators of $\mathcal{M}(N_g)$. Another difference between the orientable and non-orientable cases is the abundance of involutions, that is, elements of order 2. Orientation-preserving involutions with fixed points on a given orientable surface form a 1-parameter family and are determined up to conjugacy by the genus of the quotient orbifold \cite{Dug}. In contrast, involutions on non-orientable surfaces are determined by up to six invariants \cite{Dug}, and we look closely at each case to formulate the necessary and sufficient conditions for their normal closures to contain the twist subgroup. With that in mind, it is no surprise that our results come in two main cases: for periodic mapping classes of order greater than 2 and for involutions. But before we move on to the main results, a few words about notation.
Any periodic mapping class $f\in\mathcal{M}(N_g)$ is represented by a homeomorphism whose order is equal to that of $f$; this was proved by Kerckhoff \cite{Kerck} in the orientable case, and the non-orientable case follows by passing to the orientable double cover (see the last two paragraphs of Section IV, page 256, in \cite{Kerck}). Moreover, this homeomorphism can be chosen to be an isometry with respect to some hyperbolic metric on $N_g$ and is unique up to conjugacy in the group of homeomorphisms of $N_g$. We refer to any such representative as a standard representative of $f$ and denote it by $\phi$. The set of fixed points of $\phi$ consists of isolated fixed points and pointwise-fixed simple closed curves, which we will refer to as ovals. Note that ovals appear only in the case of involutions; elements of greater finite order have only isolated fixed points. Let $r$ denote the number of isolated fixed points of $\phi$, $k$ the number of ovals, $k_{+}$ the number of two-sided ovals and $k_{-}$ the number of one-sided ovals. Now we are ready to formulate the main results. \begin{thm}\label{noninv} Let $g\geq 5$ and let $f$ be a periodic element of $\mathcal{M}(N_g)$ of order greater than 2. The normal closure of $f$ contains the commutator subgroup of $\mathcal{M}(N_g)$. \end{thm} \begin{cor}\label{ng} Let $g\geq 7$ and let $f$ be a periodic element of $\mathcal{M}(N_g)$ of order greater than 2. The normal closure of $f$ is either $\mathcal{T}(N_g)$ or $\mathcal{M}(N_g)$, the latter if and only if $f\notin \mathcal{T}(N_g)$. \end{cor} \begin{rem} Thanks to Corollary \ref{ng}, we can give an example of a torsion normal generator of $\mathcal{M}(N_g)$ for any $g\geq 7$. Let us represent $N_g$ as a sphere with $g$ crosscaps. If $g$ is even, one element normally generating $\mathcal{M}(N_g)$ is a rotation of order $g$ of the sphere with $g$ crosscaps spaced evenly along the equator. If $g$ is odd, one such element is a rotation of order $g-1$ of the sphere with $g-1$ crosscaps spaced evenly along the equator and one stationary crosscap at one of the poles (see Figure \ref{figexamples}). \sloppy By computing the determinant of the induced automorphism of $H_1(N_g;\mathbb{R})$ it can be shown that these elements are not in the twist subgroup (see Theorem \ref{thm:twsb} in Section \ref{sec:prem}). We sketch the computation below; the details are left as an exercise to the reader. \end{rem} \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.35]{examples} \caption{Torsion normal generators of $\mathcal{M}(N_g)$ for even and odd $g$, respectively.} \label{figexamples}\end{center}\end{figure}
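The computation can be sketched as follows, under the assumption (immediate for the rotations above) that the induced map permutes the homology classes of the crosscap core curves in the obvious way. These classes $[c_1],\dots,[c_g]$ span $H_1(N_g;\mathbb{R})$ subject to the single relation $[c_1]+\dots+[c_g]=0$, and the relation vector is fixed by any permutation of the $[c_i]$. For even $g$ the rotation acts as a $g$-cycle, whose determinant on $\mathbb{R}^g$ is $(-1)^{g-1}$; since it fixes the relation vector, its determinant on $H_1(N_g;\mathbb{R})$ is \begin{equation*} \det(f_*)=(-1)^{g-1}=-1. \end{equation*} For odd $g$ the rotation acts as a $(g-1)$-cycle fixing the class of the polar crosscap, and the same argument gives $\det(f_*)=(-1)^{g-2}=-1$. In both cases Theorem \ref{thm:twsb} yields $f\notin\mathcal{T}(N_g)$.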
\begin{rem} For $2\leq g\leq 6$ the group $\mathcal{M}(N_g)$ is not normally generated by one element, because its abelianisation is not cyclic \cite{MK}. \end{rem} \begin{thm}\label{inv} Let $g\geq 5$ and let $f\in\mathcal{M}(N_g)$ be an involution, $\phi$ its standard representative and $r$, $k$, $k_-$, $k_+$ the parameters defined above. The normal closure of $f$ in $\mathcal{M}(N_g)$ contains the commutator subgroup of $\mathcal{M}(N_g)$ if and only if one of the following conditions holds. \begin{enumerate} \item $r>0$, $k=0$ and $g-r\geq 4$. \item $k>0$, $r+k_->0$ and either: \begin{enumerate} \item the orbifold $N_g/\left\langle \phi \right\rangle$ is non-orientable, or \item the orbifold $N_g/\left\langle \phi \right\rangle$ is orientable and $g-r-2k\geq 2$. \end{enumerate} \item $r=k_{-}=0$ and either \begin{enumerate} \item the set of fixed points of $\phi$ is separating and $g-2k\geq 4$, or \item the set of fixed points of $\phi$ is non-separating. \end{enumerate} \end{enumerate} \end{thm} \begin{thm}\label{thm:normgens} For any $g\geq 7$ there is an involution which normally generates $\mathcal{M}(N_g)$. \end{thm} Our results can be interpreted in terms of covers of the moduli space of non-orientable Klein surfaces. A Klein surface is a surface with a dianalytic structure \cite{AG}. Similarly as in the case of Riemann surfaces, the moduli space $\mathfrak{M}_g$ of Klein surfaces homeomorphic to $N_g$ can be defined as the orbit space of a properly discontinuous action of $\mathcal{M}(N_g)$ on a Teichm\"uller space (see \cite{Sz2} and references therein). The moduli space $\mathfrak{M}_g$ is an orbifold whose orbifold fundamental group is $\mathcal{M}(N_g)$, and whose orbifold points correspond to periodic elements of $\mathcal{M}(N_g)$. Since normal subgroups of $\mathcal{M}(N_g)$ correspond to regular orbifold covers of $\mathfrak{M}_g$, Theorem \ref{noninv} says that any such cover of degree greater than $2$ can have orbifold points only of order $2$. This paper is organised as follows: in \nameref{sec:prem} we review basic concepts about mapping class groups, and in Section \ref{sec:stds} we define standard pairs of curves and show how they relate to the main theorems. Afterwards, in Section \ref{sec:nec}, we introduce the language of NEC groups for actions of finite groups on surfaces. We use the notions introduced so far to prove Theorem \ref{noninv} in Section \ref{sec:proofnoninv}, first in the case when the action of $\left\langle \phi \right\rangle$ on $N_g$ is free, and then in the remaining cases. In Section \ref{sec:class} we discuss how to construct involutions on surfaces and use the introduced surgeries to list all possible involutions on a surface $N_g$, based on the work of Dugger \cite{Dug}. Finally, in Section \ref{sec:proofinv} we demonstrate how to show that the normal closure of a map does not contain the twist subgroup, using the induced action on homology groups with coefficients in $\mathbb{Z}_2$, and prove Theorems \ref{inv} and \ref{thm:normgens}. \section{Preliminaries}\label{sec:prem} The non-orientable surface $N_g$ can be represented as an orientable surface of genus $l$ with $s$ crosscaps, where $2l+s=g$ and $s\geq 1$. In all figures of this paper shaded crossed discs represent crosscaps, meaning that the interiors of those discs are removed and antipodal points on their boundaries are identified. The \emph{mapping class group} of the surface $N_g$, $\mathcal{M}(N_g)$, is the quotient of the group of all self-homeomorphisms of $N_g$ by the subgroup of self-homeomorphisms isotopic to the identity. A \emph{curve} on a surface is a simple closed curve. By an abuse of notation we do not distinguish a curve from its isotopy class or image. We call a curve two-sided if its regular neighbourhood is an annulus, and one-sided if it is a M\"{o}bius strip. For two curves $c$ and $d$ we take $i(c,d)$ to be their geometric intersection number, that is, $i(c,d)=\min\{|\gamma\cap\delta|: \gamma\in c, \delta\in d\}$. About a two-sided curve $a$ on $N_g$ we define the Dehn twist $T_a$ (denoting both the map and its mapping class). Because we are working on a non-orientable surface, $T_a$ can be one of two twists, depending on which orientation of a regular neighbourhood of $a$ we choose. The Dehn twist in the opposite direction is then $T_a^{-1}$.
For any $f\in\mathcal{M}(N_g)$ we have \begin{equation*} fT_cf^{-1}=T_{f(c)}^\epsilon \end{equation*} where $\epsilon\in\{-1,1\}$, again depending on which orientation of a regular neighbourhood of $f(c)$ we choose. The subgroup of $\mathcal{M}(N_g)$ generated by all Dehn twists is called the twist subgroup and denoted by $\mathcal{T}(N_g)$. The following lemma is proven in \cite{Sz1}: \begin{lem}\label{commute} For $g\geq 5$, let $c$, $d$ be two-sided simple closed curves on $N_{g}$ such that $i(c,d)=1$ and let $f:\mathcal{M}(N_g)\rightarrow G$ be a homomorphism. If the elements $f({T_c})$ and $f(T_d)$ commute in $G$, the image of $\mathcal{M}(N_g)$ under $f$ is abelian. \end{lem} \begin{cor}\label{cor} Let $c$, $d$ and $f$ be as in the lemma above, with $f(T_c)$ and $f(T_d)$ commuting in $G$. Then the kernel of $f$ contains the commutator subgroup of $\mathcal{M}(N_g)$. \end{cor} \begin{thm}\label{twist}\cite{MK} For $g\geq 7$, $[\mathcal{M}(N_{g}),\mathcal{M}(N_{g})]=\mathcal{T}(N_{g})$. For $g=5,6$ the subgroup $[\mathcal{M}(N_{g}),\mathcal{M}(N_{g})]$ has index 4 in $\mathcal{M}(N_{g})$. \end{thm} Now consider the action of $\mathcal{M}(N_g)$ on $H_1(N_g;\mathbb{R})$. For $f\in\mathcal{M}(N_g)$ and the induced homomorphism $f_*:H_1(N_g;\mathbb{R})\rightarrow H_1(N_g;\mathbb{R})$, we define the \emph{determinant homomorphism} $D:\mathcal{M}(N_g)\rightarrow\{-1,1\}$ by $D(f)=\det(f_*)$. \begin{thm}\label{thm:twsb}\cite{MS} For $f\in \mathcal{M}(N_g)$, $\det(f_*)=1$ if and only if $f\in\mathcal{T}(N_g)$. \end{thm} We immediately conclude that, provided the normal closure of $f$ in $\mathcal{M}(N_g)$ contains the twist subgroup, the normal closure of $f$ is equal to $\mathcal{M}(N_g)$ if and only if $\det(f_*)=-1$. This fact allows us to look for normal generators of $\mathcal{M}(N_g)$. \section{Standard pairs of curves}\label{sec:stds} The following two lemmas are adaptations of Lemmas 2.1, 2.2 and 2.3 of \cite{LM}, proven there for orientable surfaces, with minor changes in assumptions and proofs. \begin{lem}\label{L2.1} For $g\geq 5$, let $c$ and $d$ be two-sided non-separating curves in $N_{g}$ with $i(c,d)=1$. Then the normal closure of $T_{c}T_{d}$ in $\mathcal{M}(N_{g})$ is equal to the commutator subgroup of $\mathcal{M}(N_{g})$. \end{lem} \begin{proof} Since $N_g\setminus c$ and $N_g\setminus d$ are both connected and non-orientable, there exists an element $h\in \mathcal{M}(N_{g})$ such that $h(c)=d$. Then $h T_c h^{-1}=T_d^{\pm 1}$. Because $T_d$ is conjugate to $T_d^{-1}$ in $\mathcal{M}(N_{g})$ (Lemma 2.4 in \cite{MK}), we can take $h$ to be such that $h T_c h^{-1}=T_d^{-1}$. It follows that the element $T_{c}T_{d}=T_c h T_c^{-1} h^{-1}$ lies in the commutator subgroup. Since the commutator subgroup is a normal subgroup, it contains the normal closure of $T_{c}T_{d}$. Now let $H$ denote the normal closure of $T_{c}T_{d}$ in $\mathcal{M}(N_{g})$ and let $p:\mathcal{M}(N_{g})\rightarrow\mathcal{M}(N_{g})/H$ be the canonical projection map. It is easy to see that $p(T_{c})=p(T_{d})^{-1}$; in particular, $p(T_{c})$ and $p(T_{d})$ commute. By Corollary \ref{cor} the kernel of $p$ contains the commutator subgroup of $\mathcal{M}(N_{g})$. Since $\ker p=H$, the proof is complete. \end{proof} \begin{defi} Let $c$, $d$ be a pair of non-separating two-sided simple closed curves on a surface $N_{g}$. We will say that $c$ and $d$ form \begin{itemize} \item a type 1 standard pair if $i(c,d)=1$; \item a type 2 standard pair if $i(c,d)=0$ and the complement $N_{g}\setminus(c\cup d)$ is connected and non-orientable.
\end{itemize} \end{defi} \begin{lem}\label{L2.2} For $g\geq5$, let $f\in\mathcal{M}(N_{g})$. Suppose that there is a curve $c$ in $N_{g}$ such that $(c,f(c))$ form a standard pair of type 1 or 2. Then the normal closure of $f$ in $\mathcal{M}(N_{g})$ contains the commutator subgroup of $\mathcal{M}(N_{g})$. \end{lem} \begin{proof} Let $T_c$ and $T_{f(c)}$ be such that $T_{f(c)}=fT_c^{-1}f^{-1}$. Suppose first that $c$ and $f(c)$ form a type 1 standard pair. Since the commutator $[T_{c},f]$ is equal to the product of $T_{c}fT_{c}^{-1}$ and $f^{-1}$, it lies in the normal closure of $f$. On the other hand, $T_{c}fT_{c}^{-1}f^{-1}$ is equal to $T_{c}T_{f(c)}$. Since $i(c,f(c))=1$, the lemma follows by Lemma \ref{L2.1}. Now suppose that $c$ and $f(c)$ form a type 2 standard pair. Then we can find a two-sided, non-separating curve $d$ such that $i(c,d)=1$ and $(d,f(c))$ form a type 2 standard pair. As before, the commutator $[T_{c},f]$ is equal to $T_{c}T_{f(c)}$ and lies in the normal closure of $f$. The group $\mathcal{M}(N_{g})$ acts transitively on type 2 standard pairs of curves, therefore there exists a mapping class $h\in\mathcal{M}(N_{g})$ such that $h(c)=f(c)$ and $h(f(c))=d$. We choose $T_d$ to be such that $h[T_{c},f]h^{-1}$ is equal to either $T_{f(c)}^{-1}T_{d}$ or $T_{f(c)}T_{d}^{-1}$. In the former case we have $T_cT_d=\left(T_{c}T_{f(c)}\right)\left(T_{f(c)}^{-1}T_{d}\right)$, and in the latter $T_cT_d=\left(T_{c}T_{f(c)}\right)\left(T_{f(c)}T_{d}^{-1}\right)^{-1}$ (note that because $d$ and $f(c)$ are disjoint, $T_d$ and $T_{f(c)}$ commute). In both cases $T_cT_d$ lies in the normal closure of $f$. The lemma follows by Lemma \ref{L2.1}. This completes the proof. \end{proof} \begin{rem} It is easily seen that for any $f\in\mathcal{M}(N_g)$ and any $n\in\mathbb{N}$ the normal closure of $f^n$ is contained in the normal closure of $f$. \end{rem} \section{NEC groups}\label{sec:nec} \subsection{NEC group and its fundamental polygon}\label{polynot} Non-Euclidean crystallographic (NEC) groups are discrete and cocompact subgroups of the group of isometries of the hyperbolic plane, $\mathrm{Isom}(\mathbb{H}^2)$. They provide a natural tool for studying finite group actions on surfaces; see for example \cite{JPAA,GMJ,RACSAM}. Every action of a finite group $G$ on a surface $N_g$ can be obtained by means of a pair of NEC groups $\Gamma$ and $\Lambda$, where $\Gamma$ is a normal torsion-free subgroup of $\Lambda$ such that $N_g$ is homeomorphic to $\mathbb{H}^2/\Gamma$ and $G$ is isomorphic to $\Lambda/\Gamma$. Equivalently, there is an epimorphism $\theta:\Lambda\rightarrow G$ with $\Gamma=\ker\theta\cong\pi_1(N_g)$. The signature of an NEC group $\Lambda$ is a collection of non-negative integers and symbols. In our case, that is, for cyclic $G$, the signature has the form \begin{equation*} \left( h; \pm ; [m_1,...,m_r] ; \{()^k\} \right) \end{equation*} where \begin{enumerate} \item the sign $\pm$ is ``$+$'' if $\mathbb{H}^2/\Lambda$ is orientable and ``$-$'' otherwise; \item the integer $h\geq0$ denotes the genus of $\mathbb{H}^2/\Lambda$; \item the ordered set of integers $m_1,...,m_r$ ($m_i\geq 2$), called the \emph{proper periods} of the signature, corresponds to cone points on the orbifold $\mathbb{H}^2/\Lambda$; \item $k$ empty \emph{period-cycles} $(),...,()$ correspond to boundary components of the orbifold $\mathbb{H}^2/\Lambda$. \end{enumerate} \begin{rem} A general NEC signature can also have nonempty period cycles \cite{Buj}.
\end{rem} If the set of periods or the set of period cycles is empty, we write the corresponding brackets with no symbols between them. For example, the signature $\left(g; -; [];\{\} \right)$ has no proper periods and no period cycles; it is the signature of an NEC group isomorphic to $\pi_1(N_g)$. If all the $m_i$ have the same value $p$, we write $[(p)^r]$. The quotient orbifold can be reconstructed from the associated NEC group by identifying appropriate edges of a marked polygon which is a fundamental region $P$ of $\Lambda$, as detailed in \cite{Mac}. The marked polygon is a plane polygon in which certain edges are related by homeomorphisms; two edges are identified if one is the image of the other under an element of $\Lambda$. If the first edge has vertices, in the order in which we read the labels anticlockwise, $P$ and $Q$, and the other has vertices $R$ and $S$, the identifying homeomorphism can map $P$ to $S$ and $Q$ to $R$, pairing the edges orientably, or map $P$ to $R$ and $Q$ to $S$, pairing the edges nonorientably. Two sides paired orientably will be indicated by the same letter and a prime, for example $\xi,\xi'$; two sides paired nonorientably will be written using the same letter and an asterisk, for example $\alpha, \alpha^*$. If we mark all the edges of the polygon accordingly and then write them in the order in which they appear around the polygon anticlockwise, we obtain the surface symbol of the polygon \cite{Mac}. The marked polygon of the signature \begin{equation*} \left( h; + ; [m_1,...,m_r] ; \{ ()^k \} \right) \end{equation*} has the surface symbol \begin{equation*} \xi_1\xi_1'...\xi_r\xi_r'\epsilon_1\gamma_{1}\epsilon_1'...\epsilon_k\gamma_{k}\epsilon_k'\alpha_1\beta_1'\alpha_1'\beta_1...\alpha_h\beta_h'\alpha_h'\beta_h \end{equation*} while the marked polygon of the signature \begin{equation*} \left( h; - ; [m_1,...,m_r] ; \{ ()^k \} \right) \end{equation*} has the surface symbol \begin{equation*} \xi_1\xi_1'...\xi_r\xi_r'\epsilon_1\gamma_{1}\epsilon_1'...\epsilon_k\gamma_{k}\epsilon_k'\alpha_1\alpha_1^*...\alpha_h\alpha_h^*. \end{equation*} The signature gives us a presentation of the NEC group, as shown by Wilkie in \cite{Wilkie}. The presentation is as follows: \begin{description} \item[Generators] \begin{enumerate} \item $x_1,...,x_r$ (elliptic elements); \item $c_{1},...,c_{k}$ (hyperbolic reflections); \item $e_1,...,e_k$ (hyperbolic elements, except in the case $h=0$ and $r=k=1$, when they are elliptic elements); \item \begin{enumerate} \item $a_1,b_1,...,a_h,b_h$ (hyperbolic elements) if the sign is $+$; \item $d_1,...,d_h$ (glide reflections) if the sign is $-$. \end{enumerate} \end{enumerate} \item[Relations] \begin{enumerate} \item $x_i^{m_i}=1$, $i=1,...,r$; \item $c_{j}^2=1$, $j=1,...,k$; \item $c_{j}=e_j^{-1}c_{j}e_j$, $j=1,...,k$; \item The long relation: \begin{enumerate} \item $x_1...x_re_1...e_ka_1b_1a_1^{-1}b_1^{-1}...a_hb_ha_h^{-1}b_h^{-1}=1$ if the sign is $+$; \item $x_1...x_re_1...e_kd_1^2...d_h^2=1$ if the sign is $-$. \end{enumerate} \end{enumerate} \end{description}
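For instance (the signature here is chosen purely for illustration), an NEC group with signature $\left(1;-;[p];\{()\}\right)$ has generators $x_1$, $c_1$, $e_1$, $d_1$ and the defining relations \begin{equation*} x_1^{p}=1,\qquad c_1^2=1,\qquad c_1=e_1^{-1}c_1e_1,\qquad x_1e_1d_1^2=1. \end{equation*}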
As mentioned above, the marked polygon $P$ is also the fundamental region of the NEC group $\Lambda$ associated with it. The generators of $\Lambda$ map the edges of the marked polygon in the following way: \begin{enumerate} \item $x_i(\xi_i')=\xi_i$, $i=1,...,r$; \item $e_j(\epsilon_j')=\epsilon_j$, $j=1,...,k$; \item $c_j$, $j=1,...,k$, is a reflection along the axis containing $\gamma_j$; \item \begin{enumerate} \item $a_l(\alpha_l')=\alpha_l$ and $b_l(\beta_l')=\beta_l$, $l=1,...,h$, if the sign is $+$; \item $d_l(\alpha_l^*)=\alpha_l$, $l=1,...,h$, if the sign is $-$. \end{enumerate} \end{enumerate} Then $\mathbb{H}^2/\Lambda \cong P/\sim$, where $\sim$ refers to the identification of edges paired by the above homeomorphisms. Only the generators $c_{1},...,c_{k}$ and $d_1,...,d_h$ are orientation-reversing; the others are all orientation-preserving. \begin{lem}\label{nonorient}\cite{Buj,RACSAM} Suppose that $\Lambda$ is an NEC group with signature \[\left( h; \pm ; [m_1,...,m_r] ; \{()^k\}\right).\] A group homomorphism $\theta:\Lambda\rightarrow G$ defines an action of $G$ on a non-orientable surface if and only if \begin{enumerate} \item $\theta(x_i)$ has order $m_i$ for $1\leq i\leq r$, \item $\theta(c_j)$ has order 2 for $1\leq j\leq k$, and \item $\theta(\Lambda^+)=G$, where $\Lambda^+$ is the subgroup of $\Lambda$ consisting of orientation-preserving elements. \end{enumerate} \end{lem} Notice that the first two conditions guarantee that $\Gamma=\ker \theta$ is torsion-free and the third condition ensures that $\Gamma$ contains orientation-reversing elements and is therefore the fundamental group of a non-orientable surface. If $\Lambda/\Gamma$ has order $n$ and $\Gamma\cong\pi_1(N_g)$, the Hurwitz-Riemann formula takes the following form \begin{equation*} g-2=n\left(\epsilon h + k - 2+\sum_{i=1}^{r}\left(1-\frac{1}{m_i}\right)\right) \end{equation*} where $\epsilon$ is equal to $1$ if $N_g/\left\langle \phi \right\rangle$ is non-orientable and $2$ otherwise. \subsection{Topological equivalence}\label{topeq} Suppose $\theta_i:\Lambda\rightarrow G$ are two epimorphisms with $\Gamma=\ker\theta_i \cong\pi_1(N_g)$ for $i=1,2$. We say that $\theta_1$ and $\theta_2$ are topologically conjugate if and only if the corresponding $G$-actions are conjugate by a homeomorphism of $N_g$. Equivalently, $\theta_1$ and $\theta_2$ are topologically conjugate if and only if there exist automorphisms $\psi\in \mathrm{Aut}(\Lambda)$ and $\chi\in\mathrm{Aut}(G)$ such that $\chi\circ \theta_1=\theta_2\circ \psi$; see Definition 2.2 in \cite{RACSAM} and the preceding remark, or Proposition 2.2 in \cite{JPAA}. We list the automorphisms of $\Lambda$ which will be used in the proof of Theorem \ref{noninv}; these and more are given in \cite{GMJ} (see also references therein). If the sign is ``$+$'', we are going to use the following automorphisms of $\Lambda$: $\sigma$ defined by $\sigma(x_r)=Ea_1^{-1}E^{-1}x_rEa_1E^{-1}$, $\sigma(a_1)=[a_1^{-1},E^{-1}x_r^{-1}E]a_1$, $\sigma(b_1)=b_1a_1^{-1} E^{-1}x_rEa_1$, where $E=e_1...e_k$, and the identity on the remaining generators; $\pi$ defined by $\pi(e_k)=a_1^{-1}e_ka_1$, $\pi(c_k)=a_1^{-1}c_ka_1$, $\pi(a_1)=[a_1^{-1},e_k^{-1}]a_1$, $\pi(b_1)=b_1a_1^{-1}e_ka_1$ and the identity on the remaining generators; $\omega$ defined by $\omega(a_1)=a_1$, $\omega(b_1)=b_1a_1$ and the identity on the remaining generators. If the sign is ``$-$'', we are going to use the following automorphisms of $\Lambda$: $\gamma$ defined by $\gamma(d_1)=E^{-1}x_rEd_1$, $\gamma(x_r)=x_rEd_1E^{-1}x_r^{-1}Ed_1^{-1}E^{-1}x_r^{-1}$, where $E=e_1...e_k$, and the identity on the remaining generators;
$\epsilon$ defined by $\epsilon(d_1)=e_kd_1$, $\epsilon(e_k)=e_kd_1e_k^{-1}d_1^{-1}e_k^{-1}$, $\epsilon(c_k)=e_kd_1c_kd_1^{-1}e_k^{-1}$ and the identity on the remaining generators. Regardless of the sign, we are also going to use the following automorphisms of $\Lambda$: $\rho_i$ defined by $\rho_i(x_i)=x_ix_{i+1}x_i^{-1}$, $\rho_i(x_{i+1})=x_i$ and the identity on the remaining generators; $\lambda_j$ defined by $\lambda_j(e_j)=e_je_{j+1}e_j^{-1}$, $\lambda_j(e_{j+1})=e_j$, $\lambda_j(c_j)=e_jc_{j+1}e_j^{-1}$ and the identity on the remaining generators. In \cite{GMJ}, the following two theorems are proven that provide criteria for topological equivalence of $\mathbb{Z}_p$-actions: \begin{thm}\label{thm1} Suppose that $p$ is an odd prime, $\Lambda$ is an NEC group of signature $(h;-;[(p)^r];\{-\})$ and $\theta_i:\Lambda\rightarrow \mathbb{Z}_p$ for $i=1,2$ are two epimorphisms with $\ker\theta_i$ isomorphic to the fundamental group of a non-orientable surface. Then $\theta_1$ and $\theta_2$ are topologically conjugate if and only if $(\theta_2(x_1),...,\theta_2(x_r))$ is a permutation of $(\epsilon_1a\theta_1(x_1),...,\epsilon_ra\theta_1(x_r))$ for some $a\in\{1,...,p-1\}$ and $\epsilon_j\in\{1,-1\}$, $j=1,...,r$. \end{thm} \begin{thm}\label{thm2} Suppose that $\Lambda$ is an NEC group of signature $(h;-;[(2)^r];\{()^k\})$ and $\theta_i:\Lambda\rightarrow \mathbb{Z}_2$ for $i=1,2$ are two epimorphisms with $\ker\theta_i$ isomorphic to the fundamental group of a non-orientable surface. Let \begin{equation*} k_-^{(i)}=\#\{j\in\{1,...,k\}\,|\,\theta_i(e_j)=1\} \end{equation*} for $i=1,2$. Then $\theta_1$ and $\theta_2$ are topologically conjugate if and only if \begin{enumerate} \item $k_-^{(1)}=k_-^{(2)}$, and if $r=k_-^{(1)}=k_-^{(2)}=0$ and the sign is ``$-$'' also \item $\theta_1(d_1...d_h)=\theta_2(d_1...d_h)$ and \item $\theta_1(d_1)=...=\theta_1(d_h)=0$ if and only if $\theta_2(d_1)=...=\theta_2(d_h)=0$. \end{enumerate} \end{thm} Let $\gamma_j$ be the axis of the reflection $c_j$, corresponding to a boundary component of $\mathbb{H}^2/\Lambda$. Notice that $\gamma_j$ projects to a two-sided circle on $N_g\cong\mathbb{H}^2/\Gamma$ if and only if $e_j\in\Gamma$. In other words, the invariant $k_-^{(i)}$ from Theorem \ref{thm2} is the number of one-sided ovals. \section{Periodic elements of order greater than 2}\label{sec:proofnoninv} \begin{proof}[Proof of Theorem \ref{noninv}] Let $f$ be a periodic element of $\mathcal{M}(N_g)$ of order $\#f>2$ and $\phi$ its standard representative. We will show that there exists a curve $c$ on $N_g$ such that $(c,f(c))$ is a standard pair. Theorem \ref{noninv} will then follow by Lemma \ref{L2.2}. \textbf{Case 1:} the action of $\left\langle\phi \right\rangle$ is free. By raising $f$ to an appropriate power, we can assume that $\#f$ is equal to a prime $p$. Since the action of $\left\langle \phi \right\rangle$ is free, it is a covering space action. Let $p$ be an odd prime. By Theorem \ref{thm1} the covering map is uniquely determined up to powers and conjugacy by the genus $h$ of $N_g/\left\langle \phi \right\rangle$. Thus we can assume that $\phi$ is a rotation by the angle $2\pi/p$ of the surface $N_g$, as in Figure \ref{figfreeh}. If $h\geq 3$, we can find a curve $c$ such that $c$ and $f(c)$ form a type 1 standard pair.
\begin{figure}[!htbp]\begin{center} \includegraphics[scale=.7]{h3} \caption{Type 1 standard pair for $h\geq 3$ and $p=3$.} \label{figfreeh}\end{center} \end{figure} The case $h<3$ does not occur: for a free action the Hurwitz-Riemann formula reduces to $g-2=p(h-2)$, so $h\leq 2$ would force $g\leq 2$, contradicting $g\geq 5$. Now let $p=2$. By Theorem 4.1 of \cite{Dug}, there are exactly two distinct conjugacy classes of free actions of the cyclic group of order 2 on a non-orientable surface of even genus at least 4. For $g=2s$, $s\geq 2$, the two actions can be represented as the antipodism of a sphere with $2(s-1)$ crosscaps forming $s-1$ antipodal pairs and the antipodism of a torus with $2(s-2)$ crosscaps forming $s-2$ antipodal pairs; we will denote them by $f_{01}$ and $f_{02}$, respectively. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.4]{freeinv} \caption{Free involutions.} \label{figfree2}\end{center} \end{figure} Since $g\geq 8$, we can always find a curve $c$ such that either $c$ and $f_{01}(c)$ or $c$ and $f_{02}(c)$ form a type 2 standard pair (Figure \ref{figfree2}). This completes the proof in Case 1. For the rest of the proof we assume that the action of $\left\langle\phi \right\rangle$ is not free. We will use the NEC groups introduced in Section \ref{sec:nec}. For a cyclic group $\left\langle \phi \right\rangle$ acting on a non-orientable surface $N_g$ there exist NEC groups $\Lambda$, $\Gamma$ such that $\Gamma\triangleleft\Lambda$ and $\Gamma$ is torsion-free, $\left\langle \phi \right\rangle\cong \Lambda/\Gamma$, $N_g\cong \mathbb{H}^2/\Gamma$ and $N_g/\left\langle \phi \right\rangle\cong \mathbb{H}^2/\Lambda$. We identify $\Lambda/\Gamma$ with $\mathbb{Z}_n$ and let $\theta:\Lambda\rightarrow\mathbb{Z}_n$ be the canonical projection. Our goal is to find a curve $c$ on $\mathbb{H}^2/\Gamma$ and a generator $y\in \Lambda/\Gamma$ such that $c$ and $y(c)$ form a standard pair of either type. Then Theorem \ref{noninv} will follow from Lemma \ref{L2.2}. To achieve this, we construct a fundamental region $D$ of $\Gamma$ from the fundamental region $P$ of $\Lambda$, namely $D=P\cup \tilde{y}P\cup...\cup \tilde{y}^{n-1}P$ for $n=\#\phi$, where $y=\tilde{y}\Gamma$. After we identify the edges of $D$ paired by elements of $\Gamma$, we obtain the surface $N_g$. \textbf{Case 2:} $\#f$ is not a power of 2. By raising $f$ to an appropriate power we can assume that $\# f$ is an odd prime $p$. The group $\Lambda$ has no reflections, as there are no elements of order 2 in $\mathbb{Z}_p$ for them to map onto. Moreover, $m_i=p$ for $i=1,...,r$. The signature of $\Lambda$ is \[ \left(h;-;[(p)^r];\{-\}\right). \] The marked polygon $P$ associated with $\Lambda$ has the surface symbol \begin{equation*} \xi_1\xi_1'...\xi_r\xi_r'\alpha_1\alpha_1^*...\alpha_h\alpha_h^*. \end{equation*} Notice that $r\geq 1$, as we assumed the action is not free, and $h\geq 1$. The element $x_1$ is of order $p$, therefore it does not lie in the kernel $\Gamma$; we can assume $\theta(x_1)=1$. We can express $\Lambda$ as a union of cosets of $\Gamma$: \begin{equation*} \Lambda=\Gamma\cup x_1\Gamma\cup...\cup x_1^{p-1}\Gamma. \end{equation*} From this we conclude that the polygon $D$ which is a fundamental region for $\Gamma$ can be expressed as \begin{equation*} D=P\cup x_1P\cup...\cup x_1^{p-1}P. \end{equation*} We take $y=x_1\Gamma$ as our generator of $\Lambda/\Gamma$.
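For later use, note that in Case 2 the Hurwitz-Riemann formula specializes, with $n=p$, $\epsilon=1$, $k=0$ and all $m_i=p$, to \begin{equation*} g-2=p(h-2)+r(p-1). \end{equation*} In particular, if $h=1$, then $r=1$ would give $g=1$, so the assumption $g\geq 5$ forces $r\geq 2$; we use this below.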
If $h\geq 2$, then by replacing $\theta$ with a topologically conjugate epimorphism if necessary we can assume that $d_l\in\Gamma$ for $l>1$ (see the proof of Theorem \ref{thm1} in \cite{GMJ}); in particular $d_2\in\Gamma$. We can then find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair: $c$ is the projection to $\mathbb{H}^2/\Gamma$ of the union of two arcs on $D$, one connecting a point $p\in\alpha_2^*$ to $x_1(d_2(p))\in x_1(\alpha_2)$ and the other connecting $d_2(p)\in\alpha_2$ to $x_1(p)\in x_1(\alpha_2^*)$ (see Figure \ref{figh2} with $l=2$). As $d_2\in\Gamma$ identifies $\alpha_2^*$ with $\alpha_2$ and $x_1d_2x_1^{-1}\in\Gamma$ identifies $x_1(\alpha_2^*)$ with $x_1(\alpha_2)$, $c$ is indeed a two-sided simple closed curve on $\mathbb{H}^2/\Gamma$. The same works if $h=1$ and $d_1\in\Gamma$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{h2} \caption{A type 1 standard pair for $d_l\in\Gamma$.} \label{figh2}\end{center} \end{figure} It remains to consider the case when $h=1$ and $d_1\notin\Gamma$. Then by the Hurwitz-Riemann formula $r\geq 2$. If $\theta(x_i)\in\{1,...,p-2\}$ for some $i\in\{2,...,r\}$, then there exists $n\in\{2,...,p-1\}$ such that $x_1^nx_i\in\Gamma$. We choose the curve $c$ as the projection of an arc on $D$ connecting the centre of $\xi_i'$ to the centre of $x_1^n(\xi_i)$; since $x_1^nx_i$ identifies these two edges, $c$ is a two-sided closed curve and so is its image $y(c)$; the curves $c$ and $y(c)$ form a type 1 standard pair as in Figure \ref{figpolyr} with $A=\xi_i$. If $\theta(x_i)=p-1$ for all $i\in\{2,...,r\}$, then in particular $\theta(x_r)=p-1$ and by composing $\theta$ with the automorphism $\gamma$ defined in Section \ref{topeq} we get $\theta\circ\gamma(x_r)=\theta(x_r^{-1})=-(p-1)=1$. We can then repeat the reasoning with $i=r$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{r2} \caption{A type 1 standard pair.} \label{figpolyr}\end{center} \end{figure} \textbf{Case 3:} $\#f$ is a power of 2. By raising $f$ to an appropriate power we can assume that $\# f=4$. The signature of $\Lambda$ is $\left(h;\pm;[m_1,...,m_r];\{()^k\}\right)$, where $m_i\in\left\{2,4\right\}$; moreover, $h+k>0$, $k>0$ if the sign is ``$+$'' (because $\Lambda$ has to contain an orientation-reversing element), and $r+k>0$ since we assume that the action is not free. The marked polygon $P$ associated with $\Lambda$ has the surface symbol $\xi_1\xi_1'...\xi_r\xi_r'\epsilon_1\gamma_{1}\epsilon_1'...\epsilon_k\gamma_{k}\epsilon_k'\alpha_1\beta_1'\alpha_1'\beta_1...\alpha_h\beta_h'\alpha_h'\beta_h$ if the sign is ``$+$'' and $\xi_1\xi_1'...\xi_r\xi_r'\epsilon_1\gamma_{1}\epsilon_1'...\epsilon_k\gamma_{k}\epsilon_k'\alpha_1\alpha_1^*...\alpha_h\alpha_h^*$ otherwise. \begin{lem}\label{intlem} Let $\tilde{y}=x_i$ for some $1\leq i\leq r$ or $\tilde{y}=e_j$ for some $1\leq j\leq k$, and suppose $\theta(\tilde{y})\in\{1,3\}$. \begin{enumerate} \item[(a)] If there is an orientation-preserving generator $z$ of $\Lambda$ such that $z\neq\tilde{y}$ and either $\theta(z)=\theta(\tilde{y})$ or $\theta(z)=2$, then there exists a curve $c$ on $N_g=\mathbb{H}^2/\Gamma$ such that, for $y=\tilde{y}\Gamma$, $(c,y(c))$ is a type 1 standard pair. \item[(b)] If $k\geq 2$ and $\theta(e_s)=0$ for some $1\leq s\leq k$, then there exists a curve $c$ on $N_g=\mathbb{H}^2/\Gamma$ such that, for $y=\tilde{y}\Gamma$, $(c,y(c))$ is a type 2 standard pair. \end{enumerate} \end{lem} \begin{proof} We will consider the cases when $\tilde{y}$ is equal to either $x_1$ or $e_1$.
(a) First, let $\tilde{y}=x_1$. We can assume that $\theta(x_1)=1$, for otherwise we can compose $\theta$ with an automorphism of $\mathbb{Z}_4$ interchanging 1 and 3. Let $z$ be an orientation-preserving generator of $\Lambda$ which maps the edge $\zeta'$ of the marked polygon $P$ onto $\zeta$. Suppose that $\theta(z)=\theta(\tilde{y})=1$; then $x_1^3z\in\Gamma$ identifies $\zeta'$ with $x_1^3(\zeta)$ and we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair as in Figure \ref{figpolyr} with $A=\zeta$ and $n=3$. If $\theta(z)=2$, then $x_1^2z\in\Gamma$ identifies $\zeta'$ with $x_1^2(\zeta)$ and we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair as in Figure \ref{figpolyr} with $A=\zeta$ and $n=2$. Now let $\tilde{y}=e_1$; again, we can assume that $\theta(e_1)=1$. Suppose first that $\theta(z)=\theta(\tilde{y})=1$; then $e_1z^{-1}\in\Gamma$ identifies $e_1(\zeta')$ with $\zeta$ and we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair as in Figure \ref{figp2e1z1}. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{p2e1z1} \caption{A type 1 standard pair for $\theta(e_1)=1$ and $\theta(z)=1$.} \label{figp2e1z1}\end{center} \end{figure} Now let $\theta(z)=2$; then $e_1^2z\in\Gamma$ identifies $\zeta'$ with $e_1^2(\zeta)$ and we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair as in Figure \ref{figp2e1z2}. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{p2e1z2} \caption{A type 1 standard pair for $\theta(e_1)=1$ and $\theta(z)=2$.} \label{figp2e1z2}\end{center}\end{figure} (b) Let $k\geq 2$ and $\theta(e_s)=0$. Because $e_s\in\Gamma$, the image of $\gamma_s$ in $N_g$ is a two-sided closed curve. Let $c$ be this curve. The complement of $c\cup y(c)$ in $N_g$ is the image of the complement of $\gamma_s\cup \tilde{y}(\gamma_s)\cup \tilde{y}^2(\gamma_s)\cup \tilde{y}^3(\gamma_s)$ in $D=P\cup \tilde{y}P\cup\tilde{y}^2P\cup\tilde{y}^3P$ (with $\tilde{y}^2c_s$ identifying $\gamma_s$ with $\tilde{y}^2(\gamma_s)$) and is therefore connected. Consider the arc connecting the midpoint of $\gamma_t$ with the midpoint of $\tilde{y}^2(\gamma_t)$ for $t\neq s$. The image of this arc is a one-sided curve on $N_g\setminus (c\cup y(c))$, which is therefore non-orientable. This completes the proof of the lemma. \end{proof} Since $\theta$ is an epimorphism, there exists at least one generator of $\Lambda$ which maps onto an element of order 4 in $\mathbb{Z}_{4}$. Without loss of generality we take this element of $\mathbb{Z}_{4}$ to be 1. The generator mapped onto 1 can be one of $x_i$, $i\in\{1,...,r\}$, $e_j$, $j\in\{1,...,k\}$, and $d_l$, $a_l$ or $b_l$, $l\in\{1,...,h\}$. We will examine the cases for $x_1$, $e_1$, $d_1$ and $a_1$. \textbf{Subcase 3.1:} $\theta(x_1)=1$. Our generator of $\Lambda/\Gamma$ is $x_1\Gamma\in\Lambda/\Gamma$. We can write $\Lambda$ as a union of cosets of $\Gamma$: \begin{equation*} \Lambda=\Gamma\cup x_1\Gamma\cup x_1^{2}\Gamma\cup x_1^{3}\Gamma. \end{equation*} Then the polygon $D$ which is a fundamental region for $\Gamma$ is \begin{equation*} D=P\cup x_1P\cup x_1^{2}P \cup x_1^{3}P. \end{equation*} By the long relation there is another generator whose image has order $4$, and it is either $x_i$ for some $i\in\{2,...,r\}$ or $e_j$ for some $j\in\{1,...,k\}$. If $\#\theta(x_i)=4$, then $\theta(x_i)$ is equal to either 1 or 3.
If $\theta(x_i)=1$, we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair by Lemma \ref{intlem}(a) with $\tilde{y}=x_1$ and $z=x_i$. If $\theta(x_i)=3$ and the sign is ``$-$'', then by composing $\theta$ with $\rho_i\circ...\circ\rho_{r-1}\circ\gamma$ (see Section \ref{topeq}) we get $\theta\circ\rho_i\circ...\circ\rho_{r-1}\circ\gamma(x_r)=\theta(x_i^{-1})=1$ and we apply Lemma \ref{intlem}(a) with $\tilde{y}=x_1$ and $z=x_r$. If the sign is ``$+$'' and $h\geq 1$, then by composing $\theta$ with $\rho_i\circ...\circ\rho_{r-1}\circ\sigma^q$, $q\in\mathbb{Z}$, we get $\theta\circ\rho_i\circ...\circ\rho_{r-1}\circ\sigma^q(b_1)=\theta(b_1)+q\cdot\theta(x_i)=\theta(b_1)+q\cdot3$. By choosing the value of $q$ we can set any desired value of $\theta(b_1)$; we set $\theta(b_1)=1$. Then we can once again apply Lemma \ref{intlem}(a), with $z=b_1$. If the sign is ``$+$'' and $h=0$, then $k\geq 1$. If there exists an orientation-preserving generator $z$ different from $x_1$ and $x_i$ and such that $\theta(z)\neq 0$, we apply Lemma \ref{intlem}(a), taking either $x_1$ or $x_i$ as $\tilde{y}$. Otherwise $r=2$ and by the Hurwitz-Riemann formula $k\geq 2$ and $\theta(e_k)=0$. We can then apply Lemma \ref{intlem}(b). If $\#\theta(e_j)=4$, then $\theta(e_j)$ is equal to either 1 or 3. If $\theta(e_j)=1$, we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair by Lemma \ref{intlem}(a) with $\tilde{y}=x_1$ and $z=e_j$. If $\theta(e_j)=3$ and the sign is ``$-$'', then by composing $\theta$ with $\lambda_j\circ...\circ\lambda_{k-1}\circ\epsilon$ we get $\theta\circ\lambda_j\circ...\circ\lambda_{k-1}\circ\epsilon(e_k)=\theta(e_j^{-1})=1$ and we apply Lemma \ref{intlem}(a) with $z=e_k$. If the sign is ``$+$'' and $h\geq 1$, then by composing $\theta$ with $\lambda_j\circ...\circ\lambda_{k-1}\circ\pi^q$, $q\in\mathbb{Z}$, we get $\theta\circ\lambda_j\circ...\circ\lambda_{k-1}\circ\pi^q(b_1)=\theta(b_1)+q\cdot\theta(e_j)$. By choosing the value of $q$ we can set any desired value of $\theta(b_1)$; we set $\theta(b_1)=1$. Then we can once again apply Lemma \ref{intlem}(a), with $z=b_1$. If the sign is ``$+$'' and $h=0$, then by the Hurwitz-Riemann formula $r+k\geq 3$. If there exists an orientation-preserving generator $z$ different from $x_1$ and $e_j$ and such that $\theta(z)\neq 0$, we apply Lemma \ref{intlem}(a), taking either $x_1$ or $e_j$ as $\tilde{y}$. Otherwise $r=1$ and $k\geq 2$, thus $\theta(e_k)=0$ and we apply Lemma \ref{intlem}(b). \textbf{Subcase 3.2:} $\theta(e_1)=1$. Now the chosen generator $y$ is equal to $e_1\Gamma\in\Lambda/\Gamma$. We write $\Lambda$ as a union of cosets of $\Gamma$: \begin{equation*} \Lambda=\Gamma\cup e_1\Gamma\cup e_1^{2}\Gamma\cup e_1^{3}\Gamma. \end{equation*} Then the polygon $D$ which is a fundamental region for $\Gamma$ is \begin{equation*} D=P\cup e_1P\cup e_1^{2}P \cup e_1^{3}P. \end{equation*} As before, there is another generator whose image has order $4$, and it is either $x_i$ for some $i\in\{1,...,r\}$ or $e_j$ for some $j\in\{2,...,k\}$. We assume that $\theta(x_i)=2$ for $1\leq i\leq r$, for otherwise we are in Subcase 3.1. This leaves us with some $e_j$ such that $\#\theta(e_j)=4$, that is, $\theta(e_j)$ is equal to either 1 or 3. If $\theta(e_j)=1$, we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair by Lemma \ref{intlem}(a) with $\tilde{y}=e_1$ and $z=e_j$.
If $\theta(e_j)=3$ and the sign is ``$-$'', then by composing $\theta$ with $\lambda_j\circ...\circ\lambda_{k-1}\circ\epsilon$ we get $\theta\circ\lambda_j\circ...\circ\lambda_{k-1}\circ\epsilon(e_k)=\theta(e_j^{-1})=1$ and we apply Lemma \ref{intlem}(a) with $z=e_k$. If the sign is ``$+$'' and $h\geq 1$, then by composing $\theta$ with $\lambda_j\circ...\circ\lambda_{k-1}\circ\pi^q$, $q\in\mathbb{Z}$, we get $\theta\circ\lambda_j\circ...\circ\lambda_{k-1}\circ\pi^q(b_1)=\theta(b_1)+q\cdot\theta(e_j)$. By choosing the value of $q$ we can set any desired value of $\theta(b_1)$; we set $\theta(b_1)=1$. Then we can once again apply Lemma \ref{intlem}(a), with $z=b_1$. If the sign is ``$+$'' and $h=0$, then by the Hurwitz-Riemann formula $r+k\geq 3$. If there exists an orientation-preserving generator $z$ different from $e_1$ and $e_j$ and such that $\theta(z)\neq 0$, we apply Lemma \ref{intlem}(a), taking either $e_1$ or $e_j$ as $\tilde{y}$. Otherwise $r=0$ and $k\geq 3$, thus $\theta(e_k)=0$ and we apply Lemma \ref{intlem}(b). \textbf{Subcase 3.3:} $\theta(d_1)=1$. Now the chosen generator $y$ is equal to $d_1\Gamma\in\Lambda/\Gamma$. We write $\Lambda$ as a union of cosets of $\Gamma$: \begin{equation*} \Lambda=\Gamma\cup d_1\Gamma\cup d_1^{2}\Gamma\cup d_1^{3}\Gamma. \end{equation*} Then the polygon $D$ which is a fundamental region for $\Gamma$ is \begin{equation*} D=P\cup d_1P\cup d_1^{2}P \cup d_1^{3}P. \end{equation*} By the long relation, there exists another generator not in the kernel, although the image of this generator is not necessarily of order $4$. We assume that $\theta(x_i)=2$ for all $i$, for otherwise we are in Subcase 3.1. Likewise we assume that $\theta(e_j)\in\{0,2\}$ for all $j$; if $\theta(e_j)\in\{1,3\}$ for some $j$, we are in Subcase 3.2. If $\theta(d_l)=0$ for some $l$, we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair (Figure \ref{figp2d1d0}); $d_l$ identifies $\alpha_l^*$ with $\alpha_l$, and analogously for their images. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{p2d1d0} \caption{A type 1 standard pair for $\theta(d_1)=1$ and $\theta(d_l)=0$.} \label{figp2d1d0}\end{center} \end{figure} Similarly, if $\theta(d_l)=2$ for some $l$, we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair (Figure \ref{figp2d1d2}); $d_1^2d_l\in\Gamma$ identifies $\alpha_l^*$ with $d_1^2(\alpha_l)$ and $d_1d_ld_1^{-3}\in\Gamma$ identifies $d_1^3(\alpha_l^*)$ with $d_1(\alpha_l)$ (recall that $d_1^4\in\Gamma$ identifies $\alpha_1^*$ with $d_1^3(\alpha_1)$). Therefore we can assume that $\theta(d_l)\in\{1,3\}$ for all $l$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{p2d1d2} \caption{A type 1 standard pair for $\theta(d_1)=1$ and $\theta(d_l)=2$.} \label{figp2d1d2}\end{center} \end{figure} Then by Lemma \ref{nonorient} $k\geq 1$; otherwise the image of $\Lambda^+$ under $\theta$ would not be equal to $\mathbb{Z}_4$. If $\theta(e_j)=2$ for some $j$, then $e_jd_1^{-2}$ identifies $d_1^2(\epsilon_j')$ with $\epsilon_j$ and we can find a curve $c$ such that $c$ and $y(c)$ form a type 2 standard pair (Figure \ref{figp2d1z2} with $A=\epsilon_j$). \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{p2d1z2} \caption{A type 2 standard pair for $\theta(d_1)=1$.} \label{figp2d1z2}\end{center}\end{figure} By the Hurwitz-Riemann formula $r\geq 1$, $k\geq 2$ or $h\geq 2$.
If $r \geq 1$ or $k\geq 2$ and assuming $j=1$, there exists an edge ($\xi_1$ or $\gamma_2$) on the segment of $\partial D$ from $A=\epsilon_1$ to $d_1^2(\epsilon_1')$ (anticlockwise) identified with an edge between $d_1^2(\epsilon_1')$ and $d_1^3(\epsilon_1')$, so that the complement of $c\cup y(c)$ is connected. If $h\geq 2$, then by assumption $\theta(d_2)\in\{1,3\}$. If $\theta(d_2)=1$, then the edge $\alpha_2^*$ on the segment from $d_1(\epsilon_1)$ to $\epsilon_1$ is identified with the edge $d_1^3(\alpha_2)$ on the segment from $d_1^3(\epsilon_1')$ to $d_1(\epsilon_1)$. If $\theta(d_2)=3$, then the edge $d_1^3(\alpha_2^*)$ on the segment from $d_1^3(\epsilon_1')$ to $d_1(\epsilon_1)$ is identified with the edge $\alpha_2$ on the segment from $d_1(\epsilon_1)$ to $\epsilon_1$. In any of these cases there exists a curve on $N_g$ intersecting $c\cup y(c)$ in one point, which means that $N_g\setminus (c\cup y(c))$ is connected. A projection of the arc joining the midpoints of $\gamma_1$ and $d_1^2(\gamma_1)$ is a one-sided curve on $N_g\setminus (c\cup y(c))$ (with $d_1^2c_1\in\Gamma$ identifying $\gamma_1$ with $d_1^2(\gamma_1)$), which is therefore non-orientable. From now on we assume $\theta(e_j)=0$ for all $1\leq j\leq k$. If $k\geq 2$, then the construction in the proof of Lemma \ref{intlem}(b) works for $\tilde{y}=d_1$ (since $d_1^2$ is orientation-preserving). We may thus assume that $k=1$, and by the Hurwitz-Riemann formula $r\geq 2$ or $h\geq 2$. If $r\geq 2$, then we can find a curve $c$ such that $c$ and $y(c)$ form a type 2 standard pair (Figure \ref{figp2d1z2} with $A=\xi_1$). There exists an edge (namely $\gamma_1$) in the segment of $\partial D$ from $\xi_1$ to $d_1^2(\xi_1')$ (anticlockwise) which is identified with an edge on the segment from $d_1^2(\xi_1')$ to $d_1^3(\xi_1')$, so that the complement of $c\cup y(c)$ is connected. A projection of the union of two arcs, one joining the midpoints of $\gamma_1$ and $\xi_2'$ and the other joining the midpoints of $d_1^2(\xi_2)$ and $d_1^2(\gamma_1)$, is a one-sided curve on $N_g\setminus (c\cup y(c))$ (with $d_1^2c_1\in\Gamma$ identifying $\gamma_1$ with $d_1^2(\gamma_1)$ and $d_1^2x_2\in\Gamma$ identifying $\xi_2'$ with $d_1^2(\xi_2)$), which is therefore non-orientable. If $h\geq 2$, then by assumption $\theta(d_2)\in\{1,3\}$. If $\theta(d_2)=1$, then $d_1^3d_2\in\Gamma$ identifies $\alpha_2^*$ with $d_1^3(\alpha_2)$ and we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair (Figure \ref{figp2d1d1}). \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{p2d1d1} \caption{A type 1 standard pair for $\theta(d_1)=1$, $\theta(e_1)=0$ and $\theta(d_2)=1$.} \label{figp2d1d1}\end{center}\end{figure} If $\theta(d_2)=3$, then $d_1d_2\in\Gamma$ identifies $\alpha_2^*$ with $d_1(\alpha_2)$ and we can find a curve $c$ such that $c$ and $y(c)$ form a type 2 standard pair (Figure \ref{figp2d1d3}). The edge $\gamma_1$ on the segment of $\partial D$ from $\epsilon_1$ to $\epsilon_1'$ is identified with the edge $d_1^2(\gamma_1)$ on the segment from $d_1^2(\alpha_2)$ to $d_1(\epsilon_1')$, and the edge $d_1(\gamma_1)$ on the segment from $d_1(\epsilon_1')$ to $d_1(\epsilon_1)$ is identified with the edge $d_1^3(\gamma_1)$ on the segment from $d_1^2(\alpha_2)$ to $d_1(\epsilon_1')$, ensuring the connectedness of $N_g\setminus (c\cup y(c))$.
A projection of the union of two arcs, one connecting the midpoint of $d_1^2(\gamma_1)$ with the midpoint of $d_1^3(\alpha_1)$ and the other connecting the midpoint of $\alpha_1^*$ with the midpoint of $\gamma_1$, is a one-sided curve on $N_g\setminus (c\cup y(c))$, which is therefore non-orientable. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{p2d1d3} \caption{A type 2 standard pair for $\theta(d_1)=1$, $\theta(e_1)=0$ and $\theta(d_2)=3$.} \label{figp2d1d3}\end{center}\end{figure} \textbf{Subcase 3.4:} $\theta(a_1)=1$. The chosen generator $y$ is equal to $a_1\Gamma\in\Lambda/\Gamma$. Note that $k\geq 1$, as the sign is ``$+$'' in this subcase. If there exists $j\in\{1,...,k\}$ such that $e_j\notin\Gamma$, we can assume that $\theta(e_j)=2$ (for otherwise we are in Subcase 3.2), and $a_1^2e_j$ identifies $\epsilon_j'$ with $a_1^2(\epsilon_j)$. Then we can find a curve $c$ such that $c$ and $y(c)$ form a type 1 standard pair (Figure \ref{figp2oa1e2}). \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{p2oa1e2} \caption{A type 1 standard pair for $\theta(a_1)=1$ and $\theta(e_j)=2$.} \label{figp2oa1e2}\end{center} \end{figure} Now if $e_j\in\Gamma$ for all $j\in\{1,...,k\}$, $e_1$ identifies $\epsilon_1'$ with $\epsilon_1$. If $\theta(b_1)=0$, $b_1$ identifies $\beta_1'$ with $\beta_1$ and we can find a curve $c$ such that $c$ and $y(c)$ form a type 2 standard pair (Figure \ref{figp2oa1e0b0}). The edge $\gamma_1$ on the segment of $\partial D$ from $\epsilon_j$ to $\epsilon_j'$ is identified with the edge $a_1^2(\gamma_1)$ on the segment from $a_1(\epsilon_j')$ to $a_1(\beta_1')$, and the edge $a_1(\gamma_1)$ on the segment from $a_1(\epsilon_j)$ to $a_1(\epsilon_j')$ is identified with the edge $a_1^3(\gamma_1)$ on the segment from $a_1(\epsilon_j')$ to $a_1(\beta_1')$, so that the complement of $c\cup y(c)$ is connected. A projection of the union of two arcs, one connecting the midpoint of $a_1^2(\gamma_1)$ with the midpoint of $a_1^3(\alpha_1)$ and the other connecting the midpoint of $\alpha_1'$ with the midpoint of $\gamma_1$, is a one-sided curve on the complement of $c\cup y(c)$, which is therefore non-orientable. If $\theta(b_1)\neq0$, then by composing $\theta$ with $\omega^q$, $q\in\mathbb{Z}$, we get $\theta\circ\omega^q(b_1)=\theta(b_1)+q\theta(a_1)$, and by choosing a suitable value of $q$ we can set $\theta\circ\omega^q(b_1)=0$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.65]{p2oa1e0b0} \caption{A type 2 standard pair for $\theta(a_1)=1$, $\theta(e_1)=0$ and $\theta(b_1)=0$.}\label{figp2oa1e0b0}\end{center} \end{figure} This completes the proof of Theorem \ref{noninv}. \end{proof} \section{Classification of involutions}\label{sec:class} \subsection{Surgeries} We are going to define five surgeries which will help us construct all the involutions we need in easy steps. We begin with an oriented surface $X$ embedded in $\mathbb{R}^3$ and an involution $\varphi:X\rightarrow X$ induced either by a reflection across some plane in $\mathbb{R}^3$ or by a rotation by the angle $\pi$ about some axis (for example the hyperelliptic involution). Next we apply a series of surgeries that result in our chosen involution. After applying each surgery we obtain a new involution on a new surface $X'$, $\varphi':X'\rightarrow X'$, which we can then further modify. \begin{description} \item[Blowing up an isolated fixed point] Let $x\in X$ be an isolated fixed point, $\varphi(x)=x$. We choose a disc $D\subset X$ such that $x\in D$ and $\varphi (D)=D$. Then $\varphi_{|D}$ is a rotation by the angle $\pi$.
Let $X'=(X\setminus \mathrm{int}(D))/\sim$, where $\sim$ identifies antipodal points on $\partial D$. Then $\varphi':X'\rightarrow X'$ is the involution induced by $\varphi$. This surgery replaces an isolated fixed point with a one-sided oval. \item[Blowing up a non-isolated fixed point] Let $c$ be an oval and $x\in c$. We choose a disc $D\subset X$ such that $x\in D$ and $\varphi(D)=D$, and define $X'$ and $\varphi'$ as above. Now $\varphi_{|D}$ is a reflection about $c\cap D$. This surgery preserves the number of ovals, but changes the sidedness of one oval and adds one isolated fixed point. \item[Blowing up a 2-orbit] Let $x\in X$, $\varphi(x)\neq x$. We choose a disc $D\subset X$ such that $x\in D$ and $\varphi(D)\cap D=\emptyset$. Then $X'=(X\setminus(\mathrm{int}(D)\cup \mathrm{int}(\varphi(D))))/\sim$, where $\sim$ identifies antipodal points on $\partial D$ and on $\partial \varphi(D)$. This surgery adds two crosscaps that map onto one another and increases the genus of the quotient orbifold by 1. \item[Adding a handle] Let $x\in X$, $\varphi(x)\neq x$. We choose a disc $D$ as above. Then $X'=(X\setminus (\mathrm{int}(D)\cup \mathrm{int}(\varphi(D))))/\sim$, where $\sim$ identifies $y$ with $\varphi(y)$ for $y\in\partial D$. If $X$ is orientable, then $X'$ is non-orientable if and only if $\varphi$ is orientation-preserving. This surgery adds a two-sided oval. \item[Surface gluing] Let $X$, $Y$ be oriented surfaces and $\phi :X\rightarrow X$, $\psi :Y\rightarrow Y$ be involutions, where $\phi$ is orientation-reversing (a reflection) and $\psi$ is orientation-preserving (a rotation). Choose two discs $D\subset X$ and $D'\subset Y$ such that $\phi(D)\cap D=\emptyset$ and $\psi(D')\cap D'=\emptyset$. Let $\theta:\partial D\rightarrow \partial D'$ be an orientation-reversing homeomorphism. Let $X'$ be the surface obtained by removing $\mathrm{int}(D)\cup \mathrm{int}(\phi(D))$ from $X$ and $\mathrm{int}(D')\cup \mathrm{int}(\psi(D'))$ from $Y$ and identifying $\partial D$ with $\partial D'$ via $\theta$ and $\partial \phi(D)$ with $\partial \psi(D')$ via $\psi\circ\theta\circ\phi^{-1}$. Then $\phi$ and $\psi$ induce an involution on $X'$. Note that $\psi\circ\theta\circ\phi^{-1}$ is orientation-preserving and $X'$ is non-orientable of genus $2(g(X)+g(Y)+1)$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.40]{conn} \caption{The left-hand side of the surface is reflected, and the right-hand side is rotated.} \end{center} \end{figure} \end{description} \subsection{Involutions on $N_g$}\label{sec:invs} Based on Dugger's work \cite{Dug} and Theorem \ref{thm2}, we are going to define eleven involutions on a surface $N_g$, which exhaust all possibilities up to conjugation. We remark that the problem of classification of involutions on compact surfaces is classical, especially in the case of orientable surfaces, but also for non-orientable surfaces several classification results have been obtained by different authors; see \S 1.7 in \cite{Dug}. Let $f$ be an involution; in this section we do not distinguish between a mapping class and its representative. Let $N_g=\mathbb{H}^2/\Gamma$, $\left\langle f\right\rangle=\Lambda/\Gamma$ and let $\theta:\Lambda\rightarrow \mathbb{Z}_2$ be the canonical projection. The signature of $\Lambda$ is $(h;\pm;[(2)^r];\{()^k\})$. Recall that $k_+=\#\{j\in\{1,...,k\}\,|\,\theta(e_j)=0\}=\#(\{e_1,...,e_k\}\cap\Gamma)$. Then $k_-=\#\{j\in\{1,...,k\}\,|\,\theta(e_j)=1\}$.
Further, notice that by the long relation \begin{equation*}\begin{split} \theta(x_1...x_re_1...e_k)&= 0,\\ \theta(x_1)+...+\theta(x_r)+\theta(e_1)+...+\theta(e_k)&=0, \end{split} \end{equation*} which gives \begin{equation*} r+k_-\equiv 0 \pmod 2, \end{equation*} and by the Hurwitz-Riemann formula \begin{equation*}\begin{split} g-2&=2(\epsilon h+k-2)+r,\\ g&=2\epsilon h +2k-2+r,\\ g&\equiv r \pmod 2, \end{split} \end{equation*} where $\epsilon$ is equal to $1$ if $N_g/\left\langle f \right\rangle$ is non-orientable and $2$ otherwise. Additionally, also by the Hurwitz-Riemann formula, always $r+2k\leq g+2$, and $r+2k\leq g$ if the quotient orbifold is non-orientable. By Theorem 1.16 of \cite{Dug} the numbers $r$, $k$, $k_+$ and $k_-$, together with information about the orientability of the quotient orbifold (a set of characteristics which the author refers to as the ``signed taxonomy'' of $C_2$-actions), determine an action up to conjugacy if either the surface is orientable, or the quotient orbifold is orientable, or $r+k_->0$. Otherwise, additional invariants are necessary, which Dugger terms the $\epsilon$-invariant and the $DD$-invariant. Notice that Theorem 1.16 of \cite{Dug} coincides with Theorem \ref{thm2}. Let us first consider the case when $r=0$ and $k=0$, that is, the action of $\left\langle f \right\rangle$ is free. Since $r=0$, $g$ must be even. This case was already covered in the proof of Theorem \ref{noninv} in Section \ref{sec:proofnoninv} (Figure \ref{figfree2}); we have denoted the two involutions by $f_{01}$ and $f_{02}$, respectively. Now let $r>0$ and $k=0$. Since $k_-=0$, $r$ must be even, and so must $g$. By Theorem 7.7 of \cite{Dug} (see also Theorem \ref{thm2}) there is exactly one conjugacy class of involutions in this case for fixed $r$, with a non-orientable quotient orbifold. We denote this involution by $f_1$. As the representative of this class we can take a rotation by $\pi$ of an orientable surface of genus $r/2-1$ with $h$ 2-orbits blown up, as in Figure \ref{figIn1}. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{f1} \caption{Involution $f_1$ with the set of fixed points marked.} \label{figIn1}\end{center} \end{figure} Next, let $r>0$, $k_-=0$ and $k_+>0$. Again, $k_-=0$ implies that $r$ and $g$ are both even. By Theorem 7.9 of \cite{Dug} (see also Theorem \ref{thm2}) there are two conjugacy classes of involutions for fixed $r$ and $k_+$, one with a non-orientable and one with an orientable quotient orbifold. As a representative of the class with a non-orientable quotient orbifold we can take the reflection of an orientable surface of genus $k-1$ with $h$ 2-orbits blown up and $r$ non-isolated fixed points blown up, as in Figure \ref{figIn2}; we will denote it by $f_2$. \begin{figure}[!htbp] \begin{center} \includegraphics[scale=.45]{f2} \caption{Involution $f_2$ with the set of fixed points marked.} \label{figIn2}\end{center} \end{figure} As a representative of the class with an orientable quotient orbifold we can take the reflection of an orientable surface of genus $k-1+2h$ with $h$ pairs of conjugate handles and $r$ non-isolated fixed points blown up, as in Figure \ref{figIn3}; we will denote it by $f_3$. By the Hurwitz-Riemann formula, in this case $r+2k\equiv g+2 \pmod 4$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.45]{f3} \caption{Involution $f_3$ with the set of fixed points marked.} \label{figIn3}\end{center} \end{figure} Next, let $r\geq 0$, $k_+\geq 0$ and $k_->0$.
By Theorem 7.10 of \cite{Dug} (see also Theorem \ref{thm2}) there are two conjugacy classes of such involutions for fixed $r$, $k_-$ and $k_+$, one with a non-orientable and one with an orientable quotient orbifold. As the representative of the class with a non-orientable quotient orbifold we can take a reflection of an orientable surface of genus $k_{+}-1$ glued together with a rotated orientable surface of genus $k_{-}+r+1$ with $h$ 2-orbits blown up and $k_-$ isolated fixed points blown up (see Figure \ref{figIn4}); we will denote it by $f_4$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.35]{f4} \caption{Involution $f_4$ with the set of fixed points marked.} \label{figIn4}\end{center} \end{figure} As a representative of the class with an orientable quotient orbifold we can take the reflection of an orientable surface of genus $k_{+}-1$ glued together with a rotated orientable surface of genus $k_{-}+r+1+2h$ with $h$ pairs of conjugate handles and $k_-$ isolated fixed points blown up (see Figure \ref{figIn5}); we will denote it by $f_5$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.35]{f5} \caption{Involution $f_5$ with the set of fixed points marked.} \label{figIn5}\end{center} \end{figure} Finally, let $r=0$, $k\geq 0$ and $k_-=0$. Then $g$ must be even. By Theorem 7.12 of \cite{Dug} (see also Theorem \ref{thm2}) there are four conjugacy classes of such involutions for fixed $k_+$, three with non-orientable quotient orbifolds and one with an orientable one. We will denote the representatives of the conjugacy classes with non-orientable quotient orbifolds by $f_6$, $f_7$ and $f_8$, in order. The involution $f_6$ always occurs. We can take its representative to be the reflection of an orientable surface of genus $k-1$ with $h$ 2-orbits blown up (Figure \ref{figIn6}). Note that in this case the set of fixed points is separating. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{f6} \caption{Involution $f_6$ with the set of fixed points marked.} \label{figIn6}\end{center} \end{figure} The involution $f_7$ occurs if and only if $2k<g$. We can take its representative to be the antipodism of a sphere with $h-1$ 2-orbits blown up and $k$ added handles (Figure \ref{figIn7}). Notice that the condition $2k<g$ is necessary for the surface to be non-orientable. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{f7} \caption{Involution $f_7$ with the set of fixed points marked. The two-sided ovals result from adding $k$ handles -- a surgery which identifies boundary points on $k$ pairs of antipodal discs whose interiors are removed from the surface. Three of these discs are drawn.} \label{figIn7}\end{center} \end{figure} The involution $f_8$ occurs if and only if $2k<g-2$. We can take its representative to be the antipodism of a torus with $h-2$ 2-orbits blown up and $k$ added handles (Figure \ref{figIn8}). Notice that the condition $2k<g-2$ is necessary for the surface to be non-orientable. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{f8} \caption{Involution $f_8$ with the set of fixed points marked. The two-sided ovals result from adding $k$ handles.} \label{figIn8}\end{center} \end{figure} The conjugacy class of involutions with an orientable quotient orbifold occurs if and only if $2k\equiv g+2\mod(4)$, a condition forced by the orientability of the quotient orbifold.
We can take its representative to be the rotation by $\pi$, about the axis passing through its centre, of an orientable surface of genus $2h+1$ with $k$ added handles (Figure \ref{figIn9}); we denote it by $f_9$. \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{f9} \caption{Involution $f_9$ with the set of fixed points marked. The two-sided ovals result from adding $k$ handles.} \label{figIn9}\end{center} \end{figure} \begin{rem} If $k=0$, $f_7$ and $f_8$ coincide with $f_{01}$ and $f_{02}$ (Figure \ref{figfree2}). \end{rem} Summarising, from Theorems 7.7, 7.9, 7.10 and 7.12 in \cite{Dug} we obtain the following lemma. \begin{lem} The involutions $f_1,...,f_9$ described above are, up to conjugacy, the only involutions on a non-orientable surface $N_g$. \end{lem} \section{Normal closures of involutions}\label{sec:proofinv} \subsection{Actions on homology groups}\label{hom} We know from Lemma \ref{L2.2} that the existence of a curve $c$ such that $c$ and $f(c)$ form a standard pair of type 1 or 2 is a sufficient condition for the normal closure of $f$ to contain the commutator subgroup. We will need a way to show that a condition is also necessary, that is, a method of showing that the normal closure of a map does not contain the commutator subgroup. Let $V_g=H_1(N_g;\mathbb{Z}_2)\cong \mathbb{Z}_2^g$. The standard generators for the homology group of a non-orientable surface of genus $2h+k$ are the homology classes of the curves $a_i,b_i,i\in\{1,...,h\}$ and $c_j,j\in\{1,...,k\}$ in Figure \ref{figvg}, where $g=2h+k$. We will denote by $[a]$ the homology class of a curve $a$ in $V_g$. We have an intersection form $\left\langle \cdot ,\cdot \right\rangle : V_g \times V_ g\longrightarrow \mathbb{Z}_2$, where \begin{align*} \left\langle [c_i],[c_j] \right\rangle& =\delta_{ij},& \left\langle [a_i],[a_j] \right\rangle& =0,& \left\langle [b_i],[b_j] \right\rangle& =0,\\ \left\langle [a_i],[b_j] \right\rangle& =\delta_{ij},& \left\langle [a_i],[c_j] \right\rangle& =0,& \left\langle [b_i],[c_j] \right\rangle& =0. \end{align*} $\mathcal{M}(N_g)$ acts on $V_g$ preserving $\left\langle \cdot , \cdot \right\rangle$. Let $V_g^+=\left\{[h]\in V_g : \left\langle [h],[h] \right\rangle=0\right\}\cong\mathbb{Z}_2^{g-1}$. Notice that $V_g^+$ is generated by $[a_i],[b_i],i\in\{1,...,h\}$ and $[c_i]+[c_j],i,j\in\{1,...,k\},i\neq j$. Note that $[c]=[c_1]+...+[c_k]$ is the only element of $V_g$ such that $\left\langle [h],[c] \right\rangle=\left\langle [h],[h] \right\rangle$ for every $[h]\in V_g$, which implies that any element $f\in \mathcal{M}(N_g)$ preserves $[c]$. If $k$ is even, $[c]$ lies in $V_g^+$ and $\mathcal{M}(N_g)$ acts on $V_g^+/\left\langle [c]\right\rangle\cong \mathbb{Z}_2^{g-2}$. Let $a$, $b$ be two-sided curves on $N_g$ such that $i(a,b)=1$. The automorphism of $V_g$ induced by $T_aT_b$ is non-trivial, as it maps $[b]$ onto $[a]+[b]$; the induced automorphisms of $V_g^+$ and $V_g^+/\left\langle [c] \right\rangle$ are likewise non-trivial. Since $T_aT_b\in[\mathcal{M}(N_g),\mathcal{M}(N_g)]$ by Lemma \ref{L2.1}, it follows that the normal closure of an element acting trivially on any of the spaces $V_g$, $V_g^+$ and $V_g^+/\left\langle [c] \right\rangle$ does not contain the commutator subgroup of $\mathcal{M}(N_g)$. We will present three examples and trust the reader to apply them in further cases. \begin{ex}\label{ex1} We take the reflection of an orientable surface of genus $h$ with $k$ non-isolated fixed points blown up (Figure \ref{figvg}).
\begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{vg} \caption{Surface $N_g$ in Example \ref{ex1} with standard homology basis.} \label{figvg}\end{center} \end{figure} This involution acts on generators of $V_g$ as follows: $[a_i]\mapsto [a_i]$, $[b_i]\mapsto [b_i]$, $[c_j]\mapsto [c_j]$. The action on $V_g$ is trivial. It is easy to show that the same holds for the hyperelliptic involution of an orientable surface with $k$ isolated fixed points blown up. \end{ex} \begin{ex}\label{ex2} We take the hyperelliptic involution of an orientable surface of genus $h$ with one 2-orbit blown up (Figure \ref{figvg+}). \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{vg+} \caption{Surface $N_g$ in Example \ref{ex2} with standard homology basis.}\label{figvg+}\end{center} \end{figure} This involution acts on generators of $V_g$ as follows: $[a_i]\mapsto [a_i]$, $[b_i]\mapsto [b_i]$, $[c_1]\mapsto [c_2]$, $[c_2]\mapsto [c_1]$. The action on $V_g$ is not trivial, so we pass to $V_g^+=\left\langle [a_i],[b_i],[c_1]+[c_2] \right\rangle$. The involution takes $[c_1]+[c_2]$ to $[c_2]+[c_1]=[c_1]+[c_2]$, and the action on $V_g^+$ is trivial. An analogous result for the reflection of a surface with one 2-orbit blown up is easily obtained. \end{ex} \begin{ex}\label{ex3} We take the reflection of an orientable surface of genus $h$ with two 2-orbits blown up (Figure \ref{figvg+c}). \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.5]{vg+c} \caption{Surface $N_g$ in Example \ref{ex3} with standard homology basis.}\label{figvg+c}\end{center} \end{figure} As above, the action on $V_g$ is not trivial. We have $V_g^+=\left\langle [a_i],[b_i],[c_1]+[c_2],[c_1]+[c_3],[c_1]+[c_4]\right\rangle$ and the action is as follows: $[a_i]\mapsto [a_i]$, $[b_i]\mapsto [b_i]$, $[c_1]+[c_2]\mapsto [c_3]+[c_4]$, $[c_1]+[c_3]\mapsto [c_1]+[c_3]$, $[c_1]+[c_4]\mapsto [c_2]+[c_3]$. The action on $V_g^+$ is not trivial, but notice that $[c_3]+[c_4]\equiv[c_1]+[c_2]$ and $[c_2]+[c_3]\equiv[c_1]+[c_4]$ modulo $[c]$, which means that the action on $V_g^+/\left\langle [c]\right\rangle$ is trivial. An analogous result for the hyperelliptic involution of a surface with two 2-orbits blown up is easily obtained. \end{ex} \begin{rem} It is clear that if $K_1$ is the kernel of the action of $\mathcal{M}(N_g)$ on $V_g$, $K_2$ is the kernel of the action of $\mathcal{M}(N_g)$ on $V_g^+$ and $K_3$ is the kernel of the action of $\mathcal{M}(N_g)$ on $V_g^+/\left\langle [c]\right\rangle$, we have $K_1\lneqq K_2\lneqq K_3$. Indeed, the examples we have shown prove that there exist involutions in each of $K_1$, $K_2\setminus K_1$ and $K_3\setminus K_2$. \end{rem} \subsection{Normal closures} \begin{proof}[Proof of Theorem \ref{inv}.] Let $f\in\mathcal{M}(N_g)$ be an involution, let $r$, $k$, $k_+$, $k_-$ be the parameters describing the structure of the set of fixed points of $f$, and let $h$ be the genus of $N_g/\langle f \rangle$. \textbf{Case 1:} $r>0$ and $k=0$. The only involution for $r>0$ and $k=0$, up to conjugacy, is $f_1$. It is easy to see that if $h\geq 3$, we can find a curve $c$ such that $c$ and $f_1(c)$ form a type 2 standard pair, as in Figure \ref{figIn1}. Notice that the genus of the surface is equal to $g=2(r/2-1)+2h=r+2h-2$, and so the condition in terms of $g$ and $r$ is $g-r=2h-2\geq 4$. Now let us consider the case with $h=2$.
Then the induced map $f_{1*}$ acts trivially on the space $V_{g}^{+}/\left\langle [c] \right\rangle$ (see Example \ref{ex3} in Section \ref{hom}), and therefore the normal closure of $f_1$ cannot contain the commutator subgroup. If $h=1$, the action is trivial on the space $V_g^+$ (see Example \ref{ex2}); the case $h=0$ does not occur. Hence the condition is necessary. \textbf{Case 2:} $k>0$ and $r+k_->0$. \textbf{Subcase 2a:} $N_g/\langle f \rangle$ is non-orientable. The involutions satisfying these conditions and resulting in a non-orientable quotient orbifold are $f_2$ and $f_4$. For the involution $f_2$ we can find a curve $c$ such that $c$ and $f_2(c)$ form a type 1 standard pair for $h\geq 1$ (see Figure \ref{figIn2}). By the definition of $f_2$ the orbifold $N_g/\left\langle f_2 \right\rangle$ is non-orientable, hence $h\geq 1$ is always satisfied. For the involution $f_4$ we can find a curve $c$ such that $c$ and $f_4(c)$ form a type 1 standard pair if $h\geq 1$ (see Figure \ref{figIn4}). By the definition of $f_4$, $k_->0$ and the orbifold $N_g/\left\langle f_4 \right\rangle$ is non-orientable, hence $h\geq 1$ is also always satisfied. \textbf{Subcase 2b:} $N_g/\langle f \rangle$ is orientable. The involutions satisfying these conditions and resulting in an orientable quotient orbifold are $f_3$ and $f_5$. For the involution $f_3$ we can find a curve $c$ such that $c$ and $f_3(c)$ form a type 2 standard pair if $h\geq 1$ (see Figure \ref{figIn3}). Since $g=2(k-1)+r+4h$, this translates to $g-r-2k=4h-2\geq 2$. If $h=0$, $f_{3*}$ acts trivially on the space $V_g$ (see Example \ref{ex1}), hence the condition is necessary. Likewise, for the involution $f_5$ we can find a curve $c$ such that $c$ and $f_5(c)$ form a type 2 standard pair if $h\geq 1$ (see Figure \ref{figIn5}). Since $g=2(k-1)+r+4h$, this translates to $g-r-2k=4h-2\geq 2$. If $h=0$, $f_{5*}$ acts trivially on the space $V_g$ (see Figure \ref{figIn5h}). Therefore the condition is necessary. \begin{figure}[!htbp]\begin{center}\includegraphics[scale=.4]{f5h} \caption{Involution $f_5$ on the surface with standard homology basis for $h=0$.} \label{figIn5h}\end{center} \end{figure} \textbf{Case 3:} $r=k_{-}=0$. \textbf{Subcase 3a:} the set of fixed points is separating. This is satisfied by the involution $f_6$. For the involution $f_6$ we can find a curve $c$ such that $c$ and $f_6(c)$ form a type 2 standard pair if $h\geq 3$ (see Figure \ref{figIn6}). Since $g=2h+2(k-1)$, this translates to $g-2k=2h-2\geq 4$. If $h=2$, the induced map $f_{6*}$ acts trivially on the space $V_{g}^{+}/\left\langle [c] \right\rangle$, and if $h=1$, the action is trivial on the space $V_g^+$; the case $h=0$ does not occur. Hence the condition is necessary (see Examples \ref{ex3} and \ref{ex2}). \textbf{Subcase 3b:} the set of fixed points is non-separating. This is satisfied by the involutions $f_7$ and $f_8$ when the orbifold is non-orientable and $f_9$ when the orbifold is orientable. The case when $k=0$ and the involution is free was shown in the proof of Theorem \ref{noninv} in Section \ref{sec:proofnoninv}; we now assume $k>0$. For the involution $f_7$ we can find a curve $c$ such that $c$ and $f_7(c)$ form a type 1 standard pair for $h-1\geq 1$ (see Figure \ref{figIn7}), which is always satisfied, as the surface is non-orientable (by the definition of $f_7$, $h-1$ is the number of pairs of crosscaps on $N_g$).
For the involution $f_8$ we can find a curve $c$ such that $c$ and $f_8(c)$ form a type 1 standard pair for $h-2\geq 1$ (see Figure \ref{figIn8}), which is also always satisfied for analogous reasons. For the involution $f_9$ we can find a curve $c$ such that $c$ and $f_9(c)$ form a type 1 standard pair if $2h+1\geq 1$ (see Figure \ref{figIn9}). This is always satisfied. This completes the proof of Theorem \ref{inv}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:normgens}.] The involution normally generating $\mathcal{M}(N_g)$ will be a variant of $f_4$ with $k_+=0$, which can be realised as a rotation by $\pi$ of an orientable surface of genus $(k_-+r-2)/2$ with $h$ 2-orbits blown up and $k_-$ isolated fixed points blown up. We will denote it by $f$. By Theorems \ref{inv} and \ref{twist} the normal closure of $f$ contains the twist subgroup $\mathcal{T}(N_g)$. In order to apply Theorem \ref{thm:twsb} to show that $f$ does not lie in the twist subgroup, we calculate the action of the induced map $f_{*}$ on the homology group $H_1(N_g;\mathbb{R})$ (see Figure \ref{figf4ch}). \begin{figure}[!htbp]\begin{center} \includegraphics[scale=.45]{f4ch} \caption{Involution $f$ with standard basis for $H_1(N_g;\mathbb{R})$.} \label{figf4ch}\end{center} \end{figure} The action is \begin{equation*} \begin{split} a_i\mapsto & -a_i-2\sum_{j=2i}^{k_-}d_j,\quad i=1,...,\lceil k_-/2\rceil-1\\ a_i\mapsto & -a_i,\quad i=\lceil k_-/2\rceil,...,l\\ b_i\mapsto & -b_i-2(d_{2i-1}+d_{2i}),\quad i=1,...,\lceil k_-/2\rceil\\ b_i\mapsto & -b_i,\quad i=\lceil k_-/2\rceil+1,...,l\\ c_i\mapsto & c_{i+h},\quad i=1,...,h-1\\ c_h\mapsto & -\sum_{i=1}^{2h-1}c_i-\sum_{j=1}^{k_-}d_j\\ c_{i+h}\mapsto & c_i,\quad i=1,...,h-1\\ d_j\mapsto & d_j,\quad j=1,...,k_--1. \end{split} \end{equation*} The determinant of $f_*$ is equal to $(-1)^h$. By Theorem \ref{thm:twsb} $f\notin\mathcal{T}(N_g)$ if $h$ is odd. For $N_g$ we can construct $f$ with any chosen $h\geq 1$, in particular with odd $h$, by adjusting $r$ and $k_-$ according to the Hurwitz-Riemann formula. This completes the proof of Theorem \ref{thm:normgens}. \end{proof} \section*{Acknowledgements} This paper is part of the author's Ph.D. thesis, written under the supervision of B{\l}a\.zej Szepietowski at the University of Gdańsk. The author wishes to express her gratitude for the supervisor's helpful suggestions.
\section{Supplemental Material} \subsection{Sample preparation} The samples were fabricated using a dry transfer technique modified with a rotation stage described previously \cite{Cao2016-ld,Cao2018-dj,Cao2018-bf} and originally measured in Ref.~\onlinecite{Cao2018-bf}. Monolayer graphene and hexagonal boron nitride (hBN) crystals of $10$--$\SI{30}{\nano\meter}$ thickness were exfoliated onto clean $\ce{SiO2}/\ce{Si}$ substrates, identified optically, and characterized with atomic force microscopy. The twisted bilayer graphene was constructed by ``tearing and stacking'' a clean monolayer with a precise rotational misalignment. A poly(bisphenol A carbonate) (PC)/polydimethylsiloxane (PDMS) stack on a glass slide was used to first pick up a piece of hBN at $\SI{90}{\celsius}$, after which the van der Waals forces between hBN and graphene were used to tear a graphene flake close to room temperature. The separated graphene pieces were rotated manually by a twist angle around $1.2$--$\SI{1.3}{\degree}$ and stacked on top of one another. The stack was encapsulated by a piece of hBN on the bottom and released at $\SI{160}{\celsius}$ onto a Cr/PdAu metal back gate on top of a highly resistive $\ce{SiO2}/\ce{Si}$ substrate. The samples were not annealed, in order to prevent the twisted bilayer graphene from relaxing back to Bernal-stacked bilayer graphene. The device geometry was defined using standard electron-beam lithography techniques and reactive ion etching with fluoroform and $\ce{O2}$ plasmas. Electrical contact was made to the MATBG with Cr/Au edge-contacted leads \cite{Wang614}. \subsection{Measurements} \begin{figure*}[h!tp] \centering \includegraphics[scale=1]{figs1.pdf} \caption{Circuit for the capacitance and loss tangent measurements.} \label{circuit} \end{figure*} With the exception of the high-temperature measurements of M2, sample M2 was measured in a helium-3 cryostat at $\SI{280}{\milli\kelvin}$ and sample M1 was measured in a dilution refrigerator at a temperature of $\SI{225}{\milli\kelvin}$. Capacitance and loss tangent measurements were carried out on homemade cryogenic capacitance bridges on the same chip carriers as the samples. Figure~\ref{circuit} shows the full circuit schematic. The sample is modeled as a series resistor and capacitor (shown in red). AC and DC voltages are applied to the sample through a cryogenic bias tee adjacent to the sample. An AC signal of variable phase and amplitude is applied to a fixed $\mathord{\sim}\SI{45}{\fF}$ reference capacitor to balance the sample capacitance at the bridge balance point. After an initial balance is achieved, changes in the sample capacitance are inferred from the off-balance voltage that accumulates at the balance point. The size of the reference capacitor is determined on a subsequent cooldown with a known $\SI{2}{\pF}$ capacitor. Three Fujitsu FHX35X high electron mobility transistors (HEMTs) are used in a double-amplifier configuration. The main amplification stage occurs at the gate of the labeled HEMT, which has been cleaved in half to minimize its stray capacitance. A second HEMT (also cleaved) is used as a variable resistor (typically set to about $\SI{100}{\mega\ohm}$) to pinch off the measurement HEMT's channel to about $\SI{100}{\kilo\ohm}$. A third, uncleaved follower HEMT drives the signal to a homemade wide-bandwidth amplifier located at the $\SI{1}{\kelvin}$ pot of the fridge and then to a similar amplification stage at room temperature, before the signal is measured on a lock-in amplifier (Signal Recovery SR7280).
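The logic of the initial balance and of the subsequent off-balance readout can be summarized in a short numerical sketch. The snippet below is purely illustrative and is not the acquisition code used for the measurements; it assumes an ideal balance node whose impedance is purely capacitive, and the parasitic capacitance \texttt{C\_par} as well as all variable names are our own assumptions.
\begin{verbatim}
# Illustrative sketch of the bridge balance logic (not the actual
# acquisition code).  Assumes a purely capacitive balance node:
#   V_bp = (V_s*C_s + V_ref*C_ref) / (C_s + C_ref + C_par)
C_ref = 45e-15   # reference capacitor, ~45 fF
V_s   = 2.8e-3   # RMS sample excitation, 2.8 mV
C_par = 1e-12    # hypothetical parasitic capacitance at the node

def v_ref_balance(C_s):
    """Reference excitation (amplitude and phase) that nulls the
    balance point for a given sample capacitance C_s."""
    return -V_s * C_s / C_ref

def delta_C(V_off, C_s):
    """First-order change in the sample capacitance inferred from
    the off-balance voltage V_off after an initial balance at C_s."""
    return V_off * (C_s + C_ref + C_par) / V_s
\end{verbatim}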
An excitation voltage of $\SI{2.8}{\milli\volt}$ RMS was applied at $\SI{150}{\kilo\hertz}$ for all measurements unless otherwise indicated. For a sample that can be modeled as a series resistor and capacitor, the output of the capacitance bridge has two components: an in-phase and an out-of-phase component. The in-phase component $X$ is given by \begin{equation} X = \frac{-C_\textrm{T}/C_\textrm{ref}}{1+\left(\omega R_\textrm{in-plane} C_\textrm{T}\right)^2} \end{equation} while the out-of-phase component $Y$ is given by \begin{equation} Y = \frac{\omega R_\textrm{in-plane} C^2_\textrm{T}/C_\textrm{ref}}{1 + \left(\omega R_\textrm{in-plane}C_\textrm{T}\right)^2} \end{equation} where $C_\textrm{ref}$ is the value of the fixed reference capacitor, $R_\textrm{in-plane}$ represents the effective series resistance of the MATBG, $C_\textrm{T}$ the total sample capacitance (which includes both the geometric and quantum contributions), and $\omega=2\pi f$ is the angular frequency of the measurement. The loss tangent is defined as the ratio between the resistive and reactive impedances of the sample $\frac{Z_\textrm{res}}{Z_\textrm{react}} = \frac{R_\textrm{in-plane}}{1/(\omega C_\textrm{T})}$. This quantity is exactly equal to the ratio of $Y$ to $-X$: \begin{equation} \textrm{loss tangent} = \frac{Y}{-X} = \omega R_\textrm{in-plane}C_\textrm{T}. \end{equation} In the thermodynamic limit, $\omega$ satisfies $\omega \ll \frac{1}{R_\textrm{in-plane}C_\textrm{T}}$. In other words, the excitation frequency is low enough that the sample has time to charge completely on each cycle of the excitation. This allows a decoupling of the capacitive and resistive impedances in the $X$ channel: \begin{equation} X \approx \frac{-C_\textrm{T}}{C_\textrm{ref}} \end{equation} and allows us to compute the total capacitance $C_\textrm{T}$ in units of the reference. In our analysis the role of the loss tangent is to confirm that our measurements are in the thermodynamic (low-frequency) limit. Whenever we see modulation of the in-phase component accompanied by a featureless loss tangent, we know that $\omega \ll \frac{1}{R_\textrm{in-plane}C_\textrm{T}}$ and modulation of the in-phase channel can be interpreted as modulation of the sample's capacitance (and hence changes in the compressibility). In Fig.~2(a) and (b) the loss tangent remains near zero and perfectly flat between the superlattice gaps at $\pm n_\textrm{s}$. This confirms that the changes in the capacitance at commensurate filling arise from the compressibility and not from a failure of the samples to charge completely. This rules out localization causing a large in-plane resistance without an attendant decrease in the thermodynamic density of states. The loss tangent spikes suddenly as the superlattice gaps are entered, indicating a sudden increase in the in-plane resistance. This is accompanied by a sharp decrease in the capacitance. The decrease in capacitance is in part due to a decrease in the compressibility as the Fermi level enters a bandgap, but it is also related to a failure of the sample to charge completely at the given measurement frequency. In this limit the term $\left(\omega R_\textrm{in-plane} C_\textrm{T}\right)^2$ dominates the denominator of $X$ and limits our ability to be quantitative about the compressibility. The same is true at high magnetic field when the Fermi level lies in a cyclotron or exchange gap. The substantial reduction in the density of states causes the in-plane resistance to dominate our measurement.
This does not affect a qualitative discussion of the compressibility evolution in magnetic field (e.g. the emergence of exchange gaps or trajectories of insulating features), but does preclude a quantitative measure of the compressibility. \subsection{Capacitance corrections and background subtraction} Due to slight mismatches in the cabling of the reference and sample lines (arising from different attenuators), as well as slight offsets in the relative phase of the excitation and reference voltage sources, our measurements show a constant offset in the out-of-phase component (which for small values of $Y$ is approximately equal to the loss tangent) that is not attributable to the sample. In order to correct for this small phase shift we find the values of the in-phase and out-of-phase components of our signal near a highly compressible state that should have minimal in-plane resistance and an extremely small out-of-phase component. We apply a rotation matrix $M(\theta)$ (typically the rotation angle $\theta \approx \SI{1}{\degree}$) to our in- and out-of-phase signal measurements to correct this artifact. This leaves the in-phase component virtually unchanged and shifts the out-of-phase signal to the correct baseline. Our samples have a substantial stray capacitance of around $\SI{50}{\fF}$ that manifests in our measurement as a constant background in parallel with the sample $C_\textrm{raw} = C_\textrm{back} + C_\textrm{T} = C_\textrm{back} + \left(C_\textrm{geo}^{-1}+C_\textrm{q}^{-1}\right)^{-1}$. In order to accurately subtract this constant background, we utilize the field-dependent capacitance measurements where we can accurately determine the minima of the cyclotron gaps of the Landau fan emerging from charge neutrality (see Fig.~4(a)). Regardless of the underlying band structure, all Landau levels can be characterized by a field-dependent orbital degeneracy $\phi/\phi_0 = BAe/h$ where $\phi$ is the total magnetic flux through the sample, $\phi_0$ is the flux quantum, $B$ is the magnetic field, $A$ the sample area, $e$ is the elementary charge, and $h$ is Planck's constant. This orbital degeneracy is multiplied by a factor of $8$ arising from the spin, valley, and layer degrees of freedom. Therefore, between the filling factors $\nu = \pm 4$, we know the total charge accumulated in the sample is given by $8BAe^2/h$. The total charge accumulated in the sample is also given by integrating the total capacitance: $Q = \int_{\Delta V} C_\textrm{T} dV$, where the limits of integration are determined by the gate voltages in Fig.~4(a). Therefore, the appropriate value of $C_\textrm{back}$ is found by enforcing \begin{equation} 8BAe^2/h = Q = \int_{\Delta V} \left(C_\textrm{raw} - C_\textrm{back}\right)dV. \end{equation} In this analysis we assume that the strongest gaps emerging from charge neutrality correspond to $\nu=\pm4$ as expected for a twisted bilayer graphene system. This is confirmed by calculating the slope of the gaps in Fig.~4(a) and using the relationship \begin{equation} \nu = \frac{nA}{\phi/\phi_0} = \frac{n\phi_0}{B} = \frac{\left(\overline{C_\textrm{T}}/A\right)\Delta V\phi_0}{eB} \end{equation} where $\overline{C_\textrm{T}}$ is the average total capacitance between the LL minima.
Because $\overline{C_\textrm{T}}$ is roughly equal to the geometric value of the capacitance, which is well approximated by a parallel-plate model, we can write $\overline{C_\textrm{T}}/A \approx \epsilon \epsilon_0/d$, where $\epsilon$ is the relative dielectric constant of the hBN ($\mathord{\sim} 4.5$), $d$ is the thickness of the dielectric ($\mathord{\sim}\SI{30}{\nano\meter}$) determined from atomic force microscopy, and $\epsilon_0$ is the vacuum permittivity, yielding \begin{equation} \nu \approx \frac{\epsilon \epsilon_0\Delta V\phi_0}{deB}. \end{equation} After extracting the slope $\frac{B}{\Delta V}$ of the gaps in Fig.~4(a) and using estimated values for $\epsilon$ and $d$, we can verify that $\nu = \pm4$. Additionally, this allows us to verify that our background subtraction is reasonable by confirming $\overline{C_\textrm{T}} \approx C_\textrm{raw} - C_\textrm{back}$. \subsection{Converting from gate voltage to carrier density} Unlike transport measurements, our capacitance technique allows us to convert gate voltage to carrier density exactly. Typically, in transport the gating capacitance is taken to be a constant $\overline{C_\textrm{T}}$ (extracted from Landau fans or modeled with parallel-plate geometries) and is often described as purely geometric; in reality it is an average value of the {\em total} capacitance, which includes density-dependent contributions from the quantum capacitance. In most samples $C_\textrm{q} \gg C_\textrm{geo}$ so that $C_\textrm{T} \approx C_\textrm{geo}$, allowing this approximation to hold. For our measurements we can simply integrate the total capacitance with respect to gate voltage to directly calculate the induced charge density: \begin{equation} n(V) = \frac{1}{Ae}\int_{V_\textrm{Dirac}}^V C_\textrm{T}(V') dV' \end{equation} where we have set the carrier density at the gate voltage associated with the Dirac point to $0$. For our samples the quantum capacitance $C_\textrm{q}$ is always much larger than the geometric capacitance everywhere between the superlattice gaps. Therefore, the carrier density is roughly proportional to the gate voltage, but there are subtle nonlinearities near locations of relatively small quantum capacitance (e.g.~near charge neutrality) that are captured in this conversion. \subsection{Determining the geometric capacitance} Our quantitative analysis relies on estimating the value of the geometric capacitance $C_\textrm{geo}$. In order to estimate $C_\textrm{geo}$ we use the model $\mathcal{C}(n)$ for the total capacitance: \begin{equation} \mathcal{C}(n) = \left(\frac{1}{C_\textrm{geo}} + \frac{1}{Ae^2\partial n/\partial\mu(n)}\right)^{-1}. \end{equation} For a bilayer graphene system with eight-fold degeneracy the density of states is given by \begin{equation} \frac{\partial n}{\partial \mu} = \frac{4\left|E_\textrm{F}\right|}{\pi\left(\hbar v_\textrm{F}\right)^2} = \frac{2\sqrt{2}}{\sqrt{\pi}\hbar v_\textrm{F}}\sqrt{|n|} \end{equation} where $E_\textrm{F}$ is the Fermi energy, $v_\textrm{F}$ the Fermi velocity, and $\hbar$ is the reduced Planck constant. Additionally, we take into account the broadening due to the spatial disorder profile by convolving $\partial n/\partial \mu$ with a Gaussian $g(n) = e^{-n^2/2\Gamma^2}/(\sqrt{2\pi}\Gamma)$, where $\Gamma$ characterizes the scale of the disorder broadening. We fit $\partial n/\partial \mu * g(n)$ to our data to determine best-fit values of $v_\textrm{F}$, $\Gamma$, and $C_\textrm{geo}$.
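As an illustration of this fitting procedure, the following sketch implements the model $\mathcal{C}(n)$ with the disorder convolution. It is not the analysis code used for this work: the sample area, the initial parameter values, and all names are invented for the example, and a uniform density grid symmetric about $n=0$ is assumed for the discrete convolution.
\begin{verbatim}
# Minimal sketch of the C_geo fit (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

e_ch = 1.602176634e-19   # elementary charge (C)
hbar = 1.054571817e-34   # reduced Planck constant (J s)
A    = 1.0e-10           # assumed sample area (m^2)

def model_capacitance(n, v_F, Gamma, C_geo):
    # Density of states of the eight-fold-degenerate Dirac bands,
    # dn/dmu = (2*sqrt(2)/(sqrt(pi)*hbar*v_F)) * sqrt(|n|) ...
    dndmu = (2*np.sqrt(2)/(np.sqrt(np.pi)*hbar*v_F))*np.sqrt(np.abs(n))
    # ... convolved with a normalized Gaussian of width Gamma to model
    # disorder broadening (n: uniform grid symmetric about zero).
    g = np.exp(-n**2/(2*Gamma**2))
    g /= g.sum()
    dndmu = np.convolve(dndmu, g, mode="same")
    # Series combination of geometric and quantum capacitance; the
    # tiny offset avoids division by zero at the band edges.
    C_q = A * e_ch**2 * dndmu + 1e-30
    return 1.0/(1.0/C_geo + 1.0/C_q)

# Example usage with measured arrays n_data (m^-2) and C_data (F):
# popt, _ = curve_fit(model_capacitance, n_data, C_data,
#                     p0=(1.0e6, 5.0e14, 20.0e-15))
\end{verbatim}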
The best-fit value of $C_\textrm{geo}$ agrees nicely with the peaks of the highly compressible Landau levels, which we expect to lie very close to the geometric capacitance, and possibly in excess of it if negative compressibility is present \cite{Eisenstein-1992,Yu2013-aa}. See Fig.~\ref{geo}(a) and (b) for plots of $C_\textrm{geo}$ overlaid with the field-dependent capacitance data. In order to account for the fact that our fit-derived value may deviate from the true value of the geometric capacitance, we estimate an uncertainty $\delta c = \SI{0.014}{\fF}$ in $C_\textrm{geo}$ based on a visual analysis of the compressible Landau level peaks. The lower bound of our uncertainty corresponds to assuming that the density of states maxima in the zero-field capacitance data are nearly perfectly compressible. This is a reasonable lower bound assuming that the density of states peaks do not exhibit negative compressibility. This is justifiable if we compare these maxima to the highly compressible Landau levels between $\nu=-12$ and $\nu=-4$ at $B=\SI{3}{\tesla}$, a large enough field for good Landau quantization but low enough that the exchange gaps and ``Hofstadter'' features at high magnetic field do not overlap. The capacitance signal forms clear plateaus with no sign of negative compressibility and remains larger in value than the zero-field data at all densities. See panel (b) of Fig.~\ref{geo}, where the blue trace saturates close to the fit-derived value of $C_\textrm{geo}$ between about $n = \SI{-1e12}{\per\centi\meter\squared}$ and $\SI{-0.5e12}{\per\centi\meter\squared}$ and remains larger than all capacitance values in the red trace. The upper bound of $C_\textrm{geo}$ is placed near the highest capacitance values recorded at high magnetic field, where we expect the capacitance peaks to be highly compressible and possibly enhanced beyond the geometric value if negative compressibility is present. The role of the geometric capacitance uncertainty and its propagation in the thermodynamic gap and bandwidth calculations are detailed below. \begin{figure}[h!tp] \centering \includegraphics[scale=1]{figs2.pdf} \caption{\textbf{(a)} Plot of capacitance traces at $B=0$ (red), $\SI{3}{\tesla}$ (blue), and $\SI{9}{\tesla}$ (green) as well as the estimate for $C_\textrm{geo}$ (black dashed trace). The gray region represents the estimated uncertainty in $C_\textrm{geo}$. \textbf{(b)} Zoom-in of (a).} \label{geo} \end{figure} \subsection{Uncertainty in thermodynamic gaps} The inverse compressibility is integrated to extract the chemical potential $\mu$ as a function of carrier density $n$. A small error (compared to the magnitude of $C_\textrm{geo}$) in the geometric capacitance causes a spurious linear background in the overall slope of $\mu(n)$. If we take the true geometric capacitance to be $C_\textrm{geo}$ and $\delta c$ to be a small error, we compute \begin{equation} \frac{1}{C_\textrm{T}} - \frac{1}{C_\textrm{geo} + \delta c} \approx \frac{1}{C_\textrm{T}} - \frac{1}{C_\textrm{geo}} + \frac{\delta c}{C_\textrm{geo}^2}. \label{error1} \end{equation} Multiplying through by $Ae^2$, we can cast Eq.~\ref{error1} in terms of the inverse compressibility and an associated deviation: \begin{equation} Ae^2\left(\frac{1}{C_\textrm{T}} - \frac{1}{C_\textrm{geo} + \delta c} \right) \approx \frac{\partial\mu}{\partial n} + \frac{Ae^2\delta c}{C_\textrm{geo}^2}.
\label{error2} \end{equation} The change in computed chemical potential across a range of density $\Delta n$ is therefore \begin{equation} \int_{\Delta n} Ae^2\left(\frac{1}{C_\textrm{T}} - \frac{1}{C_\textrm{geo} + \delta c} \right)dn = \Delta \mu + \frac{Ae^2\delta c}{C_\textrm{geo}^2}\Delta n \label{error3} \end{equation} where $\Delta \mu$ represents the true change in chemical potential and $\frac{Ae^2\delta c}{C_\textrm{geo}^2}\Delta n$ the associated systematic error. If we use the value $\delta c = \SI{0.014}{\fF}$ based on our calculated estimate of $C_\textrm{geo}$ and a visual analysis of the field-dependent data, the errors associated with the gaps at $n_\textrm{s}/4$ and $n_\textrm{s}/2$ are found to be $\delta\left(\Delta_{n_\textrm{s}/4}\right) = \SI{1.0}{\milli\electronvolt}$ and $\delta\left(\Delta_{n_\textrm{s}/2}\right) = \SI{1.2}{\milli\electronvolt}$, respectively. The larger error for $\Delta_{n_\textrm{s}/2}$ is due to its slightly larger span in carrier density $\Delta n$. \subsection{Capacitance measurements at $\SI{5}{\kelvin}$} Upon warming to $\SI{5}{\kelvin}$ the capacitance of sample M2 shows essentially no change near the commensurate filling gaps on the electron side at zero magnetic field, as shown in Fig.~\ref{hightemp}. The purple (orange) trace shows the capacitance at zero magnetic field and $\SI{280}{\milli\kelvin}$ ($\SI{5}{\kelvin}$). The lack of temperature evolution implies that the energy gaps at commensurate filling are well in excess of $\SI{5}{\kelvin}$, consistent with our gap estimation in Fig.~2. At $B=\SI{9}{\tesla}$ the capacitance minima are suppressed upon warming from $\SI{280}{\milli\kelvin}$ (blue) to $\SI{5}{\kelvin}$ (red). At high magnetic field our measurements are no longer in the low-frequency limit due to the large in-plane resistivity of the sample while in a quantum Hall gap. The capacitance minima at base temperature are exaggerated by the failure of the sample to charge completely on each excitation cycle. Upon warming to $\SI{5}{\kelvin}$ the in-plane conductivity increases, leading to a reduction in the capacitance features. This evolution at high magnetic field with temperature is likely not related to a change in the thermodynamic density of states, but rather to in-plane transport features. \begin{figure}[h!tp] \centering \includegraphics[scale=1]{figs3.pdf} \caption{Plot of the temperature dependence of the capacitance on the electron side near commensurate fillings. The purple (orange) trace shows the zero-field capacitance at $\SI{280}{\milli\kelvin}$ ($\SI{5}{\kelvin}$) while the blue (red) trace shows the capacitance at $B=\SI{9}{\tesla}$ at $\SI{280}{\milli\kelvin}$ ($\SI{5}{\kelvin}$). The $\SI{9}{\tesla}$ data has been shifted down for clarity. The capacitance was measured with a $\SI{2.8}{\milli\volt}$ RMS excitation at $\SI{30}{\kilo\hertz}$.} \label{hightemp} \end{figure} \subsection{Bandwidth estimation} When plotting the compressibility in Fig.~3(d), uncertainty in the precise value of $C_\textrm{geo}$ can contribute an uncertainty in the movement of the chemical potential with density. As detailed previously in Eqs.~\ref{error1}--\ref{error3}, the uncertainty in the shift of the chemical potential $\delta(\Delta \mu)$ can be related to the uncertainty in the geometric capacitance $\delta c$ through \begin{equation} \delta(\Delta \mu) = \frac{Ae^2\delta c}{C_\textrm{geo}^2}\Delta n.
\end{equation} Because our total bandwidth spans a density range of approximately $\Delta n = \SI{6e12}{\per\centi\meter\squared}$, the associated error in the bandwidth is $\delta(\Delta \mu) \approx \SI{10}{\milli\eV}$. In Fig.~3(d) we find a bandwidth of approximately $\SI{35}{\milli\eV}$. Incorporating our estimated uncertainty, the bandwidth has a range that spans approximately $25$--$\SI{45}{\milli\eV}$. In Fig.~\ref{compcgeo} we plot the compressibility for our best estimate of $C_\textrm{geo}$ as well as for the upper and lower bounds of our uncertainty estimate $C_\textrm{geo} \pm \delta c$ for $\delta c = \SI{0.014}{\fF}$. The bandwidth range is roughly $25$--$\SI{45}{\milli\eV}$, in line with our error calculation. The vertical axis is very sensitive to the specific choice of $C_\textrm{geo}$ whenever $C_\textrm{T} \approx C_\textrm{geo}$, which is why the highly compressible peaks appear at such different values of $\partial n/\partial \mu$. The lower compressibility features (e.g. charge neutrality, commensurate filling on the electron side) show much less variation. The plot in Fig.~\ref{compcgeo}(c) has been cut off above $\SI[per-mode=reciprocal]{15}{\per\electronvolt\per\nano\meter\squared }$, where the central density of states maximum rises to about $\SI[per-mode=reciprocal]{55}{\per\electronvolt\per\nano\meter\squared }$, in order to more easily compare the low-compressibility features between panels. \begin{figure}[h!tp] \centering \includegraphics[scale=1]{figs4.pdf} \caption{\textbf{(a)} Plot of $\partial n/\partial \mu$ for the best estimate $C_\textrm{geo} = \SI{20.213}{\fF}$. \textbf{(b)} Plot of $\partial n/\partial \mu$ for $C_\textrm{geo} + \delta c = \SI{20.227}{\fF}$. \textbf{(c)} Plot of $\partial n/\partial \mu$ for $C_\textrm{geo} - \delta c = \SI{20.199}{\fF}$. The central density of states peak on the electron side rises to about \SI[per-mode=reciprocal]{55}{\per\electronvolt\per\nano\meter\squared}.} \label{compcgeo} \end{figure} \subsection{Loss tangent in finite magnetic field} In Fig.~\ref{lossm2}(a) we plot the loss tangent of device M2 as a function of carrier density and magnetic field in order to reveal additional information about the in-plane conductivity of the sample. The loss tangent is given by $\omega R_\textrm{in-plane}C_\textrm{T}$. Increases in the loss tangent correspond to increases in the in-plane resistance (which tend to dominate any changes in $C_\textrm{T}$ at high magnetic field for this device) as a resistive state is entered and serve as a qualitative measure of the in-plane bulk transport. The bright features emanating from charge neutrality are the cyclotron and exchange gaps arising from the quantum Hall regime, whereas the bright features that emanate from high magnetic field and terminate near the commensurate fillings arise from fractal ``Hofstadter'' minibands due to the interaction of the magnetic field and superlattice potential. Importantly, the half-filling state in the electron-doped regime shows faint vertical features which appear to terminate around $\SI{3}{\tesla}$, though they become partially obscured by the coexisting fractal miniband features. This indicates that the resistive features survive to at least $\SI{3}{\tesla}$. There are multiple closely spaced resistive features which are grouped around the half-filling location, indicating possible inhomogeneity in the rotation angle across the lateral extent of the sample.
Importantly, we do not see doubling of the central Landau fan, indicating that the charge density across the sample remains uniform. We do not see noticeable resistive features associated with either the hole states or the quarter-filled electronic state. In panel (b) of Fig.~\ref{lossm2} we plot the same filling factor schematic as in Fig.~4(b). The Wannier diagram in panel (c) of Fig.~\ref{lossm2} shows the possible fractal miniband gaps in grey, with the associated gaps observed in (a) color-coded to match (b). \begin{figure*}[h!tp] \centering \includegraphics[scale=1]{figs5.pdf} \caption{\textbf{(a)} Plot of the loss tangent of device M2 as a function of carrier density and magnetic field. Weak resistive features around $n_\textrm{s}/2$ are visible and track vertically with magnetic field until being obscured by the fractal miniband gaps around $\SI{3}{\tesla}$. Multiple vertical features adjacent to one another indicate that the twist angle may be inhomogeneous throughout the entire sample. The color scale has been suppressed above $0.004$ in order to reveal weaker features at low magnetic field. The loss tangent was measured with a $\SI{2.8}{\milli\volt}$ RMS excitation at $\SI{150}{\kilo\hertz}$. \textbf{(b)} Schematic from Fig.~4(b) tracking the filling factors of the field-induced gaps. \textbf{(c)} Wannier diagram of associated field-induced gaps observed in (b).} \label{lossm2} \end{figure*} \subsection{Magnetic field dependence of device M1} In Fig.~\ref{fieldm1} we plot the magnetic field dependence of device M1. Adjacent to the main Landau fan is a second, weaker fan emanating from a displaced Dirac point, indicating that the sample contains a second region at a slightly different doping. This may be associated with a region of the device adjacent to one of the ohmic contacts away from the central portion of the etched Hall bar geometry. Similar incompressible phases (red lines in Fig.~\ref{fieldm1}(b)) emerge from high magnetic field on the electron-doped side and tend towards commensurate filling locations on the abscissa, as discussed in the main text in Fig.~4(a) for device M2. Here, the gaps emanating from high field do not appear doubled as in Fig.~4, indicating improved homogeneity in the rotation angle. \begin{figure*}[h!tp] \centering \includegraphics[scale=1]{figs6.pdf} \caption{\textbf{(a)} Magnetic field dependence of device M1. The color scale has been suppressed below $\SI{20.5}{\fF}$. \textbf{(b)} Schematic showing some of the important incompressible phases of device M1. The black lines indicate cyclotron or exchange gaps arising from the central Landau fan. The blue lines indicate the gaps arising from an additional, weaker Landau fan, indicating that device M1 contains a second region at slightly different doping. Filling factors for both fans are labeled. The red lines indicate field-induced gaps which terminate at the commensurate filling associated with fractal miniband gaps as discussed in the main text in Fig.~4. In contrast to device M2, the fractal miniband gaps do not appear doubled, indicating improved twist angle uniformity. The capacitance was measured with a $\SI{2.8}{\milli\volt}$ RMS excitation at $\SI{150}{\kilo\hertz}$ at $\SI{225}{\milli\kelvin}$.} \label{fieldm1} \end{figure*} \end{document}
\section{Introduction} Let $\mathcal A$ be the mod-$2$ Steenrod algebra, minimally generated by the Steenrod squares $\Sq^{2^s}$ ($s\geq0$), and $\mathcal A(n)\subseteq\mathcal A$, for $n\geq0$, the finite sub-Hopf algebra generated as an algebra by $\{\Sq^{2^s} \mid s\leqslant n\}$. A (left) $\mathcal A$-module~$M$ is \emph{stably realizable} if there exists a spectrum $X$ such that as $\mathcal A$-modules, \[ H^*(X) \underset{\text{def}}{=} H^*(X;\F_2) \cong M. \] For finite $\mathcal A$-modules, this is equivalent to the existence of a space $Z$ such that $\widetilde{H}^*(Z) \cong \Sigma^s M$ for some~$s$. This number $s$ is bounded from below by the \emph{unstable degree} $\sigma(M)$ of $M$, i.e. the minimal number~$t$ such that $\Sigma^tM$ satisfies the instability condition for modules over $\mathcal A$. We say that $M$ is \emph{optimally realizable} if there exists a space $Z$ such that $\widetilde{H}^*(Z) \cong \Sigma^{\sigma(M)}M$. We consider two constructions of new Steenrod modules from old. Firstly, for a left $\mathcal A$-module $M$, the linear dual $M^\vee=\Hom(M,\F_2)$ becomes a left $\mathcal A$-module using the antipode of $\mathcal A$. Secondly, the \emph{iterated double} $\dbl M(i)$ is the module which satisfies \begin{align*} \dbl M(i)^n = \begin{cases} M^{n/2^i} & \text{if $2^i \mid n$}, \\ 0 & \text{otherwise}, \end{cases} \\ \intertext{and for $x \in \dbl M(i)^n$,} \Sq^{2^k}x = \begin{cases} 0 & \text{if $k<i$}, \\ \Sq^{2^{k-i}}x & \text{if $k \geqslant i$}. \end{cases} \end{align*} We also set $\dbl M(0)=M$. Let $J$ be the quotient of $\mathcal A$ by the left ideal generated by $\Sq^3$ and $\Sq^{i}\mathcal A$ for $i \geqslant 4$. The main result of this paper is the following. \begin{thm}\label{thm:main} The modules $\dbl J(i)$ and $\dbl J(i)^\vee$ are optimally realizable for $i \leqslant 2$ and not stably realizable for $i >2$. \end{thm} The module $J$ in this theorem is known as the \emph{Joker}, although the name has been used more often colloquially than in written articles. In \cite{baker:joker}, the first author showed all cases of Theorem~\ref{thm:main} with the exception of the optimal realizability of $\dbl J(2)$ and $\dbl J(2)^\vee$: \begin{thm}[{\cite{baker:joker}}] The module $\dbl J(k)$ is stably realizable iff $k \leqslant 2$ iff $\dbl J(k)^\vee$ is stably realizable. For $k \leqslant 1$, the modules $\dbl J(k)$ and $\dbl J(k)^\vee$ are optimally realizable. \end{thm} The main new result of this paper is thus the case $k=2$. We will, however, also give an alternative proof of the cases where $k<2$, which may serve as an illustration of how the more complicated case of $k=2$ works. For the case of $\dbl J(2)$, we observe that this module appears as a quotient module of the rank-$4$ Dickson algebra, which is realized by the exotic $2$-compact group $B\mathrm{DW}_3$ constructed by Dwyer and Wilkerson. Our approach to constructing an optimal realization is to map a suitable skeleton of $B\mathrm{DW}_3$ to the spectrum $\mathrm{tmf}/2$ of topological modular forms modulo $2$ so that a skeleton of the homotopy fiber of this map has cohomology $\dbl J(2)$. The existence of this map $\alpha\colon (B\mathrm{DW}_3)^{(24)} \to \underline{\smash{\mathrm{tmf}/2}}_{14}$ is equivalent to the survival of a class $x_{-15} \in H^{15}((B\mathrm{DW}_3)^{(24)})$ in the $\mathcal A(2)$-based Adams spectral sequence. This is the content of Section~\ref{sec:quadrubleJ}.
One might interpret the survival of this class as (albeit weak) evidence that a faithful $15$-dimensional spherical homotopy representation of $\mathrm{DW}_3$ exists, a question we hope to address in a later paper. The lowest-dimensional known faithful homotopy representation of $\mathrm{DW}_3$ at the time of this writing has complex dimension $2^{46}$ \cite{ziemianski:di-4}. Our alternative proof for $\dbl J(1)$ in Section~\ref{sec:doubleJ} follows the same line of reasoning as for $\dbl J(2)$, but using the rank-$3$ Dickson algebra (realized by the classifying space of the Lie group $G_2$) and real $K$-theory instead of topological modular forms. In this case, we are able to construct the analog of the map $\alpha$ geometrically. \subsection*{Conventions} We assume that all spaces and spectra are completed at the prime~$2$, although our arguments can be easily modified to work globally. We will often assume that we are working with CW complexes which have been given minimal cell structures. All cohomology is with coefficients in $\F_2$, the field with two elements. For spaces, we use unreduced cohomology but for spectra reduced cohomology; hopefully the usage should be clear from the context. \subsection*{Acknowledgements} The first author would like to thank the following: The Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme \emph{Homotopy Harnessing Higher Structures} when work on this paper was undertaken (this work was supported by EPSRC grant number EP/R014604/1); Kungliga Tekniska H\"ogskolan and Stockholms Universitet for supporting a visit by A.~Baker in Spring of 2018; Okke van Garderen and Sarah Kelleher for listening. We would also like to thank the anonymous referee for their comments that helped us improve this paper. \section{Realizability of modules over the Steenrod algebra} Recall that an $\mathcal A$-module is called \emph{unstable} if \[ \Sq^i(x) = 0 \text{ for } i > |x|. \] \begin{defn} Let $M$ be an $\mathcal A$-module. The \emph{unstable degree} $\sigma(M)$ of $M$ is the minimal $s \in \Z$ such that $\Sigma^s M$ is an unstable $\mathcal A$-module. \end{defn} Obviously, $\sigma(M)$ is a finite number if $M$ is a nontrivial finite module (but may be infinite otherwise). As the anonymous referee pointed out, using the lower-indexing convention $\Sq_i x = \Sq^{|x|-i}x$ for the Steenrod squares, \[ -\sigma(M) = \inf\{i \mid \Sq_i \neq 0\} \] since a module is unstable iff $\Sq_i = 0$ for all $i<0$. If a finite module is stably realizable, Freudenthal's theorem implies that it is realizable by a space after sufficiently high suspension (cf. Proposition~\ref{prop:stablyrealizingduals}). If $M$ is stably realizable by a spectrum $X$ then $M^\vee$ is stably realized by the Spanier-Whitehead dual $DX = F(X,\S^0)$. Only a finite number of iterated doubles of $M$ can ever be stably realizable by the solution of the Hopf invariant~$1$ problem. \begin{example} Any $\mathcal A$-module $M$ of dimension $1$ over $\F_2$ is optimally realizable (by a point). Let $M$ be cyclic of dimension $2$ over $\F_2$, thus $M \cong \F_2\langle \iota,\Sq^{2^i}\iota\rangle$. By the solution of the Hopf invariant one problem, $M$ is stably realizable if and only if $i=0,1,2,3$. In each case, $M$ is optimally realizable by the projective plane over $\R$, $\CC$, the quaternions, and the octonions, respectively.
\end{example} \begin{example} A simple example of a module that is not optimally realizable is the ``question mark'' complex \[ \begin{tikzpicture}[scale=0.4] \node (e) at (0,0) [gen]{}; \node (1) at (0,1)[gen]{}; \node (2) at (0,3)[gen]{}; \node at (-1,0) {0}; \node at (-1,1) {1}; \node at (-1,3) {3}; \draw (e) to[sq1] (1); \draw (1) to[sq2r] (2); \end{tikzpicture} \] or $M=\mathcal A/(\Sq^2,\Sq^3,\dots)$. This picture, and others to follow, are to be read as follows. The numbers on the left denote the dimension. A dot denotes a copy of $\F_2$ in the corresponding dimension. A straight line up from a dot~$x$ to a dot~$y$ indicates that $\Sq^1x=y$, and a curved line similarly indicates a nontrivial operation $\Sq^2$. The unstable degree of this module is $1$, but it is not optimally realizable because a hypothetical space $X$ with $H^*(X)=\Sigma M$, $H^1(X)=\langle x\rangle$ would have $x^4=(x^2)^2 \neq 0$ but $x^3=0$. \end{example} \section{The family of Jokers} The finite cyclic $\mathcal A$-module \[ J = \mathcal A/ (\mathcal A\Sq^3+\mathcal A\Sq^4\mathcal A+\mathcal A\Sq^8\mathcal A+\cdots) \] is called the Joker. Its dimension over $\F_2$ is $5$, having dimension $1$ in each degree $0 \leqslant d \leqslant 4$; a basis is given by \[ \{1,\Sq^1,\Sq^2,\Sq^2\Sq^1,\Sq^2\Sq^2=\Sq^1\Sq^2\Sq^1\}, \] or pictorially, \[ \begin{tikzpicture}[scale=0.6] \node (e) at (0,0) [gen]{}; \node (1) at (0,1)[gen]{}; \node (2) at (0,2)[gen]{}; \node (21) at (0,3)[gen]{}; \node (22) at (0,4)[gen]{}; \foreach \i in {0,...,4} {\node at (-1,\i) {\i}; } \draw (e) to[sq1] (1); \draw (21) to[sq1] (22); \draw (e) to[sq2r] (2) to[sq2r] (22); \draw (1) to[sq2l] (21); \end{tikzpicture} \] The Joker appears in several contexts in homotopy theory. In \cite{adams-priddy}, Adams and Priddy showed that $J$ generates the torsion in the Picard group of $\mathcal A(1)$-modules. The Joker also appears regularly in projective resolutions of the cohomology of common spaces (such as real projective spaces) over $\mathcal A$ or $\mathcal A(1)$. Its linear dual $J^\vee = \Hom(J,\F_2)$ is also a cyclic left module via the antipode $\chi$ of $\mathcal A$. It is not isomorphic to $J$, even by a shift, since $\chi(\Sq^4)=\Sq^4+\Sq^1\Sq^2\Sq^1$, which means that $\Sq^4\neq0$ on $J^\vee$. Pictorially, \[ J^\vee = \begin{matrix}\begin{tikzpicture}[scale=0.6] \node (e) at (0,0) [gen]{}; \node (1) at (0,1)[gen]{}; \node (2) at (0,2)[gen]{}; \node (21) at (0,3)[gen]{}; \node (22) at (0,4)[gen]{}; \draw (e) to[sq1] (1); \draw (21) to[sq1] (22); \draw (e) to[sq2r] (2) to[sq2r] (22); \draw (1) to[sq2l] (21); \draw (e) to[sq4l] (22); \end{tikzpicture} \end{matrix} \] Here and in what follows, the slanted square brackets denote nontrivial operations $\Sq^4$ or $\Sq^8$. Note that $J^\vee\cong J$ as $\mathcal A(1)$-modules. The $k$-fold iterated doubles of these are $\dbl J(k)$ and $\dbl J(k)^\vee$, where $\dbl J(1)$ has basis vectors in even dimensions $0$, $2$, $4$, $6$, $8$, $\dbl J(2)$ has basis vectors in dimensions $0$, $4$, $8$, $12$, $16$, and so on. Clearly, the unstable degrees are given by $\sigma(\dbl J(k))=2 \cdot 2^k$ (the bottom cohomology class supports a nontrivial operation $\Sq^{2\cdot 2^k}$), and $\sigma(\dbl J(k)^\vee)= 4 \cdot 2^k$ (the bottom cohomology class also supports a nontrivial operation $\Sq^{4 \cdot 2^k}$).
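For instance, when $k=0$ one can check the instability condition directly on the basis of $J$ listed above: in $\Sigma^2 J$ the bottom class $\iota$ lies in degree $2$ and supports only $\Sq^1$ and $\Sq^2$, while the remaining basis elements lie in degrees $3$, $4$ and $5$ and support operations of degree at most $2$; on the other hand, in $\Sigma J$ the bottom class would lie in degree $1$ while supporting a nontrivial $\Sq^2$. Hence $\sigma(J)=2$, as claimed.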
Note that if $\dbl J(k)$ is optimally realized by a space, then that space is weakly equivalent to a CW complex $X$ with cells in dimensions $i\cdot 2^k$, where $i=2,\dots,6$, hence $X$ has dimension $6\cdot 2^k$. The ring structure of the cohomology is implied by the instability condition for $\mathcal A$-algebras, namely, $\Sq^i(x)=x^2$ when $|x|=i$: \[ H^*(X) = \F_2[x_2,x_3]/(x_2,x_3)^3 \quad (|x_i|=i\cdot 2^k). \] If $\dbl J(k)^\vee$ is optimally realizable by a space, then that space is weakly equivalent to a CW complex $Y$ with cells in dimensions $j \cdot 2^k$, where $j=4,5,6,7,8$. For dimensional reasons, the ring structure of the cohomology has to be \[ H^*(Y) = \F_2[x_4,x_5,x_6,x_7]/\bigl((x_4^3)+x_4(x_5,x_6,x_7)+(x_5,x_6,x_7)^2\bigr) \quad (|x_i|=i\cdot 2^k). \] \section{Dual Jokers} If $X$ is a spectrum with $H^*(X) \cong \dbl J(k)$ then it is obvious that the Spanier-Whitehead dual $DX$ realizes $\dbl J(k)^\vee$, up to a degree shift, i.e., \[ \Sigma^{4\cdot 2^k} H^*(DX) \cong \dbl J(k)^\vee. \] Unstably, the situation is a bit more complicated, but follows from a more general consideration. \begin{lemma}\label{lemma:topdegreecorrection} Let $M$ be a finite $\mathcal A$-module with top nonvanishing degree~$n$ and $Y$ a space with an injective $\mathcal A$-module map $f\colon M \to H^*(Y)$ whose cokernel is $(n-1)$-connected. Then there is a space $Z$ such that $H^*(Z) \cong M$ as $\mathcal A$-modules. \end{lemma} \begin{proof} Let $V$ be a complement of $\im(f)$ in $H^n(Y)$ and denote by $\alpha\colon Y \to K(V,n)$ its representing map. Let $Z$ be the $n$-skeleton of the homotopy fiber of $\alpha$. Then $H^*(Z) \cong M$. \end{proof} \begin{prop}\label{prop:stablyrealizingduals} Let $M$ be a finite, stably realizable, nonnegatively graded $\mathcal A$-module with top nonvanishing degree~$n$. Then $\Sigma^n M = H^*(Z)$ for some CW complex~$Z$. \end{prop} \begin{proof} Let $X$ be a spectrum such that $M = H^*(X)$ and consider the space $Y = \Omega^\infty \Sigma^n X$. Since for any $k$-connected spectrum $E$, the augmentation $\Sigma^\infty\Omega^\infty E \to E$ is $(2k+2)$-connected, the map $\Sigma^\infty Y \to \Sigma^n X$ is $2(n-1)+2 = 2n$-connected. Hence the induced map $H^i(\Sigma^n X) \to H^i(Y)$ is an isomorphism for $i<2n$ and injective for $i=2n$. By Lemma~\ref{lemma:topdegreecorrection}, there exists a space~$Z$ such that $H^*(Z)\cong H^*(\Sigma^n X)$ as $\mathcal A$-modules. \end{proof} \begin{corollary} If $\dbl J(k)$ is stably realizable for any~$k$ then $\dbl J(k)^\vee$ is optimally realizable. \end{corollary} \begin{proof} The module $\dbl J(k)$ has top nonvanishing degree $4\cdot 2^k$, so $M=\Sigma^{4\cdot 2^k} \dbl J(k)^\vee$ (note that $\dbl J(k)^\vee$ is concentrated in degrees $-4\cdot 2^k,\dots,0$) satisfies the condition of Proposition~\ref{prop:stablyrealizingduals} for $n=4\cdot 2^k$. Hence there is a space $Z$ such that $H^*(Z) \cong \Sigma^{8\cdot 2^k} \dbl J(k)^\vee$. Then $Z$ has its bottom cell in degree \[ 8\cdot 2^k - 4\cdot 2^k = 4 \cdot 2^k = \sigma(\dbl J(k)^\vee), \] proving the claim. \end{proof} Applying Proposition~\ref{prop:stablyrealizingduals} to a stably realized $\dbl J(k)$ gives a space $Z$ such that $H^*(Z) \cong \Sigma^{4 \cdot 2^k} \dbl J(k)$, but since $\sigma(\dbl J(k)) = 2 \cdot 2^k$, this does not suffice to prove optimal realizability of $\dbl J(k)$. This is why the following sections are needed. \section{Dickson algebras and their realizations} The rank-$n$ algebra of Dickson invariants $DI(n)$ is the ring of invariants of $\Sym(\F_2^n) = \F_2[t_1,\dots,t_n]$ under the action of the general linear group $\GL_n(\F_2)$.
We think of $\Sym(\F_2^n)$ as a graded commutative ring with $t_i$ in degree $1$. Dickson~\cite{dickson:fundamental} showed that \[ DI(n) \cong \F_2[x_{2^n-2^i} \mid 0 \leqslant i < n], \] where subscripts denote degrees. The polynomials $x_{2^n-2^i}$ are given by the formula \[ \prod_{v \in \F_2^n} (X + v) = \sum_{i=0}^n x_{2^n-2^i} X^{2^i} \in \Sym(\F_2^n)[X], \] where $x_0 = 1$ by convention. If we give $\Sym(\F_2^n)$ the structure of an $\mathcal A$-algebra with $\Sq(t_i) = t_i+t_i^2$ (i.e., by using the isomorphism $\Sym(\F_2^n) \cong H^*(B\F_2^n)$) then $DI(n)$ is an $\mathcal A$-subalgebra with \[ \Sq^{2^i}x_{2^n-2^{i+1}}=x_{2^n-2^i}. \] \begin{thm}[Smith-Switzer, Lin-Williams, Dwyer-Wilkerson] The Dickson algebra $DI(n)$ is optimally realizable iff $n \leqslant 4$. \qed \end{thm} The first three Dickson algebras are realized by $\R P^\infty$, $BSO(3)$, and $B\mathrm{G}_2$ (the classifying space of the exceptional Lie group $\mathrm{G}_2$), respectively. The case $n=4$ was settled in \cite{dwyer-wilkerson:dw3}, where Dwyer and Wilkerson constructed a $2$-complete space, the exceptional $2$-compact group $B\mathrm{DW}_3$, with the required cohomology. A graphical representation of a skeleton of the spaces realizing the Dickson algebras is given below. One observes that the Jokers $\dbl J(i)$ occur as quotients of skeleta of these spaces; the kernel consists of the classes on the right of each diagram. However, realizing these quotients as fibers of certain maps is non-obvious and is the purpose of the following sections. \begin{minipage}{.5\textwidth} \begin{align} \label{eq:joker1} BSO(3)\colon & \begin{matrix} \begin{tikzpicture}[scale=0.6] \node (x2) at (0,0) [gen,label=180:$x_2$]{}; \node (x3) at (0,1)[gen,label=180:$x_3$]{}; \node (x2^2) at (0,2)[gen,label=0:$x_2^2$]{}; \node (x2x3) at (0,3)[gen,label=180:$x_2x_3$]{}; \node (x3^2) at (0,4)[gen,label=180:$x_3^2$]{}; \node[color=red] (x2^3) at (1,4)[gen,label=0:$x_2^3$]{}; \draw (x2) to[sq1] (x3); \draw (x2x3) to[sq1] (x3^2); \draw (x2) to[sq2r] (x2^2) to[sq2r] (x3^2); \draw (x3) to[sq2l] (x2x3); \end{tikzpicture} \end{matrix}\\ \label{eq:joker2} B\mathrm{G}_2\colon & \begin{matrix} \begin{tikzpicture}[scale=0.6] \node (x4) at (0,0) [gen,label=180:$x_4$]{}; \node (x6) at (0,2)[gen,label=180:$x_6$]{}; \node[color=orange] (x7) at (1,3)[gen,label=0:$x_7$]{}; \node (x4^2) at (0,4)[gen,label=0:$x_4^2$]{}; \node (x4x6) at (0,6)[gen,label=180:$x_4x_6$]{}; \node[color=orange] (x4x7) at (1,7)[gen,label=0:$x_4x_7$]{}; \node (x6^2) at (0,8)[gen,label=180:$x_6^2$]{}; \node[color=red] (x4^3) at (1,8)[gen,label=0:$x_4^3$]{}; \draw (x4) to[sq2l] (x6); \draw[color=orange] (x6) to[sq1] (x7) to[sq4r] (x4x7); \draw (x4x6) to[sq2l] (x6^2); \draw[color=orange] (x4x6) to[sq1] (x4x7); \draw (x4) to[sq4r] (x4^2) to[sq4r] (x6^2); \draw (x6) to[sq4l] (x4x6); \end{tikzpicture} \end{matrix} \end{align} \end{minipage}% \begin{minipage}{.5\textwidth} \begin{equation} \label{eq:joker3} B\mathrm{DW}_3\colon\begin{matrix} \begin{tikzpicture}[scale=0.6] \node (x8) at (0,0) [gen,label=180:$x_8$]{}; \node (x12) at (0,4)[gen,label=180:$x_{12}$]{}; \node[color=orange] (x14) at (1.8,6)[gen,label=0:$x_{14}$]{}; \node[color=orange] (x15) at (1.8,7)[gen,label=0:$x_{15}$]{}; \node (x8^2) at (0,8)[gen,label=0:$x_8^2$]{}; \node (x8x12) at (0,12)[gen,label=180:$x_8x_{12}$]{}; \node[color=orange] (x8x14) at (1.8,14)[gen,label=0:$x_8x_{14}$]{}; \node[color=orange] (x8x15) at (1.8,15)[gen,label=0:$x_8x_{15}$]{}; \node (x12^2) at (0,16)[gen,label=180:$x_{12}^2$]{}; \node[color=red] (x8^3) at
(1.8,16)[gen,label=0:$x_8^3$]{}; \draw (x8) to[sq4l] (x12); \draw[color=orange] (x12) to[sq2r] (x14) to[sq8r] (x8x14) to[sq1] (x8x15); \draw[color=orange] (x14) to[sq1] (x15) to [sq8l] (x8x15); \draw (x8x12) to[sq4l] (x12^2); \draw[color=orange] (x8x12) to[sq2r] (x8x14); \draw (x8) to[sq8r] (x8^2) to[sq8r] (x12^2); \draw (x12) to[sq8l] (x8x12); \end{tikzpicture} \end{matrix} \end{equation} \end{minipage} \section{The Joker $J$ and its double} \label{sec:doubleJ} The cohomology picture \eqref{eq:joker1} shows that the $6$-skeleton of $B\mathrm{SO}(3)$ is almost a realization of $J=\dbl J(0)$, its only defect lying in an additional class $x_2^3$ in the top cohomology group $H^6(B\mathrm{SO}(3))$. Let $\alpha\colon B\mathrm{SO}(3) \to K(\F_2,6)$ represent this class and $X = \hofib(\alpha)^{(6)}$, the $6$-skeleton of its homotopy fiber. Then $X$ realizes $\dbl J(0)$ optimally. For the double Joker $\dbl J(1)$, as seen in the cohomology picture \eqref{eq:joker2}, it does not suffice any longer to take a skeleton of $B\mathrm{G}_2$ and kill off a top-dimensional class. Since the ideas that come up here led us to the work appearing in Section~\ref{sec:quadrubleJ}, we feel it is worth describing them in some detail. First we recall some standard results on the exceptional Lie group~$\mathrm{G}_2$ and its relationship with $\Spin(7)$. One definition of~$\mathrm{G}_2$ is as the group of automorphisms of the alternative division ring of Cayley numbers (octonions)~$\O$. Since~$\mathrm{G}_2$ fixes the real Cayley numbers, it is a closed subgroup of $\mathrm{SO}(7)\leqslant\mathrm{SO}(8)$. A different point of view is to consider the spinor representation of $\Spin(7)$. Recall that the Clifford algebra $Cl_6\cong\Mat_8(\R)$ is isomorphic to the even subalgebra of $Cl_7\cong\Mat_8(\R)\times\Mat_8(\R)$, so $\Spin(7)$ is naturally identified with a subgroup of $\mathrm{SO}(8)\subseteq\Mat_8(\R)$, and thus acts on~$\R^8$ with its spinor representation. Then on identifying $\R^8$ with~$\O$, we find that the stabilizer subgroup in $\Spin(7)$ of a non-zero vector is isomorphic to~$\mathrm{G}_2$. It follows that the natural fibration \[ \Spin(7)/\mathrm{G}_2\to B\mathrm{G}_2\to B\Spin(7) \] is the unit sphere bundle of the associated spinor vector bundle $\sigma\to B\Spin(7)$. The mod-$2$ cohomologies of these spaces are related as follows. By considering the natural fibration \[ K(\F_2,1)\to B\Spin(7)\to B\mathrm{SO}(7) \] we find that \[ H^*(B\Spin(7)) = \F_2[w_4,w_6,w_7,u_8] \] where the $w_i$ are the images of the universal Stiefel-Whitney classes in \[ H^*(B\mathrm{SO}(7)) = \F_2[w_2,w_3,w_4,w_5,w_6,w_7], \] and $u_8\in H^8(B\Spin(7))$ is detected by $z_1^8\in H^8(K(\F_2,1))$. It is known that \[ H^*(B\mathrm{G}_2) = \F_2[x_4,x_6,x_7] \] and it is easy to see that the generators can be taken to be the images of $w_4,w_6,w_7$ under the induced homomorphism $H^*(B\Spin(7))\to H^*(B\mathrm{G}_2)$. As a consequence, these $x_i$ are Stiefel-Whitney classes of the pullback $\rho_7\to B\mathrm{G}_2$ of the natural $7$-dimensional bundle $\rho\to B\mathrm{SO}(7)$ and since this lifts to a $\Spin$ bundle, it admits an orientation in real connective $K$-theory. This leads to the following observation. \begin{lemma}\label{lemma:BG2factorization} There is a factorisation \[ B\mathrm{G}_2 \to \underline{k\mathrm{O}}_7 \to K(\F_2,7) \] of a map representing $x_7\in H^7(B\mathrm{G}_2)$.
\end{lemma} Here \[ \underline{k\mathrm{O}}_7=\Omega^\infty\Sigma^7k\mathrm{O}\sim\Omega B\mathrm{O}\langle8\rangle, \] and $\underline{k\mathrm{O}}_7\to K(\F_2,7)$ is the infinite loop map induced from the unit morphism $k\mathrm{O}\to H\F_2$. The cohomology of $B\mathrm{O}\langle8\rangle$ is a quotient of that of~$B\mathrm{O}$: \begin{multline}\label{eq:BO8} H^*(B\mathrm{O}\langle8\rangle) = \F_2[w_{2^r}:r\geq3]\otimes\F_2[w_{2^r+2^{r+s}}:r\geq2,\,s\geq1] \\ \qquad \otimes\F_2[w_{2^r+2^{r+s}+2^{r+s+t}}:r\geq1,\,s,t\geq1] \\ \otimes\F_2[w_{2^r+2^{r+s}+2^{r+s+t}+2^{r+s+t+u}}:r\geq0,\,s,t,u\geq1], \end{multline} where the $w_i$ are images of universal Stiefel-Whitney classes in~$H^*(B\mathrm{O})$. Here \[ \Sq^4w_8\equiv w_{12}\pmod{\text{decomposables}}. \] A routine calculation shows that $H^*(\underline{k\mathrm{O}}_7) \cong H^*(\Omega B\mathrm{O}\langle8\rangle)$ is the exterior algebra on certain elements~$e_i\in H^i(\underline{k\mathrm{O}}_7)$ where $e_i$ suspends to the generator $w_{i+1}$ of~\eqref{eq:BO8}. In particular, below degree~$13$, \[ H^*(\underline{k\mathrm{O}}_7)=\F_2\{1,e_7,e_{11}\} \] and \begin{equation}\label{eq:Sq4e8} \Sq^4e_7 = e_{11}. \end{equation} \begin{lemma}\label{lem:BG2fibre} The module $\dbl J(1)$ is optimally realizable. \end{lemma} \begin{proof} Let $\alpha\colon B\mathrm{G}_2 \to \underline{k\mathrm{O}}_7$ be the first map in the factorization of Lemma~\ref{lemma:BG2factorization}. By the above computations, $H^*(B\mathrm{G}_2)$, as an algebra over $H^*(\underline{k\mathrm{O}}_7)$, is isomorphic to \[ H^*(B\mathrm{G}_2) \cong H^*(\underline{k\mathrm{O}}_7) [x_4,x_6,\text{generators in degree greater than $12$}]/R \] where the module $R$ of relations is at least $13$-connected. This means that in the Eilenberg-Moore spectral sequence for the cohomology of the fiber of $\alpha$, $E_2^{s,t}=0$ for $s+t\leqslant 12$ and $s<0$. Thus up to degree $12$, \[ H^*(\hofib(\alpha)) \cong \F_2[x_4,x_6]. \] An application of Lemma~\ref{lemma:topdegreecorrection} takes care of the remaining top class $x_4^3$ and shows that $\dbl J(1)$ is optimally realizable. \end{proof} \section{The quadruple Joker} \label{sec:quadrubleJ} The strategy to construct an optimal realization of $\dbl J(2)$ consists of an easy and a harder step. The easy step is to construct a space $Y$ whose cohomology is diagram~\ref{eq:joker3} without the topmost unattached class: \begin{lemma} \label{lemma:Yspace} There exists a space $Y$ with \[ H^*(Y) = \F_2[x_8,x_{12},x_{14},x_{15}] / (x_8^3,\text{polynomials of degree $>24$}). \] \end{lemma} \begin{proof} This follows from an application of Lemma~\ref{lemma:topdegreecorrection} to the $24$-skeleton of $B\mathrm{DW}_3$. \end{proof} The harder step is to realize $\dbl J(2)$ as a skeleton of the homotopy fiber of a suitable map \[ \alpha\colon Y \to \underline{\smash{\mathrm{tmf}/2}}_{14} \] into the $14$th space of the spectrum of topological modular forms modulo~$2$. The spectrum $\mathrm{tmf}$ is an analog of connective real $K$-theory, $k\mathrm{O}$, but of chromatic level~$2$ \cite{hopkins-mahowald,tmfbook,behrens:notes-on-tmf,goerss:tmfsurvey} with well-known homotopy \cite{bauer:tmf}. \begin{prop}\label{prop:existenceofbeta} Let $Y$ be a space as in Lemma~\ref{lemma:Yspace}. Then there exists a $2$-torsion class \[ \beta \in \mathrm{tmf}^{15}(Y) \] whose classifying map induces an isomorphism of the order-$2$ groups \[ H^{15}(\underline{\mathrm{tmf}}_{15}) \to H^{15}(Y).
\] \end{prop} \begin{proof}[Proof of Thm.~\ref{thm:main}] Given Prop.~\ref{prop:existenceofbeta} and the Bockstein spectral sequence, the class $\beta$ has to pull back to a class $\alpha \in (\mathrm{tmf}/2)^{14}(Y)$ whose classifying map induces an isomorphism in $H^{14}$. This means that under $\alpha^*\colon H^*(\underline{\smash{\mathrm{tmf}/2}}_{14}) \to H^*(Y)$, the unit $\iota \in H^{14}(\underline{\smash{\mathrm{tmf}/2}}_{14})$ is mapped to $x_{14}$. A basic property of $\mathrm{tmf}$ is that $H^*(\mathrm{tmf}) \cong \mathcal A \otimes_{\mathcal A(2)} \F_2$ and so there is a non-split extension of $\mathcal{A}$-modules \[ 0\to H^*(\mathrm{tmf})\to H^*(\mathrm{tmf}/2)\to \Sigma H^*(\mathrm{tmf}) \to 0 \] where $Sq^1$ acts non-trivially on the generator of $\Sigma H^0(\mathrm{tmf})$. This implies that $\alpha^*(\Sq^1 \iota) = x_{15}$, $\alpha^*(\Sq^8\iota) = x_8x_{14}$, and $\alpha^*(\Sq^8\Sq^1\iota) = x_8x_{15}$. Hence as in the case of $\dbl J(1)$, \[ H^*(Y) \cong H^*(\underline{\smash{\mathrm{tmf}/2}}_{14}) [x_8,x_{12},\text{generators in degree greater than $24$}]/(x_8^3,R), \] where the module $R$ of relations is at least $25$-connected. Thus the Eilenberg-Moore spectral sequence converging to $\hofib(\alpha)$ shows that up to degree $24$, \[ H^*(\hofib(\alpha)) \cong \F_2[x_8,x_{12}]/(x_8^3), \] The $24$-skeleton of $\hofib(\alpha)$ therefore optimally realizes $\dbl J(2)$. \end{proof} It remains to prove Prop.~\ref{prop:existenceofbeta}. Let $Y_m^n = Y^{(n)}/Y^{(m-1)}$ denote the $n$-skeleton of $Y$ modulo the $(m-1)$-skeleton, thus containing the cells from dimension $m$ to dimension~$n$. \begin{lemma}\label{lemma:basicd2} Let $Y$ be a space as in Lemma~\ref{lemma:Yspace}. Then the space $Y_{16}^{20}$ is homotopy equivalent to a suspension of the cone of $\pm 2\nu$. In particular, in the Adams spectral sequence \[ \Ext_{\mathcal A(2)}(\F_2,H^{-*}(Y_{16}^{20}) \Longrightarrow \mathrm{tmf}^{-*}(Y_{16}^{20}), \] there is a differential $d^2(x_{-16}) = x_{-20} h_0 h_2$, where $x_{-16}$, $x_{-20}$ in $\Ext^0$ are the classes corresponding to the two cells. \end{lemma} Here the grading is chosen such that the spectral sequence becomes a homological spectral sequence and we will display it in the Adams grading. \begin{proof} Consider the space $Y^{20}$, the $20$-skeleton of $Y$. In the Atiyah-Hirzebruch spectral sequence \[ H^{-*}(Y^{20},\mathrm{tmf}^{-*}) \cong H_*(D(Y^{20}),\mathrm{tmf}_*) \Longrightarrow \mathrm{tmf}^{-*}(Y^{20}), \] the cohomology generators $x_8$, $x_{12}$ represent classes $x_{-8}, x_{-12} \in H_*(DY^{20},\mathrm{tmf}_0)$ and, since $\Sq^4(x_8)=x_{12}$, there is a differential $d^4(x_{-8}) = x_{-12}\nu \pmod {2\nu}$. By multiplicativity, $d^4(x_{-8}^2) = 2x_{-8}x_{-12} \nu \pmod {4\nu}$. This shows that the top cell of $Y^{20}$ is attached to the $16$-dimensional cell by $2\nu \pm 4{\nu} = \pm 2 \nu$. \end{proof} \begin{proof}[Proof of Prop.~\ref{prop:existenceofbeta}] The claim boils down to showing that in the Adams spectral sequence \[ E_2^{s,t} = \Ext_{\mathcal A}(H^{-*}(\mathrm{tmf}),H^{-*}(Y)) \cong \Ext_{\mathcal A(2)}(\F_2,H^{-*}(Y)) \Longrightarrow \mathrm{tmf}^{-*}(Y), \] the unique nontrivial class \[ x_{-15} \in E_2^{0,-15} = \Hom_{\mathcal A}(H^0(\mathrm{tmf}),H^{15}(Y)) \] is an infinite cycle. As modules over $\mathcal A(2)$, \[ H^*(Y) \cong H^*(Y_0^{15}) \oplus H^*(Y_{16}^{24}), \] hence the $E_2$-term above splits as a sum as well. 
The following is the $E_2$-term of the Adams spectral sequence converging to $\mathrm{tmf}^{-*}(Y^{(15)})$, determined with Bob Bruner's program~\cite{bruner:extsoftware}: \[ \includegraphics[clip,trim=4.5cm 19.5cm 8cm 4cm]{joker-chart-0-15.pdf} \] Similarly, the $E_2$-term of the spectral sequence computing $\mathrm{tmf}^*(Y_{16}^{24})$ is the following. \[ \includegraphics[clip,trim=4.5cm 19.5cm 8cm 4cm]{joker-chart.pdf} \] We will identify the possible targets of differentials on $\iota$. Since $h_0\iota = 0$ and $h_1^2\iota=0$, only classes in the kernel of $h_0$ and the kernel of $h_1^2$ can be targets. This means that there is no possible target for a $d_2$ in bidegree $(-16,2)$ in the displayed Adams grading. It is easy to see that under the inclusion $Y_{16}^{20} \to Y_{16}^{24}$, the class $x_{-20}h_0h_2$ maps to the displayed class $y$. By Lemma~\ref{lemma:basicd2}, there is thus a differential $d_2(x_{-16}) = y$. This implies that the $E_3$-term for $Y_{16}^{24}$ is given by the following chart. \[ \includegraphics[clip,trim=4.5cm 19.5cm 8cm 4cm]{joker-chart-E3.pdf} \] The only remaining possible target of a $d_3$ in bidegree $(-16,3)$ is $x_{-16}h_0^3$, which is impossible for the same reason as before ($h_0$ acts nontrivially on it). No higher differentials on $\iota$ are possible either, because no classes in filtration $4$ or higher are $h_0$-torsion in any $E_n$-term. \end{proof} \bibliographystyle{amsalpha}
\section{Conclusion} We have studied the impact of topography on the radio-detection of neutrino-induced Earth-skimming air showers. For this purpose, we have developed a toy setup with a simplified topography for the detector, depending on two parameters: the distance between the air shower injection point and the detector array, and the ground slope of the detector array. We have computed the neutrino detection efficiency of this toy detector configuration through three computation chains: a microscopic simulation of the shower development and its associated radio emission, a radio-signal computation using {\it Radio Morphing}, and an analytical treatment based on a {\it cone model} of the trigger volume. The comparison of these three independent tools confirms that {\it Radio Morphing} is a reliable method in this framework, while the {\it Cone Model} offers a fast, conservative estimate of the detection efficiency for realistic topographies. The latter can thus be used to perform in a negligible amount of time a preliminary estimate of the potential for neutrino detection of a given zone, and the former can then be used to carry out a detailed evaluation of selected sites instead of full Monte-Carlo simulations. More importantly, the results presented here show that ground topography has a great impact on the detection efficiency, with an increase by a factor $\sim$3, already for slopes of just a few degrees, compared to a flat array, in the optimal case where shower trajectories face the detector plane. This boost effect is very similar for any slope value ranging between $2\degree$ and $20\degree$. The other noticeable result of this study is the moderate effect of the distance on the detection efficiency, with comparable values for tau decays taking place between $20$ and $100$\,km from the detector. Two slopes facing each other with tens of kilometers between them may therefore constitute the optimal configuration for neutrino detection, as they would correspond to enhanced rates for the two directions perpendicular to the slopes. Wide valleys or large basins could offer such topographies and will consequently be primarily targeted in the search for the optimal sites where the $\mathcal{O}$(10) sub-arrays composing the GRAND array could be deployed to optimize its neutrino detection efficiency. An effort in this direction has been initiated in the framework of the GRAND project. \section{Computational methods} \label{sim} We present in the following the implementation of the methods described in sections \ref{end2end} to \ref{coneprinciple}. \subsection{Production of the shower progenitors} \label{danton} \begin{figure*}[tb] \centering \includegraphics[width=0.32\textwidth]{./figures/elevation_distrib.pdf} \includegraphics[width=0.32\textwidth]{./figures/height_distrib.pdf} \includegraphics[width=0.32\textwidth]{./figures/energies_distrib.pdf} \caption{Distributions of the elevation angle of the particle trajectory at the tau decay point, measured with respect to a horizontal plane ({\it left}), of the height above ground of the tau decay point ({\it center}), and of the shower energy ({\it right}), for the two sets of primary $\nu_{\tau}$ energies considered in this study.} \label{primary_distrib} \end{figure*} The production of the shower progenitors was performed with the DANTON software package \cite{DANTON:note, DANTON:GitHub}. DANTON simulates interactions of tau neutrinos and tau energy losses. It produces results compatible with similar codes\,\cite{2018PhRvD..97b3021A}.
Additionally DANTON offers the possibility to run simulations in backward mode (i.e. from tau decay upwards, with appropriate event weight computations), an attractive feature for massive simulations, and it also allows us to take into account the exact topography of the Earth surface\,\cite{2019arXiv190403435N}. It is however operated here in forward mode, i.e. as a classical Monte-Carlo. The primary neutrino source is set as mono-energetic and isotropic. A spherical Earth is used with a density profile given by the Preliminary Reference Earth Model (PREM)\,\cite{PREM}, but with the sea layer replaced by Standard Rock\,\cite{StandardRock}. Two energy values are used for the primary neutrino flux: $E_{\nu}=10^{9}$~GeV and $10^{10}$~GeV. The characteristics of the tau lepton resulting from the interaction of the neutrino with the Earth and of all the particles produced during the decay of the tau in the atmosphere are also computed: decay position, list of products and their associated momenta. For this study one million primary neutrinos were simulated per energy value. Those inducing tau decays in the atmosphere were then selected if the subsequent showers had energies above $5 \times 10^{7}~{\rm GeV}$, because lower energies can hardly lead to detection for such a sparse array\,\cite{CODALEMA:2009, Huege:2016veh}. In Figure \ref{primary_distrib}, we show the distribution in energy, elevation angle and height of the two sets of tau decays. Among the surviving set, $100$ were randomly chosen for each energy. This value is a good compromise between computation time and statistical relevance. \subsection{Simulation of the electric field} \subsubsection{Microscopic method} \label{zhaires} In the {\it microscopic} method, the extensive air showers initiated by the by-products of the tau decay and the impulsive electric field induced at the antenna locations were simulated using the ZHAireS software \cite{ZHAireS}, an implementation of the ZHS formalism \cite{ZHS} in the AIRES \cite{AIRES} cascade simulation software. To allow for geometries where cascades are up-going and initiated by multiple decay products, we implemented a dedicated module called RASPASS (Radio Aires Special Primary for Atmospheric Skimming Showers) in the ZHAireS software. \subsubsection{Radio Morphing} \label{rm} {\it Radio Morphing}~\cite{Zilles:2018kwq} is a semi-analytical method for a fast computation of the expected radio signal emitted by an air shower. The method consists in computing the radio signal of any {\it target} air shower at any {\it target} position by simple mathematical operations applied to a single {\it generic} reference shower. The principle is the following: \begin{enumerate}[i)] \item The electromagnetic radiation associated with the {\it generic} shower is simulated using standard microscopic tools at positions forming a 3D mesh. \item For each {\it target} shower, the simulated signals are scaled by numerical factors whose values depend analytically on the energy and geometry of the {\it target} and {\it generic} showers. \item The {\it generic} 3D mesh is oriented along the direction of propagation of the target shower. \item The electromagnetic radiation expected at a given {\it target} position is computed by interpolation of the signals from the neighbouring positions of this 3D mesh.
\end{enumerate} This technique lowers the required CPU time by at least two orders of magnitude compared to a standard simulation tool like ZHAireS, while reproducing its results within $\sim 25\%$ error in amplitude\,\cite{Zilles:2018kwq}. \subsection{Antenna response} \label{horant} In order to compute the voltage generated at the antenna output for both {\it microscopic} and {\it Radio Morphing} methods, we choose in this study the prototype antenna for the GRAND project: the {\sc HorizonAntenna}\,\cite{GRANDWP}. It is a bow-tie antenna inspired by the {\it butterfly antenna}\,\cite{Charrier:1999ARENA} developed for the CODALEMA experiment\,\cite{Escudie:2019ni}, later used in AERA\,\cite{Abreu:2012pi} and adapted to GRANDProto35\,\cite{GP35:2017}. As for GRANDProto35, three arms are deployed along the East-West, South-North and vertical axes, but the radiating element is half its size to better match the $50-200$\,MHz frequency range considered for GRAND. Like the {\it butterfly antenna}, the {\sc HorizonAntenna} is an active detector, but in the present study, we simply consider that the radiator is loaded with a resistor $R = 300\,\Omega$, with a capacitor $C = 6.5 \times 10^{-12}\,$F and an inductance $L = 1\,\mu$H in parallel. The {\sc HorizonAntenna} is set at a height of 4.5\,m above ground in order to minimize the ground attenuation of the radio signal. The equivalent length ${\vec{l}_{eq}}^k$ of one antenna arm $k$ (where $k$ = EW, NS, Vert) is derived from NEC4\,\cite{NEC4} simulations as a function of wave incoming direction ($\theta$, $\phi$) and frequency $\nu$. The voltage at the output of the resistor $R$ loading the antenna arm is then computed as: \begin{equation} \label{vant} V^k(t) = \int {\vec{l}_{eq}}^k(\theta, \phi, \nu) \cdot \vec{E}(\nu) e^{2i \pi \nu t} d\nu \end{equation} where $\vec{E}(\nu)$ is the Fourier transform of the radio transient $\vec{E}(t)$ emitted by the shower. The equivalent length was computed for a vertical antenna deployed over a flat, infinite ground. The ground slope of the toy setup can then be accounted for by a simple rotation of this system by an angle $\alpha$, which translates into a wave effective zenith angle $\theta^*=\theta-\alpha$, to be used in Eq. \ref{vant}. \subsection{Trigger} \label{trig} The last step of the treatment consists in determining whether the shower could be detected by the radio array. For this purpose, we first apply a Butterworth filtering of order $5$ to the voltage signal in the $50-200$\,MHz frequency range. This mimics the analog system that would be applied in an actual setup in order to filter out background emissions outside the designed frequency range. Then the peak-to-peak amplitude of the voltage $V_{\rm pp}$ is compared to the level of stationary background noise $\sigma_{\rm noise}=15\,\mu$V, computed as the sum of Galactic and ground contributions (see \cite{GRANDWP} and \cite{Charrier:2018fle} for details). If $V_{\rm pp}\geq N \sigma_{\rm noise}$, then we consider that the antenna has triggered. Here $N$ = 2 in an aggressive scenario, which could be achieved if innovative triggering methods\,\cite{FuhrerARENA:2018, Erdmann:2019nie} were implemented, and $N$ = 5 in a conservative one. If at least five antennas trigger on the same shower, then we consider it as detected.
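As an illustration, the trigger logic just described can be summarized by the following Python sketch (a simplified stand-in for the actual processing chain, not the code used in this work; it assumes that \texttt{voltages} holds the time traces of all antennas for one shower, in volts, and that the sampling rate \texttt{fs} exceeds $400$\,MHz, as required by the Nyquist criterion for the band considered):
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def shower_is_detected(voltages, fs, n_sigma=5, sigma_noise=15e-6):
    # n_sigma = 2 (aggressive) or 5 (conservative);
    # sigma_noise = 15 microvolts of stationary background noise.
    # Order-5 Butterworth band-pass in the 50-200 MHz range (zero-phase
    # filtering is used here for simplicity; a real setup applies a
    # causal analog filter).
    b, a = butter(5, [50e6, 200e6], btype="bandpass", fs=fs)
    n_triggered = 0
    for trace in voltages:
        filtered = filtfilt(b, a, trace)
        v_pp = filtered.max() - filtered.min()  # peak-to-peak amplitude
        if v_pp >= n_sigma * sigma_noise:
            n_triggered += 1
    return n_triggered >= 5  # at least five antennas must trigger
\end{verbatim}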
\subsection{Cone Model} \label{cone} The {\it Cone Model} proposes to describe the volume inside which the electromagnetic radiation is strong enough to be detected as a cone, characterized by a height and an opening angle varying with shower energy. The {\it Cone Model} allows for a purely analytical computation of the radio footprint at ground, and thus provides a very fast evaluation of the trigger condition, while it also allows for an easier understanding of the effect of ground topography on shower detection. The parametrization of the cone height and opening angle as a function of shower energy needs to be computed once only for a given frequency range. This was done as follows for the $50-200$\,MHz band considered in this study: \begin{figure}[tb] \centering \includegraphics[width=0.9\columnwidth]{./figures/ref_shower_scheme.pdf} \caption{Position of the planes used to parametrize the {\it Cone Model}. These are placed perpendicular to the shower axis, at various longitudinal distances $L$ from $X_{\rm max}$. See section \ref{cone} for details.} \label{config} \end{figure} \begin{enumerate} \item We simulate with the ZHAireS code the electric field from one shower at different locations set at fixed longitudinal distances $L$ from the $X_{\rm max}$ position (see Figure \ref{config} for an illustration). Values $L>100$\,km are not simulated because the maximal value $D$ = 100\,km chosen in our study for the distance between the tau decay point and the foot of the detector (see section \ref{toymodeldescription}) makes it unnecessary. As the $X_{\rm max}$ position is reached $\sim$15\,km after the decay, a distance $L = 100$\,km allows us to simulate radio signals over a detector depth of 15\,km at least. This is, in the majority of cases, enough to determine if the shower would be detected or not. \item In each of these antenna planes, identified by an index $j$ in the following, we compute the angular distance between the antennas and the shower core. We determine the maximal angular distance to the shower core $\Omega^j$ beyond which the electric field drops below the detection threshold, set to 2 (aggressive) or 5 (conservative) times the value of $E_{\rm rms}$, the average level of electromagnetic radiation induced by the Galaxy, computed as: \begin{equation} {E_{\rm rms}}^2 = \frac{Z_0}{2}\int_{\nu_0}^{\nu_1}\int_{2\pi}B_{\nu}(\theta,\phi,\nu)\sin(\theta) d\theta d\phi d\nu \end{equation} where $B_{\nu}$ is the spectral radiance of the sky, computed with GSM \cite{Gal:2016} or equivalent codes, $Z_0=376.7$\,$\Omega$ the impedance of free space, and [$\nu_0,\nu_1$] the frequency range considered for detection. Here we choose $\nu_0=50$\,MHz and $\nu_1=200$\,MHz, the frequency range of the {\sc HorizonAntenna}. The factor $1/2$ arises from the projection of the (unpolarized) Galactic radiation along the antenna axis. We find $E_{\rm rms}$ = 22\,$\mu$V/m. Defining a detection threshold on the electric field amplitude as done here ---rather than on the voltage at the antenna output, as is usually done--- allows us to derive results that do not depend on a specific antenna design. It is however not precise: by construction, the details of a specific antenna response and its dependency on the direction of origin of the signal are neglected here, and only the average effect is considered. The {\it Cone Model} is therefore only an approximate method.
The distribution of the electric field amplitudes as a function of the angular distance to the shower axis is shown for illustration in Figure \ref{omega_trig} for the plane $j$ located at a longitudinal distance $L=59$\,km. As the Cherenkov ring induces an enhancement in the amplitude profile for $\Omega\sim1$\degree, we actually compute two values of the angle $\Omega^j$: $\Omega^j_{\rm min}$ and $\Omega^j_{\rm max}$, thus defining the angular range inside which the electric field amplitude is above the detection threshold. \begin{figure}[tb] \centering \includegraphics[width=0.9\columnwidth]{./figures/omega_trig_50-200MHz.pdf} \caption{Distribution of the electric field amplitude produced with ZHAireS as a function of $\Omega$, the angular distance to the shower axis, for antennas located at a longitudinal distance of $59$\,km from $X_{\rm max}$. The amplitude dispersion at a given $\Omega$ value is due to the interplay between Askaryan and geomagnetic effects leading to an azimuthal asymmetry of the signal amplitude. For a shower energy $E=1.5\times10^8$\,GeV (in blue) from the $E_{\nu}=10^9$\,GeV dataset, we find for instance ($\Omega^j_{\rm min}$; $\Omega^j_{\rm max}$) = (0.1$\degree$; 1.7$\degree$) in the aggressive case and (0.8$\degree$; 1.2$\degree$) in the conservative one.} \label{omega_trig} \end{figure} \item The value of $\Omega^j$ does not vary significantly with $L$ (see Figure \ref{omegaVSdist}). This validates the choice of a conical model for the trigger volume and allows us to derive a single set of values ($\Omega_{\rm min};\Omega_{\rm max}$)=($\langle \Omega^j_{\rm min} \rangle$; $\langle \Omega^j_{\rm max} \rangle$) for one specific energy. \begin{figure}[tb] \centering \includegraphics[width=0.9\columnwidth]{./figures/omega_trigVSdist.pdf} \caption{Angular distances $\Omega^j_{\rm max}$ computed following the method presented in Figure \ref{omega_trig} as a function of longitudinal distance for various shower energies. $\Omega_{\rm max}$ measures the maximum opening angle of the cone describing the triggering volume, while index $j$ identifies the simulation plane perpendicular to the shower axis (see Figure \ref{config}). Here only the conservative case is shown. The angle value varies marginally over the full range of longitudinal values considered for shower energies $E = 1.1\times10^9$ and $1.1\times10^{10}$\,GeV, validating the choice of a cone model ---with fixed opening angle $\Omega_{\rm max}=<\Omega_{\rm max}^j>$--- for the trigger volume modeling. For $E = 1.1\times10^8$\,GeV, $\Omega$ drops to 0 for $L>50$\,km because the cone height $H$ is equal to 50\,km in the conservative case (see Figure \ref{dist_E}). A similar treatment is applied to determine $\Omega_{\rm min}$.} \label{omegaVSdist} \end{figure} \item A similar procedure is applied to determine the cone height $H$, set to be equal to the longitudinal distance $L$ up to which the signal is strong enough to be detected. \begin{figure}[tb] \centering \includegraphics[width=0.9\columnwidth]{./figures/omega_trigVSE_sh.pdf} \caption{Angles $\Omega_{\rm max}$ and $\Omega_{\rm min}$ as a function of shower energy $E_{\rm sh}$, and their fits by Eq.~\ref{eq:fitOmega}. Angles $\Omega_{\rm max}$ and $\Omega_{\rm min}$ define the inner and outer boundaries of the hollow cone and are obtained by averaging the values $\Omega_{\rm max}^j$ and $\Omega_{\rm min}^j$ (see Figure \ref{omegaVSdist}).
At the highest energies, $\Omega_{\rm min}$ drops down to $0$\degree, implying that the radio signal is above the detection threshold for all angular distances $\Omega \leq \Omega_{\rm max}$.} \label{omega_E} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.9\columnwidth]{./figures/dist_trigVSE_sh.pdf} \caption{Cone height $H$ as a function of shower energy $E_{\rm sh}$ and fit by Eq. \ref{eq:fitL}. Cone height saturates at values $H$\,=\,100\,km, because the antenna planes used to parametrize the {\it Cone Model} do not extend beyond this value. However points with values $H<100$\,km suffice to demonstrate that the cone heights scale linearly with energy, as one would naturally expect, since the electric field amplitude also scales linearly with energy. Cone height values $H>100$\,km are therefore extrapolated from the fit given in Eq. \ref{eq:fitL}.} \label{dist_E} \end{figure} \begin{table}[ht] \begin{center} \caption{Parameters for the fitting functions given in Equations~\ref{eq:fitL} and~\ref{eq:fitOmega}, for aggressive and conservative thresholds and maximal and minimum $\Omega$ angles. Parameters $a$ and $b$ are in km, $c$ and $d$ in degrees. } \label{tab:fitting_values} \hspace{0.4cm} \begin{tabular}{cccccc} \toprule threshold & $a$ & $b$ & $\Omega$ & $c$ & $d$\\ \midrule aggressive & 109 $\pm$ 15 & 116 $\pm$ 3 & min & 0.20 $\pm$ 0.02 & -2.4 $\pm$ 0.2\\ & & & max & 1.3 $\pm$ 0.2 & 1.00 $\pm$ 0.02\\ \midrule conservative & 42 $\pm$ 7 & 48 $\pm$ 1 & min & 1.2 $\pm$ 0.2 & -2.2 $\pm$ 0.2\\ & & & max & 1.0 $\pm$ 0.3 & 0.80 $\pm$ 0.03\\ \bottomrule \end{tabular} \end{center} \end{table} \item We repeat the treatment for various shower energies $E_{\rm sh}$ by rescaling the signal amplitudes and thus obtain the distributions $\Omega(E_{\rm sh})$ and $H(E_{\rm sh})$ shown in Figures \ref{omega_E} and \ref{dist_E}. We fit these distributions for shower energies larger than $3 \cdot 10^{7}$\,GeV with analytic functions given by \begin{eqnarray} H|_{\rm 50-200MHz} =& a\, +\ b\, \left(\frac{E_{\rm sh}-10^{17} {\rm eV}}{\rm 10^{17} eV}\right), \label{eq:fitL} \\ \Omega|_{\rm 50-200MHz} =& c\, +\ d\, \log{\left(\frac{E_{\rm sh}}{\rm 10^{17}eV}\right)}. \label{eq:fitOmega} \end{eqnarray} with $E_{\rm sh}$ expressed in eV in the formulas. Numerical values of $a,b,c,d$ are given in Table~\ref{tab:fitting_values}. The three parameters $\Omega_{\rm min}$, $\Omega_{\rm max}$ and $H$ allow us to define a hollow cone, with an apex set at the shower $X_{\rm max}$ location and oriented along the shower axis. Any antenna located inside this volume is supposed to trigger on the shower according to the {\it Cone Model}. As mentioned in the introduction, the interplay between the geomagnetic effect and the charge excess induces an asymmetry of the electric field amplitude as a function of the antenna angular position with respect to the shower core. This can be seen in Figure \ref{omega_trig}, for instance, where the dispersion in field strength at a given angular distance is the exact illustration of this phenomenon. The {\it Cone Model} however assumes a rotation symmetry around the shower axis and thus neglects this asymmetry. This is still acceptable if we are only interested in the average number of antennas triggered by the shower ---which is the case here--- and not in the amplitude pattern of the radio signal.
\end{enumerate} Once this parametrization is completed, the {\it Cone Model} is applied to the selected set of tau decays: the values of the cone parameters are computed for the energy and geometry of each shower and the intersection between the resulting cone volume and the detection area is calculated. If at least five antennas fall within this intersection, then we consider that the shower is detected. \section{Introduction} Ultra high energy neutrinos (UHE $\nu$) are valuable messengers of violent phenomena in the Universe (\cite{2016JCAP...12..017F,GRANDWP} and references therein). Their low interaction probability with matter allows them to carry unaltered information from sources located at cosmological distances, but, on the other hand, makes their detection challenging: non-negligible detection probability can be achieved only with large volumes of dense targets. At the neutrino energies targeted here (E $> 10^{16}$~eV), the Earth is opaque to neutrinos. Therefore only Earth-skimming trajectories yield significant probability of neutrino interaction with matter, leading to a subsequent tau decay in the atmosphere, eventually inducing an extensive air-shower (EAS). The detection of these EAS has been proposed as a possible technique to search for these cosmic particles\,\cite{Fargion:1999se}. The progress achieved by radio-detection of EAS in the last 15 years \cite{Falcke:2005tc,CODALEMA:2009,Tunka:2015,Aab:2015vta,Buitink:2016nkf,Charrier:2018fle}, combined with the possibility to deploy these cheap, robust detectors over large areas, opens the possibility of instrumenting giant radio arrays designed to hunt for neutrino-induced EAS, as proposed by the GRAND project\,\cite{GRANDWP,Ardouin:2011}. An EAS emits a radio signal via two well understood mechanisms: the {\it Askaryan effect}~\cite{Askaryan1962,Askaryan1965} and the {\it geomagnetic effect}~\cite{Kahn1966, Scholten2008}, which add up coherently to form detectable signals in the frequency range from tens to hundreds of MHz. The interplay between these two effects induces an azimuthal asymmetry of the electric field amplitude along the shower axis~\cite{Huege:2016veh,Schroder:2016hrv}. The nearly perfect transparency of the atmosphere to radio waves, combined with the strong relativistic beaming of the radio emission in the forward direction\,\cite{Alvarez-Muniz:2014dza}, makes it possible to detect radio signals from air showers at very large distances from their maximum of development $X_{\rm max}$: a $2\times10^{19}$\,eV shower was for example detected by the Auger radio array with an $X_{\rm max}$ position reconstructed beyond $100$\,km from the shower core~\cite{Aab:2018ytv}. This is obviously an important asset in favor of radio-detection of neutrino-induced air showers. The strong beaming of the radio signal also implies that the topography of the ground surface may play a key role in the detection probability of the induced EAS. The primary objective of this article is to perform a quantitative study of the effects of ground elevation on the detection probability of neutrino-induced air showers. To do this, we use a toy configuration where a radio array is deployed over a simplified, generic topography. We compute the response of this setup to neutrino-induced showers with three different simulation chains, ranging from a fast and simple estimation using a parametrization of the expected signal amplitude, to a detailed and time-consuming Monte-Carlo.
This is motivated by the fact that full Monte-Carlo tools are CPU-intensive treatments, to the point of becoming prohibitive when it comes to simulating radio detection by large antenna arrays. The secondary purpose of this paper is therefore to determine if reliable results can be obtained with faster treatments than full Monte-Carlo simulation codes. In section \ref{principle} we present the general principle of our study, in section \ref{sim} we detail the implementation of the three simulation chains used, and finally in section \ref{res} we discuss the results. \section{General principle} \label{principle} Three simulation chains are used in this study. Their general principles are presented in sections \ref{end2end} to \ref{coneprinciple}, and summarized in Figure\,\ref{principle_schema}. In section \ref{toymodeldescription}, we present the toy detector configuration used for the study. \begin{figure*}[t] \begin{center} \centering \includegraphics[width=\textwidth,trim=0cm 9cm 0cm 0cm,clip=true]{./figures/principle.png} \caption{General structure of the three simulation chains ({\it microscopic, Radio Morphing and Cone} models) used in this study. The sections where their various elements are described are indicated in parentheses. The trigger condition for the {\it microscopic} and {\it Radio Morphing} methods is fulfilled if five antennas or more show peak-to-peak voltages larger than a threshold value $V_{th}$, set to 5 times (conservative) or twice (aggressive) the minimal background noise level. For the {\it Cone Model}, the trigger condition is fulfilled if five antennas or more are within the volume of the cone modeling the shower radio emission. } \label{principle_schema} \end{center} \end{figure*} \subsection{End-to-end microscopic simulation} \label{end2end} The first simulation chain consists of four independent steps: \begin{enumerate}[(i)] \item We produce a fixed number of tau decays induced by cosmic tau-neutrinos ($\nu_{\tau}$) interacting in a spherical Earth. This is done for two neutrino energies ($E_{\nu} = 10^9$ and $10^{10}$\,GeV) with a dedicated Monte-Carlo engine: DANTON\,\cite{DANTON:note,DANTON:GitHub}, further described in section \ref{danton}. \item We compute the electromagnetic field induced at the location of the detection units by the showers generated by these tau decays. This is done through a full {\it microscopic simulation} of the particles in the EAS and of the associated electromagnetic radiation using the ZHAireS\,\cite{ZHAireS} simulation code (see section \ref{zhaires} for details). \item The voltage induced by the radio wave at the antenna output is then computed using a modelling of the GRAND {\sc HorizonAntenna}\,\cite{GRANDWP} performed with the NEC4\,\cite{NEC4} code. This is detailed in section \ref{horant}. \item If the peak-to-peak amplitude of the output voltage exceeds the defined threshold for five antennas or more, then the neutrino is considered as detected (see section \ref{trig} for more details). This threshold value is either twice (aggressive scenario) or five times the minimal noise level (conservative scenario). \end{enumerate} \subsection{Radio-Morphing} \label{radiomorphing} Monte-Carlo simulations of the electric field provide the most reliable estimate of the detection probability of a shower, and are therefore used as a benchmark in this work.
They however require significant computational resources: the CPU time is mainly proportional to the number of simulated antennas, and a ZHAireS run can last up to $\approx72$\,h on one core for $1000$ antennas, given our simulation parameters. An alternative simulation chain therefore uses the so-called {\it Radio Morphing} method \cite{Zilles:2018kwq} instead of ZHAireS for the electric field computation. {\it Radio Morphing} performs a very fast, semi-analytical computation of the electric field (see section \ref{rm} for details). The antenna response and the trigger computation are simulated in the same way as for the {\it microscopic simulation} chain. The gain in computation time allows us to study a larger number of configurations than with the {\it microscopic} approach. \subsection{Cone Model} \label{coneprinciple} Even if significantly faster than the microscopic method, the {\it Radio Morphing} treatment still requires that the antenna response is computed, and thus implies that hundreds of time traces for electric field and voltage need to be handled for each simulated event. A third, much lighter method is therefore used in this study. It is based on a geometric modeling of the volume inside which the electromagnetic field amplitude is large enough to trigger an antenna. We give this volume the shape of a cone, oriented along the shower axis, with its apex placed at the $X_{\rm max}$ position, half-angle $\Omega$ and height $H$. Values of $\Omega$ and $H$ depend on shower energy, and are adjusted from ZHAireS simulations (see section \ref{cone} for details). A shower is considered as detected if at least five antennas are within the cone volume. A similar {\it Cone Model} was used to compute the initial neutrino sensitivity of the GRAND detector\,\cite{Martineau:2016yj}. Being purely analytical, this method produces results nearly instantaneously and requires only minimal disk space and no specific simulation software, an attractive feature when it comes to performing simulations for thousands of detection units covering vast detection areas. \\ \subsection{Toy detector configuration} \label{toymodeldescription} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{./figures/TM_scheme.pdf} \caption{Layout of the toy-setup considered in this study. A tau particle decays at a location represented as a star, producing an air shower. The radio signal emitted by the shower impinges on the detector plane, tilted by an angle $\alpha$ from the horizontal. The intersection between the detector plane and the horizontal plane is set at a horizontal distance $D$ from the decay point. The parameter $D$ is therefore a measure of the amount of free space in front of the detector.} \label{cone_schema} \end{figure*} The detector considered in this study is presented in Figure~\ref{cone_schema}. It is a rectangular grid with a step size of $1000$\,m between neighbouring antennas. This large step size is a distinct feature of the envisioned dedicated radio array for the detection of neutrino-induced air showers\,\cite{Alvarez-Muniz:2014dza}. It is a compromise between the need for very large detection areas imposed by the very low expected event rates on the one hand, and the instrumental and financial constraints which limit the number of detection units on the other. In our study we use a simplified, toy setup configuration where the antenna array is deployed over a plane inclined by an adjustable angle $\alpha$ (also called ``slope'' in the following) with respect to a horizontal plane.
We restrict our treatment to showers propagating to the North, i.e. directly towards the detector plane. For other directions of propagation, the size of the shower footprint on ground ---hence its detection probability--- would directly depend on the width of the detector plane. Defining a specific value for this parameter would be highly arbitrary, given the great diversity of topographies existing in reality. For showers propagating towards the detector however, the shower footprint is aligned with the detector longitudinal axis (see Figure \ref{cone_schema}), and the detector width then has a negligible effect on the shower detection efficiency. This motivates our choice to restrict ourselves to this single direction of propagation. The horizontal distance between the tau decay point and the foot of the detector, $D$, can be understood as the amount of empty space in front of the detector over which the shower can develop and the radio signal propagate. It is therefore closely related to the topography of the detection site. The reference ground elevation is chosen to be 1500~m above sea level (a.s.l.). A maximum altitude of $4500$\,m a.s.l. is set for the antennas, as larger elevation differences are unrealistic. The vertical deviation due to Earth curvature can be estimated by $2\delta h \approx R_{\rm earth} (L/R_{\rm earth})^2$, where $L \ll R_{\rm earth}$ is the longitudinal distance between the maximum development of the shower and the observer. For $L=50$\,km, we find $\delta h < 100$\,m. A flat Earth surface is therefore assumed in this toy setup configuration. The slope $\alpha$ and the distance $D$ are the two adjustable parameters of the study. Values of $\alpha$ vary from 0 to 90$\degree$ and $D$ ranges between 20 and 100\,km, covering a wide variety of configurations. As will be detailed in section \ref{toymodelres}, larger values of $D$ are irrelevant because most showers would then fly over the detector (see Figure\,\ref{fly_above} in particular), an effect that would furthermore be amplified if the Earth curvature were taken into account. Values of $\alpha$ larger than 30$\degree$ are also not realistic, because steeper slopes are not suitable to host a detector, but they are included in our study for the sake of completeness, and because these extreme cases will help us interpret the results of the study. For each pair of values $(\alpha, D)$, we process the two sets of tau decays of energies $E_{\nu}=10^9$\,GeV and $E_\nu=10^{10}$\,GeV with the three methods {\it microscopic}, {\it Radio Morphing} and {\it Cone Model}. We then use the fraction of tau decays inducing a trigger by the detector to perform a relative comparison between $(\alpha, D)$ configurations. This treatment allows us to directly assess the effect of topography on neutrino-induced shower detection efficiency ---the purpose of this paper--- for a reasonable amount of computing time, given the large number of configurations ($\alpha$, $D$) considered in this study. \\ \section{Results} \label{res} We have computed the detection efficiency for our toy setup through the three independent simulation chains presented in section \ref{principle}. Detection efficiency is defined here as the ratio of the number of showers detected to the total of $100$ selected tau decays. The parameter ranges explored initially are distances $D=\{20, 30, 40, 60, 80, 100\}$ km and slopes $\alpha=\{0, 5, 10, 15, 20, 45, 90\}$ degrees.
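To make the {\it Cone Model} test concrete, the following Python sketch evaluates it on the toy geometry (an illustration only, not the code used for this work: it assumes a horizontal shower axis with the tau decay at the origin and $X_{\rm max}$ $15$\,km downstream, uses the conservative fit parameters of Table~\ref{tab:fitting_values}, and adopts an arbitrary $40$\,km detector width):
\begin{verbatim}
import numpy as np

def cone_params(e_sh_ev):
    # Conservative fits of Table 1: height in km, angles in degrees.
    h = 42.0 + 48.0 * (e_sh_ev - 1e17) / 1e17
    w_min = max(0.0, 1.2 - 2.2 * np.log10(e_sh_ev / 1e17))
    w_max = 1.0 + 0.80 * np.log10(e_sh_ev / 1e17)
    return h, w_min, w_max

def n_antennas_in_cone(e_sh_ev, alpha_deg, d_km,
                       step_km=1.0, width_km=40.0):
    a = np.radians(alpha_deg)
    # Antenna grid on the inclined plane (x: North, z: altitude),
    # limited to 3 km of elevation gain above the foot of the slope.
    s_max = min(3.0 / max(np.sin(a), 1e-9), 100.0)
    s = np.arange(0.0, s_max, step_km)           # along-slope coordinate
    y = np.arange(-width_km / 2.0, width_km / 2.0 + step_km, step_km)
    ss, yy = np.meshgrid(s, y)
    ants = np.stack([d_km + ss * np.cos(a), yy, ss * np.sin(a)], axis=-1)
    # Hollow cone with apex at X_max, ~15 km past the decay point,
    # oriented here along the horizontal shower axis (+x).
    h, w_min, w_max = cone_params(e_sh_ev)
    delta = ants.reshape(-1, 3) - np.array([15.0, 0.0, 0.0])
    along = delta[:, 0]
    dist = np.linalg.norm(delta, axis=1)
    ang = np.degrees(np.arccos(np.clip(along / dist, -1.0, 1.0)))
    inside = (along > 0) & (along <= h) & (ang >= w_min) & (ang <= w_max)
    return int(inside.sum())

# A shower counts as detected when at least 5 antennas lie in the cone:
print(n_antennas_in_cone(1e19, alpha_deg=10.0, d_km=30.0) >= 5)
\end{verbatim}
Scanning such a function over the $(\alpha, D)$ grid and over the set of tau decays reproduces, in spirit, the {\it Cone Model} efficiencies discussed below.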
The coarse step of this initial scan is mainly motivated by computation time and disk space considerations for the {\it microscopic} simulation. We first show a relative comparison of the different methods before discussing the effects of the topography on the detection efficiency. \begin{figure*}[tb] \center \includegraphics[width=0.49\textwidth]{./figures/ZhairesVSRM_1e+18.pdf} \includegraphics[width=0.49\textwidth]{./figures/ZhairesVSRM_1e+19.pdf} \caption{{\it Left:} Detection efficiency as a function of the distance $D$ and slope $\alpha$ for the simulation set with a primary neutrino energy of $10^9$\,GeV. Comparison between ZHAireS ({\it top}) and {\it Radio Morphing} ({\it middle}), with the difference plotted at the {\it bottom}, for conservative ({\it left}) and aggressive ({\it right}) thresholds. {\it Right:} Same for a primary neutrino energy of $10^{10}$\,GeV.} \label{Zhaires_RM_sens} \end{figure*} \begin{figure*}[tb] \center \includegraphics[width=0.49\textwidth]{./figures/ConeVSRM_1e+18.pdf} \includegraphics[width=0.49\textwidth]{./figures/ConeVSRM_1e+19.pdf} \caption{ {\it Left:} Detection efficiency as a function of distance $D$ and slope $\alpha$ for the simulation set with a primary neutrino energy of $10^9$\,GeV. Results are plotted for the {\it Cone Model} ({\it top}) and {\it Radio Morphing} ({\it middle}), as well as the difference ({\it Radio-Morphing} - {\it Cone Model}) ({\it bottom}). Conservative ({\it left}) and aggressive ({\it right}) threshold hypotheses are also considered. {\it Right:} Same for a primary neutrino energy of $10^{10}$\,GeV.} \label{Cone_RM_sens} \end{figure*} \begin{figure*}[tb] \center \includegraphics[width=0.49\textwidth]{./figures/distance_slices_1e+18.pdf} \includegraphics[width=0.49\textwidth]{./figures/distance_slices_1e+19.pdf} \includegraphics[width=0.49\textwidth]{./figures/residuals_1e+18.pdf} \includegraphics[width=0.49\textwidth]{./figures/residuals_1e+19.pdf} \caption{{\it Top:} Detection efficiency as a function of slope $\alpha$ for a distance $D= 40$\,km for neutrino energies of $10^9$ ({\it left}) and $10^{10}$\,GeV ({\it right}). Comparisons between the microscopic (solid lines), {\it Radio Morphing} (dashed lines) and {\it Cone Model} (dash-dotted lines), for conservative (black lines) and aggressive (red lines) threshold hypotheses. {\it Bottom:} Differences {\it ZHAireS-RadioMorphing} and {\it ZHAireS - Cone Model} for the data shown in the top panel, following the same color code.} \label{distance_slices_1e18} \end{figure*} \subsection{Relative comparison} Figure \ref{Zhaires_RM_sens} shows that the {\it Radio Morphing} treatment induces trigger efficiencies at most 15\% higher than {\it microscopic} simulations. This confirms results obtained in \cite{Zilles:2018kwq} and qualifies the {\it Radio Morphing} chain as a valid tool for the study presented in this article. Taking advantage of the factor $\sim$100 gain in computation time of {\it Radio Morphing} compared to {\it microscopic} simulations\,\cite{Zilles:2018kwq}, we then decrease the simulation step size down to 2$^{\circ}$ for slope $\alpha$ and 5\,km for the distance to decay $D$, allowing for a more detailed study of the effect of topography on the array detection efficiency. This refined analysis is presented in Figure \ref{Cone_RM_sens}, where results of the {\it Cone Model} are also shown.
The distribution of the {\it Cone Model} detection efficiency in the ($\alpha$,$D$) plane follows a trend similar to the {\it Radio Morphing} one, with differences within $\pm$20\% for most of the parameter space. There are however some differences, in particular a significant under-estimation with the {\it Cone Model} in the ranges ($\alpha>30\degree$, $D>80$\,km) and ($\alpha<20\degree$, $D<30$\,km). There is also a flatter distribution as a function of slope for the conservative trigger hypothesis, which results in an over-estimation for ($\alpha>30\degree$, $D<40$\,km) for the {\it Cone Model}, also visible in Figure \ref{distance_slices_1e18}. Discrepancies are not surprising since the {\it Cone Model} is an approximate method, as already pointed out in section \ref{cone}. One should however be reminded that slopes $\alpha>30\degree$ correspond to extreme cases, very rare in reality, which cannot be considered for actual deployment. For realistic slope values $\alpha<30\degree$, the {\it Cone Model} detection efficiencies differ from those of the {\it microscopic} approach by at most $-30\%$. The {\it Cone Model} can thus safely be used to provide in a very short amount of time a rough and conservative estimate of the neutrino sensitivity for realistic topographies. This result also provides an {\it a posteriori} validation of the initial computation of the GRAND array sensitivity \cite{Martineau:2016yj}, even though the cone was then parametrized from showers simulated in the 30-80\,MHz frequency range. \subsection{Toy-setup discussion} \label{toymodelres} Below we study how the topography affects the detection potential of neutrino-induced air showers by a radio array. To do that, we use the results of the {\it Radio Morphing} chain, which provides both good reliability and fine topography granularity, as explained in the previous section. Despite statistical fluctuations obviously visible in Figures \ref{Cone_RM_sens} and \ref{distance_slices_1e18}, general trends clearly appear. Four striking features can in particular be singled out: \\ $\bullet$ A significant increase of the detection efficiency for slopes varying from 0 degree up to a few degrees: the detection efficiency for a flat area is lower by a factor $3$ compared to an optimal configuration $\qty(\alpha, D) \approx \qty(10 \degree, 25\,{\rm km})$. This result is consistent with the study presented in \cite{GRANDWP}, where the effective area computed for a real topography on a mountainous site was found to be four times larger than for a flat site. \\ $\bullet$ Limited variation of the detection efficiency for slopes between $\sim2\degree$ and $\sim20\degree$. \\ $\bullet$ An efficiency slowly decreasing for slopes larger than $\sim20\degree$. This is in particular valid for distances $D$ shorter than 40\,km, where the detection efficiency is nearly null. \\ $\bullet$ A slow decrease of the detection efficiency with increasing value of $D$. \medskip To interpret these results, we may first consider that two conditions have to be fulfilled to perform radio-detection of showers: first the radio beam must hit the detector, then enough antennas (five in this study) have to trigger on the corresponding radio signal. In order to disentangle these two factors ---one mostly geometrical, the other experimental---, we display in Figure \ref{fly_above} the fraction of events reaching the detector as a function of the parameters $\qty(\alpha, D)$.
These events are defined by a non-null intersection between the detector plane and a $3\degree$ half-aperture cone centered on the shower trajectory, a conservative and model-independent criterion. \begin{figure*}[tb] \center \includegraphics[width=0.49\textwidth]{./figures/fly_above_1E+9.pdf} \includegraphics[width=0.49\textwidth]{./figures/fly_above_1E+10.pdf} \caption{ {\it Left:} Fraction of events intersecting the detection area as a function of distance $D$ and slope $\alpha$ for the simulation set with a primary neutrino energy of $10^9$\,GeV. {\it Right:} Same for a primary neutrino energy of $10^{10}$\,GeV.} \label{fly_above} \end{figure*} It appears from Figure \ref{fly_above} that the large fraction ---around 90\%--- of showers flying above the detector is the main cause of the limited efficiency of a flat detection area. As a corollary, the steep rise of detection efficiency with increasing slope is clearly due to the increasing fraction of intercepted showers. Figures \ref{Cone_RM_sens} and \ref{fly_above} however differ significantly for configurations corresponding to $\alpha>20\degree$ and $D<40$\,km: the fraction of intercepted events varies marginally with $\alpha$ at a given $D$, while the detection efficiency drops. This means that the first condition for detection ---detector inside the radio beam--- is fulfilled for these configurations, but the second ---sufficient number of triggered antennas--- is not, because the tau decay is too close, and the radio footprint at ground consequently too small. The situation may be compared ---with a 90$\degree$ rotation of the geometry of the problem--- to the radio-detection of ``standard'' air showers with zenith angle $\theta<60\degree$, which suffers from limited efficiency for sparse arrays\,\cite{Charrier:2018fle}. A larger density of detection units would certainly improve detection efficiency, but the need for large detection areas, imposed by the very low rate of neutrino events, rules out this option. Finally the slow decrease in efficiency with increasing value of $D$ is mostly due to geometry, as the fraction of intersecting events diminishes with $D$ in similar proportion. \begin{figure*}[tb] \center \includegraphics[width=0.49\textwidth]{./figures/integ_1e+18.pdf} \includegraphics[width=0.49\textwidth]{./figures/integ_1e+19.pdf} \caption{{\it Left:} Average detection efficiency over a constant detector area as a function of slope for the simulation set with a primary neutrino energy of $10^9$\,GeV. {\it Right:} Same for a primary neutrino energy of $10^{10}$\,GeV. In both cases values are computed with the {\it Radio Morphing} treatment.} \label{constant_average_detection_eff} \end{figure*} \medskip Yet, one could argue that this result is biased by the detector layout defined in our toy-setup. The infinite width of the detection plane combined with a limit on the detector elevation ($3000$\,m above the reference altitude, see section \ref{toymodeldescription}) indeed implies that a detector deployed over mild slopes is larger than one deployed over steeper ones in this study. A value $\alpha =10\degree$ for example allows for a detector extension of $3/\sin\alpha\sim17$\,km, while $\alpha = 70\degree$ implies a value six times smaller. Considering a constant detector area for all configurations ($\alpha$, $D$) and comparing their effective area ---or expected event rates--- would avoid such a bias, but would require a complete Monte-Carlo simulation.
Such a complete Monte-Carlo simulation is beyond the scope of this study, and would be useful only if real topographies were taken into account. It is however possible to estimate this bias by studying how the {\it constant area detector efficiency} varies with slope. This quantity is defined as the detection efficiency averaged over $D$ and weighted with a factor $\sin\alpha$. As $D$ measures the amount of empty space in front of the detector (see section \ref{toymodeldescription}), averaging the efficiency over all values of $D$ allows us to take into account all possible shower trajectories for a given slope value. The factor $\sin\alpha$ corrects for the variation of the detector area with slope. The {\it constant area detector efficiency} can therefore be understood as a proxy for the event rate per unit area of a detector deployed on a plane of slope $\alpha$, facing an infinite flat area. The {\it constant area detector efficiency} computed from the {\it Radio Morphing} results is displayed as a function of slope in Figure \ref{constant_average_detection_eff}. \\ Beyond a certain threshold ($\sim20\degree$ for the conservative case, $\sim30\degree$ for the aggressive one), there is no significant variation of its value with $\alpha$, because the poor performance of steep slopes for close-by showers (i.e. small values of $D$) compensates for the larger area factor $\sin\alpha$. Figure \ref{constant_average_detection_eff} also confirms the clear gain of a slope ---even a mild one--- compared to a detector deployed over flat ground. \\ Only showers propagating towards the slope were considered in this study, but one can deduce from Figures \ref{distance_slices_1e18} and \ref{fly_above} that the opposite trajectory (corresponding to a down-going slope, i.e. $\alpha<0$) results in a near-zero detection probability. For showers traveling transverse to the slope inclination (i.e. along the East-West axis in our configuration), basic geometric considerations allow one to infer that the situation is probably comparable to that of horizontal ground. The boost factor of 3 determined for showers propagating towards the detector plane thus certainly corresponds to a best-case scenario. Computing the net effect of a non-flat topography on the neutrino detection efficiency for random arrival directions cannot be performed with this toy-setup configuration (see section \ref{toymodeldescription} for details). However we note that a study presented in Ref. \cite{GRANDWP} points towards a boost factor of $\sim$2 on the detection of upward-going showers for the specific site used in that work. \subsection*{Acknowledgments} We are grateful to Clementina Medina and Jean-Christophe Hamilton for suggesting this study. This work is supported by the APACHE grant (ANR-16-CE31-0001) of the French Agence Nationale de la Recherche. We also thank the France China Particle Physics Laboratory for its support. The simulations were performed using the computing resources at the CC-IN2P3 Computing Centre (Lyon/Villeurbanne – France), partnership between CNRS/IN2P3 and CEA/DSM/Irfu. \section{References} \bibliographystyle{elsarticle-num}
\section{\label{sec:Introduction}Introduction} The ability to prepare and manipulate quantum states of nano-mechanical systems is of interest in metrology and for tests of fundamental quantum physics. Ground state cooling has already been achieved in cryogenic chambers with silicon membranes and other microwave devices\,\cite{Verhagen2012,Chan2011}. However, there is a desire to produce quantum states of motion with levitated particles that are not physically tethered to their surroundings, and which therefore have significantly longer decoherence times. If realised, these systems would be a platform for many novel experiments: tests of wave function collapse models\,\cite{Yin2013}, ultra-sensitive metrology\,\cite{Geraci2010}, and probes of gravitational decoherence\,\cite{Albrecht2014}. Most of the progress towards preparing ground state systems has been made with optically levitated particles, where recent experiments are currently capable of detecting, and are limited by, photon shot noise\,\cite{Jain2016}. Although optical traps are the most widely used for trapping microscopic particles, they can face problems with heating due to the high laser intensities\,\cite{Rahman2016} and the intrinsic noise associated with the trapping force. While optical traps are still very good for high-frequency scenarios, these difficulties become more severe at low frequencies. Magnetic traps are free from these problems and have recently been demonstrated as suitable for trapping and cooling nano-diamonds\,\cite{Hsu2016, OBrien2019}. The traps are typically three orders of magnitude larger than their optical counterparts, and consequently operate at much lower frequencies, of around $100$Hz as opposed to $100$kHz for an optical trap. This comes with the advantage of being able to hold and manipulate large particles, but also makes it unfeasible to cool on time-scales much longer than the relatively long oscillation period. Current experiments\,\cite{Jain2016,Hsu2016} have estimated the phonon reheating rate for these systems in high vacuum ($10^{-8}$ mbar) to be $\Gamma_{\text{th}}\approx100$Hz and it is expected that this will be significantly reduced at lower pressures. In this article we consider methods for improving the quantum measurement efficiency of levitated nano-particles, and go on to analyse how best to apply feedback and assess the fundamental cooling limits. Direct feedback of a position measurement in the form developed by Wiseman and Milburn\,\cite{Wiseman1994} has been shown to be effective in controlling the motion of trapped ions\,\cite{Bushev2006}, but we find it to be less suitable here. The cooling strategies employed with direct feedback rely heavily on a separation of time-scales between the damping rate and the trap frequency that is impractical in larger traps. Instead, our starting point is to adapt the real time state estimation and the feedback strategies discussed by Doherty et al.\,\cite{A.C.Doherty1999} for use in this newly accessible low-frequency regime. Having considered several options for tracking a particle's position and momentum, we suggest making measurements in two steps. At first, scattered light from the particle can be imaged with a quadrant photo-diode, and an externally applied damping force can be used for cooling. After damping the particle's motion to sub-optical-wavelength amplitudes, significantly better resolution can be achieved by measuring how the particle scatters light into a mirror mode.
An ideal candidate particle for future experiments would be an approximately spherical nano-diamond, which is of interest due to the access it offers to internal nitrogen-vacancy (NV) centres. This second quantum handle on the particle is crucial for many proposed future experiments\,\cite{Yin2013,Albrecht2014} and may also provide a route to having fine control over micron-, as opposed to nanometre-, sized particles. We go on to show that the proposed methods could be used to produce motional states of microscopic oscillators with average phonon occupancy $\langle n\rangle < 3$ and state purity $P\approx0.44$, achievable with realistic measurement efficiencies for current experiments. This is a regime where it should already be possible to see signs of quantum behaviour in the particle motion, and could provide a starting point for preparation of more exotic macroscopic superposition states. With advances in isolation from environmental heating, and improvements in light collection efficiency, there are no fundamental limits to these techniques being used to reach the quantum motional ground state. The rest of this article is organised as follows. In Sec. II we review the stochastic master equation that results from measuring the motion of a particle in front of a single mirror. In Sec. III we discuss the merits and limitations of various measurement schemes, and the practicality of real time state estimation. In Sec. IV we show the effectiveness of feedback by estimation, in cooling and squeezing mechanical motion. We conclude and present outlooks in Sec. V. \section{\label{sec:model}Model} Levitated, trapped particles for the purpose of cooling are, by design, simple oscillators. Our model describes the motion of a magnetically confined particle and its interaction with an optical probe beam. We will treat the internal dynamics of the light scattering process adiabatically, and model the particle as a point dipole in the Rayleigh regime; alternatives for larger particles will be discussed in the conclusions. The Hamiltonians of the freely oscillating particle, $H_{sys}$, the optical field, $H_F$, and the interaction Hamiltonian, $H_I$, are given by \begin{equation} H_{\text{sys}} = \frac{p^{2}}{2m} +\frac{m \omega^{2} x^{2}}{2} \text{,} \end{equation} \begin{equation} H_{\text{F}} = \sum_{k}\hbar \omega_{k} b_{k}^{\dagger}b_{k}, \end{equation} \begin{equation} H_{\text{I}} = \sum_{k} \hbar \sqrt{\gamma}\big(b_{k}\exp(i \mathbf{k}.\mathbf{r})+b_{k}^{\dagger}\exp(-i \mathbf{k}.\mathbf{r})\big) \text{,} \end{equation} where $m$ is the particle mass, $\omega$ is the magnetic trap frequency, $\gamma$ is the scattering rate into each mode of the optical electric field, and $b_k$ ($b_k^{\dagger}$) is the usual quantised field mode amplitude, with wavenumber $k$ and angular frequency $\omega_k$, respectively. The momentum recoil due to the scattered photons is represented by $\mathbf{k.r}$, where $\mathbf{r}$ is the particle's position. It is sufficient to model the motion of the particle in 1D: although some cooling is often applied along each trap axis, the frequencies of each motional degree of freedom can be well separated and safely decoupled, as is done in current experiments \cite{Hsu2016}. Continuous measurement theory allows for quantification of the disturbance caused to the particle in relation to the amount of position information carried away by the field\,\cite{Jacobs2006}.
We will go on to discuss the merits and drawbacks of various measurement schemes, but first we outline the details of the method we assess to be the most suitable for magnetically levitated particles. \subsection{\label{sec:detection}Motional side-band detection} The set-up we consider uses a mirror to introduce a standing wave mode across the levitated particle, where some of the scattered light from the illumination probe will be collected, as shown schematically in Fig.~\ref{fig:setup}. The mirrors here can be quite large, capturing a significant fraction of the light scattered along the primary trap axis. The particle motion adds side-bands to the spectrum of light scattered in the mirror mode, positioned at $\pm \omega$ from the optical frequency. Continuous measurement of these side-bands can be used to infer the particle's current position after filtering out the elastically scattered signal. This is a non-intrusive set-up that could be implemented in magnetic traps to give a significant increase in measurement efficiency and resulting position resolution over current imaging schemes. \begin{figure}[ht] \begin{center} \textbf{}\par\medskip \includegraphics[scale=0.4]{setup.pdf} \caption{Sketch of apparatus for measuring the intensity of a standing wave, modulated by a particle's motion along its main trap axis $x$. The trap centre is marked a distance $L$ from the mirror, with the probe light incident at an angle $\theta$. The range of motion over which this measurement would be valid is restricted about a node of the standing light field, and has also been marked. With shaped illumination, light collection efficiencies $>15\%$ could be reasonably achieved.} \label{fig:setup} \end{center} \end{figure} The interaction Hamiltonian, considering only emission into the mirror mode, is \begin{equation} H_{\text{I}}=\hbar\sqrt{\gamma}\sin(k_{L}(L+\hat{x}))\big(b+b^{\dagger}\big)\text{.} \end{equation} If the position of the trap centre is taken to be where $k_{L}L=\pi/4$, we can define the corresponding system operator \begin{equation} \hat{c}=\sin\big(k_{L}(L+\hat{x})\big)\approx\frac{1}{\sqrt{2}}\big(1+k_{L}\hat{x}\big)\text{,} \end{equation} \noindent where we have performed a Taylor expansion in the Lamb-Dicke regime. This expansion is possible when the typical length of the oscillation is small compared with the wavelength of incident light, $k_{L} x \ll 1$ (some initial cooling would be required to reach this regime). We note that this operator has two separate components, describing the effects of constant amplitude elastically scattered light and position-dependent modulated light. We can then apply continuous measurement theory from quantum optics\,\cite{gardiner2004quantum,carmichael2009open} to the system. Under the usual Born and Markov approximations, for this form of the interaction Hamiltonian, we can think of the operator $\hat{c}$ as being applied to the system whenever a photon is emitted into the field, $\rho \to \hat{c}\rho\hat{c}^{\dagger}/\langle \hat{c}^{\dagger}\hat{c}\rangle$. A stochastic increment $dN$ can be used to model whether or not a photon is detected in a given time-step in the environment, taking a value of one or zero respectively. Its average value should be the detection rate, \begin{equation} \langle dN\rangle = \gamma \langle \hat{c}^{\dagger}\hat{c}\rangle dt \text{,} \end{equation} corresponding to the expected value of measuring a scattered photon in the mirror mode in a time interval $dt$.
In the limit where the component of elastically scattered light is comparatively large it is helpful to make a diffusion approximation, as is commonly done when considering homodyne detection\,\cite{Wiseman1993}. This is indeed the case here and so \begin{equation} \begin{split} dN &= \gamma \hat{c}^{\dagger}\hat{c}\, dt \approx \frac{\gamma}{2}dt +\gamma k_{L}\hat{x}dt \\ &= \frac{\gamma}{2}dt + \gamma k_{L}\langle x\rangle dt + \frac{\sqrt{\gamma}}{2} dW \text{,} \end{split} \end{equation} where in the last line we have followed the usual analysis for random events occurring quickly enough to be treated as continuous noise, splitting the increment on the right hand side into a sum of two parts, one deterministic and the other stochastic. The Wiener increment $dW$ represents Gaussian white noise. This signal corresponds directly to what would be measured experimentally by a photo detector. The resulting state evolution is described by a master equation conditioned on the Gaussian measurement collapses \cite{Jacobs2006} \begin{equation} d\rho=-\frac{i}{\hbar}\big[H_{\text{sys}},\rho\big]dt + 2\kappa\,\mathcal{D}[x]\rho \, dt + \sqrt{2\eta \kappa}\,\mathcal{H}[x] \rho \, dW \text{.} \label{master} \end{equation} Here $\mathcal{D}[x]$ is the usual Lindblad super-operator that describes dissipation and $\mathcal{H}[x]$ is the measurement super-operator that localises the particle based on the information gathered; \begin{equation} \mathcal{D}[\hat{c}]\rho=\hat{c}\rho \hat{c}^{\dagger}-\frac{1}{2}(\hat{c}^{\dagger}\hat{c}\rho+\rho \hat{c}^{\dagger}\hat{c}) \text{,} \end{equation} \begin{equation} \mathcal{H}[\hat{c}]\rho=\hat{c}\rho+\rho \hat{c}^{\dagger}-\langle \hat{c}+\hat{c}^{\dagger}\rangle \rho \text{.} \end{equation} The \textit{measurement strength} $\kappa$ is defined as the ratio between the scattering rate and the square of the position resolution $\delta\alpha$ associated with each photon, \begin{equation} \kappa = \frac{\gamma}{\delta\alpha^{2}} = \frac{\gamma k_{L}^{2}}{2} \text{.} \end{equation} \noindent This measurement strength reflects the rate of information gained about the system and the corresponding disturbance this necessarily causes. This exact expression for $\kappa$ would be accurate if the scattering was exclusively along the $x$ axis; in any other case the true value will be smaller, since we should only count the momentum kicks projected along the $x$ direction. This is a small correction and should not be a problem given that $\kappa$ otherwise scales with the scattering rate off the particle, and can be adjusted by increasing the laser power. The parameter $\eta$ is the quantum efficiency, and accounts for the fraction of photons collected (after projection along the measurement axis) and any further loss that occurs in the detector. The measured photo-current can be expressed as a renormalisation of the now continuous photon count $dN$, after subtracting the elastically scattered signal in post-processing, \begin{equation} dI = \langle x\rangle dt + \frac{1}{\sqrt{8\eta \kappa}}dW \text{.} \label{measurement} \end{equation} It has been suggested that a light collection efficiency of $\eta \approx 0.15$ could be reasonably expected when monitoring a trapped ion in front of a mirror\,\cite{Bushev2006}. One of the significant advantages of magnetic levitation is that the illumination light is independent of the trapping mechanism, which allows it to be shaped to optimize detection efficiency.
This is of crucial importance when relying on active feedback cooling in order to counteract the random motion induced by the measurement itself. The shot-noise in optically trapped nano-particle experiments currently poses a major obstacle to reaching the ground state, with typical collection efficiencies $\eta < 0.01$\,\cite{Jain2016}. \section{\label{sec:m&e}Measurement \& State estimation} \subsection{\label{sec:measurement}Measurement} The main obstacles to ground state cooling using active feedback are environmental heating mechanisms and the fundamental disturbance associated with making measurements. In order to reach the quantum regime it will be necessary for environmental heating to be made negligibly small on the time-scales of the measurement and feedback. A reasonable goal would be to cool a particle in a time comparable to the oscillation period of a $\omega=2\pi\times100$Hz trap. In this case the phonon reheating rate would need to be reduced to around $\Gamma_{\text{th}}=k_{B}T\gamma_{\text{th}}/\hbar\omega\sim1$Hz, where $T$ represents the surrounding gas temperature and $\gamma_{\text{th}}$ is the thermal damping rate. Current typical reheating values are around $100$Hz; below 10 mbar, thermal decoherence is expected to be linear in gas pressure and in the temperature of the environment. By better isolating the particle, or with the help of cryogenically cooling the trap chamber, reheating rates two orders of magnitude lower could feasibly be reached. Attempting to cool on shorter time-scales comes with its own physical limitations, and requires stronger measurements which are in turn a separate source of heating. It is helpful to consider the necessary measurement strength to reach a desired position resolution in a given time. A simple estimate of the resolution achievable across an interval $\Delta t$ can be found by integrating the measurement record \cite{Doherty2012}, \begin{equation} \Delta I = \int_{t}^{t+\Delta t} dI \approx \langle x\rangle \Delta t + \int_{t}^{t+\Delta t} \frac{dW}{\sqrt{8\eta \kappa}} \text{.} \end{equation} In this expression we have assumed that the expected value of the position of the particle will not change much over the time interval. This is not a well justified assumption but will allow us to determine an upper bound for the resolution. The rescaled integrated signal $\sqrt{8\eta \kappa}\,\Delta I$ has a mean value of $\sqrt{8\eta \kappa}\langle x\rangle \Delta t$ that grows linearly in time, while its width grows as the square root, $\sigma=\sqrt{\Delta t}$. Continuous measurement over this interval could therefore resolve at best, \begin{equation} \delta x \approx \frac{1}{\sqrt{8 \Delta t\, \eta \kappa}} \text{,} \label{resolution} \end{equation} with a signal to noise ratio of one. We would like to achieve resolution comparable to the size of the quantum ground state $x_{0} = \sqrt{\hbar/2m\omega}$, in some time interval which for now we will consider to be on the order of a mechanical oscillation, $\Delta t = 1/\omega$, to outpace a realistic thermal heating rate, \begin{equation} \delta x_{\omega} = \sqrt{\frac{\omega}{8\eta \kappa}} \equiv x_{0} \text{.} \end{equation} From this, we can conclude that in order to approach ground state cooling it is necessary for $\kappa x_{0}^{2} \sim \omega/8\eta$. This places a lower bound on the necessary measurement strength, with the trade-off for going to higher values being greater back-action heating and stochastic drift.
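To give a feel for the scales involved, the following short script (our own illustration; the diamond density of $3500$\,kg/m$^3$ is an assumed value) evaluates the ground state size $x_{0}$ and the measurement strength saturating the bound above, for the $0.1\,\mu$m diamond and $\omega = 2\pi\times100$\,Hz trap considered in this article:
\begin{verbatim}
import numpy as np

hbar = 1.0545718e-34            # J s
rho = 3500.0                    # kg/m^3, assumed diamond density
d = 0.1e-6                      # m, particle diameter
m = rho * (4.0 / 3.0) * np.pi * (d / 2.0) ** 3
omega = 2.0 * np.pi * 100.0     # rad/s
eta = 0.15                      # quantum efficiency from the text

x0 = np.sqrt(hbar / (2.0 * m * omega))   # ground state size
kappa = omega / (8.0 * eta * x0**2)      # strength saturating the bound
dx = 1.0 / np.sqrt(8.0 * (1.0 / omega) * eta * kappa)  # Eq. (resolution)

print(f"m = {m:.2e} kg, x0 = {x0 * 1e9:.2f} nm")
print(f"kappa * x0^2 / omega = {kappa * x0**2 / omega:.2f}")  # = 1/(8 eta)
print(f"dx / x0 = {dx / x0:.2f}")        # unity by construction
\end{verbatim}
This gives $x_{0}\approx0.2$\,nm, of the same order as the $\sim0.1$\,nm scale quoted below, and a normalised measurement strength $\kappa x_{0}^{2}/\omega = 1/(8\eta)\approx0.8$, comparable to the values used in the figures that follow.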
Actively counteracting the disturbance caused by a probe light relies on efficiently gathering as much useful information as possible from every scattered photon. This, along with the necessary resolution requirement, sets the criteria for a suitable measurement. We can now assess the merits and shortcomings of various measurement techniques. Camera-like imaging has been used in previous experiments with particles in low frequency traps. A camera follows a particle's position in a plane perpendicular to the direction of light being scattered from it. However, it is light scattered parallel to this plane that imparts the most recoil to the visible motion of the particle. This translates to a very low quantum efficiency. For example, $15\%$ light collection efficiency from a radiating point dipole $f(\theta) = 3/4 \cos(\theta)$ translates to detecting $\sim 1\%$ of the imparted recoil in the imaging plane. Meanwhile a measurement of a particle's motion parallel to the light being scattered, with the same collection efficiency, translates to detecting around $\sim 19\%$ of the relevant recoil (as in Fig.~\ref{fig:setup}). Even so, imaging is simple to implement, and for the purpose of initially damping the position variance to around a fraction of a micron, low quantum efficiency will not be an issue. For comparison, a $0.1\mu m$ diameter diamond in a trap $\omega = 2\pi\times100$Hz will only be quantum limited when approaching the ground state variance of roughly $x_{0}\approx 0.1nm$. Many high efficiency measurements capable of resolving motion at sub-optical-wavelength amplitudes require the particle to already be tightly confined. In a large trap this necessitates some initial cooling so that the particle does not move outside the range of these measurement techniques. Feedback using imaging measurements is well suited for this. Introducing a cavity around the suspended particle is often a reliable way to improve light collection efficiency. Homodyning light from a standing wave cavity can be used to efficiently track the position of a particle; however, this necessarily introduces a dipole potential tied to the measurement strength, and has its own associated challenges\,\cite{Steck2006}. Sideband cooling with near resonance light within a cavity has also been proposed as a useful aid in achieving ground state cooling\,\cite{Genoni2016}. However, this would not be compatible with the efficient on-axis light collection available in magnetic traps, and under optimal conditions, stops being beneficial for cooling compared to active feedback alone when $\eta\sim0.2$. This level of efficiency would hopefully be surpassed in future experiments with enhanced directional scattering. A sensitive velocity measurement was proposed for ion cooling by exploiting electromagnetically induced transparency\,\cite{Rabl2005}. This phenomenon could be observed in a travelling wave cavity with a diamond containing an NV centre; however, the velocity information would only be contained in the spontaneously emitted radiation from a necessarily weakly excited state. For a very massive particle this would be an extremely weak measurement, $\kappa x_{0}^{2}\ll \Gamma_{\text{th}}$, unable to suitably resolve the particle for damping on short time-scales. As discussed in the model section, the most suitable method we have found involves measuring the amplitude modulation of a standing wave due to a particle's motion in front of a single mirror.
This technique has been successfully demonstrated with trapped ions\,\cite{Steixner2005,Bushev2006} and has the potential to be very effective for monitoring magnetically levitated nanoscopic particles, when combined with initial cooling of the oscillation amplitude to around a single optical wavelength. \subsection{\label{sec:estimation}State estimation} \begin{figure}[ht] \includegraphics[scale=0.7,trim={0 0 0 1.3cm},clip]{measurement.pdf} \caption{Simulation of a trapped particle undergoing measurement, using (\ref{x}-\ref{Cxp}). The normalised measurement strength is $\kappa x_{0}^{2} / \omega = 1$, with $0.1\%$ quantum efficiency, and an initial particle energy corresponding to a temperature of $T=1\mu K$. The top figure shows a numerically generated example of a position measurement and the middle figure shows the results of continuous state estimation using the same signal. The estimated mean position is plotted beside the true value, and the shaded region covers 2 standard deviations in the estimate. The bottom figure shows the improvement in the standard deviation in both position and momentum due to the measurement, and the dashed line here indicates the width of the motional ground state.}\label{fig:measurement} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.7,trim={0 0 0 1.3cm},clip]{heating.pdf} \caption{Simulation of particle heating due to measurement over several oscillation cycles, using (\ref{x}-\ref{Cxp}). The normalised measurement strength is $\kappa x_{0}^{2} / \omega = 1$, with $0.1\%$ quantum efficiency. The particle is initially in its ground state with temperature $T=0K$. This figure is otherwise organised in the same way as Fig. \ref{fig:measurement}.} \label{fig:heating} \end{figure} It will be necessary to process the measurement signal in order to perform feedback cooling, since it is not possible to achieve damping by making shifts in the system Hamiltonian proportional to the position alone. Using the equations of motion that describe the particle, combined with the measurement record, the full system state can be continuously estimated. This type of information processing can quickly converge on both the true position and momentum values of the particle, whilst updating the expected error in the estimation. Using the master equation (\ref{master}), and the fact that $d\langle c\rangle = Tr[c\, d\rho]$, we can find equations of motion for the relevant position and momentum moments to describe a Gaussian state undergoing measurement, \begin{equation} d\langle x\rangle = \frac{1}{m} \langle p\rangle dt + \sqrt{8\eta \kappa}\,V_{x}\,dW \text{,} \label{x} \end{equation} \begin{equation} d\langle p\rangle = -m\omega^{2}\langle x\rangle dt + \sqrt{8\eta \kappa}\,C_{xp}\,dW \text{,} \label{p} \end{equation} \begin{equation} \partial_{t}V_{x} = \frac{2}{m} C_{xp} - 8\eta \kappa\,V_{x}^{2} \text{,} \label{Vx} \end{equation} \begin{equation} \partial_{t}V_{p} = -2m\omega^{2}C_{xp} + 2\hbar^{2}\,\kappa - 8\eta \kappa\,C_{xp}^{2} \text{,} \label{Vp} \end{equation} \begin{equation} \partial_{t}C_{xp} = \frac{1}{m}\,V_{p} - m\omega^{2}V_{x} - 8\eta \kappa\,V_{x}\, C_{xp} \text{,} \label{Cxp} \end{equation} where $V_{x}$ and $V_{p}$ are the position and momentum variances, and $C_{xp} = (1/2)\langle[x,p]_{+}\rangle -\langle x\rangle\langle p\rangle$ is the symmetrised covariance. The stochastic increments here can be re-written in terms of the measurement record $dI$, and the equations can be solved to estimate the particle's full motional state.
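A minimal numerical sketch of this estimator is given below (our own illustration in Python, in normalised units $\hbar=m=\omega=1$ with illustrative parameter values; the true state is simplified to a noiseless oscillator, so measurement back-action on it is neglected). The key step is that the Wiener increment driving the estimator is reconstructed from the innovation, $dW = \sqrt{8\eta\kappa}\,(dI - \langle x\rangle dt)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
omega, eta, kappa = 1.0, 0.15, 1.0   # illustrative values
dt, n_steps = 1e-3, 20000

x_true, p_true = 5.0, 0.0            # simplified "true" trajectory
x_e, p_e = 0.0, 0.0                  # estimator means
Vx, Vp, Cxp = 10.0, 10.0, 0.0        # thermal-looking initial variances

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    dI = x_true * dt + dW / np.sqrt(8.0 * eta * kappa)  # Eq. (measurement)
    # Innovation: the part of dI not explained by the current estimate.
    dW_e = np.sqrt(8.0 * eta * kappa) * (dI - x_e * dt)
    dx_e = p_e * dt + np.sqrt(8.0 * eta * kappa) * Vx * dW_e
    dp_e = -omega**2 * x_e * dt + np.sqrt(8.0 * eta * kappa) * Cxp * dW_e
    dVx = (2.0 * Cxp - 8.0 * eta * kappa * Vx**2) * dt
    dVp = (-2.0 * omega**2 * Cxp + 2.0 * kappa
           - 8.0 * eta * kappa * Cxp**2) * dt
    dCxp = (Vp - omega**2 * Vx - 8.0 * eta * kappa * Vx * Cxp) * dt
    x_e, p_e = x_e + dx_e, p_e + dp_e
    Vx, Vp, Cxp = Vx + dVx, Vp + dVp, Cxp + dCxp
    x_true, p_true = (x_true + p_true * dt,
                      p_true - omega**2 * x_true * dt)

print(f"position error |x_e - x_true| = {abs(x_e - x_true):.3f}")
print(f"conditional variance Vx = {Vx:.3f}")
\end{verbatim}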
In an experiment, this estimation process would need to be carried out in real time, on time-scales $\delta t \ll 1$ms in a $\omega = 2\pi\times 100$Hz trap. The particle's motion is expected to look thermal when cooling starts, and this provides a good guess for the particle's initial state variances. The measurement process itself also drives any state towards looking Gaussian, ensuring the continued reliability of these state equations. This procedure is not dissimilar to estimating the velocity by taking the derivative of the position signal, i.e. by passing it through a suitable band-pass filter. In fact these state equations are exactly equivalent to the Kalman equations for a noisy classical system, and do indeed act like filters, but with dynamic quality factors and cut-off frequencies. Kalman equations are designed to update information about a system based on a series of imperfect measurements, and produce an estimate of the system that improves with time, outperforming a series of measurements made independently\,\cite{jacobs1993introduction}. The effectiveness of estimating the state of a levitated particle over a single oscillation cycle is illustrated in Fig.~\ref{fig:measurement}, for a general position measurement. The true state is numerically modelled using the Gaussian moment equations (\ref{x} - \ref{Cxp}), with an initial temperature of $1\mu K$, which might be realistically achieved with classical feedback damping. The stochastic measurement record (\ref{measurement}) is also numerically generated based on the current true state. This is then used to update a second set of the same Gaussian moment equations to simulate the state estimation procedure. The state estimate is initiated with thermal variances, whereas the true state is modelled as a coherent state with thermal energy. The estimator quickly converges on the true state of the system, until reaching the resolution limit set by the measurement strength and quantum efficiency. This full state model confirms the rough resolution limit (\ref{resolution}). The Gaussian state equations can also be used to illustrate the heating effects due to the measurement itself (Fig.~\ref{fig:heating}). For the considered measurement strengths this is more easily visible with a state initially prepared at $T=0K$. Without any other sources of environmental heating, the measurement will add energy into the system until it reaches a temperature associated with the magnitude of the photon shot noise. This temperature is higher with more intense illumination and presents a trade-off when trying to achieve a better resolution. \section{\label{sec:analysis}Feedback Cooling} There are two well established approaches to applying feedback that take into account the effects of quantum noise: direct feedback of a force proportional to the measurement signal\,\cite{Wiseman1994}, and feedback based on real time state estimation\,\cite{A.C.Doherty1999}. It is important to know whether feedback should be treated as direct in order to correctly account for how the noise in the measurement and in the system will be correlated. The simplest approach to damping is to apply a force proportional and opposite to a particle's current velocity, and if measuring the velocity explicitly, this can be implemented as direct feedback\,\cite{Rabl2005}. Similarly, in the case of a high quality oscillator it is sufficient to feedback a signal proportional to the slowly varying momentum quadrature\,\cite{Doherty2012}.
Both these techniques require cooling over at least hundreds of oscillation cycles, which is not feasible in low frequency traps. In this case, indirect feedback using the state estimation is necessary, where the low trap frequencies will in fact be beneficial. \subsection{\label{sec:feedback}Feedback procedure} The optimal feedback strategy can be determined using classical control theory. In a classical system there would not be noise fundamentally linked to the measurement strength, but this can be artificially enforced. This is useful because it allows well developed control methods to be adapted for cooling\,\cite{Steck2006,Steck2004,Doherty1998}. Our sketch of the idea follows closely the work in Ref.~\cite{A.C.Doherty1999}. For this system it turns out not to be optimal to include the estimated state variances in the feedback function. They will be necessary to continuously solve for the mean position and momentum, but the feedback will not directly involve the variance values. The feedback Hamiltonian should simply be some linear function of the momentum and position operators scaled by functions of the estimated first order moments, \begin{equation} H_{f} = f(\langle x\rangle,\langle p\rangle) x + g(\langle x\rangle,\langle p\rangle) p \text{.} \end{equation} To find the appropriate form of the functions $f$ and $g$ we can define a cost function for the parameter we want to minimise, in this case the energy, \begin{equation} C = \int_{0}^{t}\big[Tr(\mathbf{x}^{T}P\mathbf{x}\rho) + q^{2}\mathbf{u}^{T}Q\mathbf{u}\big]\,dt' \text{.} \end{equation} Here $\textbf{x} = \{x,p\}$ is the state vector, and $\textbf{u} = -K\langle\textbf{x}\rangle$ is the feedback vector we want to introduce in the dynamical equations for the mean moments (\ref{x},\ref{p}); the optimal form of the matrix $K$ is what needs to be determined. The matrices $P$ and $Q$ are chosen so that the cost function represents the system energy, \begin{equation} P=Q= \begin{pmatrix} m\omega^{2} & 0\\ 0 & 1/m \end{pmatrix} \text{.} \end{equation} The matrix $Q$ can be interpreted as accounting for an energy cost associated with the feedback. Including it in this way reflects a restriction on the magnitude of the feedback, weighted by the parameter $q$, which will work out to be inversely proportional to the system damping rate. Optimal feedback should attempt to localise both position and momentum simultaneously. This is not often a viable option due to the difficulty in creating terms proportional to the momentum operator in the Hamiltonian. A position term in the Hamiltonian can be introduced simply by using an externally applied force. One option to introduce a momentum operator would be to shift the origin of the position coordinates, which in the rest frame of the trap manifests itself as a shift to the canonical momentum; \begin{equation} p \to m(\dot{x} + v) \text{,} \end{equation} \begin{equation} H_{\text{sys}} \to \frac{p^{2}}{2m} -pv +\frac{m \omega^{2} x^{2}}{2} \text{,} \end{equation} where $v$ is the velocity at which the trap centre is shifted. This could be implemented in a low frequency magnetic trap either mechanically or with extra applied fields. The shifts would have to be small, given the measurement's sensitivity to where the particle sits in the standing wave field, but a piezoelectric device could be used to shake the trap in a controlled manner to achieve damping. This would be a unique level of control over both position and momentum for a nano-mechanical system.
Assuming that this could be successfully implemented, the optimal feedback Hamiltonian takes the form \begin{equation} H_{f} = \frac{1}{q} \big(\langle p\rangle x + \langle x\rangle p\big) \text{.} \end{equation} We can define $\Gamma = 1/q$ to be the system damping rate, and the parameter $q$ can now be interpreted as a bound on the feedback response time. This accounts for the physical limitations of the feedback mechanism, and places an upper bound on the optimal damping rate. For an infinitely broadband signal $q\to 0$ and the damping rate could be arbitrarily high. With feedback, the new equations for the damped position and momentum are \begin{equation} d\langle x\rangle = \frac{1}{m} \langle p\rangle dt + \sqrt{8\eta \kappa}\,V_{x}\,dW - \Gamma \langle x\rangle dt \text{,} \label{dx} \end{equation} \begin{equation} d\langle p\rangle = -m\omega^{2}\langle x\rangle dt + \sqrt{8\eta \kappa}\,C_{xp}\,dW - \Gamma \langle p \rangle dt\text{.} \label{dp} \end{equation} \subsection{\label{sec:results}Cooling results} In this system the introduction of linear feedback has no effect on the estimated variances conditioned on the measurement record. Their dynamics are governed by the measurement alone and we can therefore find the steady state values for our feedback controlled state from the original equations for the Gaussian moments (\ref{Vx},\ref{Vp},\ref{Cxp}), \begin{equation} \tilde{V}_{x} = \frac{2m\omega}{\hbar}\, V_{x} = \bigg(\frac{2}{\eta}\,\frac{1}{\xi + 1}\bigg)^{1/2} \text{,} \label{fVx} \end{equation} \begin{equation} \tilde{V}_{p} = \frac{2}{\hbar m\omega} \,V_{p} =\bigg(\frac{2}{\eta}\,\frac{\xi^{2}}{\xi + 1}\bigg)^{1/2} \text{,} \label{fVp} \end{equation} where $\xi = \sqrt{1+4/\eta \chi^{2}}$ and $\chi=m \omega^{2}/2\hbar\eta \kappa$. These normalised variances are equal to one for a minimum uncertainty state. This is the case for unit efficiency and when the parameter $\xi \to 1$, which in turn is the case when the measurement strength $\kappa\to0$. Relative to the trap frequency, in optical traps $\kappa x_{0}^{2}$ is usually very small, but with a strong measurement, $\kappa x_{0}^{2} > \omega$, the steady state position variance is noticeably squeezed compared to the harmonic oscillator's natural ground state. Fig.\,\ref{fig:var} shows how the conditional variances vary for the range of measurement strengths accessible in low frequency magnetic traps. \begin{figure}[ht] \begin{center} \textbf{}\par\medskip \includegraphics[scale=1]{variances.pdf} \caption{Final resolution of the normalised position and momentum variances of a trapped particle, from the steady state solutions of a Gaussian estimator (\ref{fVx},\ref{fVp}). Variance values less than 1 are squeezed compared to the harmonic oscillator ground state. The solid lines correspond to a measurement with perfect efficiency $\eta=1$ and the dashed lines to $\eta=0.15$; these values and the measurement strength would vary depending on the nature of the measurement.} \label{fig:var} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.7,trim={0 0 0 1.3cm},clip]{damping.pdf} \caption{Simulation of a damped levitated particle, using (\ref{dx},\ref{dp},\ref{Vx},\ref{Vp},\ref{Cxp}). The normalised measurement strength is $\kappa x_{0}^{2} / \omega = 1$, with $10\%$ quantum efficiency, and initial particle energy corresponding to a temperature of $T=1\mu K$. The top figure shows a numerically generated example of a position measurement. The bottom figure shows the evolution of the mean position of the true state.
The standard deviation of the motion from $t=\pi \to 2\pi$ is highlighted and can be seen to be significantly smaller than the fundamental shot noise in the original measurement signal.} \label{fig:damping} \end{center} \end{figure} \begin{figure}[t] \begin{center} \textbf{}\par\medskip \includegraphics[scale=0.75]{limits.pdf} \caption{Average steady state phonon occupancy of a trapped nano-particle after undergoing active feedback, calculated using the equations for a damped Gaussian state with excess noise (\ref{phonon}). The effective damping rate (feedback gain) was chosen to be $\Gamma/\omega = 10$, strong enough to remove almost all stochastic drift due to the measurement disturbance. The quantum efficiencies from the top line down are $\eta = 0.05, 0.1, 0.2, 0.5, 1$. The final occupancies range from $\langle n\rangle <3$, for currently feasible experimental parameters ($\eta=0.2$, $\kappa x_{0}^{2}/\omega=1$), to near zero with perfect collection efficiency and a weaker measurement.} \label{fig:limits} \end{center} \end{figure} \noindent The estimated variances are the best that could be resolved with a given measurement. We can then average over the measurement record to account for the excess variance due to the particle's motion. The applied feedback should limit this as much as possible, keeping the mean position and momentum values centred on zero. Using the equations for the mean position and momentum (\ref{dx},\ref{dp}), and following the rules of Ito calculus, we can calculate the excess variances, which we have distinguished with a superscript `$E$', \begin{equation} \partial_{t}\tilde{V}_{x}^{E} = -2\Gamma \tilde{V}_{x}^{E} + 2\omega\tilde{C}_{xp}^{E} + \frac{2\omega}{\chi}\tilde{V}_{x}^{2} \text{,} \label{VxE} \end{equation} \begin{equation} \partial_{t}\tilde{V}_{p}^{E} = -2\Gamma \tilde{V}_{p}^{E} - 2\omega\tilde{C}_{xp}^{E} + \frac{2\omega}{\chi}\tilde{C}_{xp}^{2} \text{,} \label{VpE} \end{equation} \begin{equation} \partial_{t}\tilde{C}_{xp}^{E} = -2\Gamma \tilde{C}_{xp}^{E} - \omega\big(\tilde{V}_{x}^{E}-\tilde{V}_{p}^{E}\big) + \frac{2\omega}{\chi}\tilde{V}_{x}\tilde{C}_{xp} \text{.} \label{CxpE} \end{equation} The final state is always improved with stronger damping, which effectively counteracts the measurement shot noise, as well as removing the initial thermal energy. The return for increasing $\Gamma$ quickly drops off, and for moderate damping rates $\Gamma > \omega$ the steady state variances approach the ideal limits given by the measurement resolution. This is reassuring since physically there would certainly be a bound to the feedback response time. Fig.\,\ref{fig:damping} shows a simulation of the feedback procedure for experimentally reasonable parameters $\eta=0.1$, $T_{\text{initial}}=1\mu K$, $\kappa x_{0}^{2}/\omega = 1$, $\Gamma=10\,\omega$. The state is again modelled as a coherent state with thermal energy and feedback is applied based on a numerically simulated state estimator. The particle's motion is almost completely damped after a single oscillation cycle and the excess variance in the mean position is highlighted, $\tilde{V}_{x}^{E} \sim 0.1$. The remaining motion is small compared to the fundamental resolution limit due to the photon shot noise. From the steady state expressions we can also find the purity of the final state\,\cite{Zurek1993}, \begin{equation} Tr(\rho^{2}) = (\hbar/2)(V_{x}V_{p}-C_{xp}^{2})^{-1/2} \text{.} \end{equation} If the damping is strong, the steady state value is approximately that of a conditional state without any excess.
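As a quick numerical cross-check of the steady-state expressions (\ref{fVx},\ref{fVp}) and of this purity, the following short script (our own illustration; the normalised strength value is arbitrary) evaluates the conditional variances, the covariance implied by the steady state of (\ref{Vx}), and the resulting purity:
\begin{verbatim}
import numpy as np

def conditional_state(eta, k_norm):
    # k_norm = kappa * x0^2 / omega, the normalised measurement strength.
    chi = 1.0 / (4.0 * eta * k_norm)  # chi = m omega^2 / (2 hbar eta kappa)
    xi = np.sqrt(1.0 + 4.0 / (eta * chi**2))
    Vx = np.sqrt(2.0 / eta / (xi + 1.0))            # Eq. (fVx)
    Vp = np.sqrt(2.0 / eta * xi**2 / (xi + 1.0))    # Eq. (fVp)
    Cxp = Vx**2 / chi   # normalised covariance from the steady state of (Vx)
    purity = 1.0 / np.sqrt(Vx * Vp - Cxp**2)
    return Vx, Vp, purity

for eta in (1.0, 0.15):
    Vx, Vp, P = conditional_state(eta, k_norm=1.0)
    print(f"eta = {eta}: Vx = {Vx:.3f}, Vp = {Vp:.3f},"
          f" purity = {P:.3f} (sqrt(eta) = {np.sqrt(eta):.3f})")
\end{verbatim}
The printed purity reproduces $\sqrt{\eta}$ for any value of the measurement strength, anticipating the analytic statement below.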
Indeed, with perfect detection the final measured state looks pure, and it becomes increasingly mixed as the efficiency drops, \begin{equation} P_{c}=Tr(\rho_{c}^{2}) = \sqrt{\eta} \text{.} \end{equation} To reach the lowest temperatures, $\kappa$ would ideally be kept as low as possible to avoid squeezing due to the measurement. There is a balance then between resolving the particle fast enough to outpace environmental heating, and wanting a weak probe to minimise squeezing. Notably however, state purity has no dependence on the measurement strength, suggesting that the squeezed states with higher energy could reasonably be expected to have quantum properties which are just as visible. The final average phonon number can be calculated using the combined conditional variances based on a particular measurement, and the excess variance seen when averaging over trajectories, \begin{equation} \langle n\rangle = \frac{\langle x^{2}\rangle}{2}+\frac{\langle p^{2}\rangle}{2}-\frac{1}{2} \text{.} \label{phonon} \end{equation} Steady state phonon occupancy, calculated with (\ref{phonon}), is shown in Fig.\,\ref{fig:limits}, for a range of measurement strengths and quantum efficiencies. These are the expected values that would be observed after damping, taking into account the estimated variance in the measurement signal (\ref{fVx},\ref{fVp}), and the excess variance associated with the remaining particle motion (\ref{VxE}-\ref{CxpE}). \noindent \section{\label{sec:Conclusions}Conclusions and Outlook} In this article, we have analysed processes for state estimation and feedback cooling of a low-frequency, magnetically levitated nano-particle. Monitoring the particle's position through modulation of a standing wave in front of a mirror was chosen as the most suitable option, over monitoring the light output from a cavity. This should be relatively simple to integrate into current experiments, and would allow for a high degree of variation in the measurement strength, which would be primarily dependent on the intensity of the probe beam. The need to damp both the particle momentum and position independently is likely to be the largest experimental difficulty after achieving sufficient isolation from environmental heating. The unique nature of the static magnets that make up these traps may make it possible to control the particle by dynamically shifting the trap centre, and alternative methods using a sequence of strong controlled laser pulses are also possible. We suggest that measurement efficiency comparable to or greater than that achievable in ion traps, $\eta = 0.15$, could realistically be reached in an experiment. Optimal feedback via state estimation with this level of efficiency could produce states competitively near the quantum ground state with some additional degree of squeezing, $\langle n\rangle < 3$, with purity $P\approx0.44$, in only a few oscillation periods. In current experiments there are many factors to consider in order to extend the system reheating time, which will be the main barrier to achieving lower temperatures as it prevents the use of a less disruptive measurement probe. As these values improve, and with the possibility of highly directional scattering for better collection efficiency, it may soon be feasible to reach below single phonon occupancies using the methods outlined in this article. Most related experiments have so far assessed success based on a temperature associated with the measured motional power spectrum.
Alternatively, there are recent proposals for distinguishing quantum motion via dynamical model selection using solely position measurements\,\cite{Ralph2018}. These look to identify quantum statistics from a series of position measurements after introducing a small perturbation to the trapping potential. The distinguishability is closely related to state purity, which should be safely within reach of the proposed cooling methods. All of the methods discussed are applicable to sub-micron-sized Rayleigh scatterers that can be effectively treated as point dipoles. High quality nano-diamonds of this size have been produced for exactly the purpose of trapping and cooling\,\cite{Frangeskou2018}. Microscopic particles, on the other hand, would not usually be suitable for the sub-wavelength measurements suggested. However, large diamonds could still be cooled by tracking the position of point-like NV impurities within them. Additionally, strong coupling between an NV spin and the mechanical oscillation of a nano-diamond can be engineered using a strong magnetic field gradient. There are proposals for generating low-number Fock states and possible spatial superposition states by manipulating a Jaynes-Cummings type interaction Hamiltonian, in states prepared near the quantum ground state \cite{PhysRevA.88.033614}. \section*{Acknowledgements} The authors thank B. D'Urso, B.R. Slezak, C.W. Lewandowski and P. Nachman for discussions which motivated the present work, and for their helpful comments regarding experimental details. L.S.W. acknowledges support from the EPSRC through a DTP studentship and SUPA for a PECRE award to visit Montana State University.
\section{#1}\setcounter{equation}{0}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{displaymath}}{\begin{displaymath}} \newcommand{\end{displaymath}}{\end{displaymath}} \newcommand{\alpha}{\alpha} \newcommand{$B \to X_s e^+ e^-$ }{$B \to X_s e^+ e^-$ } \newcommand{$b \to s e^+ e^-$ }{$b \to s e^+ e^-$ } \newcommand{$b \to c e \bar\nu $ }{$b \to c e \bar\nu $ } \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{itemize}}{\begin{itemize}} \newcommand{\end{itemize}}{\end{itemize}} \newcommand{{\cal O}}{{\cal O}} \newcommand{{\cal O}}{{\cal O}} \newcommand{\frac}{\frac} \newcommand{\tilde{C}}{\tilde{C}} \newcommand{K^+\rightarrow\pi^+\nu\bar\nu}{K^+\rightarrow\pi^+\nu\bar\nu} \newcommand{K^+\rightarrow\pi^+\nu\bar\nu}{K^+\rightarrow\pi^+\nu\bar\nu} \newcommand{K_{\rm L}\rightarrow\pi^0\nu\bar\nu}{K_{\rm L}\rightarrow\pi^0\nu\bar\nu} \newcommand{K_{\rm L}\rightarrow\pi^0\nu\bar\nu}{K_{\rm L}\rightarrow\pi^0\nu\bar\nu} \newcommand{K_{\rm L} \to \mu^+\mu^-}{K_{\rm L} \to \mu^+\mu^-} \newcommand{K_{\rm L} \to \mu^+ \mu^-}{K_{\rm L} \to \mu^+ \mu^-} \newcommand{K_{\rm L} \to \pi^0 e^+ e^-}{K_{\rm L} \to \pi^0 e^+ e^-} \def\frac{\as}{4\pi}{\frac{\alpha_s}{4\pi}} \def\gamma_5{\gamma_5} \newcommand{\IM\lambda_t}{{\rm Im}\lambda_t} \newcommand{\RE\lambda_t}{{\rm Re}\lambda_t} \newcommand{\RE\lambda_c}{{\rm Re}\lambda_c} \renewcommand{\baselinestretch}{1.3} \textwidth=17.5cm \textheight=22.0cm \oddsidemargin -0.06cm \topmargin -1.8cm \baselineskip -3.5cm \parskip 0.3cm \tolerance=10000 \parindent 0pt \def\ltap{\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$}} \def\gtap{\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$}} \begin{document} \vskip 30pt \begin{center} {\Large \bf Looking for \boldmath$B\rightarrow X_s \ell^+\ell^-$ in non-minimal Universal Extra Dimensional model} \\ \vspace*{1cm} \renewcommand{\thefootnote}{\fnsymbol{footnote}} {{\sf Avirup Shaw$^1$\footnote{email: avirup.cu@gmail.com}} }\\ \vspace{10pt} {{\em $^1$Theoretical Physics, Physical Research Laboratory,\\ Ahmedabad 380009, India}} \normalsize \end{center} \begin{abstract} \noindent Non-vanishing boundary localised terms significantly modify the mass spectrum and various interactions among the Kaluza-Klein excited states of the 5-Dimensional Universal Extra Dimensional scenario. In this scenario we compute the contributions of Kaluza-Klein excitations of gauge bosons and third generation quarks to the decay process $B\rightarrow X_s\ell^+\ell^-$, incorporating next-to-leading order QCD corrections. We estimate the branching ratio as well as the Forward Backward asymmetry associated with this decay process. Considering the constraints from some other $b \to s$ observables and electroweak precision data, we show that a significant portion of the parameter space of this scenario is able to explain the observed experimental data for this decay process. From our analysis we put a lower limit on the inverse of the radius of compactification ($R^{-1}$), i.e., we constrain the size of the extra dimension, by comparing our theoretical prediction for the branching ratio with the corresponding experimental data. Depending on the values of the free parameters of the present scenario, the lower limit on $R^{-1}$ can be as high as $760$ GeV.
{Even this value could be slightly higher if we project the upcoming measurement by the Belle II experiment.} Unfortunately, the Forward Backward asymmetry of this decay process would not provide any significant limit on $R^{-1}$ in the present model. \vskip 5pt \noindent \end{abstract} \renewcommand{\thesection}{\Roman{section}} \setcounter{footnote}{0} \renewcommand{\thefootnote}{\arabic{footnote}} \section{Introduction} The Standard Model (SM) of particle physics has {\it almost} been completed by the discovery of the Higgs boson at the Large Hadron Collider (LHC) \cite{Aad:2012tfa, Chatrchyan:2012xdj}. However, the SM is not the ultimate scenario, because there exist several experimental observations in various directions, such as massive neutrinos, the Dark Matter (DM) enigma, and the observed baryon asymmetry, that cannot be addressed within the SM. This, in turn, indicates that new physics (NP) is indeed a reality of nature. Moreover, experimental data for several flavour physics (especially $B$-physics) observables show significant deviation from the corresponding SM expectations. For example, $B$-physics experiments at LHCb, Belle and Babar have pointed at intriguing lepton flavour universality violating (LFUV) effects for both the charged current ($\mathcal{R}_{D^{(*)}}$~\cite{average} and $\mathcal{R}_{J/\psi}$~\cite{Aaij:2017tyk}) as well as the flavour changing neutral current (FCNC) ($\mathcal{R}_K$ \cite{Aaij:2019wad} and $\mathcal{R}_{K^*}$\cite{Aaij:2017vbb}) processes. In the latter case, the processes involved are described at the quark level by the transition $b \to s \ell^+ \ell^- (\ell \equiv e-{\rm the\;electron}, \mu-{\rm the\;muon})$, which is highly suppressed in the SM. Therefore, even for a small deviation between the SM prediction and experimental data, these types of observables have always been very instrumental in probing the favourable candidates among the various NP models that exist in the literature. {Apart from these, there exist several other $B$-physics observables which could also be used for the detection of NP scenarios.} {Following the above argument, in the current article we will calculate the inclusive decay mode $B\rightarrow X_s \ell^+\ell^-$ in a NP scenario, namely the non-minimal Universal Extra Dimensional (nmUED) model\footnote{In this model we have already calculated several $B$-physics observables, for example branching fractions of some rare decay processes, e.g., $B_s \rightarrow \mu^+ \mu^-$ \cite{Datta:2015aka}, $B \to X_s\gamma$ \cite{Datta:2016flx} and the $\mathcal{R}_{D^{(*)}}$ anomalies \cite{Biswas:2017vhc}.}. This inclusive decay mode $B\rightarrow X_s \ell^+\ell^-$ has been considered as one of the harbingers for the detection of NP scenarios. The reason is that this decay mode is one of the most significant and relatively clean decay modes. The $B\to X_s\ell^+\ell^-$ decay is significant in the sense that it not only helps in the detection of NP scenarios but also presents a more complex test of the SM. For example, in comparison with the $B\to X_s\gamma$ decay, different contributions add to the inclusive $B\to X_s\ell^+\ell^-$ decay. Moreover, it is particularly attractive because, as a three-body decay process, it also offers more kinematic observables such as the invariant di-lepton mass spectrum and the Forward-Backward asymmetry \cite{Hurth:2003vb, Benzke:2017woq}. At the quark level this process is also governed by the $b \to s \ell^+ \ell^-$ transition.
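For orientation, we recall the standard form of the effective Hamiltonian governing the $b \to s \ell^+ \ell^-$ transition (we quote the commonly used operator basis; overall normalisation conventions vary between references): \begin{eqnarray} \mathcal{H}_{\rm eff} &=& -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*} \sum_{i} C_i(\mu)\, \mathcal{O}_i\;, \nonumber\\ \mathcal{O}_7 &=& \frac{e}{16\pi^2}\, m_b \left(\bar{s}\, \sigma_{\mu\nu} P_R\, b\right) F^{\mu\nu}\;, \nonumber\\ \mathcal{O}_9 &=& \frac{e^2}{16\pi^2} \left(\bar{s}\, \gamma_{\mu} P_L\, b\right) \left(\bar{\ell}\, \gamma^{\mu} \ell\right)\;, \qquad \mathcal{O}_{10} = \frac{e^2}{16\pi^2} \left(\bar{s}\, \gamma_{\mu} P_L\, b\right) \left(\bar{\ell}\, \gamma^{\mu} \gamma_5 \ell\right)\;. \end{eqnarray}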
The effective Hamiltonian of this decay process is thus characterised by three different Wilson Coefficients (WCs): $C_7$, $C_9$ and $C_{10}$. Among these WCs, $C_{10}$ and $C_7$ for the nmUED model have already been calculated in our previous studies \cite{Datta:2015aka} and \cite{Datta:2016flx} respectively. Consequently, the calculation of the WC $C_9$ using the relevant one-loop Feynman diagrams in the context of the nmUED model is one of the primary tasks of this article. The full calculational details of the WC $C_9$ are given in Sec.\;\ref{sec:Heff:BXsee:nlo}. To the best of our knowledge, this is the first article in which the calculation of the WC $C_9$ in the context of the nmUED model is shown in detail. Finally, with these different WCs $C_7$, $C_9$ and $C_{10}$ we compute the coefficients of the electroweak dipole operators for the photon and the gluon for the first time in the nmUED scenario. We can then readily calculate the decay amplitude for the process $B\rightarrow X_s \ell^+\ell^-$ in the nmUED scenario.} In most cases, experimental data for several observables of the decay mode $B\rightarrow X_s \ell^+\ell^-$ have been explored in two regions\footnote{The reason for choosing these two regions is given in Sec.\;\ref{sec:Heff:BXsee:nlo}.} of the di-lepton invariant mass squared $q^2$ \bigg($\equiv (p_{_{\ell^+}}+p_{_{\ell^-}})^2$\bigg) spectrum. In these two regions, the experimental data for the branching ratio (Br) are given by the Babar collaboration\footnote{These experimental data have also been used in two recent articles \cite{Feng:2016wph, Kumar:2019qbv} in the context of the same decay process.} \cite{Lees:2013nxa} \begin{eqnarray} &&{\rm Br}(B\rightarrow X_{_s}\ell^+\ell^-)_{_{q^2\in[1,6]{\rm GeV}^2}}^{\rm exp}=(1.60^{+0.41+0.17}_{-0.39-0.13}\pm 0.18)\times10^{-6},\; \nonumber\\ &&{\rm Br}(B\rightarrow X_{_s}\ell^+\ell^-)_{_{q^2\in[14.4,\;25]{\rm GeV}^2}}^{\rm exp}=(0.57^{+0.16+0.03}_{-0.15-0.02}\pm 0.00)\times10^{-6} \;,(\ell=e,\;\mu)\;. \label{EXP-BR-BtoXsll} \end{eqnarray} The SM predictions for the above quantities are \cite{Huber:2015sra} \begin{eqnarray} &&{\rm Br}(B\rightarrow X_{_s}\ell^+\ell^-)_{_{q^2\in[1,6]{\rm GeV}^2}}^{\rm SM}=(1.62\pm 0.09)\times10^{-6},\; \nonumber\\ &&{\rm Br}(B\rightarrow X_{_s}\ell^+\ell^-)_{_{q^2\in[14.4,25]{\rm GeV}^2}}^{\rm SM}=(2.53\pm 0.70)\times10^{-7}\;, (\ell=e,\;\mu)\;. \label{SM-BR-BtoXsll} \end{eqnarray} Moreover, apart from the branching ratio, the Forward-Backward asymmetry ($A_{FB}$) could also help in the detection of a NP scenario. For this decay process $B\rightarrow X_{_s}\ell^+\ell^-\;(\ell=e,\;\mu)$, in the two distinct regions of $q^2$ the experimental values of this observable are given by the Belle Collaboration \cite{Sato:2014pjr} \begin{eqnarray} &&A_{_{FB}}(B\rightarrow X_{_s}\ell^+\ell^-)\Big|_{q^2\in[1,\;6]\;{\rm GeV}^2}^{\rm exp} =0.30\pm0.24\pm0.04\;, \nonumber\\ &&A_{_{FB}}(B\rightarrow X_{_s}\ell^+\ell^-)\Big|_{q^2\in[14.4,\;25]\;{\rm GeV}^2}^{\rm exp} =0.28\pm0.15\pm0.02\;, \label{exp-AFB0} \end{eqnarray} while the corresponding SM expectations are \cite{Fukae:1998qy, Ali:2002jg, Sato:2014pjr} \begin{eqnarray} &&A_{_{FB}}(B\rightarrow X_{_s}\ell^+\ell^-)\Big|_{q^2\in[1,\;6]\;{\rm GeV}^2}^{\rm SM} =-0.07\pm0.04\;, \nonumber\\ &&A_{_{FB}}(B\rightarrow X_{_s}\ell^+\ell^-)\Big|_{q^2\in[14.4,\;25]\;{\rm GeV}^2}^{\rm SM} =0.40\pm0.04\;.
Hence, by investigating these observables one can search for any favourable NP scenario and also tightly constrain its parameter space. In this spirit, in this article we evaluate the decay amplitude for the process $B\rightarrow X_{_s}\ell^+\ell^-$ in the nmUED scenario. In the literature one can find several articles, e.g., \cite{Feng:2016wph, Lunghi:1999uk}, which have been dedicated to the exploration of the same decay process in the context of several beyond-SM (BSM) scenarios. In the present article, {in order to serve our purposes} we focus on an extension of the SM with one flat space-like dimension ($y$) compactified on a circle $S^1$ of radius $R$. All the SM fields are allowed to propagate along the extra dimension $y$. This model is called the 5-dimensional (5D) Universal Extra Dimensional (UED) scenario \cite{Appelquist:2000nn}. The fields defined on this manifold are usually expressed in terms of towers of 4-dimensional (4D) Kaluza-Klein (KK) states, with the zero-mode of each KK-tower identified as the corresponding 4D SM field. A discrete symmetry ${Z}_2$ ($y \leftrightarrow -y$) is needed to generate chiral SM fermions in this scenario. Consequently, the extra dimension is defined as an $S^1/Z_2$ orbifold, and the physical domain extends from $y = 0$ to $y = \pi R$. As a result, the $y \leftrightarrow -y$ symmetry translates into a conserved parity, known as KK-parity $=(-1)^n$, where $n$ is called the KK-number. This KK-number ($n$) is identified with the discretised momentum along the $y$-direction. By the conservation of KK-parity, the lightest Kaluza-Klein particle (LKP) with KK-number one ($n=1$) cannot decay to a pair of SM particles and is therefore absolutely stable. Hence, the LKP has been considered a potential DM candidate in this scenario \cite{Servant:2002hb, Servant:2002aq, Cheng:2002ej, Majumdar:2002mw, Burnell:2005hm, Kong:2005hn, Kakizaki:2006dz, Belanger:2010yx}. Furthermore, a few variants of this model can address some other shortcomings of the SM, for example gauge coupling unification \cite{Dienes:1998vh, Dienes:1998vg, Bhattacharyya:2006ym}, neutrino mass \cite{Hsieh:2006qe, Fujimoto:2014fka} and the fermion mass hierarchy \cite{Archer:2012qa}. At the $n^{th}$ KK-level all the KK-state particles have mass $\sqrt{m^2+(nR^{-1})^2}$, where $m$ is the zero-mode mass (the SM particle mass), which is very small with respect to $R^{-1}$. Therefore, the UED scenario features an almost degenerate mass spectrum at each KK-level. Consequently, this scenario would lose much of its phenomenological relevance, specifically at colliders. However, this degeneracy in the mass spectrum can be lifted by radiative corrections \cite{Georgi:2000ks, Cheng:2002iz}. There are two different types of radiative corrections: the first are bulk corrections (which are finite and non-zero only for the KK-excitations of gauge bosons), while the second are boundary localised corrections, which are proportional to terms depending logarithmically on the cut-off\footnote{UED is considered an effective theory, characterised by a cut-off scale $\Lambda$.} scale ($\Lambda$).
The boundary correction terms can be embedded as 4D kinetic, mass and other possible interaction terms for the KK-states at the two fixed boundary points ($y=0$ and $y=\pi R$) of this orbifold. In fact, it is quite natural to include such terms in an extra dimensional theory like UED, since these boundary terms play the role of counterterms for the cut-off dependent loop-induced contributions. In the minimal version of UED (mUED) it is assumed that these boundary terms are tuned in such a way that the 5D radiative corrections exactly vanish at the cut-off scale $\Lambda$. In general, however, this assumption can be relaxed: without calculating the actual radiative corrections, one may consider kinetic, mass as well as other interaction terms localised at the two fixed boundary points in order to parametrise these unknown corrections. This specific scenario is therefore called nmUED \cite{Dvali:2001gm, Carena:2002me, delAguila:2003bh, delAguila:2003kd, delAguila:2003gv, Schwinn:2004xa, Flacke:2008ne, Datta:2012xy, Flacke:2013pla}. In this scenario, not only the radius of compactification ($R$) but also the coefficients of the different boundary localised terms (BLTs) are considered free parameters, which can be constrained by various experimental data on different physical observables. In the literature one can find several such exercises covering various phenomenological aspects. For example, limits on the strengths of the BLTs have been obtained from the estimation of electroweak observables \cite{Flacke:2008ne, Flacke:2013pla}, the S, T and U parameters \cite{delAguila:2003gv, Flacke:2013nta}, the DM relic density \cite{Bonnevier:2011km, Datta:2013nua}, the production as well as decay of the SM Higgs boson \cite{Dey:2013cqa}, collider studies at the LHC \cite{Datta:2012tv, Datta:2013yaa, Datta:2013lja, Shaw:2014gba, Shaw:2017whr, Ganguly:2018pzs}, $R_b$ \cite{Jha:2014faa}, the branching ratios of some rare decay processes, e.g., $B_s \rightarrow \mu^+ \mu^-$ \cite{Datta:2015aka} and $B \to X_s\gamma$ \cite{Datta:2016flx}, the $\mathcal{R}_{D^{(*)}}$ anomalies \cite{Biswas:2017vhc, Dasgupta:2018nzt}, flavour changing rare top decays \cite{Dey:2016cve, Chiang:2018oyd} and the unitarity of scattering amplitudes involving KK-excitations \cite{Jha:2016sre}. In this article we estimate the contributions of the KK-excited modes to the decay $B\rightarrow X_s\ell^+\ell^-$ in a 5D UED model with {\it non-vanishing} BLT parameters. Our calculation includes next-to-leading order (NLO) QCD corrections. To the best of our knowledge, this is the first article to study the decay $B\rightarrow X_s\ell^+\ell^-$ in the framework of nmUED. Considering the present experimental data for the concerned FCNC process, we put constraints on the BLT parameters. Furthermore, we investigate how far the lower limit on $R^{-1}$ can be extended to higher values using non-zero BLT parameters. An interesting part of this exercise is therefore to see whether this lower limit on $R^{-1}$ is comparable with the results obtained from our previous analyses \cite{Datta:2015aka, Datta:2016flx}. Several years ago the same analysis was performed in the context of the minimal version of the UED model \cite{Buras:2003mk}; however, the experimental data have changed since then.
Therefore, it is worthwhile to revisit the lower bound on {$R^{-1}$} in the UED model by comparing the current experimental results \cite{Lees:2013nxa, Sato:2014pjr} with the theoretical estimate obtained using {\it vanishing} BLT parameters. {Furthermore, we estimate the probable bounds on the parameter space of the nmUED scenario by considering the upcoming measurement of the $B\rightarrow X_s\ell^+\ell^-$ decay observables by the Belle II experiment.} In the following section \ref{model}, we give a brief description of the nmUED model. Then in section \ref{sec:Heff:BXsee:nlo} we show the calculational details of the branching ratio and the Forward-Backward asymmetry for the present process. In section \ref{anls} we present our numerical results. Finally, we conclude in section \ref{concl}. \section{KK-parity conserving nmUED scenario: A brief overview}\label{model} Here we present the technicalities of the nmUED scenario required for our analysis. For further discussion regarding this scenario one can look into \cite{Dvali:2001gm, Carena:2002me, delAguila:2003bh, delAguila:2003kd, delAguila:2003gv, Schwinn:2004xa, Flacke:2008ne, Datta:2012xy, Datta:2012tv, Datta:2013yaa, Datta:2013lja, Shaw:2014gba, Shaw:2017whr, Ganguly:2018pzs, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc}. In the present scenario we preserve a $Z_2$ symmetry by considering equal strengths of the boundary terms at the two boundary points ($y=0$ and $y=\pi R$). Consequently, KK-parity remains conserved in this scenario, which makes the LKP stable. Hence, the present scenario can provide a potential DM candidate (such as the first excited KK-state of the photon). A comprehensive exercise on DM in nmUED can be found in \cite{Datta:2013nua}. We begin with the action for the 5D fermionic fields associated with their boundary localised kinetic terms (BLKTs) of strength $r_f$ \cite{Schwinn:2004xa, Datta:2013nua, Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{eqnarray} S_{fermion} = \int d^5x \left[ \bar{\Psi}_L i \Gamma^M D_M \Psi_L + r_f\{\delta(y)+\delta(y - \pi R)\} \bar{\Psi}_L i \gamma^\mu D_\mu P_L\Psi_L \right. \nonumber \\ \left. + \bar{\Psi}_R i \Gamma^M D_M \Psi_R + r_f\{\delta(y)+\delta(y - \pi R)\}\bar{\Psi}_R i \gamma^\mu D_\mu P_R\Psi_R \right], \label{factn} \end{eqnarray} where $\Psi_L(x,y)$ and $\Psi_R(x,y)$ represent the 5D four component Dirac spinors, which can be expressed in terms of two component spinors as \cite{Schwinn:2004xa, Datta:2013nua, Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{equation} \Psi_L(x,y) = \begin{pmatrix}\phi_L(x,y) \\ \chi_L(x,y)\end{pmatrix} = \sum_n \begin{pmatrix}\phi^{(n)}_L(x) f_L^n(y) \\ \chi^{(n)}_L(x) g_L^n(y)\end{pmatrix}, \label{fermionexpnsn1} \end{equation} \begin{equation} \Psi_R(x,y) = \begin{pmatrix}\phi_R(x,y) \\ \chi_R(x,y) \end{pmatrix} = \sum_n \begin{pmatrix}\phi^{(n)}_R(x) f_R^n(y) \\ \chi^{(n)}_R(x) g_R^n(y) \end{pmatrix}. \label{fermionexpnsn2} \end{equation}\\ $f_{L(R)}$ and $g_{L(R)}$ are the associated KK-wave-functions, which can be written as follows \cite{Carena:2002me, Flacke:2008ne, Datta:2013nua, Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{eqnarray} f_L^n = g_R^n = N^f_n \left\{ \begin{array}{rl} \displaystyle \frac{\cos\left[m_{f^{(n)}} \left (y - \frac{\pi R}{2}\right)\right]}{\cos[ \frac{m_{f^{(n)}} \pi R}{2}]} &\mbox{for $n$ even,}\\ \displaystyle \frac{{-}\sin\left[m_{f^{(n)}} \left (y - \frac{\pi R}{2}\right)\right]}{\sin[ \frac{m_{f^{(n)}} \pi R}{2}]} &\mbox{for $n$ odd,} \end{array} \right.
\label{flgr} \end{eqnarray} and \begin{eqnarray} g_L^n =-f_R^n = N^f_n \left\{ \begin{array}{rl} \displaystyle \frac{\sin\left[m_{f^{(n)}} \left (y - \frac{\pi R}{2}\right)\right]}{\cos[ \frac{m_{f^{(n)}} \pi R}{2}]} &\mbox{for $n$ even,}\\ \displaystyle \frac{\cos\left[m_{f^{(n)}} \left (y - \frac{\pi R}{2}\right)\right]}{\sin[ \frac{m_{f^{(n)}} \pi R}{2}]} &\mbox{for $n$ odd.} \end{array} \right. \end{eqnarray} The normalisation constant ($N^f_n$) for the $n^{th}$ KK-mode can easily be obtained from the following orthonormality conditions \cite{Datta:2013nua, Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{equation}\label{orthonorm} \begin{aligned} &\left.\begin{array}{r} \int_0 ^{\pi R} dy \; \left[1 + r_{f}\{ \delta(y) + \delta(y - \pi R)\}\right]f_L^mf_L^n\\ \int_0 ^{\pi R} dy \; \left[1 + r_{f}\{ \delta(y) + \delta(y - \pi R)\}\right]g_R^mg_R^n \end{array}\right\}=&&\delta^{n m}~; &&\left.\begin{array}{l} \int_0 ^{\pi R} dy \; f_R^mf_R^n\\ \int_0 ^{\pi R} dy \; g_L^mg_L^n \end{array}\right\}=&&\delta^{n m}~, \end{aligned} \end{equation} and takes the form {\small \vspace*{-.3cm} \begin{equation}\label{norm} N^f_n=\sqrt{\frac{2}{\pi R}}\Bigg[ \frac{1}{\sqrt{1 + \frac{r^2_f m^2_{f^{(n)}}}{4} + \frac{r_f}{\pi R}}}\Bigg]. \end{equation} } Here, $m_{f^{(n)}}$ is the KK-mass of the $n^{th}$ KK-excitation, obtained from the following transcendental equations \cite{Carena:2002me, Datta:2013nua, Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{eqnarray} \frac{r_{f} m_{f^{(n)}}}{2}= \left\{ \begin{array}{rl} -\tan \left(\frac{m_{f^{(n)}}\pi R}{2}\right) &\mbox{for $n$ even,}\\ \cot \left(\frac{m_{f^{(n)}}\pi R}{2}\right) &\mbox{for $n$ odd.} \end{array} \right. \label{fermion_mass} \end{eqnarray} Let us now discuss the Yukawa interactions in this scenario, as the large top quark mass plays a significant role in amplifying the quantum effects in the present study. The action of the Yukawa interaction with BLTs of strength $r_y$ is written as \cite{Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{eqnarray} \label{yukawa} S_{Yukawa} &=& -\int d^5 x \Big[\lambda^5_t\;\bar{\Psi}_L\widetilde{\Phi}\Psi_R +r_y \;\{ \delta(y) + \delta(y-\pi R) \}\lambda^5_t\bar{\phi_L}\widetilde{\Phi}\chi_R+\textrm{h.c.}\Big]. \end{eqnarray} The 5D coupling strength of the Yukawa interaction for the third generation is represented by $\lambda^5_t$. Embedding the KK-wave-functions for the fermions (given in Eqs.\;\ref{fermionexpnsn1} and \ref{fermionexpnsn2}) into the actions given in Eq.\;\ref{factn} and Eq.\;\ref{yukawa}, one finds the bi-linear terms containing the doublet and singlet states of the quarks. For the $n^{th}$ KK-level the mass matrix can be expressed as follows \cite{Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{equation} \label{fermion_mix} -\begin{pmatrix} \bar{\phi_L}^{(n)} & \bar{\phi_R}^{(n)} \end{pmatrix} \begin{pmatrix} m_{f^{(n)}}\delta^{nm} & m_{t} {\mathscr{I}}^{nm}_1 \\ m_{t} {\mathscr{I}}^{mn}_2& -m_{f^{(n)}}\delta^{mn} \end{pmatrix} \begin{pmatrix} \chi^{(m)}_L \\ \chi^{(m)}_R \end{pmatrix}+{\rm h.c.}. \end{equation} Here, $m_t$ is the mass of the SM top quark, while $m_{f^{(n)}}$ is obtained from the solutions of the transcendental equations given in Eq.\;\ref{fermion_mass}.
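Eq.\;\ref{fermion_mass} has no closed-form solution, so in practice the KK-masses have to be extracted numerically. A minimal sketch (in Python, with illustrative values of $R^{-1}$ and $R_f=r_f/R$ that are our assumptions rather than values fixed by the text) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

Rinv = 1.0   # R^{-1} in TeV (illustrative choice)
Rf   = 2.0   # scaled BLKT strength r_f/R (illustrative choice)

def root_eq(m):
    # n = 1 (odd) branch of the transcendental equation:
    # r_f m/2 - cot(m pi R / 2) = 0
    x = m * np.pi / (2.0 * Rinv)
    return (Rf / Rinv) * m / 2.0 - np.cos(x) / np.sin(x)

# for positive r_f the n = 1 root lies below R^{-1}
m_f1 = brentq(root_eq, 1e-6, Rinv - 1e-9)
print(f"m_f(1) = {m_f1:.4f} TeV")  # < R^{-1}: the BLKT lowers the KK-mass
\end{verbatim}
Such a scan reproduces the qualitative feature used repeatedly below: positive BLT parameters lower the KK-masses with respect to their UED values.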
${\mathscr{I}}^{nm}_1$ and ${\mathscr{I}}^{nm}_2$ are the overlap integrals, which are given by \cite{Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \[ {\mathscr{I}}^{nm}_1=\left(\frac{1+\frac{r_f}{\pi R}}{1+\frac{r_y}{\pi R}}\right)\times\int_0 ^{\pi R}\;dy\; \left[ 1+ r_y \{\delta(y) + \delta(y - \pi R)\} \right] g_{R}^m f_{L}^n, \] and \[ {\mathscr{I}}^{nm}_2=\left(\frac{1+\frac{r_f}{\pi R}}{1+\frac{r_y}{\pi R}}\right)\times\int_0 ^{\pi R}\;dy\; g_{L}^m f_{R}^n . \] The integral ${\mathscr{I}}^{nm}_1$ is non-vanishing both for $n=m$ and for $n\neq m$. However, for $r_y = r_f$, this integral becomes unity (when $n=m$) or zero (when $n \neq m$). On the other hand, the integral ${\mathscr{I}}^{nm}_2$ is non-vanishing only when $n=m$, and becomes unity in the limit $r_y = r_f$. At this stage we would like to point out that in our analysis we choose the equality condition ($r_y=r_f$) to avoid the complication of mode mixing and to obtain a simpler form of the fermion mixing matrix \cite{Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc}. Following this motivation, in the rest of our analysis we will maintain the equality condition\footnote{However, in general, one can choose unequal strengths of the boundary terms for the kinetic and Yukawa interactions of the fermions.} $r_y=r_f$. Imposing this equality condition ($r_y=r_f$), the resulting mass matrix (given in Eq.\;\ref{fermion_mix}) can readily be diagonalised by the following bi-unitary transformations for the left- and right-handed fields respectively \cite{Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{equation} U_{L}^{(n)}=\begin{pmatrix} \cos\alpha_{tn} & \sin\alpha_{tn} \\ -\sin\alpha_{tn} & \cos\alpha_{tn} \end{pmatrix},~~U_{R}^{(n)}=\begin{pmatrix} \cos\alpha_{tn} & \sin\alpha_{tn} \\ \sin\alpha_{tn} & -\cos\alpha_{tn} \end{pmatrix}, \end{equation} with the mixing angle $\alpha_{tn}\left[ = \frac12\tan^{-1}\left(\frac{m_{t}}{m_{f^{(n)}}}\right)\right]$. The gauge eigenstates $\Psi_L(x,y)$ and $\Psi_R(x,y)$ are related to the mass eigenstates $T^1_t$ and $T^2_t$ by the following relations \cite{Datta:2015aka, Datta:2016flx, Biswas:2017vhc} \begin{tabular}{p{8cm}p{8cm}} {\begin{align} &{\phi^{(n)}_L} = \cos\alpha_{tn}T^{1(n)}_{tL}-\sin\alpha_{tn}T^{2(n)}_{tL},\nonumber \\ &{\chi^{(n)}_L} = \cos\alpha_{tn}T^{1(n)}_{tR}+\sin\alpha_{tn}T^{2(n)}_{tR},\nonumber \end{align}} & {\begin{align} &{\phi^{(n)}_R} = \sin\alpha_{tn}T^{1(n)}_{tL}+\cos\alpha_{tn}T^{2(n)}_{tL},\nonumber \\ &{\chi^{(n)}_R} = \sin\alpha_{tn}T^{1(n)}_{tR}-\cos\alpha_{tn}T^{2(n)}_{tR}. \end{align}} \end{tabular} Both physical eigenstates $T^{1(n)}_t$ and $T^{2(n)}_t$ share the same mass eigenvalue at each KK-level; for the $n^{th}$ KK-level it takes the form $M_{t^{(n)}} \equiv \sqrt{m_{t}^{2}+m^2_{f^{(n)}}}$.
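As a quick numerical illustration of the diagonalisation just described (using a hypothetical value of $m_{f^{(1)}}$ of the kind produced by the previous sketch):
\begin{verbatim}
import numpy as np

m_t  = 0.1731   # SM top quark mass in TeV (value adopted later in the text)
m_f1 = 0.85     # hypothetical n = 1 KK-mass in TeV (illustrative input)

M_t1     = np.sqrt(m_t**2 + m_f1**2)     # physical KK-top mass M_{t^(1)}
alpha_t1 = 0.5 * np.arctan(m_t / m_f1)   # mixing angle alpha_{t1}
print(f"M_t(1) = {M_t1:.4f} TeV, alpha_t1 = {alpha_t1:.4f} rad")
\end{verbatim}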
In the following we present the kinetic action (governed by the $SU(2)_L \times U(1)_Y$ gauge group) of the 5D gauge and scalar fields with their respective BLKTs \cite{Flacke:2008ne, Datta:2014sha, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc, Dey:2016cve} \begin{eqnarray} S_{gauge} &=& -\frac{1}{4}\int d^5x \bigg[ W^a_{MN} W^{aMN}+r_W \left\{ \delta(y) + \delta(y - \pi R)\right\} W^a_{\mu\nu} W^{a\mu \nu}\nonumber \\ &+& B_{MN} B^{MN}+r_B \left\{ \delta(y) + \delta(y - \pi R)\right\} B_{\mu\nu} B^{\mu \nu}\bigg], \label{pure-gauge} \end{eqnarray} \vspace*{-1cm} \begin{eqnarray} S_{scalar} &=& \int d^5x \bigg[ (D_{M}\Phi)^\dagger(D^{M}\Phi) + r_\phi \left\{ \delta(y) + \delta(y - \pi R)\right\} (D_{\mu}\Phi)^\dagger(D^{\mu}\Phi) \bigg], \label{higgs} \end{eqnarray} where $r_W$, $r_B$ and $r_\phi$ are the strengths of the BLKTs for the respective fields. The 5D field strength tensors are written as \begin{eqnarray}\label{ugfs} W_{MN}^a &\equiv& (\partial_M W_N^a - \partial_N W_M^a-{\tilde{g}_2}\epsilon^{abc}W_M^bW_N^c),\\ \nonumber B_{MN}&\equiv& (\partial_M B_N - \partial_N B_M). \end{eqnarray} $W^a_M (\equiv W^a_\mu, W^a_4)$ and $B_M (\equiv B_\mu, B_4)$ ($M=0,1 \ldots 4$) denote the 5D gauge fields corresponding to the gauge groups $SU(2)_L$ and $U(1)_Y$ respectively. The 5D covariant derivative is given by $D_M\equiv\partial_M+i{\tilde{g}_2}\frac{\sigma^{a}}{2}W_M^{a}+i{\tilde{g}_1}\frac{Y}{2}B_M$, where ${\tilde{g}_2}$ and ${\tilde{g}_1}$ represent the 5D gauge coupling constants. Here, $\frac {\sigma^{a}}{2} (a\equiv 1\ldots 3)$ and $\frac Y2$ are the generators of the $SU(2)_L$ and $U(1)_Y$ gauge groups respectively. The 5D Higgs doublet is represented by $\Phi=\left(\begin{array}{cc} \phi^+\\\phi^0\end{array}\right)$. Each of the gauge and scalar fields involved in the above actions (Eqs.\;\ref{pure-gauge} and \ref{higgs}) can be expressed in terms of appropriate KK-wave-functions as \cite{Datta:2014sha, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc, Dey:2016cve} \begin{equation}\label{Amu} V_{\mu}(x,y)=\sum_n V_{\mu}^{(n)}(x) a^n(y),\;\;\;\;\ V_{4}(x,y)=\sum_n V_{4}^{(n)}(x) b^n(y) \end{equation} and \begin{equation}\label{chi} \Phi(x,y)=\sum_n \Phi^{(n)}(x) h^n(y), \end{equation} where $(V_\mu, V_4)$ generically represents both the 5D $SU(2)_L$ and $U(1)_Y$ gauge bosons. Before proceeding further, we would like to make a few important remarks which could help the reader to understand the following gauge and scalar field structure as well as the corresponding KK-wave-functions. The physical neutral gauge bosons are generated by the mixing of the $B$ and $W^3$ fields, and hence the KK-decomposition of the neutral gauge bosons becomes very intricate in the present extra dimensional scenario because two types of mixing exist, in the bulk as well as on the boundary. Therefore, without the condition $r_W=r_B$ it would be very difficult to diagonalise the bulk and boundary actions simultaneously by the same 5D field redefinition\footnote{However, in general one can proceed with $r_W\neq r_B$, but in this situation the mixing between $B$ and $W^3$ in the bulk and at the boundary points produces off-diagonal terms in the neutral gauge boson mass matrix.}. Hence, in the following we will maintain the equality condition $r_W=r_B$ \cite{Datta:2014sha, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc, Dey:2016cve}.
Consequently, similar to the mUED scenario, one obtains the same structure of mixing between the KK-excitations of the neutral components of the gauge fields (i.e., the mixing between $W^{3(n)}$ and $B^{(n)}$) in the nmUED scenario. The mixing between $W^{3(1)}$ and $B^{(1)}$ (i.e., the mixing at the first KK-level) gives the $Z^{(1)}$ and $\gamma^{(1)}$. This $\gamma^{(1)}$ (the first excited KK-state of the photon) is absolutely stable by the conservation of KK-parity and possesses the lowest mass among the first excited KK-states in the nmUED particle spectrum. Moreover, it cannot decay to a pair of SM particles. Therefore, this $\gamma^{(1)}$ plays the role of a viable DM candidate in this scenario \cite{Datta:2013nua}. In the following we give the gauge fixing action (containing a generic BLKT parameter $r_V$ for the gauge bosons) appropriate for the nmUED model \cite{Datta:2014sha, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc, Dey:2016cve} \begin{eqnarray} S_{gauge\;fixing} &=& -\frac{1}{\xi _y}\int d^5x\Big\vert\partial_{\mu}W^{\mu +}+\xi_{y}(\partial_{y}W^{4+}+iM_{W}\phi^{+}\{1 + r_{V}\left( \delta(y) + \delta(y - \pi R)\right)\})\Big \vert ^2 \nonumber \\&&-\frac{1}{2\xi_y}\int d^5x [\partial_{\mu}Z^{\mu}+\xi_y(\partial_{y}Z^{4}-M_{Z}\chi\{1+ r_V( \delta(y) + \delta(y - \pi R))\})]^2\nonumber \\ &-&\frac{1}{2\xi_y}\int d^5x [\partial_{\mu}A^{\mu}+\xi_y\partial_{y}A^{4}]^2, \label{gauge-fix} \end{eqnarray} where $M_W(M_Z)$ is the mass of the SM $W^\pm (Z)$ boson. For a detailed study of the gauge fixing action/mechanism in nmUED we refer to \cite{Datta:2014sha}. The above action (given in Eq.\;\ref{gauge-fix}) is somewhat intricate and at the same time very crucial for the nmUED scenario, in which we will calculate the one loop diagrams required for the present calculation in the Feynman gauge. In the presence of the BLKTs the Lagrangian leads to a non-homogeneous weight function for the fields with respect to the extra dimension. This inhomogeneity compels us to define a $y$-dependent gauge fixing parameter $\xi_y$ as \cite{Datta:2014sha, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc, Dey:2016cve} \begin{equation}\label{gf_para} \xi =\xi_y\,(1+ r_V\{ \delta(y) + \delta(y - \pi R)\}), \end{equation} where $\xi$ does not depend on $y$. This relation can be treated as a {\em renormalisation} of the gauge fixing parameter, since the BLKTs have, in some sense, played the role of counterterms taking into account the unknown ultraviolet contributions in loop calculations. In this sense, $\xi_y$ is the bare gauge fixing parameter, while $\xi$ can be seen as the renormalised gauge fixing parameter taking the values $0$ (Landau gauge), $1$ (Feynman gauge) or $\infty$ (Unitary gauge) \cite{Datta:2014sha}. In the present scenario the appropriate gauge fixing procedure enforces the condition $r_V=r_\phi$~\cite{Datta:2014sha, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc, Dey:2016cve}. Consequently, the KK-masses for the gauge and the scalar fields are equal ($m_{V^{(n)}}(=m_{\phi^{(n)}})$) and satisfy the same transcendental equation (Eq.~\ref{fermion_mass}). At the $n^{th}$ KK-level the physical gauge fields ($W^{\mu (n)\pm}$) and the charged Higgs ($H^{(n)\pm}$) share the same\footnote{Similarly, one can find the mass eigenvalues for the KK-excited $Z$ boson and the pseudo-scalar $A$; their mass eigenvalues are also identical to each other at any KK-level, taking the form $\sqrt{M_{Z}^{2}+m^2_{V^{(n)}}}$ at the $n^{th}$ KK-level.
} mass eigenvalue, which is given by \cite{Datta:2014sha, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc, Dey:2016cve} \begin{equation}\label{MWn} M_{W^{(n)}} = \sqrt{M_{W}^{2}+m^2_{V^{(n)}}}\;. \end{equation} Moreover, in the 't~Hooft-Feynman gauge, the mass of the Goldstone bosons ($G^{(n)\pm}$) corresponding to the gauge fields $W^{\mu (n)\pm}$ takes the same value $M_{W^{(n)}}$ \cite{Datta:2014sha, Jha:2014faa, Datta:2015aka, Datta:2016flx, Biswas:2017vhc, Dey:2016cve}. Additionally, we would like to mention that, since in the present article we are dealing with a process that involves an off-shell amplitude, we need to use the background field method \cite{Deshpande:1982mi, Buras:2003mk}. The same decay process was calculated in \cite{Buras:2003mk} in the context of 5D UED, where the authors also used background fields. For this reason, in {\it Appendix A} of that article \cite{Buras:2003mk} the authors discussed the background field method and gave the corresponding prescription for the 5D UED scenario. We can readily adopt this prescription in the present nmUED scenario, because the basic structures of the two models are similar; we therefore refrain from repeating the details of this method here. However, using that prescription (given in \cite{Buras:2003mk}) we can easily evaluate the Feynman rules necessary for our present calculation. In Appendix \ref{fyerul} we give the necessary Feynman rules derived with the 5D background field method in the 5D nmUED scenario in the Feynman gauge. Up to this point we have provided the relevant information on the present scenario. At this stage it is important to mention that the interactions required for our calculation can be evaluated by integrating the 5D action over the extra space-like dimension ($y$) after plugging in the appropriate $y$-dependent KK-wave-functions for the respective fields. As a consequence, some of the interactions are modified by so-called overlap integrals with respect to their mUED counterparts. The expressions for the overlap integrals are given in Appendix \ref{fyerul}; for further information on these overlap integrals we refer the reader to \cite{Datta:2015aka}. \section{\boldmath$B\to X_{\lowercase{s}} \lowercase{\ell}^+\lowercase{\ell}^-$ in nmUED} \label{sec:Heff:BXsee:nlo} The semileptonic inclusive decay $B\to X_s \ell^+\ell^-$ is quite suppressed in the SM; however, it is very compelling for finding NP signatures. Therefore, several $B$-physics experimental collaborations (Belle, Babar) have been involved in measuring several observables (mainly the decay branching ratio and the Forward-Backward asymmetry) associated with this decay process. In the context of the SM, the dominant perturbative contribution was evaluated in \cite{Grinstein:1988me}, and later the two loop QCD corrections\footnote{Research regarding higher order perturbative contributions has been pursued extensively and has already reached a high level of accuracy. For example, one can find the NNLO QCD corrections in \cite{Bobeth:1999mk}, including QED corrections in \cite{Bobeth:2003at, Huber:2005ig}. Moreover, an updated analysis of all angular observables in the $B\to X_s \ell^+\ell^-$ decay has been given in \cite{Huber:2015sra}; it contains all available perturbative NNLO QCD and NLO QED corrections and includes subleading power corrections.} were described in refs. \cite{Misiak:1992bc, Buras:1994dj}.
Since a lepton-antilepton pair is present in this particular decay mode, more structures contribute to the decay rate and some subtleties arise in the theoretical description of this process. For the decay to be dominated by perturbative contributions, one has to eliminate the $c\bar{c}$ resonances, which show up as large peaks in the di-lepton invariant mass spectrum, by a judicious choice of kinematic cuts. This leads to the ``perturbative di-lepton invariant mass windows'', namely the low di-lepton mass region $1~{\rm GeV^2} < q^2 < 6~{\rm GeV^2}$ and the high di-lepton mass region $q^2 > 14.4~{\rm GeV^2}$. In this section we describe the details of the calculation of the branching ratio and the Forward-Backward asymmetry of $B\to X_s\ell^+\ell^-$ in the nmUED model. Since the basic gauge structure of the present nmUED model is similar to that of the SM, the leading order (LO) contributions to the electroweak dipole operators are one loop suppressed, as in the SM. However, in the present model, due to the presence of a large number of KK-particles, we encounter more one loop diagrams than in the SM. Hence, we evaluate the total contributions of these KK-particles to the electroweak dipole operators and simply add them to the SM contribution. In this spirit, following the same technique as ref.\;\cite{Buras:2003mk}, we evaluate the relevant WCs of the electroweak dipole operators at LO. Then, following the prescription given in \cite{Misiak:1992bc, Buras:1994dj}, we include the NLO QCD corrections to the concerned decay process. \subsection{Effective Hamiltonian for \boldmath$B\to X_s\ell^+\ell^-$} The effective Hamiltonian for the decay $B\to X_s\ell^+\ell^-$ at hadronic scales $\mu=\mathcal{O}(m_b)$ can be written as \cite{Buras:2003mk} \begin{equation} \label{Heff2_at_mu} {\cal H}_{\rm eff}(b\to s \ell^+\ell^-) = {\cal H}_{\rm eff}(b\to s\gamma) - \frac{G_{\rm F}}{\sqrt{2}} V_{ts}^* V_{tb} \left[ C_{9V}(\mu) Q_{9V}+ C_{10A}(M_W) Q_{10A} \right]\,, \end{equation} where $G_F$ represents the Fermi constant and $V_{ij}$ are the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. In the above expression (Eq.\;\ref{Heff2_at_mu}), apart from the operators\footnote{The explicit form of the effective Hamiltonian for $b\to s \gamma$ is given in \cite{Buras:2003mk, Datta:2016flx}.} relevant for $B\to X_s\gamma$, there are two new operators \cite{Buras:2003mk} \begin{equation}\label{Q9V} Q_{9V} = (\bar{s} b)_{V-A} (\bar{\ell}\ell)_V\,, \qquad Q_{10A} = (\bar{s} b)_{V-A} (\bar{\ell}\ell)_A\,, \end{equation} where $V$ and $A$ refer to the vector and axial-vector currents respectively. They are produced via the electroweak penguin diagrams shown in Fig.~\ref{magnetic_pen}; the other relevant Feynman diagrams needed to maintain gauge invariance (for the nmUED scenario) have been given in \cite{Datta:2015aka}. For convenience, the above WCs (given in Eq.\;\ref{Heff2_at_mu}) can be defined in terms of two new coefficients $\tilde C_{9}$ and $\tilde C_{10}$ as \cite{Buras:2003mk, Buras:1994dj} \begin{eqnarray} \label{C9_10} C_{9V}(\mu) &=& \frac{\alpha}{2\pi} \tilde C_9(\mu), \\ C_{10A}(\mu) &=& \frac{\alpha}{2\pi} \tilde C_{10}(\mu), \end{eqnarray} where, \begin{equation} \tilde C_{10}(\mu) = - \frac{Y(x_t, r_f, r_V, R^{-1})}{\sin^2\theta_{w}}\;. \end{equation} The function $Y(x_t, r_f, r_V, R^{-1})$ in the context of the nmUED scenario has been calculated in \cite{Datta:2015aka}. $\theta_w$ is the Weinberg angle and $\alpha$ represents the fine structure constant.
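As a rough numerical orientation, in the SM limit (vanishing BLT parameters and $R^{-1}\to\infty$) the function $Y$ reduces to the well-known Inami-Lim function $Y_0(x_t)$; the sketch below quotes $Y_0$ from the literature rather than from this paper and assumes an illustrative value of $\sin^2\theta_w$:
\begin{verbatim}
import numpy as np

m_t, M_W = 173.1, 80.38   # GeV (values adopted later in the analysis)
x_t = (m_t / M_W) ** 2
s2w = 0.231               # sin^2(theta_w), illustrative value

def Y0(x):
    # SM Inami-Lim function, quoted from the literature
    # (the full nmUED Y of ref. [Datta:2015aka] reduces to this limit)
    return x / 8.0 * ((x - 4.0) / (x - 1.0)
                      + 3.0 * x * np.log(x) / (x - 1.0) ** 2)

print(f"C10_tilde (SM limit) = {-Y0(x_t) / s2w:.2f}")  # ~ -4.5
\end{verbatim}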
The operator $Q_{10A}$ does not evolve under QCD renormalisation and its coefficient is independent of $\mu$. On the other hand, using the results for the NLO QCD corrections to $\tilde C_{9}(\mu)$ in the SM given in \cite{Misiak:1992bc, Buras:1994dj}, we can readily obtain this coefficient in the present nmUED model in the naive dimensional regularisation (NDR) renormalisation scheme as \begin{eqnarray}\label{c9_eff} \tilde{C}_9^{\rm eff}(q^2)&=&\tilde{C}_9^{\rm NDR}\tilde{\eta}\left(\frac{q^2}{m^2_b}\right)+h\left(z,\frac{q^2}{m^2_b}\right)\left(3C^{(0)}_1+C^{(0)}_2+3C^{(0)}_3+C^{(0)}_4+3C^{(0)}_5+C^{(0)}_6\right) \\ \nonumber &&-\frac 12 h\left(1,\frac{q^2}{m^2_b}\right)\left(4C^{(0)}_3+4C^{(0)}_4+3C^{(0)}_5+C^{(0)}_6\right)-\frac 12 h\left(0,\frac{q^2}{m^2_b}\right)\left(C^{(0)}_3+4C^{(0)}_4\right) \\ \nonumber &&+\frac 29\left(3C^{(0)}_3+C^{(0)}_4+3C^{(0)}_5+C^{(0)}_6\right), \end{eqnarray} where, \begin{equation}\label{C9tilde} \tilde{C}_9^{\rm NDR}(\mu) = P_0^{\rm NDR} + \frac{Y(x_t, r_f, r_V, R^{-1})}{\sin^2\theta_{w}} -4 Z(x_t, r_f, r_V, R^{-1}) + P_E E(x_t, r_f, r_V, R^{-1})\;. \end{equation} The value\footnote{The analytic formula for $P_0^{\rm NDR}$ has been given in \cite{Buras:1994dj}.} of $P_0^{\rm NDR}$ is $2.60\pm 0.25$ \cite{Buras:2003mk} and $P_E$ is of ${\cal O}(10^{-2})$ \cite{Buras:1994dj}. Using the relation given in \cite{Buras:1994dj, Buras:2003mk} we can express the function $Z$ in the nmUED scenario as \begin{equation} \label{Zfunction} Z(x_t, r_f, r_V, R^{-1})=C(x_t, r_f, r_V, R^{-1})+\frac 14 D(x_t, r_f, r_V, R^{-1})\;, \end{equation} while the function $C(x_t, r_f, r_V, R^{-1})$ for the nmUED scenario has been calculated in \cite{Datta:2015aka}. The function $\tilde{\eta}$ given in Eq.\;\ref{c9_eff} represents the single gluon correction to the matrix element of $Q_9$ and takes the form \cite{Buras:1994dj} \begin{eqnarray} \tilde{\eta}\left(\frac{q^2}{m^2_b}\right)=1+\frac{\alpha_s}{\pi}\omega\left(\frac{q^2}{m^2_b}\right), \end{eqnarray} where $\alpha_s$ is the QCD fine structure constant. The explicit forms of the functions $\omega$ and $h$ and of the other WCs appearing in Eq.\;\ref{c9_eff} required for the present decay process are given in Appendix \ref{NDR}. The functions $D(x_t, r_f, r_V, R^{-1})$ and $E(x_t, r_f, r_V, R^{-1})$, which we evaluate in this article, are given by \begin{equation} D(x_t, r_f, r_V, R^{-1})=D_0(x_t)+ \sum_{n=1}^\infty D_n(x_t,x_{f^{(n)}},x_{V^{(n)}})\;, \label{dsum} \end{equation} and \begin{equation} E(x_t, r_f, r_V, R^{-1})=E_0(x_t)+ \sum_{n=1}^\infty E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})\;, \label{esum} \end{equation} with $x_t=\frac{m^2_t}{M^2_W}$, $x_{V^{(n)}}=\frac{m^2_{V^{(n)}}}{M^2_W}$ and $x_{f^{(n)}}=\frac{m^2_{f^{(n)}}}{M^2_W}$. $m_{V^{(n)}}$ and $m_{f^{(n)}}$ can be obtained from the transcendental equations given in Eq.\;\ref{fermion_mass}. The functions $D_0(x_t)$ and $E_0(x_t)$ are the corresponding SM contributions at the electroweak scale \cite{Buras:2003mk, Misiak:1992bc, Buras:1994dj, Inami:1980fz, Buras:1994qa} \begin{equation}\label{DSM} D_0(x_t)=-{4\over9}\ln x_t+{{-19x_t^3+25x_t^2}\over{36(x_t-1)^3}} +{{x_t^2(5x_t^2-2x_t-6)} \over{18(x_t-1)^4}}\ln x_t~, \end{equation} \begin{equation}\label{ESM} E_0(x_t)=-{2\over 3}\ln x_t+{{x_t^2(15-16x_t+4x_t^2)}\over{6(1-x_t)^4}} \ln x_t+{{x_t(18-11x_t-x_t^2)} \over{12(1-x_t)^3}}~. \end{equation}
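For later reference, the SM pieces in Eqs.\;\ref{DSM} and \ref{ESM} are straightforward to evaluate numerically; a minimal sketch (in Python, using the $m_t$ and $M_W$ values adopted later in the analysis) is:
\begin{verbatim}
import numpy as np

m_t, M_W = 173.1, 80.38   # GeV (values adopted later in the text)
x_t = (m_t / M_W) ** 2

def D0(x):
    # Eq. (DSM): SM contribution to the photon penguin function
    return (-4.0/9.0 * np.log(x)
            + (-19*x**3 + 25*x**2) / (36*(x - 1)**3)
            + x**2 * (5*x**2 - 2*x - 6) / (18*(x - 1)**4) * np.log(x))

def E0(x):
    # Eq. (ESM): SM contribution to the gluon penguin function
    return (-2.0/3.0 * np.log(x)
            + x**2 * (15 - 16*x + 4*x**2) / (6*(1 - x)**4) * np.log(x)
            + x * (18 - 11*x - x**2) / (12*(1 - x)**3))

print(f"x_t = {x_t:.2f}, D0 = {D0(x_t):.3f}, E0 = {E0(x_t):.3f}")
# x_t ~ 4.64, D0 ~ -0.50, E0 ~ 0.26
\end{verbatim}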
Now we will depict the nmUED contributions to the electroweak penguin diagrams. We have already mentioned that the KK-masses and the couplings involving KK-excitations are non-trivially modified with respect to their UED counterparts due to the presence of the different BLTs in the nmUED action. Therefore, it is not possible to obtain the expressions for the $D$ and $E$ functions in nmUED simply by rescaling the results of the UED model \cite{Buras:2003mk}. Consequently, we have evaluated the functions $D_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ and $E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ {\it independently} for the nmUED scenario. These functions ($D_n$ and $E_n$) represent the KK-contributions of the $n^{th}$ KK-mode, computed from the electroweak penguin diagrams (given in Fig.\;\ref{magnetic_pen}) in the nmUED model for the photon and the gluon respectively. Furthermore, it is quite evident from Eqs.\;\ref{dn} and \ref{en} that they are remarkably different from the corresponding UED expressions (given in Eqs.\;3.31 and 3.32 of ref.\;\cite{Buras:2003mk}). However, from our expressions (given in Eqs.\;\ref{dn} and \ref{en}) we can recover the results of the UED version if we set the boundary terms to zero, i.e., $r_f = r_V = 0$. \begin{figure}[th!] \begin{center} \includegraphics[scale=0.80,angle=0]{fig1a} \includegraphics[scale=0.80,angle=0]{fig1b} \includegraphics[scale=0.80,angle=0]{fig1c}\\ \includegraphics[scale=0.80,angle=0]{fig1d} \includegraphics[scale=0.80,angle=0]{fig1e} \includegraphics[scale=0.80,angle=0]{fig1f} \caption{Relevant electroweak penguin diagrams contributing to the decay of $B\to X_s\ell^+\ell^-$.} \label{magnetic_pen} \end{center} \end{figure} To this end, we would like to mention that in our calculation of the one loop penguin diagrams (in order to obtain the contributions of the KK-excitations to the decay $B\rightarrow X_s\ell^+\ell^-$) we consider only those interactions which couple a zero-mode field to a pair of KK-excitations carrying equal KK-number. In the nmUED scenario, due to KK-parity conservation, one can also have non-zero interactions involving KK-excitations with KK-numbers $n$, $m$ and $p$, where $n+m+p$ is an even integer. However, we have explicitly checked that the final results would not change appreciably even if one considered the contributions of all the possible off-diagonal interactions \cite{Jha:2014faa, Datta:2015aka, Datta:2016flx}.
For the $n^{th}$ KK-level the electroweak {\it photon} penguin function (obtained from the penguin diagrams given in Fig.\;\ref{magnetic_pen}) takes the form \begin{eqnarray}\label{dn} D_n(x_t,x_{f^{(n)}},x_{V^{(n)}})&=&\frac 23E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})-\frac{1}{36(-1+x_{f^{(n)}}-x_{V^{(n)}})^4}\\ \nonumber &&\bigg[(-1+x_{f^{(n)}}-x_{V^{(n)}})\bigg\{-2(I^n_1)^2\bigg(43x^2_{f^{(n)}}-65x_{f^{(n)}}(1+x_{V^{(n)}})\\ \nonumber &&+16(1+x_{V^{(n)}})^2\bigg)+(I^n_2)^2\bigg(11x^2_{f^{(n)}}-7x_{f^{(n)}}(1+x_{V^{(n)}})\\ \nonumber &&+2(1+x_{V^{(n)}})^2 \bigg)\bigg\}-6x^2_{f^{(n)}}\bigg\{(I^n_2)^2x_{f^{(n)}}+2(I^n_1)^2\\ \nonumber &&\bigg(6-5x_{f^{(n)}}+6x_{V^{(n)}}\bigg)\bigg\}\ln\bigg(\frac{x_{f^{(n)}}}{1+x_{V^{(n)}}}\bigg) \bigg]\\ \nonumber &&+\frac{1}{36(-1+x_t+x_{f^{(n)}}-x_{V^{(n)}})^4}\bigg[(-1+x_t+x_{f^{(n)}}-x_{V^{(n)}})\bigg\{\\ \nonumber &&(I^n_1)^2\Bigg(11x^3_t+x^2_{f^{(n)}}(-86+11x_t)- x^2_t(93+7x_{V^{(n)}})+32(1+x_{V^{(n)}})^2\\ \nonumber &&+2x_t(1+x_{V^{(n)}})(66+x_{V^{(n)}})+x_{f^{(n)}}\bigg(x_t(-179+22x_t-7x_{V^{(n)}})\\ \nonumber &&+130(1+x_{V^{(n)}})\bigg)\Bigg)+(I^n_2)^2\Bigg(11x^2_{f^{(n)}}+11x^2_t-7x_t(1+x_{V^{(n)}})\\ \nonumber &&+2(1+x_{V^{(n)}})^2+x_{f^{(n)}}\bigg(22x_t-7(1+x_{V^{(n)}})\bigg) \Bigg)\bigg\}-6(x_t+x_{f^{(n)}})^2\\ \nonumber &&\bigg\{(I^n_2)^2(x_t+x_{f^{(n)}}) +(I^n_1)^2\bigg((x_t+x_{f^{(n)}})(-10+x_t)\\ \nonumber &&+12(1+x_{V^{(n)}})\bigg)\bigg\}\ln\bigg(\frac{x_t+x_{f^{(n)}}}{1+x_{V^{(n)}}}\bigg) \bigg], \end{eqnarray} while the function $E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ is the corresponding contribution of the {\it gluon} penguins, given by the first two diagrams of Fig.\;\ref{magnetic_pen}. The expression for $E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ in nmUED is as follows \begin{eqnarray}\label{en} E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})&=&-\frac{1}{36(-1+x_{f^{(n)}}-x_{V^{(n)}})^4}\bigg[(-1+x_{f^{(n)}}-x_{V^{(n)}})\bigg\{\\ \nonumber &&(I^n_1)^2\bigg(50x^2_{f^{(n)}}-58x_{f^{(n)}}(1+x_{V^{(n)}})-4(1+x_{V^{(n)}})^2\bigg)+(I^n_2)^2\\ \nonumber &&\bigg(7x^2_{f^{(n)}}-29x_{f^{(n)}}(1+x_{V^{(n)}})+16(1+x_{V^{(n)}})^2 \bigg)\bigg\}-6(1+x_{V^{(n)}})\\ \nonumber &&\bigg\{(I^n_2)^2(1+x_{V^{(n)}})(2-3x_{f^{(n)}}+2x_{V^{(n)}}) +2(I^n_1)^2\\ \nonumber &&\bigg(6x^2_{f^{(n)}}-9x_{f^{(n)}}(1+x_{V^{(n)}})+2(1+x_{V^{(n)}})^2\bigg)\bigg\}\ln\bigg(\frac{x_{f^{(n)}}}{1+x_{V^{(n)}}}\bigg) \bigg]\\ \nonumber &&+\frac{1}{36(-1+x_t+x_{f^{(n)}}-x_{V^{(n)}})^4}\bigg[(-1+x_t+x_{f^{(n)}}-x_{V^{(n)}})\bigg\{\\ \nonumber &&(I^n_1)^2\Bigg(7x^3_t+x^2_{f^{(n)}}(50+7x_t)+ x^2_t(21-29x_{V^{(n)}})-4(1+x_{V^{(n)}})^2\\ \nonumber &&+2x_t(1+x_{V^{(n)}})(-21+8x_{V^{(n)}})+x_{f^{(n)}}\bigg(-58+71x_t+14x^2_t\\ \nonumber &&-29(2+x_t)x_{V^{(n)}}\bigg)\Bigg)+(I^n_2)^2\Bigg(7x^2_{f^{(n)}}+7x^2_t-29x_t(1+x_{V^{(n)}})\\ \nonumber &&+16(1+x_{V^{(n)}})^2+x_{f^{(n)}}\bigg(14x_t-29(1+x_{V^{(n)}})\bigg) \Bigg)\bigg\}-6(1+x_{V^{(n)}})\\ \nonumber &&\bigg\{(I^n_2)^2(1+x_{V^{(n)}})(2-3x_{f^{(n)}}-3x_t+2x_{V^{(n)}}) +(I^n_1)^2\Bigg(12x^2_{f^{(n)}}\\ \nonumber &&-3x^2_t(-3+x_{V^{(n)}})+2x_t(-8+x_{V^{(n)}})(1+x_{V^{(n)}})+4(1+x_{V^{(n)}})^2\\ \nonumber &&-3x_{f^{(n)}}\bigg(6-7x_t+(6+x_t)x_{V^{(n)}}\bigg)\Bigg)\bigg\}\ln\bigg(\frac{x_t+x_{f^{(n)}}}{1+x_{V^{(n)}}}\bigg) \bigg]. \end{eqnarray} In the above expressions, $I^n_1$ and $I^n_2$ represent overlap integrals whose analytic forms are given in Appendix \ref{fyerul} (see Eqs.\;\ref{i1} and \ref{i2}).
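The sums over $n$ in Eqs.\;\ref{dsum} and \ref{esum} converge quickly, since each KK-level enters through loop functions suppressed by the corresponding KK-masses. A toy scaling sketch (heuristic only: it models each level as $\sim 1/n^2$ and ignores the $n$-dependence of the overlap integrals) illustrates the saturation exploited in the numerical analysis of the next section:
\begin{verbatim}
# Toy estimate of the saturation of the KK-sums in Eqs. (dsum)/(esum):
# with nearly degenerate spectra m_KK ~ n R^{-1}, each level decouples
# roughly like 1/m_KK^2 ~ 1/n^2 (heuristic scaling only).
partial = 0.0
for n in range(1, 21):
    partial += 1.0 / n**2
    if n in (1, 5, 10, 20):
        print(f"partial sum up to n = {n:2d}: {partial:.3f}")
# the running sum changes only mildly beyond the first few levels
\end{verbatim}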
\vspace*{-0.5cm} \subsection{The Differential Decay Rate} \label{sec:Heff:BXsee:nlo:rate} We are now at a stage where, on the basis of the effective Hamiltonian given in Eq.\;\ref{Heff2_at_mu}, we can readily define the differential decay rate in the NDR scheme \cite{Misiak:1992bc,Buras:1994dj} \begin{equation} \label{rateee} R(q^2) \equiv \frac{1}{\Gamma(b \to c e\bar\nu)}\frac{{d}\Gamma (b \to s \ell^+\ell^-)}{d q^2} \, = \frac{\alpha^2}{4\pi^2} \left|\frac{V^*_{ts}V_{tb}}{V_{cb}}\right|^2 \frac{\left(1-\frac{q^2}{m^2_b}\right)^2}{f(z)\kappa(z)} U(q^2). \end{equation} Here, \begin{equation} \label{fz} f(z)=1-8z^2+8z^6-z^8-24z^4\ln(z), \end{equation} is the phase-space factor and \begin{equation} \label{kz} \kappa(z)\simeq 1-\frac{2\alpha_s(\mu)}{3\pi}\bigg[\bigg(\pi^2-\frac{31}{4}\bigg)(1-z)^2+\frac 32\bigg], \end{equation} represents the single gluon QCD correction to the $b\to c e\bar\nu$ decay \cite{Cabibbo:1978sw,Kim:1989ac}, with $z=\frac{m_c}{m_b}$. The function $U(q^2)$ is expressed as \begin{equation}\label{US} U(q^2)= \left(1+\frac{2q^2}{m^2_b}\right)\left(|\tilde{C}_9^{\rm eff}(q^2)|^2 + |\tilde{C}_{10}|^2\right) + 4 \left( 1 + \frac{2m^2_b}{q^2}\right) |C_{7\gamma}^{(0){\rm eff}}|^2 + 12 C_{7\gamma}^{(0){\rm eff}} \ {\rm Re}\,\tilde{C}_9^{\rm eff}(q^2), \end{equation} where $\tilde{C}_9^{\rm eff}(q^2)$ is given in Eq.\;\ref{c9_eff}. The explicit formula for $C_{7\gamma}^{(0){\rm eff}}$ is shown in Appendix \ref{NDR}. Among the several terms given in Eq.\;\ref{US}, $|\tilde{C}_9^{\rm eff}(q^2)|^2$ is very similar to its SM value, $|\tilde{C}_{10}|^2$ is appreciably enhanced, while the last two terms are suppressed. Furthermore, the last term in Eq.\;\ref{US} is negative, and hence its suppression is responsible for an enhancement of $U(q^2)$ in addition to the one due to $\tilde{C}_{10}$. Using Eq.\;\ref{rateee}, one can easily evaluate the branching ratio for the present decay process in a given range of $q^2$. In the numerical calculations we will use the value 0.104 for ${\rm Br}(B\to X_c e\bar\nu)_{\rm exp}$. \subsection{Forward-Backward Asymmetry} \label{FBA} For the present decay process $B\to X_s\ell^+\ell^-$ another observable, the Forward-Backward asymmetry, could be instrumental in the detection of a NP scenario. It is non-zero only at the NLO level. The unnormalised expression is given as \cite{Ali:1991is} \begin{eqnarray}\label{unABF} \bar{A}_{FB}(q^2)&\equiv&\frac{1}{\Gamma (b \to c e\bar\nu)} \int_{-1}^1 d \cos \theta_\ell \frac{ d^2 \Gamma (b \to s \ell^+\ell^-)} { dq^2 d\cos \theta_\ell} {\rm sgn} (\cos \theta_\ell), \\ &=& -3\frac{\alpha^2}{4\pi^2} \left|\frac{V^*_{ts}V_{tb}}{V_{cb}}\right|^2 \frac{\left(1-\frac{q^2}{m^2_b}\right)^2}{f(z)\kappa(z)} \tilde C_{10} \left[\frac{q^2}{m^2_b} {\rm Re}\,\tilde{C}_9^{\rm eff}(q^2) +2 C_{7\gamma}^{(0){\rm eff}}\right]. \end{eqnarray} Here, $\theta_\ell$ represents the angle of the $\ell^+$ with respect to the $b$-quark direction in the centre-of-mass system of the di-lepton pair. The normalised form can be expressed as \begin{eqnarray} \label{nAFB} A_{FB}=\frac{\bar{A}_{FB}(q^2)}{R(q^2)}, \end{eqnarray} while the global Forward-Backward asymmetry in a region $q^2\in[a,\;b]\;{\rm GeV}^2$ can be defined as \cite{Feng:2016wph, Lunghi:1999uk} \begin{eqnarray} &&A_{_{FB}}\Big|_{q^2\in[a,\;b]\;{\rm GeV}^2} ={\int_a^b dq^2\bar{A}_{_{FB}}(q^2)\over\int_a^bdq^2R(q^2)}\;. \label{global-AFB} \end{eqnarray}
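The normalisation entering Eq.\;\ref{rateee} is easy to evaluate; a minimal sketch (with illustrative values of $z=m_c/m_b$ and $\alpha_s(\mu\sim m_b)$, both our assumptions since the text does not fix them here) is:
\begin{verbatim}
import numpy as np

z   = 0.29   # m_c/m_b (illustrative value)
a_s = 0.22   # alpha_s at mu ~ m_b (illustrative value)

# Eq. (fz): phase-space factor of b -> c e nu
f = 1 - 8*z**2 + 8*z**6 - z**8 - 24*z**4 * np.log(z)

# Eq. (kz): single-gluon QCD correction to b -> c e nu
kappa = 1 - 2*a_s/(3*np.pi) * ((np.pi**2 - 31.0/4.0)*(1 - z)**2 + 1.5)

print(f"f(z) = {f:.3f}, kappa(z) = {kappa:.3f}")  # ~0.54 and ~0.88
\end{verbatim}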
In the following section we present the numerical estimates of these observables for the allowed parameter space of the nmUED scenario. \section{Analysis and results}\label{anls} The effective Hamiltonian (given in Eq.\;\ref{Heff2_at_mu}) required for the decay $B\rightarrow X_s\ell^+\ell^-$ contains different WCs, and in our analysis we evaluate the KK-contributions to each of these coefficients at each KK-level. In this article, for the first time, we have calculated the KK-contributions to the coefficients of the electroweak dipole operators in the nmUED scenario. The functions $D_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ (given in Eq.~\ref{dn}) and $E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ (given in Eq.~\ref{en}) represent the $n^{th}$ level KK-contributions to the coefficients of the dipole operators for the photon and the gluon respectively. These functions ($D_n$ and $E_n$) depend on the gauge boson as well as the fermion KK-masses\footnote{We use $M_W=80.38$ GeV for the SM $W^\pm$ gauge boson mass and $m_t=173.1$ GeV for the SM top quark mass, as given in ref.\;\cite{Tanabashi:2018oca}.} in the nmUED scenario. Furthermore, the other coefficients needed for the concerned decay process in the nmUED scenario have been given in our previous articles \cite{Datta:2015aka, Datta:2016flx}. At this point we would like to mention that, considering the analysis of the effect of the SM Higgs mass on vacuum stability in the UED model \cite{Datta:2012db}, we sum the KK-contributions up to 5 KK-levels\footnote{Analyses in earlier articles used 20-30 KK-levels while adding up the contributions from the KK-modes.} and finally add the total KK-contribution to its SM counterpart. In fact, we have explicitly checked that the numerical values would not differ appreciably, as the sum over the KK-modes is convergent\footnote{The summation of KK-contributions is convergent in UED-type models with one extra space-like dimension, as far as one loop calculations are concerned \cite{Dey:2004gb}.} in this case. More specifically, during the calculation of the loop diagrams, the summation over KK-levels becomes saturated after a certain number of KK-levels; consequently, the final results do not change significantly whether we consider 5 or 20 KK-levels during the evaluation of the KK-contributions to the loop diagrams. In support of this assumption, at the end of the following subsection we present two tables (Tables \ref{sum-kk_low} and \ref{sum-kk_high}) which demonstrate the insensitivity to the number of KK-levels in the summation. \subsection{Constraints and choice of range of BLT parameters} Here we briefly discuss the constraints that have been imposed in our analysis. \begin{itemize} \item Several rare decay processes, for example $B_s\rightarrow \mu^+\mu^-$ and $B\rightarrow X_s\gamma$, have always been very crucial in the search for any favourable kind of NP scenario.
The latest experimental values of the branching ratios of these processes are given in the following \begin{table}[H] \begin{center} \begin{tabular}{|c|c|} \hline Process & Experimental value of branching ratio\\ \hline $B_s\rightarrow \mu^+\mu^-$ & $(2.8^{+0.8}_{-0.7})\times 10^{-9}$ \cite{Aaboud:2018mst}\\ \hline $B\rightarrow X_s\gamma$ & $(3.32\pm 0.16)\times 10^{-4}$ \cite{Amhis:2016xyh}\\ \hline \end{tabular} \caption{Experimental values of the branching ratios of $B_{s} \rightarrow \mu^+ \mu^-$ and $B\rightarrow X_s\gamma$.} \label{t:4} \end{center} \end{table} In the context of the nmUED scenario, thorough analyses of the above-mentioned rare decay processes have been performed in refs.~\cite{Datta:2015aka} and \cite{Datta:2016flx} respectively. Using the expressions for ${\rm Br}(B_s\rightarrow \mu^+\mu^-)$ and ${\rm Br}(B\rightarrow X_s\gamma)$ given in \cite{Datta:2015aka} and \cite{Datta:2016flx}, we have treated the branching ratios of these rare decay processes as constraints in our present analysis. \item Electroweak precision tests (EWPT) are an essential and important tool for constraining any form of BSM physics. In the nmUED model, corrections to the Peskin-Takeuchi parameters S, T and U appear via the correction to the Fermi constant $G_F$ at tree level. This is in remarkable contrast with the minimal version of the UED model, where these corrections appear via one loop processes. A detailed study of the EWPT for the present version of the nmUED model has been provided in \cite{Datta:2015aka, Biswas:2017vhc}. Following the same approach as refs.\;\cite{Datta:2015aka, Biswas:2017vhc}, we have applied the EWPT as one of the constraints in our analysis. \end{itemize} To this end, we would like to mention the range of values of the BLT parameters used in our analysis. In general the BLT parameters may be positive or negative. However, it is readily evident from Eq.\;\ref{norm} that for ${r_f}/{R}=-\pi$ the zero-mode solution becomes divergent, and beyond ${r_f}/{R} = - \pi$ the zero-mode fields become ghost-like. Hence, any value of a BLT parameter less than $- \pi R$ should be discarded. For the sake of completeness we have nevertheless shown numerical results for some negative BLT parameters; however, the analysis of electroweak precision data \cite{Datta:2015aka, Biswas:2017vhc} disfavours a large portion of the negative BLT parameter space. \vspace*{-0.5cm} \subsection{Numerical results}\label{nr} We are now in a position to present the primary results of our analysis. \vspace*{-0.5cm} \subsubsection{Branching ratio}\label{br} In Figs.\;\ref{low_bran} and \ref{high_bran} we depict the variation of the branching ratio of $B\rightarrow X_s\ell^+\ell^-$ as a function of the scaled BLT parameters ($R_V\equiv r_V/R$ and $R_f\equiv r_f/R$) and the inverse of the radius of compactification ($R^{-1}$), for the two di-lepton mass squared regions $q^2\in [1, 6]~{\rm GeV^2}$ and $q^2\in [14.4, 25]~{\rm GeV^2}$ respectively. We have mentioned earlier that non-vanishing BLT parameters non-trivially modify the KK-masses and the various couplings among the KK-excitations in the nmUED scenario. Therefore, in the following we discuss how these BLT parameters affect the concerned decay process. For each of the $q^2$ regions we present five panels corresponding to five different values of the scaled gauge BLT parameter $R_V$. In each panel we show the dependence of the branching ratio on $R^{-1}$ for five different values of the scaled fermion BLT parameter $R_f$.
If we focus on a particular curve with specified $R_V$ and $R_f$, we observe that the branching ratio decreases monotonically with increasing $R^{-1}$. This is expected in a scenario like nmUED, where the masses of the KK-excited states are basically characterised by $R^{-1}$, i.e., with increasing $R^{-1}$ the masses of the KK-excited states increase. Therefore, with increasing KK-masses the one loop functions involved in this decay process are suppressed, which in turn decreases the decay width (and branching ratio). Further, depending on the BLT parameters, after a certain value of $R^{-1}$ the branching ratio asymptotically converges to its SM value as $R^{-1}\rightarrow \infty$. This clearly indicates the decoupling nature of the KK-mode contributions. Moreover, it is clearly evident from Figs.\;\ref{low_bran} and \ref{high_bran} that the branching ratio of $B\rightarrow X_s\ell^+\ell^-$ increases with both BLT parameters. For example, if we concentrate on a particular panel specified by a fixed value of $R_V$, one can see that with increasing $R_f$ the branching ratio is enhanced. The reason is that with increasing $R_f$ the KK-fermion masses decrease, and consequently the loop functions are enhanced. Therefore, the branching ratio increases with higher values of $R_f$. At the same time, if we look at all the panels of any particular figure (either Fig.\;\ref{low_bran} or Fig.\;\ref{high_bran}), we readily conclude that the other BLT parameter, $R_V$, affects the branching ratio in a similar manner to $R_f$. However, the branching ratio is somewhat more sensitive to the variation of $R_f$ than to that of $R_V$. This can be explained by examining the interactions involved in this calculation, listed in Appendix \ref{fyerul}. As per the earlier discussion (see the paragraph before the beginning of section \ref{sec:Heff:BXsee:nlo}), the interactions are modified by the overlap integrals $I^n_1$ and $I^n_2$. $I^n_1$ modifies the interactions of the third generation quarks with the charged Higgs scalar ($H^{(n)\pm}$) and gauge bosons ($W^{(n)\pm}$), while the interactions between the fifth component of the $W$-boson and the third generation quarks are modified by $I^n_2$. Therefore, due to the combined effects of the top-Yukawa coupling and the $SU(2)$ gauge interaction, $I^n_1$ dominates over $I^n_2$, which is controlled by the $SU(2)$ gauge interaction only. Hence, $R_f$ has better control over the $B\rightarrow X_s\ell^+\ell^-$ amplitude ({\it via} $I^n_1$) than $R_V$. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.9,angle=0]{fig2a} \includegraphics[scale=0.9,angle=0]{fig2b} \includegraphics[scale=0.9,angle=0]{fig2c} \includegraphics[scale=0.9,angle=0]{fig2d} \includegraphics[scale=0.9,angle=0]{fig2e} \caption{Variation of the branching ratio of $B\rightarrow X_s\ell^+\ell^-$ with $R^{-1}$ (TeV) for various values of $R_f(=r_f/R)$. The five panels represent different values of $R_V(=r_V/R)$. We sum the contributions up to 5 KK-levels in the different loop functions while calculating the WCs. The horizontal grey band depicts the 1$\sigma$ allowed range of the experimental value of the branching ratio for $q^2\in [1, 6]~{\rm GeV^2}$.} \label{low_bran} \end{center} \end{figure} \begin{figure}[ht!]
\begin{center} \includegraphics[scale=0.9,angle=0]{fig3a} \includegraphics[scale=0.9,angle=0]{fig3b} \includegraphics[scale=0.9,angle=0]{fig3c} \includegraphics[scale=0.9,angle=0]{fig3d} \includegraphics[scale=0.9,angle=0]{fig3e} \caption{Variation of the branching ratio of $B\rightarrow X_s\ell^+\ell^-$ with $R^{-1}$ (TeV) for various values of $R_f(=r_f/R)$. The five panels represent different values of $R_V(=r_V/R)$. We sum the contributions up to 5 KK-levels in the different loop functions while calculating the WCs. The horizontal grey band depicts the 1$\sigma$ allowed range of the experimental value of the branching ratio for $q^2\in [14.4, 25]~{\rm GeV^2}$.} \label{high_bran} \end{center} \end{figure} At this point we would like to comment on the values of the BLT parameters. It is clearly evident from the figures (Figs.\;\ref{low_bran} and \ref{high_bran}) that negative values of the BLT parameters are not very encouraging for the present purpose, because we cannot obtain any strong lower limit on $R^{-1}$. For negative BLT parameters the KK-masses are larger than for positive ones; the enhanced KK-masses suppress the loop functions, and consequently the decay amplitude decreases. Apart from this, the EWPT constraint prefers larger values of $R^{-1}$ for negative BLT parameters \cite{Datta:2015aka, Biswas:2017vhc}. Hence, for our present purpose positive values of the BLT parameters are preferable. For example, for $q^2\in [1, 6]~{\rm GeV^2}$, if we choose $R_V = 2, \; R_f = 6$ we obtain $R^{-1} > 680\;(690) \rm\;GeV$ (see Table \ref{sum-kk_low}) when we consider the sum up to 5 (20) KK-levels. On the other hand, the lower limit on $R^{-1}$ changes to $ > 760\;(770) \rm\;GeV$ for $R_f = R_V = 6$ (see Table \ref{sum-kk_low}). In the case of the other $q^2$ region ($\in [14.4, 25]~{\rm GeV^2}$), the lower limits on $R^{-1}$ for the above-mentioned BLT parameters change to $ > 570\;(580) \rm\;GeV$ and $ > 720\;(730) \rm\;GeV$ (see Table \ref{sum-kk_high}), respectively, for the KK-sum up to 5 (20) levels. We have obtained these limits on $R^{-1}$ by comparing the branching ratio evaluated from the present calculation with the experimental data (given in Eq.\;\ref{EXP-BR-BtoXsll}) including the 1$\sigma$ upward error bar; a schematic illustration of this procedure is given below. From these numbers we find that the limits are slightly better than the results obtained from the analysis of $B\rightarrow X_s\gamma$ \cite{Datta:2016flx}, but in the same ballpark as those obtained from the analysis of $B_s \rightarrow \mu^+ \mu^-$ \cite{Datta:2015aka}. Furthermore, if we look at Figs.\;\ref{low_bran} and \ref{high_bran} (or Tables\;\ref{sum-kk_low} and \ref{sum-kk_high}), we find that the lower limits on $R^{-1}$ do not change drastically beyond certain positive values of the BLT parameters. For this reason, in the present analysis we have restricted the choice of BLT parameters (both $R_V$ and $R_f$) to values up to 6; beyond this choice we expect that the lower limit on $R^{-1}$ would not change significantly for larger values of the BLT parameters.
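Schematically, the extraction of these lower limits amounts to scanning $R^{-1}$ upward until the predicted branching ratio falls inside the experimentally allowed band. The following minimal sketch illustrates the logic only: the nmUED prediction is replaced by a toy decoupling curve, not by the actual Wilson coefficient computation of Sec.\;\ref{sec:Heff:BXsee:nlo}, and the combination of the quoted errors in quadrature is our assumption.
\begin{verbatim}
import numpy as np

# 1-sigma upper edge of the measured Br for q^2 in [1,6] GeV^2
# (Eq. EXP-BR-BtoXsll; errors combined in quadrature -- an assumption)
br_max = 1.60e-6 + np.sqrt(0.41**2 + 0.17**2 + 0.18**2) * 1e-6

def br_toy(Rinv_TeV):
    # stand-in for the nmUED prediction at fixed (R_f, R_V): a toy
    # curve approaching the SM value 1.62e-6 from above as the
    # KK-modes decouple (purely illustrative numbers)
    return 1.62e-6 * (1.0 + 0.35 / Rinv_TeV**2)

for Rinv in np.arange(0.2, 3.0, 0.01):
    if br_toy(Rinv) < br_max:
        print(f"toy lower limit: R^-1 ~ {Rinv:.2f} TeV")
        break
\end{verbatim}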
\begin{table}[H] \begin{center} \hspace*{-1cm} \resizebox{19cm}{!}{ \begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c|} \hline {}&\multicolumn{2}{|c||}{$R_V=-2$}&\multicolumn{2}{|c||}{$R_V=0$} &\multicolumn{2}{|c||}{$R_V=2$}&\multicolumn{2}{|c||}{$R_V=4$}&\multicolumn{2}{|c||}{$R_V=6$}\\ \hline{$R_f$}& 5 KK-level & 20 KK-level & 5 KK-level & 20 KK-level & 5 KK-level & 20 KK-level & 5 KK-level & 20 KK-level & 5 KK-level & 20 KK-level \\ \hline -2&215.73&224.19&283.62&289.23&377.06&381.14&437.53&443.33&487.00&489.26\\ 0&382.15&388.95&451.27&464.93&472.55&482.32&478.76&485.35&530.98&549.54\\ 2&385.45&392.72&498.00&508.18&510.01&518.05&536.48&548.76&588.70&598.28\\ 4&390.26&394.83&525.48&529.81&676.65&688.72&717.88&726.93&745.36&750.21\\ 6&421.04&430.52&528.23&533.45&684.89&694.54&761.85&768.14&764.60&770.42\\ \hline \end{tabular} } \end{center} \caption[]{Lower limits on $R^{-1}$ (in GeV) evaluated from the branching ratio of $B\rightarrow X_s\ell^+\ell^-$ for several values of the BLT parameters for $q^2\in [1, 6]~{\rm GeV^2}$, showing the insensitivity to the number of KK-modes in the summation.} \label{sum-kk_low} \end{table} \begin{table}[H] \begin{center} \hspace*{-1cm} \resizebox{19cm}{!}{ \begin{tabular}{|c||c|c||c|c||c|c||c|c||c|c|} \hline {}&\multicolumn{2}{|c||}{$R_V=-2$}&\multicolumn{2}{|c||}{$R_V=0$} &\multicolumn{2}{|c||}{$R_V=2$}&\multicolumn{2}{|c||}{$R_V=4$}&\multicolumn{2}{|c||}{$R_V=6$}\\ \hline{$R_f$}& 5 KK-level & 20 KK-level & 5 KK-level & 20 KK-level & 5 KK-level & 20 KK-level & 5 KK-level & 20 KK-level & 5 KK-level & 20 KK-level \\ \hline -2&93.98&102.71&135.20&143.26&173.18&186.51&201.16&208.14&214.90&219.42\\ 0&275.38&287.70&294.61&306.26&321.36&335.18&385.31&402.21&451.28&462.56\\ 2&278.12&289.12&335.84&346.81&404.55&415.20&476.01&487.36&528.23&538.32\\ 4&283.62&294.45&357.83&365.52&569.46&566.05&632.68&640.82&687.64&698.48\\ 6&324.84&334.60&451.28&465.75&572.20&586.54&676.65&697.35&726.12&737.88\\ \hline \end{tabular} } \end{center} \caption[]{Lower limits on $R^{-1}$ (in GeV) evaluated from the branching ratio of $B\rightarrow X_s\ell^+\ell^-$ for several values of the BLT parameters for $q^2\in [14.4, 25]~{\rm GeV^2}$, showing the insensitivity to the number of KK-modes in the summation.} \label{sum-kk_high} \end{table} In Tables \ref{sum-kk_low} and \ref{sum-kk_high} (for the two regions of $q^2$) we have listed specific values of the lower limits on $R^{-1}$ corresponding to different choices of the BLT parameters. The numbers in these tables also indicate that our results are not very sensitive to the number of KK-levels considered in the sum while calculating the loop diagrams corresponding to the different WCs. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.9,angle=0]{fig4a} \includegraphics[scale=0.9,angle=0]{fig4b} \caption{The left and right panels represent the exclusion contours obtained from the branching ratio of the $B\rightarrow X_s\ell^+\ell^-$ decay in the $R_f - R^{-1}$ plane for the low and high di-lepton mass squared regions respectively, for five different choices of $R_V$. These exclusion curves have been drawn using the lower limits on $R^{-1}$ obtained when we sum the contributions up to 5 KK-levels in the different loop functions required for the calculation of the WCs.
The area below a particular curve (fixed $R_V$) has been excluded by the experimental value of the branching ratio with its 1$\sigma$ error bar.} \label{lowerR} \end{center} \end{figure} In the left and right panels of Fig.\;\ref{lowerR} we present the regions of parameter space which have been excluded by the currently measured experimental values of the branching ratio of $B\rightarrow X_s\ell^+\ell^-$ for the two $q^2$ regions, $[1, 6]~{\rm GeV^2}$ and $[14.4, 25]~{\rm GeV^2}$, respectively. In both panels we depict contours corresponding to five different values of $R_V$ in the $R_f-R^{-1}$ plane. The region under an individual curve (specified by a fixed value of $R_V$) has been excluded by comparing the experimentally measured branching ratio of $B\rightarrow X_s\ell^+\ell^-$ to its theoretical prediction in the nmUED scenario. The curves represent contours of constant branching ratio of $B\rightarrow X_s\ell^+\ell^-$ corresponding to the 1$\sigma$ upper limit of its experimentally measured value. One can understand the nature of these contour curves with the help of Figs.\;\ref{low_bran} and \ref{high_bran}. With larger values of $R^{-1}$ the KK-masses increase, which leads to a suppression of the decay width (and branching ratio). Hence, in order to overcome this suppression one requires larger values of $R_f$ and $R_V$. Larger values of the BLT parameters enhance the decay dynamics in two ways. First, they diminish the KK-masses. Secondly, larger values of $R_f$ increase the interaction strengths via the overlap integral $I^n_1$, whereas increasing values of $R_V$ increase the interaction strengths via $I^n_2$. To this end, we would like to mention that, as far as the BLT parameters are concerned, there is no sharp contrast in the behaviour of the decay branching ratio between the two regions of $q^2$. However, the lower limits on $R^{-1}$ obtained from our present analysis are slightly different for the two regions of $q^2$: in the low $q^2$ region ($\in [1, 6]~{\rm GeV^2}$) the lower limit is higher than in the high $q^2$ region ($\in [14.4, 25]~{\rm GeV^2}$). For example (considering only the 5 KK-level sum), in the low $q^2$ region if we set $R_V=4$, $R_f=2$ the lower limit on $R^{-1}$ is 536.48 GeV, while for the same set of BLT parameters it is 476.01 GeV in the high $q^2$ region. This feature holds for all combinations of BLT parameters. It indicates that, in the second case, the masses of the KK-particles involved in the loop diagrams are relatively lighter than in the first case. This behaviour is quite expected: in the second case the phase space suppression is larger than in the first one, hence to compensate for this suppression one requires relatively lighter particles in the loop diagrams needed for the calculation of the different WCs. $\bullet$ {\bf Revisiting the lower limit on {\boldmath$R^{-1}$} obtained from {\boldmath$B\to X_s\ell^+\ell^-$} in the UED scenario}\\ Before we proceed any further, we would like to revisit the lower limit on $R^{-1}$ obtained from our analysis in the UED scenario, considering the current experimental results for the branching ratio of $B\rightarrow X_s\ell^+\ell^-$. We can obtain the UED results from our analysis in the limit where both BLT parameters vanish, i.e., for $R_V=R_f=0$. In this limit the KK-mass for the $n^{th}$ KK-level simply becomes $nR^{-1}$. Moreover, the overlap integrals $I^n_1$ and $I^n_2$ become unity.
Hence, under this circumstance, the functions $D_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ and $E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ given in Eqs.\;\ref{dn} and \ref{en} transform into their UED forms. We have explicitly checked that in this vanishing BLT limit the expressions of the functions $D_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ and $E_n(x_t,x_{f^{(n)}},x_{V^{(n)}})$ are identical to the forms given in ref.\;\cite{Buras:2003mk}\footnote{The authors of the article \cite{Buras:2003mk} have not considered any radiative corrections to the KK-masses in their analysis. Consequently the KK-mass at the $n^{th}$ KK-level is $nR^{-1}$.}. Under the same vanishing BLT limit, a similar transformation also applies to the other functions (e.g., $C_n, Y_n, D'_n$ and $E'_n$, which have been calculated in our previous articles \cite{Datta:2015aka,Datta:2016flx}) required for the present calculations. From the present analysis we can now readily read off the lower limit on $R^{-1}$ from Tables \ref{sum-kk_low} and \ref{sum-kk_high}. That is, for $R_V = R_f =0$ the lower limit on $R^{-1}$ for $q^2\in [1, 6]~{\rm GeV^2}$ is 451.27 GeV, whereas for $q^2\in [14.4, 25]~{\rm GeV^2}$ it changes to 294.61 GeV. Needless to say, these results are not very strong, but they are broadly consistent with the values obtained from previous analyses in the UED scenario. For example, $(g-2)_\mu$ \cite{Nath:1999aa}, the $\rho$-parameter \cite{Appelquist:2002wb}, FCNC processes \cite{Buras:2003mk,Buras:2002ej, Agashe:2001xt, Chakraverty:2002qk}, $Zb\bar{b}$ \cite{Jha:2014faa, Oliver:2002up} and electroweak observables \cite{Strumia:1999jm, Rizzo:1999br, Carone:1999nz} put a lower bound of about 300-600 GeV on $R^{-1}$. On the other hand, from the projected tri-lepton signal at the 8 TeV LHC one can derive a lower limit on $R^{-1}$ of up to 1.2 TeV \cite{Belyaev:2012ai, Golling:2016gvc, Gershtein:2013iqa}. At this point it is worth mentioning that the lower limits on $R^{-1}$ obtained from the above-mentioned analyses (for the minimal version of the UED scenario) have already been ruled out by the LHC data: recent analyses including LHC data exclude $R^{-1}$ up to 1.4~TeV~\cite{Choudhury:2016tff, Beuria:2017jez, Chakraborty:2017kjq, Deutschmann:2017bth}. \subsubsection{Forward-Backward asymmetry}\label{fwbk} Finally, in Figs.\;\ref{low_afb} and \ref{high_afb} we show the Forward-Backward asymmetry (actually the global Forward-Backward asymmetry defined in Eq.\;\ref{global-AFB}) for the decay $B\rightarrow X_s\ell^+\ell^-$ for the two $q^2$ regions, $[1, 6]~{\rm GeV^2}$ and $[14.4, 25]~{\rm GeV^2}$, respectively. In each figure there are five panels corresponding to five different values of $R_V$. In each panel we depict the variation of the Forward-Backward asymmetry with respect to $R^{-1}$ for five different values of $R_f$. Unlike the decay branching ratio, the behaviour of the Forward-Backward asymmetry differs significantly between the two regions of $q^2$. For example, in the high $q^2$ region this asymmetry is always positive over the entire range of $R^{-1}$ for every combination of BLT parameters, whereas in the low $q^2$ region the sign of this asymmetry depends crucially on the BLT parameters at lower values of $R^{-1}$, although it is always negative at higher values of $R^{-1}$.
We have already mentioned that, in the present decay process, among all the WCs only $\tilde{C}_{10}$ is moderately enhanced by NP effects. Furthermore, this coefficient is independent of $q^2$ and depends only on the parameters of the NP scenario. This coefficient appears with a factor proportional to $\frac{q^2}{m^2_b}$ both in the numerator and in the denominator of the definition of the global Forward-Backward asymmetry. Hence, depending on the value of $q^2$, the factor $\frac{q^2}{m^2_b}$ can play a crucial role for the defined asymmetry. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.9,angle=0]{fig5a} \includegraphics[scale=0.9,angle=0]{fig5b} \includegraphics[scale=0.9,angle=0]{fig5c} \includegraphics[scale=0.9,angle=0]{fig5d} \includegraphics[scale=0.9,angle=0]{fig5e} \caption{Variation of the Forward-Backward asymmetry of $B\rightarrow X_s\ell^+\ell^-$ with $R^{-1}$ (TeV) for various values of $R_f(=r_f/R)$. The five panels represent different values of $R_V(=r_V/R)$. We sum the contributions up to 5 KK-levels in the different loop functions while calculating the WCs. The horizontal grey band depicts the 1$\sigma$ allowed range of the experimental value of the Forward-Backward asymmetry for $q^2\in [1, 6]~{\rm GeV^2}$.} \label{low_afb} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.9,angle=0]{fig6a} \includegraphics[scale=0.9,angle=0]{fig6b} \includegraphics[scale=0.9,angle=0]{fig6c} \includegraphics[scale=0.9,angle=0]{fig6d} \includegraphics[scale=0.9,angle=0]{fig6e} \caption{Variation of the Forward-Backward asymmetry of $B\rightarrow X_s\ell^+\ell^-$ with $R^{-1}$ (TeV) for various values of $R_f(=r_f/R)$. The five panels represent different values of $R_V(=r_V/R)$. We sum the contributions up to 5 KK-levels in the different loop functions while calculating the WCs. The horizontal grey band depicts the 1$\sigma$ allowed range of the experimental value of the Forward-Backward asymmetry for $q^2\in [14.4, 25]~{\rm GeV^2}$.} \label{high_afb} \end{center} \end{figure} In the case of the low $q^2$ region, apart from the factor $\frac{q^2}{m^2_b}$, some of the WCs can control the behaviour of the Forward-Backward asymmetry at lower values of $R^{-1}$. Since the masses of the KK-modes are not very high in this situation, the Forward-Backward asymmetry bears the hallmarks of the different WCs. In every panel specified by a fixed value of $R_V$, we observe that the asymmetry always decreases monotonically for negative values of $R_f$. We mentioned earlier that for negative values of $R_f$ the KK-masses are high; therefore, the loop functions are suppressed, which in turn decreases the asymmetry. On the other hand, when $R_f$ moves to the positive side, the relatively smaller KK-masses enhance the loop functions, so the WCs increase and consequently the Forward-Backward asymmetry increases; then, with increasing values of $R^{-1}$, the asymmetry decreases. The same argument also applies to $R_V$: looking at all the panels, one can readily infer that the above-mentioned effects due to $R_f$ are slightly magnified by increasing values of $R_V$. At this point we would like to point out that, using this asymmetry, we can at most achieve a lower limit on $R^{-1}$ of $\simeq 600$ GeV.
This limit can be obtained by comparing the theoretically estimated value of the Forward-Backward asymmetry in the present nmUED model with the 1$\sigma$ lower bound of the corresponding experimental data. However, this value is not as competitive as the one we have obtained from the branching ratio. On the other hand, in the high $q^2$ region the factor $\frac{q^2}{m^2_b}$ is highly dominant. Therefore, except at very low values of $R^{-1}$, the WCs have no scope to control the characteristics of the Forward-Backward asymmetry. As a result, beyond a certain value of $R^{-1}$ the numerator as well as the denominator of the Forward-Backward asymmetry are affected in the same way by the factor $\frac{q^2}{m^2_b}$. Hence, the asymmetry becomes practically independent of $R^{-1}$. This is clearly evident from the plots, where the asymmetry is almost parallel to the $R^{-1}$ axis. Depending on the values of the BLT parameters, this saturation behaviour starts from different values of $R^{-1}$. However, it is also evident from the different panels of Fig.\;\ref{high_afb} that, even for different combinations of BLT parameters, the threshold points (basically the values of $R^{-1}$) of this saturation behaviour are not very distinct from each other. \subsubsection{Possible bounds on the nmUED scenario with upcoming measurements by Belle II for the $B\to X_s \ell^+\ell^-$ observables} {In the near future we will have new measurements from the Belle II experiment for the $B\to X_s \ell^+\ell^-$ observables. Therefore, at this stage it is very relevant to discuss the possible bounds on the parameter space of the nmUED scenario in light of these upcoming measurements. Belle II can significantly improve the present situation with its two-orders-of-magnitude larger data sample; consequently, we can expect a reduction of the systematic uncertainties for the various observables. In order to check the possible bounds on the parameter space of the nmUED scenario in the context of the upcoming Belle II measurements of the $B\to X_s \ell^+\ell^-$ decay observables, we follow the prescription given in \cite{Huber:2015sra, Kou:2018nap}. According to this prescription, the bounds can be implemented via the ratios $R_9$ and $R_{10}$, under the assumption of no NP contributions to the electromagnetic and chromomagnetic dipole operators (i.e., $R_{7,8} = 1$), where the ratios are defined as $R_{i}=\frac{C_i}{C^{\rm SM}_i}$ ($C_i$ being the different WCs, with $i=7,8,9,10$). In Fig.\;4 of \cite{Huber:2015sra} (in all three panels) one can find a tiny area in the $R_9-R_{10}$ plane that could be reached by the upcoming Belle II results. In all cases, within this tiny area, both $R_9$ and $R_{10}$ are very close to unity. In other words, this indicates that the deviation between the NP and SM predictions is very small. We translate this fact (using the lower panel of Fig.\;4 of \cite{Huber:2015sra}) into the nmUED scenario in terms of the ratios $R_9$ and $R_{10}$, from which we obtain bounds on the model parameters from the perspective of the upcoming Belle II measurements of the $B\to X_s \ell^+\ell^-$ observables. In the nmUED scenario, we have determined the values of the model parameters for which the ratios $R_9$ and $R_{10}$ remain within the tiny area in the $R_9-R_{10}$ plane that could be reached by the upcoming Belle II results.
The values of the lower limit on $R^{-1}$ for different combinations of the BLT parameters $R_f$ and $R_V$ are slightly shifted to higher values with respect to those obtained from the main analysis of this article. For example, for $R_V=2$ and $R_f=4$ the lower limit on $R^{-1}$ is 680.27 GeV, while this limit changes to 772.81 GeV for $R_V=6$ and $R_f=6$. This behaviour holds for all combinations of BLT parameters. We would like to mention that these values are obtained when we consider the sum up to 5 KK-levels. This kind of result implies that the deviation between the SM expectation and the upcoming Belle II measurement of the $B\to X_s \ell^+\ell^-$ decay observables is expected to shrink. Consequently, the role of NP is expected to be more restricted for the $B\to X_s \ell^+\ell^-$ decay observables, and one will be able to constrain any NP model more precisely using the upcoming Belle II measurements. Moreover, the tendency of the lower limit on $R^{-1}$ to increase indicates that the NP model (in our case the nmUED scenario) approaches the decoupling limit: in a scenario like nmUED, the masses of the KK-excited states (the NP particles in the present case) are essentially characterised by $R^{-1}$, so with increasing values of $R^{-1}$ the masses of the KK-excited states increase and, consequently, their effects decrease.} \section{Summary and conclusion}\label{concl} In view of the search for new physics effects, we have estimated the contributions of KK-excitations to the decay $B\rightarrow X_s\ell^+\ell^-$ in a $4 + 1$ dimensional non-minimal Universal Extra Dimensional scenario in which all Standard Model particles are allowed to propagate in the bulk. This specific scenario is characterised by different boundary localised terms (kinetic, Yukawa, etc.). In the 5-dimensional Universal Extra Dimensional scenario, the unknown radiative corrections to the masses and couplings are parametrised by the strengths of these boundary localised terms. Hence, in the presence of these terms, the KK-mass spectra as well as the interaction strengths among the various KK-excitations are modified in a non-trivial manner in the 4-dimensional effective theory with respect to the minimal version of the Universal Extra Dimensional scenario. In the present article we have used two different categories of BLT parameters: the strengths of the boundary terms for fermions and Yukawa interactions are represented by $r_f$, while $r_V$ represents the strengths of the boundary terms for the gauge and Higgs sectors. We have examined the effects of these BLT parameters on the $B\rightarrow X_s\ell^+\ell^-$ decay process. The effective Hamiltonian for the decay process $B\rightarrow X_s\ell^+\ell^-$ is characterised by several Wilson Coefficients: $C_7$, $C_9$ and $C_{10}$. {In the non-minimal Universal Extra Dimensional scenario the coefficients $C_7$ and $C_{10}$ have already been calculated in our previous articles. However, for the first time we have calculated the coefficient $C_9$ in the non-minimal Universal Extra Dimensional scenario, using the relevant Feynman (penguin) diagrams shown in Fig.\;\ref{magnetic_pen}. With these Wilson Coefficients we have computed the coefficients of the electroweak dipole operators for the photon and the gluon for the first time in the non-minimal Universal Extra Dimensional scenario}.
Taking advantage of the Glashow-Iliopoulos-Maiani (GIM) mechanism, we have included the contributions from all three generations of quarks in our analysis. We evaluate the total contribution obtained from the penguin diagrams and then add it to the corresponding Standard Model counterpart. Considering a recent analysis relating the stability of the Higgs boson mass and the cut-off of a Universal Extra Dimensional scenario \cite{Datta:2012db}, we have considered the summation up to 5 KK-levels in our calculation. Furthermore, we have incorporated next-to-leading order QCD corrections in our analysis. For the present decay process, in order to maintain perturbativity one has to impose an appropriate choice of kinematic cuts to eliminate the $c\bar{c}$ resonances, which show large peaks in the di-lepton invariant mass spectrum. This gives two distinct perturbative di-lepton invariant mass square regions: the low di-lepton mass square region, $1~{\rm GeV^2} < q^2 < 6~{\rm GeV^2}$, and the high di-lepton mass square region, $q^2 > 14.4~{\rm GeV^2}$. In these two regions, experimental data for the branching ratio as well as for the Forward-Backward asymmetry are available for the decay $B\rightarrow X_s \ell^+\ell^-$. However, there exists only a narrow window between the Standard Model prediction and the experimental data for both regions and for both quantities (branching ratio and Forward-Backward asymmetry). Comparing our theoretical predictions with the corresponding experimental data (with 1$\sigma$ error bars), we have constrained the parameter space of the present version of the non-minimal Universal Extra Dimensional scenario. In our analysis we have used the branching ratios of some rare decay processes, such as $B_s\to \mu^+\mu^-$ and $B\to X_s\gamma$, as well as electroweak precision data, as constraints. As already mentioned, from our analysis we can also reproduce the results of the minimal version of the Universal Extra Dimensional scenario by setting the BLT parameters to zero (i.e., $R_f = R_V = 0$). Hence, from our analysis we have revisited the lower limit on $R^{-1}$ in the framework of the minimal Universal Extra Dimensional scenario. Using the experimental data for the branching ratio, the lower limit becomes 451.27 (294.61) GeV for the low (high) $q^2$ region. These results are comparable with the values obtained from earlier analyses in the literature, although they are ruled out by recent collider analyses at the LHC. However, by virtue of the presence of different non-zero BLT parameters, we can improve the lower limit on $R^{-1}$ in the present version of the non-minimal Universal Extra Dimensional scenario. For example, for $R_V=6$ and $R_f=6$, using the branching ratio we obtain a lower limit of $R^{-1} \geq 760$ GeV for the low $q^2$ region, while the limit changes to $R^{-1} \geq 720$ GeV for the high $q^2$ region. These results in the context of the non-minimal Universal Extra Dimensional scenario are very promising, because they exclude a large portion of the parameter space of the present scenario. Also, the obtained lower limit on $R^{-1}$ is in the same ballpark as the limit obtained from the previous analysis of $B_s \to \mu^+\mu^-$ \cite{Datta:2015aka} in the non-minimal Universal Extra Dimensional scenario. Furthermore, from Fig.\;\ref{lowerR} it is clearly evident that the lower limits on $R^{-1}$ are relatively more competitive for positive values of the BLT parameters than for their negative values.
Unfortunately, the limits which we have obtained on the parameter space (of the non-minimal Universal Extra Dimensional scenario) using the Forward-Backward asymmetry of the decay $B\rightarrow X_s\ell^+\ell^-$ are not as competitive. {Moreover, we have determined the possible bounds on the model parameters of the non-minimal Universal Extra Dimensional scenario in view of the upcoming measurements by Belle II for the $B\to X_s \ell^+\ell^-$ observables. We have found that, for all combinations of the BLT parameters $R_f$ and $R_V$, the lower limit on $R^{-1}$ is slightly shifted to higher values with respect to the values obtained from the main analysis of this article.} {\bf Acknowledgements} The author is grateful to Anindya Datta for taking part at the initial stage and for many useful discussions. The author is very thankful to Andrzej J. Buras for a useful suggestion. The author would also like to thank Anirban Biswas for computational support. \begin{appendices} \renewcommand{\thesection}{\Alph{section}} \renewcommand{\theequation}{\thesection-\arabic{equation}} \setcounter{equation}{0} \section{Some important functions and Wilson Coefficients that are required for the calculation of \boldmath$B\to X_s \ell^+\ell^-$ in nmUED}\label{NDR} \begin{itemize} \item Functions\cite{Buras:1994dj}: \begin{eqnarray} \omega\left(\frac{q^2}{m^2_b}\right)&=&-{2\over9}\pi^2-{4\over3}{\rm Li}_{_2}\left(\frac{q^2}{m^2_b}\right)-{2\over3}\ln \left(\frac{q^2}{m^2_b}\right)\ln\left(1-\frac{q^2}{m^2_b}\right) \\ \nonumber &&-{5+4\frac{q^2}{m^2_b}\over3\bigg(1+2\frac{q^2}{m^2_b}\bigg)}\ln\left(1-\frac{q^2}{m^2_b}\right) -{2\frac{q^2}{m^2_b}\bigg(1+\frac{q^2}{m^2_b}\bigg)\bigg(1-2\frac{q^2}{m^2_b}\bigg)\over3\bigg(1-\frac{q^2}{m^2_b}\bigg)^2\bigg(1+2\frac{q^2}{m^2_b}\bigg)}\ln \left(\frac{q^2}{m^2_b}\right) \\ \nonumber &&+{5+9\frac{q^2}{m^2_b}-6\left(\frac{q^2}{m^2_b}\right)^2\over6\bigg(1-\frac{q^2}{m^2_b}\bigg)\bigg(1+2\frac{q^2}{m^2_b}\bigg)}\;, \end{eqnarray} and \begin{eqnarray} \label{hz} h\left(z,\frac{q^2}{m^2_b}\right)&=&{8\over27}-{8\over9}\ln{m_{_b}\over\mu}-{8\over9}\ln z+{16z^2m^2_b\over9q^2} \\ \nonumber &&-{4\over9}\left(1+{2z^2m^2_b\over q^2}\right)\sqrt{\bigg|1-{4z^2m^2_b\over q^2}\bigg|} \left\{\begin{array}{ll}\ln\Bigg|{\sqrt{1-\frac{4z^2m^2_b}{q^2}}+1\over\sqrt{1-\frac{4z^2m^2_b}{q^2}}-1}\Bigg|-i\pi, &{\rm if}\: \frac{4z^2m^2_b}{q^2}<1\\ 2\arctan{1\over\sqrt{\frac{4z^2m^2_b}{q^2}-1}},&{\rm if}\: \frac{4z^2m^2_b}{q^2}>1\end{array}\right. \\ \label{h0} h\left(0,\frac{q^2}{m^2_b}\right)&=&{8\over27}-{8\over9}\ln{m_{_b}\over\mu}-{4\over9}\ln \left(\frac{q^2}{m^2_b}\right) + \frac 49 i\pi. \end{eqnarray} \newpage \item Wilson Coefficients:\\{\underline{$C_1\ldots C_6$}}\cite{Buras:1994qa} \begin{eqnarray} \label{c1} C^{(0)}_1(M_W)&=&\frac{11}{2}\frac{\alpha_s(M_W)}{4\pi}\;, \\ \label{c2} C^{(0)}_2(M_W)&=&1-\frac{11}{6}\frac{\alpha_s(M_W)}{4\pi}\;,\\ \label{c3_4} C^{(0)}_3(M_W)&=&-\frac 13 C^{(0)}_4(M_W)=-\frac{\alpha_s(M_W)}{24\pi}\,\widetilde{E}(x_t, r_f, r_V, R^{-1})\;,\\ C^{(0)}_5(M_W)&=&-\frac 13 C^{(0)}_6(M_W)=-\frac{\alpha_s(M_W)}{24\pi}\,\widetilde{E}(x_t, r_f, r_V, R^{-1})\;, \label{c5_5} \end{eqnarray} where \begin{eqnarray} \label{Etilde} \widetilde{E}(x_t, r_f, r_V, R^{-1})=E(x_t, r_f, r_V, R^{-1})-\frac 23\;.
\end{eqnarray} {\underline{$C_7$}}\cite{Buras:1994dj} \begin{eqnarray} \label{C7eff} C_{7\gamma}^{(0){\rm eff}} & = & \eta^\frac{16}{23} C_{7\gamma}^{(0)}(M_W) + \frac{8}{3} \left(\eta^\frac{14}{23} - \eta^\frac{16}{23}\right) C_{8G}^{(0)}(M_W) + C_2^{(0)}(M_W)\sum_{i=1}^8 h_i \eta^{a_i}, \end{eqnarray} with \begin{equation} \eta = \frac{\alpha_s(M_W)}{\alpha_s(m_b)},~~~\alpha_s(m_b) = \frac{\alpha_s(M_Z)}{1 - \beta_0 \frac{\alpha_s(M_Z)}{2\pi} \, \ln(M_Z/m_b)}, \qquad \beta_0=\frac{23}{3}~, \label{eq:asmumz} \end{equation} and \begin{eqnarray}\label{c78} C^{(0)}_{7\gamma} (M_W) &=& -\frac{1}{2} D'(x_t, r_f, r_V, R^{-1}),\\ C^{(0)}_{8G}(M_W) &=& -\frac{1}{2} E'(x_t, r_f, r_V, R^{-1}). \end{eqnarray} The values of $a_i$, $h_i$ and $\bar h_i$ can be obtained from \cite{Buras:2003mk}. The functions $D'(x_t, r_f, r_V, R^{-1})$ and $E'(x_t, r_f, r_V, R^{-1})$ are the total (SM+nmUED) contributions at LO, as given in \cite{Datta:2016flx}. \end{itemize} \section{Feynman rules for \boldmath{$B\rightarrow X_s\ell^+\ell^-$} in nmUED}\label{fyerul} In this Appendix we list the Feynman rules relevant for our calculations. All momenta and fields are assumed to be incoming. $\hat{A}$ represents the background photon field. \newpage 1) $\hat{A}^{\mu}W^{\nu\pm}S^{\mp}$ $\displaystyle : {g_2}{s_w M_{W^{(n)}}} g_{\mu\nu} C$, where $C$ is given in the following: \begin{equation} \begin{aligned} \hat{A}^{\mu} W^{\nu(n)+} G^{(n)-}: C &= 0,\\ \hat{A}^{\mu} W^{\nu(n)-} G^{(n)+}: C &= 0,\\ \hat{A}^{\mu} W^{\nu(n)+} H^{(n)-}: C &= 0,\\ \hat{A}^{\mu} W^{\nu(n)-} H^{(n)+}: C &= 0, \end{aligned} \end{equation} where $g_2$ represents the $SU(2)$ gauge coupling constant, while $s_w$ denotes the sine of the Weinberg angle ($\theta_w$). 2) $\hat{A}^{\mu}S^{\pm}_1S^{\mp}_2$ $\displaystyle : -{ig_2}{s_w} (k_2-k_1)_{\mu} C$, where $C$ is given in the following: \begin{equation} \begin{aligned} \hat{A}^{\mu} G^{(n)+} G^{(n)-}: C &= 1,\\ \hat{A}^{\mu} H^{(n)+} H^{(n)-}: C &= 1,\\ \hat{A}^{\mu} G^{(n)+} H^{(n)-}: C &= 0,\\ \hat{A}^{\mu} G^{(n)-} H^{(n)+}: C &= 0, \end{aligned} \end{equation} where the scalar fields $S\equiv H,G.$ 3) $\hat{A}^{\mu}(k_1)W^{\nu+}(k_2)W^{\lambda-}(k_3)$ $\displaystyle :$ \begin{equation} ig_2s_w \left[g_{\mu\nu} (k_2 -k_1+k_3)_\lambda + g_{\mu\lambda} (k_1 -k_3 - k_2)_\nu + g_{\lambda\nu} (k_3 -k_2)_\mu\right]. \end{equation} 4) $\hat{A}^{\mu}{\overline{f}_1} f_2$ $\displaystyle : {i g_2}{s_w} \gamma_\mu C$, where $C$ is given in the following: \begin{equation} \begin{aligned} \hat{A}^{\mu} \bar{u_i} u_i: C &= \frac23,\\ \hat{A}^{\mu} {\overline{T}^{1(n)}_i} T^{1(n)}_i: C &= \frac23,\\ \hat{A}^{\mu} {\overline{T}^{2(n)}_i} T^{2(n)}_i: C &= \frac23,\\ \hat{A}^{\mu} {\overline{T}^{1(n)}_i} T^{2(n)}_i: C &= 0,\\ \hat{A}^{\mu} {\overline{T}^{2(n)}_i} T^{1(n)}_i: C &= 0. \end{aligned} \end{equation} \newpage 5) $G^{\mu}{\overline{f}_1} f_2$ $\displaystyle : {i g_s}{T^a_{\alpha\beta}} \gamma_\mu C$, where $C$ is given in the following: \begin{equation} \begin{aligned} G^{\mu} \bar{u_i} u_i: C &= 1,\\ G^{\mu} {\overline{T}^{1(n)}_i} T^{1(n)}_i: C &= 1,\\ G^{\mu} {\overline{T}^{2(n)}_i} T^{2(n)}_i: C &= 1,\\ G^{\mu} {\overline{T}^{1(n)}_i} T^{2(n)}_i: C &= 0,\\ G^{\mu} {\overline{T}^{2(n)}_i} T^{1(n)}_i: C &= 0.
\end{aligned} \end{equation} 6) $S^{\pm}{\overline{f}_1} f_2$ $\displaystyle = \frac{g_2}{\sqrt{2} M_{W^{(n)}}} (P_L C_L + P_R C_R)$, where $C_L$ and $C_R$ are given in the following: \begin{equation} \begin{aligned} & G^+ \bar{u_i} d_j : & &\left\{\begin{array}{l}C_L = -m_i V_{ij},\\ C_R = m_j V_{ij},\end{array}\right. &&G^- \bar{d_j} u_i : & &\left\{\begin{array}{l}C_L = -m_j V_{ij}^*,\\ C_R = m_i V_{ij}^*,\end{array}\right.\\ & G^{(n)+}{\overline{T}^{1(n)}_i} d_j : & &\left\{\begin{array}{l}C_L = -m_1^{(i)} V_{ij},\\ C_R = M_1^{(i,j)} V_{ij},\end{array}\right. &&G^{(n)-}\bar{d_j}T^{1(n)}_i : & &\left\{\begin{array}{l}C_L = -M_1^{(i,j)} V_{ij}^*,\\ C_R = m_1^{(i)} V_{ij}^*,\end{array}\right.\\ & G^{(n)+}{\overline{T}^{2(n)}_i} d_j : & &\left\{\begin{array}{l}C_L = m_2^{(i)} V_{ij},\\ C_R =-M_2^{(i,j)} V_{ij},\end{array}\right. &&G^{(n)-}\bar{d_j}T^{2(n)}_i : & &\left\{\begin{array}{l}C_L = M_2^{(i,j)} V_{ij}^*,\\ C_R =-m_2^{(i)} V_{ij}^*,\end{array}\right.\\ & H^{(n)+}{\overline{T}^{1(n)}_i} d_j : & &\left\{\begin{array}{l}C_L = -m_3^{(i)} V_{ij},\\ C_R = M_3^{(i,j)} V_{ij},\end{array}\right. &&H^{(n)-}\bar{d_j}T^{1(n)}_i : & &\left\{\begin{array}{l}C_L = -M_3^{(i,j)} V_{ij}^*,\\ C_R = m_3^{(i)} V_{ij}^*,\end{array}\right.\\ & H^{(n)+}{\overline{T}^{2(n)}_i} d_j : & &\left\{\begin{array}{l}C_L = m_4^{(i)} V_{ij},\\ C_R =-M_4^{(i,j)} V_{ij},\end{array}\right. &&H^{(n)-}\bar{d_j}T^{2(n)}_i : & &\left\{\begin{array}{l}C_L = M_4^{(i,j)} V_{ij}^*,\\ C_R =-m_4^{(i)} V_{ij}^*.\end{array}\right. \end{aligned} \end{equation} 7) $W^{\mu\pm}{\overline{f}_1}f_2$ $\displaystyle : \frac{i g_2}{\sqrt{2}} \gamma_\mu P_L C_L$, where $C_L$ is given in the following: \begin{equation} \begin{aligned} & W^{\mu+}\bar{u_i} d_j : && C_L = V_{ij}, && W^{\mu-}\bar{d_j} u_i : && C_L = V^*_{ij},\\ & W^{\mu(n)+}{\overline{T}^{1(n)}_i}d_j : && C_L = I^n_1\;c_{in} V_{ij}, &&W^{\mu(n)-}\bar{d_j}{{T}^{1(n)}_i} : && C_L = I^n_1\;c_{in} V^*_{ij},\\ & W^{\mu(n)+}{\overline{T}^{2(n)}_i}d_j : && C_L = -I^n_1\;s_{in} V_{ij}, &&W^{\mu(n)-}\bar{d_j}{{T}^{2(n)}_i} : && C_L = -I^n_1\;s_{in}V^*_{ij}, \end{aligned} \end{equation} where the fermion fields $f\equiv u, d, T^1_t, T^2_t$. The mass parameters $m_x^{(i)}$ are given in the following \cite{Datta:2015aka}: \begin{equation} \label{mparameters} \begin{aligned} m_1^{(i)} &= I^n_2\;m_{V^{(n)}}c_{in} +I^n_1\;m_i s_{in},\\ m_2^{(i)} &= -I^n_2\;m_{V^{(n)}}s_{in}+I^n_1\;m_i c_{in},\\ m_3^{(i)} &= -I^n_2\;iM_W c_{in} +I^n_1\;i\frac{m_{V^{(n)}}m_i}{M_W}s_{in},\\ m_4^{(i)} &= I^n_2\;iM_W s_{in}+I^n_1\;i\frac{m_{V^{(n)}}m_i}{M_W}c_{in}, \end{aligned} \end{equation} where $m_i$ denotes the mass of the zero-mode {\it up-type} fermion and $c_{in}=\cos(\alpha_{in})$ and $s_{in}=\sin(\alpha_{in})$ with $\alpha_{in}$ as defined earlier. And the mass parameters $M_x^{(i,j)}$ are given in the following \cite{Datta:2015aka}: \begin{equation}\label{Mparameters} \begin{aligned} M_1^{(i,j)} &= I^n_1\;m_j c_{in},\\ M_2^{(i,j)} &= I^n_1\;m_j s_{in},\\ M_3^{(i,j)} &= I^n_1\;i\frac{m_{V^{(n)}}m_j}{M_W}c_{in},\\ M_4^{(i,j)} &= I^n_1\;i\frac{m_{V^{(n)}}m_j}{M_W}s_{in}, \end{aligned} \end{equation} where $m_j$ denotes the mass of the zero-mode {\it down-type} fermion. 
In all the Feynman vertices, the factors $I^n_1$ and $I^n_2$ represent the overlap integrals given below \cite{Datta:2015aka}: \begin{equation} I^n_1 = 2\sqrt{\frac{1+\frac{r_V}{\pi R}}{1+\frac{r_f}{\pi R}}}\left[ \frac{1}{\sqrt{1 + \frac{r^2_f m^2_{f^{(n)}}}{4} + \frac{r_f}{\pi R}}}\right]\left[ \frac{1}{\sqrt{1 + \frac{r^2_V m^2_{V^{(n)}}}{4} + \frac{r_V}{\pi R}}}\right]\frac{m^2_{V^{(n)}}}{\left(m^2_{V^{(n)}} - m^2_{f^{(n)}}\right)}\frac{\left(r_{f} - r_{V}\right)}{\pi R}, \label{i1} \end{equation} \begin{equation} I^n_2 = 2\sqrt{\frac{1+\frac{r_V}{\pi R}}{1+\frac{r_f}{\pi R}}}\left[ \frac{1}{\sqrt{1 + \frac{r^2_f m^2_{f^{(n)}}}{4} + \frac{r_f}{\pi R}}}\right]\left[ \frac{1}{\sqrt{1 + \frac{r^2_V m^2_{V^{(n)}}}{4} + \frac{r_V}{\pi R}}}\right]\frac{m_{V^{(n)}}m_{f^{(n)}}}{\left(m^2_{V^{(n)}} - m^2_{f^{(n)}}\right)}\frac{\left(r_{f} - r_{V}\right)}{\pi R}. \label{i2} \end{equation} \end{appendices}
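As a closing aside, the overlap integrals of Eqs.\;\ref{i1} and \ref{i2} are straightforward to evaluate numerically. The following Python sketch implements them exactly as written; the function name and argument conventions are ours. Note that these closed forms apply for $r_f \neq r_V$ (and $m_{f^{(n)}} \neq m_{V^{(n)}}$); the degenerate and vanishing-BLT limits have to be taken separately, with both integrals reducing to unity for $r_f = r_V = 0$ as used in the text.
\begin{verbatim}
import numpy as np

def overlap_integrals(r_f, r_v, m_f, m_v, R):
    """I^n_1 and I^n_2 for KK-level n, valid for r_f != r_v and m_f != m_v.

    r_f, r_v : BLT parameters (GeV^-1); m_f, m_v : KK-masses m_{f^(n)},
    m_{V^(n)} (GeV); R : compactification radius (GeV^-1).
    """
    piR = np.pi * R
    prefactor = 2.0 * np.sqrt((1.0 + r_v / piR) / (1.0 + r_f / piR))
    norm_f = 1.0 / np.sqrt(1.0 + r_f**2 * m_f**2 / 4.0 + r_f / piR)
    norm_v = 1.0 / np.sqrt(1.0 + r_v**2 * m_v**2 / 4.0 + r_v / piR)
    common = (prefactor * norm_f * norm_v
              * (r_f - r_v) / piR / (m_v**2 - m_f**2))
    return common * m_v**2, common * m_v * m_f  # (I^n_1, I^n_2)
\end{verbatim}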
proofpile-arXiv_065-4520
\section*{Appendix} This appendix is formatted as follows. \begin{enumerate} \item We discuss the \textbf{datasets} used in \Cref{app:datasets}. \item \textbf{Implementation details} for our experiments are provided in \Cref{app:implementation}. \item We provide examples of the \textbf{multiplicity of CLUEs} in \Cref{app:multiplicity}. \item We discuss the application of \textbf{uncertainty sensitivity analysis} in high dimensional spaces in \Cref{app:high_dim_sensitivity}. \item We visualize CLUE's \textbf{optimization in the latent space} in \Cref{app:view_latent_space}. \item We compare CLUE to existing \textbf{feature importance} techniques in \Cref{app:feature_importance}. \item We provide additional \textbf{examples} of CLUEs and U-FIDO counterfactuals in \Cref{app:additional_examples}. \item We provide additional \textbf{experimental results} in \Cref{app:more_experiments}. \item We note additional details of our \textbf{computational evaluation} framework for counterfactual explanations of uncertainty in \Cref{app:additional_details_functional}. \item We include more details on the \textbf{setup} of our user studies in \Cref{app:human_experiment_details}. \end{enumerate} \clearpage \section{Datasets}\label{app:datasets} We employ five datasets in our experiments: four tabular and one composed of images. All of them are publicly available. Their details are given in \Cref{tab:appendix_datasets}. \begin{table}[h] \centering \caption{Summary of datasets used in our experiments. (*) We use a 7-feature version of COMPAS; however, other versions exist.} \label{tab:appendix_datasets} \begin{tabular}{@{}cccccc@{}} \toprule Name & Targets & Input Type & N. Inputs & N. Train & N. Test \\ \midrule LSAT & Continuous & Continuous \& Categorical & $4$ & $17432$ & $4358$ \\ COMPAS & Binary & Continuous \& Categorical & $7^{*}$ & $5554$ & $618$ \\ Wine (red) & Continuous & Continuous & $11$ & $1438$ & $160$ \\ Credit & Binary & Continuous \& Categorical & $24$ & $27000$ & $3000$ \\ MNIST & Categorical & Image (greyscale) & $28{\times}28$ & $60000$ & $10000$ \\ \bottomrule \end{tabular} \end{table} We use the LSAT loading script from \citet{Cole2019AvoidingRV}'s GitHub page. The raw data can be downloaded from (\url{https://raw.githubusercontent.com/throwaway20190523/MonotonicFairness/master/data/law_school_cf_test.csv}) and (\url{https://raw.githubusercontent.com/throwaway20190523/MonotonicFairness/master/data/law_school_cf_train.csv}). For the COMPAS criminal recidivism prediction dataset we use a modified version of \citet{zafar2017parity}'s loading and pre-processing script. It can be found at (\url{https://github.com/mbilalzafar/fair-classification/blob/master/disparate_mistreatment/propublica_compas_data_demo/load_compas_data.py}). We add an additional feature, ``days served'', which we compute as the difference, measured in days, between the ``c\_jail\_in'' and ``c\_jail\_out'' variables; a minimal sketch of this step is given below. The raw data is found at (\url{https://github.com/propublica/compas-analysis/blob/master/compas-scores-two-years.csv}). The red wine quality prediction dataset can be obtained from and is described in detail at (\url{https://archive.ics.uci.edu/ml/datasets/wine+quality}). The default of credit card clients dataset, which we refer to as ``Credit'' in this work, can be obtained from and is described in detail at (\url{https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients}). Note that this dataset is different from the also commonly used German credit dataset.
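For concreteness, the ``days served'' construction just described can be sketched in a few lines of pandas. This is an illustration, not the exact loading script; the column names are those of the raw ProPublica CSV, and the URL is the raw-file variant of the link given above.
\begin{verbatim}
import pandas as pd

# Raw ProPublica COMPAS scores (raw-file variant of the URL given above).
url = ("https://raw.githubusercontent.com/propublica/compas-analysis/"
       "master/compas-scores-two-years.csv")
df = pd.read_csv(url, parse_dates=["c_jail_in", "c_jail_out"])

# "days served": jail-out minus jail-in, measured in days.
df["days_served"] = (df["c_jail_out"] - df["c_jail_in"]).dt.days
\end{verbatim}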
The MNIST handwritten digit image dataset can be obtained from (\url{http://yann.lecun.com/exdb/mnist/}). \clearpage \section{Implementation Details}\label{app:implementation} \subsection{Inference in BNNs} We choose a Monte Carlo (MC) based inference approach for our BNNs, as such approaches are not limited to localized approximations of the posterior. Specifically, we make use of scale adapted SG-HMC \citep{BO_BNN}, an approach to stochastic gradient Hamiltonian Monte Carlo with automatic hyperparameter discovery. This technique estimates the mass matrix and the noise introduced by stochasticity in the gradients using exponentially decaying moving average filters during the chain's burn-in phase. We use a fixed step size of $\epsilon=0.01$ and batch sizes of $512$. We set a diagonal zero-mean Gaussian prior $p(\mathbf{w}) = \mathcal{N}(\mathbf{w}; \mathbf{0}, \sigma^{2}_{w}\cdot I)$ over each layer of weights. We place a per-layer conjugate Gamma hyperprior over $\sigma^{2}_{w}$ with parameters $\alpha = \beta = 10$. We periodically update $\sigma^{2}_{w}$ for each layer using Gibbs sampling. On MNIST, we burn in our chain for 25 epochs, using the first 15 to estimate SG-HMC parameters. We re-sample momentum parameters every 10 steps and perform a Gibbs sweep over the prior variances every 45 steps. We save parameter settings every 2 epochs until a total of 300 sets of weights are stored. This makes for a total of 625 epochs. For tabular datasets, we perform a burn-in of 400 epochs, using the first 120 to estimate SG-HMC parameters. We save weight configurations every 20 epochs until a total of 100 sets of weights are saved. This makes for a total of 2500 epochs. Momentum is re-sampled every 10 epochs and the prior over weights is re-sampled every $50$ epochs. We use a batch size of 512 for all datasets. \subsection{Computing Uncertainty Estimates} \label{app:uncert} In this work, we consider NNs which parametrize two types of distributions over target variables: the categorical for classification problems and the Gaussian for regression. For classification, our networks output a probability vector with elements $f_{k}(\mathbf{x}, \mathbf{w})$, corresponding to classes $\{c_{k}\}_{k=1}^{K}$. The likelihood function is $p(y | \mathbf{x}, \mathbf{w}) = Cat(y; f(\mathbf{x}, \mathbf{w}))$. Given a posterior distribution over weights $p(\mathbf{w} | {\cal D})$, we use marginalization \Cref{eq:marginalisation} to translate uncertainty in $\mathbf{w}$ into uncertainty in predictions. Unfortunately, this operation is intractable for BNNs. We resort to approximating the predictive posterior with $M$ MC samples: \begin{align*} p(\mathbf{y}^{*} | \mathbf{x}^{*}, {\cal D}) &= \EX_{p(\mathbf{w} | {\cal D})}[p(\mathbf{y}^{*} | \mathbf{x}^{*}, \mathbf{w})] \\ &\approx \frac{1}{M}\sum^{M}_{m=1} f(\mathbf{x}^{*}, \mathbf{w}); \quad \mathbf{w}\sim p(\mathbf{w} | {\cal D}) . \end{align*} The resulting predictive distribution is categorical. We quantify its uncertainty using entropy: \begin{gather*} H(\mathbf{y}^{*} | \mathbf{x}^{*}, {\cal D}) = -\sum^{K}_{k=1} p(y^{*}{=}c_{k} | \mathbf{x}^{*}, {\cal D}) \log p(y^{*}{=}c_{k} | \mathbf{x}^{*}, {\cal D}) . \end{gather*} This quantity contains aleatoric and epistemic components $(H_{a}, H_{e})$. The former is estimated as: \begin{align*} H_{a} = \EX_{p(\mathbf{w} | {\cal D})}[H(y^{*} | \mathbf{x}^{*}, \mathbf{w})] \approx \frac{1}{M}\sum^{M}_{m=1} H(y^{*} | \mathbf{x}^{*}, \mathbf{w});\quad \mathbf{w} \sim p(\mathbf{w} | {\cal D}) .
\end{align*} The epistemic component can be obtained as the difference between the total and aleatoric entropies. This quantity is also known as the mutual information between $\mathbf{y}^{*}$ and $\mathbf{w}$: \begin{align*} H_{e} = I(\mathbf{y}^{*}, \mathbf{w} | \mathbf{x}^{*}, {\cal D}) = H(y^{*} | \mathbf{x}^{*}, {\cal D}) - \EX_{p(\mathbf{w} | {\cal D})}[H(y^{*} | \mathbf{x}^{*}, \mathbf{w})] . \end{align*} For regression, we employ heteroscedastic likelihood functions. Their mean and variance are parametrized by our NN: $p(\mathbf{y}^{*} | \mathbf{x}^{*}, \mathbf{w}) = \mathcal{N}(\mathbf{y}; f_{\mu}(\mathbf{x}^{*}, \mathbf{w}), f_{\sigma^{2}}(\mathbf{x}^{*}, \mathbf{w}))$. Marginalizing over $\mathbf{w}$ with MC induces a Gaussian mixture distribution over outputs. Its mean is obtained as: \begin{gather*} \boldsymbol{\mu}_{a} \approx \frac{1}{M}\sum^{M}_{m=1} f_{\mu}(\mathbf{x}^{*}, \mathbf{w}); \quad \mathbf{w}\sim p(\mathbf{w} | {\cal D}). \end{gather*} There is no closed-form expression for the entropy of this distribution. Instead, we use the variance of the GMM as an uncertainty metric. It also decomposes into aleatoric and epistemic components $(\sigma^{2}_{a}, \sigma^{2}_{e})$: \begin{align*} \sigma^{2}(\mathbf{y}^{*} | \mathbf{x}^{*}, {\cal D})\,{=}\,\underbrace{\EX_{p(\mathbf{w} | {\cal D})}[\sigma^{2}(\mathbf{y}^{*} | \mathbf{x}^{*}, \mathbf{w})]}_{\sigma^{2}_{a}} + \underbrace{\sigma^{2}_{p(\mathbf{w} | {\cal D})}[\mathbf{\mu}(\mathbf{y}^{*} | \mathbf{x}, \mathbf{w})]}_{\sigma^{2}_{e}} . \end{align*} These are also estimated with MC: \begin{gather*} \sigma^{2}(\mathbf{y}^{*} | \mathbf{x}^{*}, {\cal D}) \approx \underbrace{\frac{1}{M}\sum^{M}_{m=1} \mu(\mathbf{y}^{*} | \mathbf{x}^{*}, \mathbf{w})^{2} - \left(\frac{1}{M}\sum^{M}_{m=1} \mu(\mathbf{y}^{*} | \mathbf{x}^{*}, \mathbf{w})\right)^{2}}_{\sigma^{2}_{e}} + \underbrace{\frac{1}{M}\sum^{M}_{m=1} \sigma^{2}(\mathbf{y}^{*} | \mathbf{x}^{*}, \mathbf{w})}_{\sigma^{2}_{a}};\,\, \mathbf{w} \sim p(\mathbf{w} | {\cal D}) . \end{gather*} Here, $\sigma^{2}_{e}$ reflects model uncertainty - our lack of knowledge about $\mathbf{w}$ - while $\sigma^{2}_{a}$ tells us about the irreducible uncertainty or noise in our training data. In \Cref{fig:SGHMC_moons_decomp}, we show the fit obtained with a BNN with scale adapted SG-HMC on the toy moons dataset. We would like to highlight two key differences with respect to the logistic regression example shown in \Cref{fig:logistic_decomp}. Neural networks are very flexible models. They are capable of perfectly fitting non-linear manifolds, such as moons. In consequence, when these models present \textit{aleatoric uncertainty} it is most often because the inputs do not contain enough information to predict the targets. As little such noise exists in our particular instantiation of moons, our estimates of aleatoric entropy are close to 0. Despite their flexibility, selecting a NN involves adopting some inductive biases \citep{wilson2020bayesian}. Additionally, unlike logistic regression, the weight space posterior of a BNN is very difficult to characterize. Both of these things are reflected in the BNN predictive posterior's \textit{epistemic uncertainty} only growing along the vertical axis, instead of in all directions. % \begin{figure}[h] \vskip -0.1in \begin{center} \centerline{\includegraphics[width=0.85\linewidth]{images/appendix/SGHMC_uncertainty.pdf}} \vskip -0.1in \caption{Left: Training points and BNN predictive distribution obtained on the moons dataset with SG-HMC.
Center: Aleatoric entropy $H_{a}$ expressed by the model matches regions of class overlap. Right: Epistemic entropy $H_{e}$ grows as we move away from the data. } \label{fig:SGHMC_moons_decomp} \end{center} \vskip -0.3in \end{figure} \subsection{Architectures and other Network Hyperparameters} For all datasets, our BNNs are fully connected networks with residual connections. Auxiliary VAEs and VAEACs used for tabular data use fully connected encoders and decoders with residual connections and batch normalization at every layer. For MNIST, we employ 6 convolutional bottleneck residual blocks \citep{he2016deep} for both encoders and decoders. We use the same architecture for the VAEACs used as ground truth generative models in the computationally grounded evaluation framework put forth in \Cref{sec:functional_framework}. Note that the ground truth VAEAC models have slightly larger input spaces due to them modeling inputs and targets jointly. All architectural hyperparameters are provided in \Cref{tab:appendix_architectures}. In order to improve the artificial sample quality of our ``ground truth'' VAEACs, we leverage a two-stage VAE configuration \citep{2-level_VAE}. For all datasets, the lower level VAEs use the standard tabular data VAE architecture described above, with 2 hidden layers. We use 300 hidden units for MNIST and 150 for other datasets. Additional details on our use of two-stage VAEs are provided in \Cref{app:additional_details_functional}. \begin{table}[h] \centering \caption{Network architecture hyperparameters used in all experiments. Depth refers to the number of hidden layers or residual blocks. Latent dimension values marked with a star (*) refer to the second level VAEs for ``ground truth'' VAEACs.} \label{tab:appendix_architectures} \resizebox{\textwidth}{!}{% \begin{tabular}{@{}ccccccc@{}} \toprule Dataset & BNN Depth & BNN Width & VAE / VAEAC Depth & VAE Width & VAEAC Width & VAE / VAEAC Latent Dim \\ \midrule LSAT & 2 & 200 & 3 & 300 & 350 & 4 (*4) \\ COMPAS & 2 & 200 & 3 & 300 & 350 & 4 (*4) \\ Wine & 2 & 200 & 3 & 300 & 350 & 6 (*6) \\ Credit & 2 & 200 & 3 & 300 & 350 & 8 (*8) \\ MNIST & 2 & 1200 & 6 & - & - & 20 (*8) \\ \bottomrule \end{tabular} } \end{table} We train all generative models with the RAdam optimizer \citep{liu2019radam} with a learning rate of $1e^{-4}$ for tabular data and $3e^{-4}$ for MNIST. We found RAdam to yield marginally better results than Adam. We convert categorical inputs to our BNNs into one-hot vectors. When building DGMs, we model continuous inputs with diagonal, unit variance (homoscedastic) Gaussian distributions. This choice makes these models weigh all input dimensions equally, a desirable trait for explanation generation. We place categorical distributions over discrete inputs, expressing them as one-hot vectors. For the LSAT, COMPAS, and Credit datasets, where there are both continuous and discrete features, data likelihood values are obtained as the product of Gaussian likelihoods and categorical likelihoods. During the CLUE optimization procedure, we approximate gradients through one-hot vectors with the softmax function's gradients. This is known as the softmax straight-through estimator \citep{straight_through}; a minimal sketch is given below. It is biased but works well in practice. For MNIST, we model pixels as the probabilities of a product of Bernoulli distributions. We feed these probabilities directly into our BNNs and DGMs.
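The softmax straight-through estimator mentioned above admits a compact implementation. The following PyTorch sketch is illustrative, under the assumption that categorical inputs are represented as one-hot vectors produced from real-valued logits; it is not the exact CLUE code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def softmax_straight_through(logits):
    """Hard one-hot on the forward pass, softmax gradients on the backward."""
    probs = F.softmax(logits, dim=-1)
    index = probs.argmax(dim=-1, keepdim=True)
    one_hot = torch.zeros_like(probs).scatter_(-1, index, 1.0)
    # Forward value equals one_hot; gradients flow through probs only.
    return one_hot + probs - probs.detach()
\end{verbatim}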
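Relatedly, the MC uncertainty estimators of \Cref{app:uncert} reduce to a few tensor operations. The sketch below assumes stacked per-sample network outputs for a single input (one row per posterior weight sample $\mathbf{w}_m$); the function names are ours, not the released implementation's.
\begin{verbatim}
import torch
import torch.nn.functional as F

def classification_uncertainty(logit_samples, eps=1e-12):
    """Entropy decomposition of the classification case, from [M, K] logits."""
    probs = F.softmax(logit_samples, dim=-1)
    mean_probs = probs.mean(dim=0)  # MC predictive posterior
    total = -(mean_probs * mean_probs.clamp_min(eps).log()).sum()
    aleatoric = -(probs * probs.clamp_min(eps).log()).sum(-1).mean()
    return total, aleatoric, total - aleatoric  # epistemic = mutual info

def regression_uncertainty(mu_samples, var_samples):
    """Variance decomposition from [M]-shaped per-sample means and variances."""
    aleatoric = var_samples.mean(dim=0)                # E_w[sigma^2]
    epistemic = mu_samples.var(dim=0, unbiased=False)  # Var_w[mu]
    return aleatoric + epistemic, aleatoric, epistemic
\end{verbatim}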
We normalize all continuously distributed features such that they have 0 mean and unit variance. This normalization facilitates model training and also ensures that all features are weighed equally under CLUE's pairwise distance metric in \Cref{eq:c3_clue_objective}. For MNIST, this normalization is applied to whole images instead of individual pixels. Categorical variables are not normalized. Changing a categorical variable implies changing two bits in the corresponding one-hot vector. This creates the same $\ell_{1}$ regularization penalty as shifting a continuously distributed variable two standard deviations. \subsection{CLUE Hyperparameters}\label{app:CLUE_hyperparams} As mentioned in \Cref{sec:experiments}, in our experiments we only apply CLUE to points that present uncertainty above a rejection threshold. The rejection thresholds used for each dataset are displayed in \Cref{tab:CLUE_hyperparams}. The same table contains the values of $\lambda_{x}$ used in all experiments. In practice we define $\lambda^{'}_{x} = \lambda_{x} \cdot d$, where $d$ is the input space dimensionality of a dataset. This makes the strength of CLUE's pairwise input space distance metric agnostic to dimensionality. We choose a significantly larger value of $\lambda^{'}_{x}$ for MNIST due to there being a large number of pixels that are always black. \begin{table}[h] \centering \caption{Values of CLUE's input space similarity weight $\lambda_{x}$ and uncertainty rejection thresholds used for all experiments. Next to each dataset's name is the type of uncertainty quantified: standard deviation ($\sigma$) or entropy ($H$). We report $\lambda_{x}$ upscaled by each dataset's input dimensionality $d$.} \label{tab:CLUE_hyperparams} \begin{tabular}{@{}cccccc@{}} \toprule Dataset & LSAT ($\sigma$) & COMPAS ($H$) & Wine ($\sigma$) & Credit ($H$) & MNIST ($H$)\\ \midrule $\lambda_{x} \cdot d$ & 1.5 & 2 & 2.5 & 3 & 25 \\ $\mathcal{H}$ threshold & 1 & 0.2 & 2 & 0.5 & 0.5 \\ \bottomrule \end{tabular} \end{table} \section{Multiplicity of CLUEs}\label{app:multiplicity} We exploit the non-convexity of CLUE's objective to generate diverse CLUEs. We initialize CLUE with $\mathbf{z}_{0}\,{=}\,\mu_{\phi}(\mathbf{z} | \mathbf{x}_{0}) + \epsilon$, where $\epsilon\,{\sim}\,\mathcal{N}(\mathbf{0}, \sigma_{0}\mathbf{I})$, and perform \Cref{alg:CLUE_algorithm} multiple times to obtain different CLUEs. We choose $\sigma_{0}\,{=}\,0.15$. In \Cref{fig:MNIST_multiplicity}, we showcase different CLUEs for the same original MNIST inputs. Different counterfactuals represent digits of different classes. Despite this, all explanations resemble the original datapoints being explained. Being exposed to this multiplicity could potentially inform practitioners about similarities of an original input to multiple classes that lead their model to be uncertain. Different initializations lead to CLUEs that explain away different amounts of uncertainty. In a few rare cases CLUE fails: the algorithm does not produce a feature configuration which has significantly lower uncertainty than the original input. This is the case for the third CLUE in the bottom 2 rows of \Cref{fig:MNIST_multiplicity}. We attribute this to a disadvantageous initialization of $\mathbf{z}$. \begin{figure}[htb] \vskip -0in \begin{center} \centerline{\includegraphics[width=1\linewidth]{images/appendix/multiplicity_MNIST.pdf}} \vskip -0.0in \caption{We generate 5 possible CLUEs for 11 MNIST digits that score above the uncertainty rejection threshold.
Below each digit or counterfactual are the predictive entropy $H$ it is assigned and the class of maximum probability $c$.} \label{fig:MNIST_multiplicity} \end{center} \vskip -0.1in \end{figure} In \Cref{fig:COMPAS_multiplicity}, we show multiple CLUEs for a single individual from the COMPAS dataset. In this case, uncertainty can be reduced by changing the individual's prior convictions and charge degree, or by changing their sex and age range. Making both sets of changes simultaneously also reduces uncertainty. \begin{figure}[htb] \vskip -0.1in \begin{center} \centerline{\includegraphics[width=\linewidth]{images/appendix/flat_tabular_multiplicity.pdf}} \vskip -0in \caption{The leftmost entry is an uncertain COMPAS test sample. To its right are four candidate CLUEs. The first three successfully reduce uncertainty past our rejection threshold, while the rightmost does not.} \label{fig:COMPAS_multiplicity} \end{center} \vskip -0.2in \end{figure} \section{Sensitivity Analysis in High Dimensional Spaces}\label{app:high_dim_sensitivity} In high-dimensional input spaces, $\nabla_{\mathbf{x}} \mathcal{H}$ will often not point in the direction of the data manifold. This can result in meaningless explanations. In \Cref{fig:app_sensitivity_fail}, we show an example where a step in the direction of $-\nabla_{\mathbf{x}} \mathcal{H}$ leads to a seemingly noisy input configuration for which the predictive entropy is low. An ``adversarial example for uncertainty'' is generated. Aggregating these steps for every point in the test set leads to an uncertainty sensitivity analysis explanation that resembles white noise. \begin{figure}[htb] \vskip -0.05in \begin{center} \centerline{\includegraphics[width=0.8\linewidth]{images/appendix/sensitivity_fail.pdf}} \vskip -0.02in \caption{Left: A digit from the MNIST test set with large predictive entropy. Center: The same digit after a step is taken in the direction of $-\nabla_{\mathbf{x}} \mathcal{H}$. Non-zero weight is assigned to pixels that are always zero valued. Right: Uncertainty sensitivity analysis for the entire MNIST test set.} \label{fig:app_sensitivity_fail} \end{center} \vskip -0.25in \end{figure} \section{Visualizing Optimization in Latent Space}\label{app:view_latent_space} \Cref{fig:2d_CREDIT_CLUE} shows a two-dimensional latent space trajectory from $\mathbf{z}_{0}$ to $\mathbf{z}_{CLUE}$ for a test point from the Credit dataset. In practice, we use larger latent spaces to ensure CLUEs are relevant.% \begin{figure}[htb] \vskip -0.05in \begin{center} \centerline{\includegraphics[width=0.75\linewidth]{images/appendix/latent_space_visualization/CREDIT_latent_space_entropy.pdf}} \vskip -0.15in \caption{Left: CLUE latent trajectory for a test point from the Credit dataset in a two-dimensional latent space. The blue dot marks the start of the trajectory and the orange one marks the end. Uncertainty levels are displayed in greyscale. Right: Changes in aleatoric entropy for inputs regenerated from latent codes along the trajectory.} \label{fig:2d_CREDIT_CLUE} \end{center} \vskip -0.2in \end{figure} \clearpage \section{Comparing CLUE to Feature Importance Estimators}\label{app:feature_importance} Among machine learning practitioners, two of the most popular approaches for determining feature importance from black-box models are LIME and SHAP \cite{bhatt2019explainable}. LIME locally approximates the black-box model of interest around a specific test point with a surrogate linear model \citep{LIME_interpretability}.
This surrogate is trained on points sampled from the vicinity of the input of interest. The surrogate model's weights for each class can be interpreted as each feature's contribution towards the prediction of said class. Kernel SHAP extends LIME by introducing a kernel such that the resulting explanations have desirable properties \citep{shap}. For SHAP, a reference input is chosen, and importance is only assigned where the inputs differ from the reference. For MNIST, the reference is an entirely black image. Note that alternative versions of SHAP exist that incorporate information about internal NN dynamics into their explanations. However, they produce very noisy explanations when applied to our BNNs. We conjecture that this high variance might be induced by disagreement among the multiple weight configurations from our BNNs. \begin{figure}[htb] \vskip -0.0in \begin{center} \centerline{\includegraphics[width=0.7\linewidth]{images/appendix/appendix_certain_shap_lime.pdf}} \caption{High confidence MNIST test examples together with LIME and SHAP explanations for the top 3 predicted classes. The model being investigated is a BNN with the architecture described in \Cref{app:implementation}. The highest probability class is denoted by $\hat{y}$.} \label{fig:feature_importance_certain} \end{center} \vskip -0.1in \end{figure} \Cref{fig:feature_importance_certain} shows examples of LIME and Kernel SHAP being applied to a BNN for high confidence MNIST test digits. We use the default LIME hyperparameters for MNIST: the ``quickshift'' segmentation algorithm with kernel size 1, maximum distance 5 and a ratio of 0.2. We plot the top 10 segments with weight greater than 0.01. We draw 1000 samples with both methods. Using the same configuration, we generate LIME and SHAP explanations for some MNIST digits to which our BNN assigns predictive entropy above our rejection threshold. The results are displayed in \Cref{fig:feature_importance_uncertain}. \begin{figure}[htb] \vskip -0.0in \begin{center} \centerline{\includegraphics[width=0.9\linewidth]{images/appendix/appendix_uncertain_shap_lime.pdf}} \caption{Ten MNIST test digits for which our BNN's predictive entropy is above the rejection threshold. A single CLUE example is provided for each one. For each digit, the top scoring class is denoted by $\hat{y}$. LIME and SHAP explanations are provided for the three most likely classes.} \label{fig:feature_importance_uncertain} \end{center} \vskip -0.1in \end{figure} A positive CLUE attribution means that the addition of that feature will make our model more certain. A positive feature importance attribution means the presence of that feature serves as evidence towards a predicted class. A negative CLUE attribution means that the absence of that feature will make the model more certain. A negative feature importance attribution means the absence of that feature would serve as evidence for a particular prediction. While CLUE and feature importance techniques solve similar problems and both provide saliency maps, CLUE highlights regions that need to be added or removed to make the input certain to a predictive model. In some cases, we see that feature importance negative attribution aligns with CLUE negative attribution, suggesting the features which negatively contribute to the model's predicted probability are the features that need to be removed to increase the model's certainty. CLUE's ability to suggest the addition of unobserved features (positive CLUE attribution) is unique.
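For reference, the LIME configuration described above can be reproduced with the \texttt{lime} package roughly as follows. This is a sketch based on the package's public API as we understand it; \texttt{predict\_fn} (a function mapping a batch of images to MC-averaged class probabilities) and \texttt{image} are assumed to be available.
\begin{verbatim}
import numpy as np
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm

explainer = lime_image.LimeImageExplainer()
segmenter = SegmentationAlgorithm("quickshift", kernel_size=1,
                                  max_dist=5, ratio=0.2)
explanation = explainer.explain_instance(
    image.astype(np.double),  # greyscale digits are broadcast to RGB by lime
    predict_fn,               # assumed: batch of images -> class probabilities
    top_labels=3,
    num_samples=1000,
    segmentation_fn=segmenter)
\end{verbatim}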
The feature importance methods under consideration are difficult to retrofit for uncertainty. They are unable to add features; they are limited to explaining the contribution of existing features. This may suffice if our input contains all the information needed to make a prediction for a certain class but otherwise results in noisy, potentially meaningless, explanations. Generative-model-based methods are counterfactual because they do not assign importance to the observed features but rather propose alternative features based on the data manifold \cite{duvenaud_counterfactual}. This is the case for FIDO and CLUE. Generative modeling allows for increased flexibility, which is required when dealing with uncertain inputs. Quantitatively contrasting feature importance and uncertainty explanations under existing evaluation criteria \cite{bhatt2020evaluating} is an interesting direction for future work. Methods like LIME and SHAP require a choice of class to produce explanations. This complicates their use in scenarios where our model is uncertain and multiple classes have similarly high predictive probability. On the other hand, CLUEs are class-agnostic. \section{Additional CLUE and U-FIDO Examples}\label{app:additional_examples} We provide additional examples of CLUEs generated for high uncertainty MNIST digits in \Cref{fig:appendix_additional_CLUE}. U-FIDO counterfactuals generated for the same inputs are shown in \Cref{fig:appendix_additional_UFIDO}. Both methods often attribute importance to the same features. However, in almost all cases, CLUE is able to reduce the original input's uncertainty significantly more than U-FIDO. The latter method suggests smaller changes. We attribute this to U-FIDO's input masking mechanism being less flexible than CLUE's latent space generation mechanism. \begin{figure}[htb] \vskip -0.1in \begin{center} \centerline{\includegraphics[width=0.75\linewidth]{images/appendix/appendix_examples_MNIST.pdf}} \vskip -0.0in \caption{CLUEs generated for MNIST digits for which our BNN's predictive entropy is above the rejection threshold. The BNN's predictive entropy for both original inputs and CLUEs is shown under the corresponding images.} \label{fig:appendix_additional_CLUE} \end{center} \vskip -0.1in \end{figure} \begin{figure}[htb] \vskip -0.1in \begin{center} \centerline{\includegraphics[width=0.75\linewidth]{images/appendix/appendix_examples_MNIST_FIDO.pdf}} \vskip -0.02in \caption{U-FIDO counterfactuals generated for MNIST digits for which our BNN's predictive entropy is above the rejection threshold. The BNN's predictive entropy for both original inputs and counterfactuals is shown under the corresponding images.} \label{fig:appendix_additional_UFIDO} \end{center} \vskip -0.1in \end{figure} \clearpage \section{Additional Experimental Results}\label{app:more_experiments} \subsection{Ablation Experiments}\label{app:more_ablation} In this subsection, we modify some of CLUE's components individually and observe the effects on the procedure's results. \textbf{Initialization Strategy:} \Cref{fig:app_CLUE_init} compares \Cref{alg:CLUE_algorithm}'s encoder-based initialization $\mathbf{z}_{0}\,=\,\mu_{\phi}(\mathbf{z} | \mathbf{x}_{0})$ with $\mathbf{z}_{0}\,{=}\,\mathbf{0}$ on all datasets under consideration. For the LSAT, COMPAS and Wine datasets, both approaches produce indistinguishable results.
On Credit, our second-highest-dimensional dataset, using an encoder-based initialization allows CLUEs to stay slightly closer to original inputs in terms of $\ell_{1}$ distance. The difference between both approaches is largest on MNIST. We conjecture that this might be due to the higher-dimensional nature of the latent space used with this dataset, which makes optimization more difficult. By initializing $\mathbf{z}$ as the VAE encoder's mean, our optimizer starts near a local minimum of $d(\mathbf{x}, \mathbf{x}_{0})$ and potentially of $\mathcal{L}(\mathbf{z})$. When \Cref{alg:CLUE_algorithm} is applied, the magnitude of $\nabla_{\mathbf{z}} H$ might not be large enough to escape this basin of attraction. Thus, CLUE tends to leave most input features unchanged, only addressing those with the most potential to reduce uncertainty. This is also desirable behavior for low-uncertainty inputs; the closest low-uncertainty sample is the input itself. \begin{figure*}[h] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/no_encoder/no_encoder_compas.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/no_encoder/no_encoder_default_credit.pdf} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/no_encoder/no_encoder_wine.pdf} \end{subfigure} \quad \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/no_encoder/no_encoder_lsat.pdf} \end{subfigure} \vskip\baselineskip \caption{Initialization strategy experiment results for all datasets under consideration. The horizontal blue line on each colorbar denotes the dataset's rejection threshold.} \label{fig:app_CLUE_init} \end{figure*} \clearpage \textbf{Capacity of CLUE's DGM:} To capture our predictive model's reasoning, CLUE's DGM must be flexible enough to preserve atypical features in the inputs. As shown in \Cref{fig:app_VAE_capacity}, reconstructions from low-capacity VAEs do not preserve the predictive uncertainty of original inputs. The CLUEs generated from these DGMs either leave the inputs unchanged or present large values of $\Delta\mathbf{x}$ while barely reducing $H$: these degenerate CLUEs simply emphasize regions of large reconstruction error. As our DGM's capacity increases, so does the amount of uncertainty preserved in the auto-encoding operation. The amount of predictive uncertainty explained by CLUEs, which is given by the difference between the autoencoded input uncertainty (orange bars) and CLUE uncertainty (blue bars), increases. We see a clear relationship between dataset dimensionality and the size of the latent space needed for CLUE to be effective.
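This preservation property can be checked directly, as in the minimal sketch below; \texttt{vae\_encode}, \texttt{vae\_decode} and \texttt{bnn\_predict} are hypothetical handles standing in for the trained models. A sufficiently flexible DGM should leave the predictive entropy of uncertain inputs largely unchanged after auto-encoding.
\begin{verbatim}
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    # Entropy of the (Monte Carlo-averaged) predictive distribution;
    # probs has shape (batch, classes).
    return -np.sum(probs * np.log(probs + eps), axis=1)

def check_uncertainty_preservation(x, vae_encode, vae_decode, bnn_predict):
    # Compare H(y | x) against H(y | autoencoded x). A low-capacity VAE
    # smooths away atypical input features, so the reconstruction's
    # entropy falls well below that of the original uncertain input.
    h_original = predictive_entropy(bnn_predict(x))
    x_recon = vae_decode(vae_encode(x))
    h_recon = predictive_entropy(bnn_predict(x_recon))
    return h_original, h_recon
\end{verbatim}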
\begin{figure*}[h] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/vae_scan/vae_scan_compas.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/vae_scan/vae_scan_default_credit.pdf} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/vae_scan/vae_scan_wine.pdf} \end{subfigure} \quad \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/vae_scan/vae_scan_lsat.pdf} \end{subfigure} \vskip\baselineskip \caption{Amount of uncertainty explained away and $\ell_{1}$ distance between original inputs and CLUEs for every dataset under consideration and VAEs of different capacities.} \label{fig:app_VAE_capacity} \end{figure*} \clearpage \textbf{Output Space Regularization Parameter $\mathbf{\lambda_{y}}$:} In \Cref{fig:output_space_regularisation}, we show how increasing $\lambda_{y}$ reduces the proportion of samples for which the predicted class differs between original inputs and CLUEs. Interestingly, on LSAT, Wine and COMPAS, a small, but non-zero, value of $\lambda_{y}$ results in more uncertainty being explained away by CLUE. However, strongly enforcing similarity of predictions generally comes at the cost of smaller amounts of uncertainty being explained away. COMPAS predictions stay the same for all values of $\lambda_{y}$. Class predictions depend on only two of this dataset's input features (Age and Previous Convictions) \citep{COMPAS_2feature_citation}. We find that the remaining features can increase or reduce confidence in the prediction given by the two key features, but never change it. CLUEs only change non-key features, reinforcing the current classification. On MNIST, we find that, for certain values of $\lambda_{y}$, classifying CLUEs results in a lower error rate than classifying original inputs. This is shown in \Cref{fig:MNIST_less_error}. We did not observe this effect for other datasets. \begin{figure*}[h] \centering \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/pred_change/pred_change_compas.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/pred_change/pred_change_default_credit.pdf} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/pred_change/pred_change_wine.pdf} \end{subfigure} \quad \begin{subfigure}[b]{0.475\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/pred_change/pred_change_lsat.pdf} \end{subfigure} \caption{CLUE $\Delta \mathcal{H}$ vs prediction change for all datasets under consideration. Prediction change refers to the proportion of CLUEs classified differently than their corresponding original inputs. All values shown are averages across all test-set points above the uncertainty rejection threshold.} \label{fig:output_space_regularisation} \end{figure*} \begin{figure}[htb] \vskip -0.0in \begin{center} \centerline{\includegraphics[width=0.92\linewidth]{images/appendix/pred_change/pred_change_correct_MNIST.pdf}} \vskip -0.02in \caption{Left: Prediction change refers to the proportion of CLUEs classified differently than their corresponding original inputs.
Setting $\lambda_{y}$ to around $0.7$ results in class predictions for CLUEs being closer to the true labels than the original class predictions. Right: Reduction in predictive entropy achieved by CLUE. All values shown are averages across all test-set points above the uncertainty rejection threshold.} \label{fig:MNIST_less_error} \end{center} \vskip -0.1in \end{figure} \textbf{Applying CLUE to non-Bayesian NNs:} These models are unable to capture model uncertainty. We train deterministic NNs on every dataset under consideration using the architectures described in \Cref{app:CLUE_hyperparams}. We generate counterfactuals for their noise uncertainty. As shown in \Cref{fig:regular_NN_CLUEs}, CLUE is effective at explaining away noise uncertainty for regular NNs. More uncertain inputs are subject to larger changes in terms of $\ell_{1}$ distance. \begin{figure*}[h] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/regular_NN/regular_NN_compas.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/regular_NN/regular_NN_default_credit.pdf} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/regular_NN/regular_NN_wine.pdf} \end{subfigure} \quad \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{images/appendix/regular_NN/regular_NN_lsat.pdf} \end{subfigure} \vskip\baselineskip \caption{Amount of noise uncertainty explained away vs $\ell_{1}$ shift in input space for all datasets under consideration when applying CLUE to regular NNs. The colorbar indicates the original samples' predictive uncertainty.} \label{fig:regular_NN_CLUEs} \end{figure*} \clearpage \subsection{Verifying Results from our Computational Evaluation Framework}\label{app:verifying_computational} Our computational evaluation framework relies on generating artificial data. There is reasonable concern that the characteristics of this data may not reflect those of real-world data, biasing our results. As explained in \Cref{app:additional_details_functional}, we are careful to use powerful g.t. DGM models that generate high-quality artificial data. Be that as it may, we validate the results from our computational evaluation framework by performing an analogous \textit{informativeness} vs \textit{relevance} experiment on real data. As we do not have access to the generative process of the real data, we cannot exactly quantify the uncertainty of inputs or how in-distribution they are. We instead resort to quantifying \textit{informativeness} as the amount of our BNN's predictive uncertainty explained away, $\Delta \mathcal{H} = \mathcal{H}(\mathbf{y}|\mathbf{x}_{0}) - \mathcal{H}(\mathbf{y}|\mathbf{x}_{c})$. We measure \textit{relevance} as the $L_{2}$ distance of each counterfactual to its nearest neighbor within the train set, $d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$. We report mean values of $\Delta \mathcal{H}$, $d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$ and $\frac{\Delta \mathcal{H}}{d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})}$ across all uncertain test points. A test point is deemed to be uncertain according to the criteria outlined in \Cref{app:CLUE_hyperparams}. The hyperparameters employed for CLUE also match those provided in \Cref{app:CLUE_hyperparams}.
The step size used with local sensitivity analysis $\eta$ and U-FIDO's $\lambda_{b}$ parameter are found via grid search with a methodology analogous to the one described in \Cref{sec:functional_evaluation}. \begin{table}[h] \centering \caption{Quantities of \textit{informativeness} ($\Delta \mathcal{H}$, higher is better), \textit{relevance} ($d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$, lower is better) and their ratio ($\frac{\Delta \mathcal{H}}{d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})}$, higher is better) obtained on real data from the LSAT, COMPAS and Wine datasets. The numbers in parentheses indicate dataset dimensionality.} \label{tab:real_data_1} \resizebox{\textwidth}{!}{ \begin{tabular}{@{}c|ccc|ccc|ccc@{}} \toprule Method & LSAT (4) & & & COMPAS (7) & & & Wine (11) & & \\ & $d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$ & $\Delta \mathcal{H}$ & $\nicefrac{\Delta \mathcal{H}}{d_{NN{-}2}}$ & $d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$ & $\Delta \mathcal{H}$ & $\nicefrac{\Delta \mathcal{H}}{d_{NN{-}2}}$ & $d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$ & $\Delta \mathcal{H}$ & $\nicefrac{\Delta \mathcal{H}}{d_{NN{-}2}}$ \\ \cmidrule(l){2-10} Sensitivity & 0.482 & 0.003 & 0.0192 & 7.975 & 0.265 & 0.033 & 3.317 & 0.481 & 0.154 \\ CLUE & 0.080 & 0.092 & 1.664 & 0.067 & 0.014 & 0.737 & 1.274 & 1.409 & 1.188 \\ U-FIDO & 0.085 & 0.077 & 0.969 & 0.084 & 0.022 & 0.627 & 1.223 & 1.307 & 1.241 \\ \bottomrule \end{tabular} } \end{table} \begin{table}[h] \centering \caption{Quantities of \textit{informativeness} ($\Delta \mathcal{H}$, higher is better), \textit{relevance} ($d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$, lower is better) and their ratio ($\frac{\Delta \mathcal{H}}{d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})}$, higher is better) obtained on real data from the Credit and MNIST datasets. The numbers in parentheses indicate dataset dimensionality.} \label{tab:real_data_2} \resizebox{0.73\textwidth}{!}{ \begin{tabular}{@{}c|ccc|ccc@{}} \toprule Method & Credit (23) & & & MNIST (784) & & \\ & $d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$ & $\Delta \mathcal{H}$ & $\nicefrac{\Delta \mathcal{H}}{d_{NN{-}2}}$ & $d_{NN{-}2}(\mathbf{x}_{c}, {\cal D})$ & $\Delta \mathcal{H}$ & $\nicefrac{\Delta \mathcal{H}}{d_{NN{-}2}}$ \\ \cmidrule(l){2-7} Sensitivity & 0.770 & 0.224 & 0.121 & 6.903 & 0.601 & 0.087 \\ CLUE & 1.025 & 0.147 & 0.293 & 4.374 & 0.628 & 0.153 \\ U-FIDO & 1.863 & 0.017 & 0.052 & 4.887 & 0.409 & 0.088 \\ \bottomrule \end{tabular} } \end{table} As shown in \Cref{tab:real_data_1} and \Cref{tab:real_data_2}, CLUE outperforms U-FIDO in terms of $d_{NN{-}2}$ on all datasets except Wine, where both approaches are very similar. The same is true for the ratio $\frac{\Delta \mathcal{H}}{d_{NN{-}2}}$. As in the artificial data experiments from \Cref{sec:functional_evaluation}, the difference between both methods is most stark for high-dimensional datasets (MNIST and Credit). Here, CLUE is able to explain away more uncertainty while providing counterfactuals that are similarly close to the training data. Sensitivity is able to greatly reduce uncertainty in high dimensions. However, this comes at the cost of going off the data manifold. In low dimensions, there are fewer possible directions in which steps can be taken, rendering the direct gradient-based approach less powerful. We note that the similarity of these results to the ones obtained in the analogous experiments from \Cref{sec:functional_evaluation} suggests the unbiasedness of our computational evaluation framework.
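The three quantities reported above reduce to a few lines of NumPy. The sketch below assumes arrays of predictive entropies and inputs are already available; all names are illustrative placeholders rather than part of our released code.
\begin{verbatim}
import numpy as np

def nn_l2_distance(counterfactuals, train_X):
    # d_NN-2: L2 distance from each counterfactual to its nearest
    # neighbor in the train set. Full broadcasting is used for clarity;
    # a KD-tree or batching would scale better to large datasets.
    dists = np.linalg.norm(
        counterfactuals[:, None, :] - train_X[None, :, :], axis=-1)
    return dists.min(axis=1)

def real_data_metrics(h_original, h_counterfactual, counterfactuals, train_X):
    delta_h = h_original - h_counterfactual           # informativeness
    d_nn = nn_l2_distance(counterfactuals, train_X)   # relevance
    return delta_h.mean(), d_nn.mean(), (delta_h / d_nn).mean()
\end{verbatim}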
\subsection{Additional Analysis of User Study} \label{add_user} While the main text showed the mean accuracy of CLUE over all tabular questions, we also consider the breakdown of accuracy by dataset and by test point certainty in~\Cref{tab:human_subject_acc}. CLUE outperforms all baselines on both datasets. We find that Local Sensitivity does significantly worse in higher dimensions (on COMPAS), lending further credence to the intuition described in~\Cref{app:high_dim_sensitivity}. When splitting by the certainty of test points, we immediately notice that accuracy for uncertain test points is quite high for all methods. This similarity is expected since certain context points are the only factor that varies between each method's survey. Survey participants seemed not to use the certain context points to identify uncertain test points. This is probably due to the pilot procedure, wherein Participant A carefully paired test points with relevant uncertain context points. Indeed, the random baseline, which controls for the possibility that our task can be solved without access to a relevant counterfactual, performs best on uncertain test points. However, we note a large difference between methods' results when identifying certain test points. CLUE's accuracy is almost double that of the second-best method (\textit{Human CLUE}). When generating \textit{Human CLUE}s, Participant B had knowledge of the uncertain context point, but not the test point (just like other methods). For this reason, we expect to see dissimilarity in methods' performance on certain test points. CLUE's ability to bring about the most relevant contrast is one possible explanation for why it does so much better than baselines for certain test points. \begin{table}[htb] \centering \caption{Accuracy ($\%$) of participants on the Tabular main survey broken down by dataset and by certainty of test points.} \label{tab:human_subject_acc} \begin{tabular}{c|c|cc|cc} \toprule & Combined & LSAT & COMPAS & Certain Test & Uncertain Test\\ \midrule CLUE & $\mathbf{82.22}$ & $\mathbf{83.33}$ & $\mathbf{81.11}$ & $\mathbf{71.00}$ & $96.25$\\ \textit{Human CLUE} & $62.22$ & $61.11$ & $63.33$ & $38.00$ & $92.50$ \\ Random & $61.67$ & $62.22$ & $61.11$ & $31.00$ & $\mathbf{100}$ \\ Local Sensitivity & $52.78$ & $56.67$ & $48.89$ & $20.90$ & $92.50$ \\ \bottomrule \end{tabular} \vspace{2px} \end{table} \section{Additional Details on the Generative Model used in the Proposed Computationally Grounded Evaluation Framework}\label{app:additional_details_functional} The framework described in \Cref{fig:functional_framework} uses a conditional DGM, specifically a VAEAC \citep{VAEAC}, to both generate artificial data and to evaluate explanations for said data. VAEs are known for generating blurry or overly smoothed data. For our evaluation framework to work well, we require the ground truth DGM to generate sharp data, with atypical characteristics that would lead to a predictor being uncertain. We can ensure that this is the case by using a large latent dimensionality. However, this brings forth another well-known issue with VAEs: distribution mismatch \citep{2-level_VAE,disentangling_natural_clustering,VAE_distribution_match}. The region of latent space where the encoder places probability mass, also known as the aggregate posterior, \begin{align*} q_{\phi}(\mathbf{z}) = \int q_{\phi}(\mathbf{z} | \mathbf{x}) p(\mathbf{x})\,d\mathbf{x} \end{align*} does not match the prior $p(\mathbf{z})$.
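The aggregate posterior can be probed empirically with a simple Monte Carlo sketch: encode the data, draw one or more samples from each approximate posterior, and compare the pooled samples' statistics against the $\mathcal{N}(\mathbf{0}, I)$ prior. The encoder handles below are hypothetical stand-ins for the VAE encoder's mean and standard deviation outputs.
\begin{verbatim}
import numpy as np

def sample_aggregate_posterior(X, encode_mu, encode_sigma, n_per_point=1):
    # Monte Carlo draws from q(z) = E_{p(x)}[q(z | x)]: one or more
    # posterior samples per data point, pooled together.
    mu, sigma = encode_mu(X), encode_sigma(X)  # each (N, latent_dim)
    eps = np.random.randn(n_per_point, *mu.shape)
    return (mu[None] + sigma[None] * eps).reshape(-1, mu.shape[-1])

# Under a well-matched N(0, I) prior, the pooled samples should have
# near-zero mean and near-identity covariance:
# z = sample_aggregate_posterior(X_train, encode_mu, encode_sigma)
# print(z.mean(axis=0), np.cov(z.T))
\end{verbatim}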
\begin{figure}[h] \vskip -0.08in \begin{center} \centerline{\includegraphics[width=0.6\linewidth]{images/appendix/latent_space_visualization/latent_space_entropy.pdf}} \vskip -0.05in \caption{Predictive entropy estimates for artificial MNIST digits generated from a 2-dimensional VAE latent space. The MNIST test set digits have been projected onto the latent space and are displayed with a different color per class.} \label{fig:MNIST_latent_entropy} \end{center} \vskip -0.25in \end{figure} To visualize this phenomenon, we train a BNN and a VAE on MNIST. We sample points from the VAE's latent space and evaluate their uncertainty with the BNN. As shown in \Cref{fig:MNIST_latent_entropy}, clusters of same-class digits form in latent space. The aggregate posterior presents low density in the spaces between clusters. Digits generated from these areas are of low quality, causing our BNN to be uncertain. The outer regions of latent space, where the isotropic Gaussian prior has low density, also generate uncertain digits. Recently, \citet{2-level_VAE} have proposed the two-level VAE as a solution to distribution mismatch. After training a standard VAE, a second VAE is trained on samples from the first VAE's latent space. As illustrated in \Cref{fig:digit_to_z_to_u}, the aggregate posterior over the inner latent variables, which we denote by $q(\mathbf{u})$, more closely resembles the prior. The joint distribution over inputs and latent variables factorizes as: $p(\mathbf{x}, \mathbf{z}, \mathbf{u}) = p(\mathbf{x} | \mathbf{z}) p(\mathbf{z} | \mathbf{u}) p(\mathbf{u})$. We refer the reader to \citet{2-level_VAE} for a detailed analysis. \Cref{fig:u_vs_z_samples} shows that, while generating digits from samples of $p(\mathbf{z})$ results in a large number of low-quality or OOD reconstructions, samples from $p(\mathbf{u})$ map to clean digits. The two-stage mechanism restores the VAE's pivotal ancestral sampling capability, ensuring that our experiments with artificial data will be representative of methods' performance on real data. \begin{figure}[htb] \vskip -0.1in \begin{center} \centerline{\includegraphics[width=0.85\linewidth]{images/appendix/latent_space_visualization/under_vae_paint.png}} \vskip -0.02in \caption{In its first stage, the two-level VAE maps input samples to approximate posteriors in the outer latent space. The aggregate posterior over this latent space need not resemble the isotropic Gaussian prior. The second VAE maps samples from the outer latent space to approximate posteriors in the inner latent space. The aggregate posterior over the inner latent space more closely matches the prior.} \label{fig:digit_to_z_to_u} \end{center} \vskip -0.1in \end{figure} \begin{figure}[htb] \vskip -0.1in \begin{center} \centerline{\includegraphics[width=0.75\linewidth]{images/appendix/latent_space_visualization/u_z_generations.pdf}} \vskip -0.0in \caption{Left: Digits generated from the inner latent space of a VAEAC trained on MNIST with a two-level mechanism. Right: Digits generated from the latent space of a VAEAC trained on MNIST. $\mathbf{u}$ and $\mathbf{z}$ are drawn from $\mathcal{N}(\mathbf{0}, I)$.} \label{fig:u_vs_z_samples} \end{center} \vskip -0.3in \end{figure} In order to generate artificial data, we draw samples from the auxiliary latent space, map them back to the VAEAC's latent space and then map them to the input space. This allows for high-quality sample generation. In this way, a single VAEAC can be used for both ancestral sampling and conditional sampling.
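As a sketch, two-level ancestral sampling amounts to two chained decoder calls; \texttt{decode\_inner} and \texttt{decode\_outer} are hypothetical handles for $\mu_{\theta_{2}}(\mathbf{z} | \mathbf{u})$ and $\mu_{\theta_{1}}(\mathbf{x} | \mathbf{z})$.
\begin{verbatim}
import numpy as np

def sample_two_level(decode_inner, decode_outer, n_samples, u_dim):
    # Two-level ancestral sampling: draw u from the inner prior, where
    # the aggregate posterior matches N(0, I), then chain both decoders.
    u = np.random.randn(n_samples, u_dim)  # u ~ N(0, I)
    z = decode_inner(u)                    # z = mu_theta2(z | u)
    x = decode_outer(z)                    # x = mu_theta1(x | z)
    return x
\end{verbatim}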
In addition, it allows us to estimate the log-likelihood of inputs as: \begin{gather}\label{eq:VAEAC_density} \log p_{gt}(\mathbf{x}) = \log \int p_{\theta_{1}}(\mathbf{x} | \mathbf{z}) p_{\theta_{2}}(\mathbf{z} | \mathbf{u}) p(\mathbf{u})\,d\mathbf{z}\,d\mathbf{u} \end{gather} In \Cref{eq:VAEAC_density}, parameter subscripts refer to the outer (\nth{1} level) and inner (\nth{2} level) networks. In order to preserve computational tractability, we approximate $p_{\theta_{2}}(\mathbf{z} | \mathbf{u})$ with a point estimate placed at its mean: $p_{\theta_{2}}(\mathbf{z} | \mathbf{u}) \approx \delta(\mathbf{z} - \mu_{\theta_{2}}(\mathbf{z} | \mathbf{u}))$. We further approximate \Cref{eq:VAEAC_density} with importance sampling: \begin{align}\label{eq:c1_VAEAC_density_approx} \log p_{gt}(\mathbf{x}) \approx \log \frac{1}{K} \sum^{K}_{k=1} \frac{p_{\theta_{1}}(\mathbf{x} | \mathbf{z}{=}\mu_{\theta_{2}}(\mathbf{z} | \mathbf{u}_{k})) p(\mathbf{u}_{k})} {q(\mathbf{u}_{k} | \mathbf{x})}; \quad \mathbf{u}_{k} \sim q(\mathbf{u} | \mathbf{x}) \end{align} \subsection{Comparison of Methods under a Ground Truth DGM} The two-level VAEAC setup described above partially addresses the concern that our synthetic data might not be diverse enough to highlight differences among the methods being compared. Indeed, our results from \Cref{table:dx_kneepoint} and \Cref{tab:px_kneepoint} show noticeable differences in performance across methods. We now address the opposite concern: methods that leverage auxiliary VAEs might be unfairly advantaged under our functionally grounded framework, as the generative process of our synthetic data is also VAE-based. Because VAEs are very flexible neural-network-based generative models, using them as a ground truth provides relatively little inductive bias for auxiliary DGMs to take advantage of. Additionally, our ground truth VAEAC captures the joint distribution of inputs and targets. The metric of interest, $\Delta \mathcal{H}_{\mathrm{gt}}$, only depends on the conditional distribution over targets $p_{gt}(\mathbf{y}|\mathbf{x})$. Our auxiliary DGMs only model inputs. Two of the methods we evaluate, U-FIDO and CLUE, leverage auxiliary DGMs. Thus, both would be equally advantaged. The $\Delta \mathcal{H}_{\mathrm{gt}}$ vs $\Delta \log p_{\mathrm{gt}}$ metric from \Cref{tab:px_kneepoint} is the most dependent on the ground truth VAEAC. However, we observe the largest difference between CLUE and U-FIDO in this metric. \section{Details on User Study}\label{app:human_experiment_details} \subsection{Additional Details on Tabular User Study} For our pilot point selection procedure, we take points from each dataset's test set that score above the uncertainty rejection thresholds described in \Cref{app:CLUE_hyperparams} as uncertain points. Points below the thresholds are labeled as certain points. Pilot procedure participants, referred to as Participant A in the main text, were not informed that the pools were split up by the points' certainty with respect to the BNN being explained.
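The pool construction itself is simple thresholding of the BNN's predictive entropy, as sketched below with placeholder names; threshold values are dataset-specific, per \Cref{app:CLUE_hyperparams}.
\begin{verbatim}
import numpy as np

def split_pools(X_test, entropies, threshold):
    # Points whose predictive entropy exceeds the dataset-specific
    # rejection threshold form the uncertain pool; the rest are certain.
    uncertain_mask = entropies > threshold
    return X_test[~uncertain_mask], X_test[uncertain_mask]
\end{verbatim}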
\begin{figure*}[htb] \vspace{-0.05in} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=.98\linewidth]{images/appendix/human_subj/consent_form.pdf} \caption{Consent Form for the Tabular Main Survey} \label{fig:consent_form} \end{subfigure} ~ \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\linewidth]{images/appendix/human_subj/attention_check.pdf} \caption{Attention Check for the Tabular Main Survey} \label{fig:attention_check} \end{subfigure} \caption{Setup of tabular user studies.} \label{fig:hs_example_side} \end{figure*} We now go through the various sections of the main survey. In~\Cref{fig:consent_form}, we include the consent form used in our user studies. This user study was performed with the approval of the University of Cambridge's Department of Engineering Research Ethics Committee. Only three participants who were asked to take the survey did not provide consent and thus exited the form. We still ensured that at least ten participants took each of the four survey variants. \begin{figure*}[htb] \centering \begin{subfigure}{0.47\textwidth} \centering \includegraphics[width=.99\linewidth]{images/appendix/human_subj/clue_example_survLSAT.pdf} \caption{Two LSAT questions with certain points generated by CLUE} \label{fig:lsat_ex} \end{subfigure} ~ \begin{subfigure}{0.53\textwidth} \centering \includegraphics[width=.99\linewidth]{images/appendix/human_subj/clue_example_surv.pdf} \caption{Two COMPAS questions with certain points generated by CLUE} \label{fig:compas_ex} \end{subfigure} \caption{Example Tabular Main Survey questions} \label{fig:hs_example_CLUES} \vspace{0.1in} \end{figure*} We then include an example question for each dataset, called an ``attention check.'' An example is shown in~\Cref{fig:attention_check}. Note that the answer to this example question is provided inline. Later in the survey, we ask participants this exact same question. We ask one attention check per dataset. If participants get the attention check wrong for both datasets, we void their results. We only had to void one result. This did not affect our criteria of ten completed surveys per variant. The consent form and attention check questions were the same for all survey variants. The main survey participants were first asked the ten LSAT questions followed by the ten COMPAS questions: we made this design decision since the dimensionality of LSAT is lower than that of COMPAS, easing participants into the task. Examples of questions from the CLUE survey variant are shown in~\Cref{fig:hs_example_CLUES}. \subsection{MNIST User Study} \label{app:mnist_user} In order to validate CLUE on image data, we create a modified MNIST dataset with clear failure modes for practitioners to identify. We first discard all classes except four, seven, and nine. We then manually identify forty sevens from the training set which have dashes crossing their stems. Using K-nearest-neighbors, we identify the twelve sevens closest to each of the ones manually selected. We delete these 520 sevens from our dataset. We repeat the same procedure for fours which have a closed, triangle-shaped top. We do not delete any digits from the test set. We train a BNN on this new dataset. Our BNN presents high epistemic uncertainty when tested on dashed sevens and closed fours as a consequence of the sparsity of these features in the train set. We evaluate the test set of fours, sevens, and nines with our BNN.
Datapoints that surpass our uncertainty threshold are selected as candidates to be shown in our user study as uncertain context examples or test questions. We show example CLUEs for a four and a seven that display the characteristics of interest in \Cref{fig:modified_MNIST_example}. \begin{figure}[ht] \vskip -0.3in \begin{center} \centerline{\includegraphics[width=0.5\textwidth]{images/appendix/modified_MNIST_examples.pdf}} \caption{Examples of high uncertainty digits containing characteristics that are uncommon in our modified MNIST dataset. Their corresponding CLUEs and $\Delta$CLUEs are displayed beside them.} \label{fig:modified_MNIST_example} \end{center} \vskip -0.25in \end{figure} Leveraging the modified MNIST dataset, we run another user study with $10$ questions and two variants. Unlike in our tabular experiments, we show practitioners a set of five \textit{context points} to start, as opposed to a pair. This set of \textit{context points} is chosen at random from the training set. The first variant involves showing users the set of \textit{context points}, labeled according to whether their uncertainty surpasses our predefined threshold. We then ask users to predict if new test points will be certain or uncertain to the BNN. The second variant contains the same labeled context points and test datapoints. However, together with uncertain context points, practitioners are shown CLUEs illustrating how the input features can be changed such that the BNN's uncertainty falls below the rejection threshold. The practitioners are then asked to decide if new points' predictions will be certain or not. If CLUE works as intended, practitioners taking the second variant should be able to more accurately identify points on which the BNN will be uncertain. \begin{figure*}[htb] \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=.99\linewidth]{images/appendix/human_subj/context_set.png} \caption{Example Context Set with CLUEs} \label{fig:context} \end{subfigure} ~ \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=.99\linewidth]{images/appendix/human_subj/four.pdf} \caption{Example question} \label{fig:four_MNIST} \end{subfigure} \caption{MNIST User Study Setup} \label{fig:hs_mnist} \end{figure*} The first variant was shown to $5$ graduate students with machine learning expertise who only received context points and rejection labels (uncertain or not). This group was able to correctly classify $67\%$ of the new test points as high or low uncertainty. The second variant was shown to $5$ other graduate students with machine learning expertise who received context points together with CLUEs in cases of high uncertainty. This group was able to reach an accuracy of $88\%$ on new test points. This user study suggests CLUEs are useful for practitioners in image-based settings as well. \section{Introduction}\label{sec:intro} There is growing interest in probabilistic machine learning models, which aim to provide reliable estimates of uncertainty about their predictions \citep{Mackay_laplace}. These estimates are helpful in high-stakes applications such as predicting loan defaults or recidivism, or in work towards autonomous vehicles. Well-calibrated uncertainty can be as important as making accurate predictions, leading to increased robustness of automated decision-making systems and helping prevent systems from behaving erratically on out-of-distribution (OOD) test points. In practice, predictive uncertainty conveys skepticism about a model's output.
However, its utility need not stop there: we posit that predictive uncertainty could be rendered more useful and actionable if it were expressed in terms of model inputs, answering the question: \textit{``Which input patterns lead my prediction to be uncertain?''} Understanding which input features are responsible for predictive uncertainty can help practitioners learn in which regions the training data is sparse. For example, when training a loan default predictor, a data scientist (i.e., practitioner) can identify sub-groups (by age, gender, race, etc.) under-represented in the training data. Collecting more data from these groups, and thus further constraining their model's parameters, could lead to accurate predictions for a broader range of clients. In a clinical scenario, a doctor (i.e., domain expert) can use an automated decision-making system to assess whether a patient should receive a treatment. In the case of high uncertainty, the system would suggest that the doctor should not rely on its output. If uncertainty were explained in terms of which features the model finds anomalous, the doctor could appropriately direct their attention. While explaining predictions from deep models has become a burgeoning field \citep{interpretability_overview,bhatt2019explainable}, there has been relatively little research on explaining what leads to neural networks' predictive uncertainty. In this work, we introduce Counterfactual Latent Uncertainty Explanations (CLUE), to our knowledge, the first approach to shed light on the subset of input space features that are responsible for uncertainty in probabilistic models. Specifically, we focus on explaining Bayesian Neural Networks (BNNs). We refer to the explanations given by our method as CLUEs. CLUEs try to answer the question: \textit{``What is the smallest change that could be made to an input, while keeping it in distribution, so that our model becomes certain in its decision for said input?''} CLUEs can be generated for tabular and image data on both classification and regression~tasks. \begin{figure}[t] \vskip -0.2in \begin{center} \centerline{\includegraphics[width=0.85\linewidth]{images/CLUE_workflow_v4.pdf}} \vskip -0.05in \caption{Workflow for automated decision making with transparency. Our probabilistic classifier produces a distribution over outputs. In cases of high uncertainty, CLUE allows us to identify features which are responsible for class ambiguity in the input (denoted by $\Delta$ and highlighted in dark blue). Otherwise, we resort to existing feature importance approaches to explain certain decisions.} \label{fig:intro_CLUE_workflow} \end{center} \vskip -0.2in \end{figure} An application of CLUE is to improve transparency in the real-world deployment of a probabilistic model, such as a BNN, by complementing existing approaches to model interpretability \citep{LIME_interpretability,integrated_gradients,duvenaud_counterfactual}. When the BNN is confident in its prediction, practitioners can generate an explanation via earlier feature importance techniques. When the BNN is uncertain, its prediction may well be wrong. This potentially wrong prediction could be the result of factors not related to the actual patterns present in the input data, e.g. parameter initialization, randomness in mini-batch construction, etc. An explanation of an uncertain prediction will be disproportionately affected by these factors.
Indeed, recent work on feature attribution touches on the unreliability of saliency maps when test points are OOD~\citep{adebayo2020debugging}. Therefore, when the BNN is uncertain, it makes sense to provide an explanation of why the BNN is uncertain (i.e., CLUE) instead of an explanation of the BNN's prediction. This is illustrated in \Cref{fig:intro_CLUE_workflow}. Our code is available at \href{https://github.com/cambridge-mlg/CLUE}{\texttt{github.com/cambridge-mlg/CLUE}}. We highlight the following contributions: \begin{itemize} \item We introduce CLUE, an approach that finds counterfactual explanations of uncertainty in input space, by searching in the latent space of a deep generative model (DGM). We put forth an algorithm for generating CLUEs and show how CLUEs are best displayed. \item We propose a computationally grounded approach for evaluating counterfactual explanations of uncertainty. It leverages a separate conditional DGM as a synthetic data generator, allowing us to quantify how well explanations reflect the true generative process of the data. \item We evaluate CLUE quantitatively through comparison to baseline approaches under the above framework and through ablative analysis. We also perform a user study, showing that CLUEs allow practitioners to predict on which new inputs a BNN will be uncertain. \end{itemize} \section{Preliminaries}\label{sec:related_work} \subsection{Uncertainty in BNNs}\label{sec:background} Given a dataset ${\cal D}\,{=}\,\{\mathbf{x}^{(n)}, \mathbf{y}^{(n)}\}_{n=1}^{N}$, a prior on our model's weights $p(\mathbf{w})$, and a likelihood function $p({\cal D} | \mathbf{w}){=}\prod_{n=1}^{N}p(\mathbf{y}^{(n)}| \mathbf{x}^{(n)}, \mathbf{w})$, the posterior distribution over the predictor's parameters $p(\mathbf{w} | {\cal D}) \,{\propto}\,p({\cal D} | \mathbf{w})p(\mathbf{w})$ encodes our uncertainty about what value $\mathbf{w}$ should take. Through marginalization, this parameter uncertainty is translated into predictive uncertainty, yielding reliable error bounds and preventing overfitting: \begin{align}\label{eq:marginalisation} p(\mathbf{y}^{*}| \mathbf{x}^{*}, {\cal D}) = \int p(\mathbf{y}^{*}| \mathbf{x}^{*}, \mathbf{w}) p(\mathbf{w} | {\cal D})\,d\mathbf{w} . \end{align} For BNNs, both the posterior over parameters and the predictive distribution in \Cref{eq:marginalisation} are intractable. Fortunately, there is a rich literature concerning approximations to these objects \citep{Mackay_laplace,probabilistic_backpropagation,yarin_thesis}. In this work, we use scale-adapted Stochastic Gradient Hamiltonian Monte Carlo (SG-HMC) \citep{BO_BNN}. For regression, we use heteroscedastic Gaussian likelihood functions, quantifying uncertainty using their standard deviation, $\sigma(\mathbf{y} | \mathbf{x})$. For classification, we take the entropy $H(\mathbf{y} | \mathbf{x})$ of categorical distributions as uncertainty. Details are given in \Cref{app:implementation}. Hereafter, we use $\mathcal{H}$ to refer to any uncertainty metric, be it $\sigma$ or $H$. Predictive uncertainty can be separated into two components, as shown in \Cref{fig:logistic_decomp}. Each conveys different information to practitioners \citep{Stefan_thesis}. Irreducible or \emph{aleatoric uncertainty} is caused by inherent noise in the generative process of the data, usually manifesting as class overlap. Model or \emph{epistemic uncertainty} represents our lack of knowledge about $\mathbf{w}$.
Stemming from a model being under-specified by the data, epistemic uncertainty arises when we query points off the training manifold. Capturing model uncertainty is the main advantage of BNNs over regular NNs. It enables the former to be used for uncertainty aware tasks, such as OOD detection \citep{daxberger2019BNN_VAE}, continual learning \citep{variational_continual_learning}, active learning \citep{stefan_uncertainy_decomposition}, and Bayesian optimization \citep{BO_BNN}. \begin{figure}[t] \vskip -0.3in \begin{center} \centerline{\includegraphics[width=0.9\linewidth]{images/logistic_uncertainty.pdf}} \vskip -0.1in \caption{Left: Training points and predictive distribution for variational Bayesian Logistic Regression on the Moons dataset. Center: Aleatoric entropy $H_{a}$ matches regions of class non-separability. Right: Epistemic entropy $H_{e}$ grows away from the data. Both uncertainties are detailed in \Cref{app:uncert}.} \label{fig:logistic_decomp} \end{center} \vskip -0.2in \end{figure} \subsection{Uncertainty Sensitivity Analysis}\label{sec:uncertainty_sensitivity} To the best of our knowledge, the only existing method for interpreting uncertainty estimates is Uncertainty Sensitivity Analysis \citep{uncertainty_sensitivity}. This method quantifies the global importance of an input dimension to a chosen metric of uncertainty $\mathcal{H}$ using a sum of linear approximations centered at each test point: \begin{align}\label{eq:c2_uncertainty_sensitivity} I_{i} = \frac{1}{\abs{{\cal D}_{\text{test}}}}\sum^{\abs{{\cal D}_{\text{test}}}}_{n=1} \left| \frac{\partial \mathcal{H}(\mathbf{y}_{n} | \mathbf{x}_{n})}{\partial x_{n, i}} \right| . \end{align} As discussed by \citet{stop_explaining_black_box}, linear explanations of non-linear models, such as BNNs, can be misleading. Even generalized linear models, which are often considered to be ``inherently interpretable,'' like logistic regression, produce non-linear uncertainty estimates in input space. This can be seen in \Cref{fig:logistic_decomp}. Furthermore, high-dimensional input spaces limit the actionability of these explanations, as $\nabla_{\mathbf{x}} \mathcal{H}$ will likely not point in the direction of the data manifold. In \Cref{fig:CLUE_vs_sens_adversarial} and \Cref{app:high_dim_sensitivity}, we show how this can result in sensitivity analysis generating meaningless explanations. Our method, CLUE, leverages the latent space of a DGM to avoid working with high-dimensional input spaces and to ensure explanations are in-distribution. CLUE does not rely on crude linear approximations. The counterfactual nature of CLUE guarantees explanations have tangible meaning. \begin{figure}[h] \vskip -0.05in \begin{center} \centerline{\includegraphics[width=0.95\linewidth]{images/CLUE_vs_sens_MNIST.png}} \vskip -0.05in \caption{Left: Taking a step in the direction of maximum sensitivity leads to a seemingly noisy input configuration for which $H$ is small. Right: Minimizing CLUE's uncertainty-based objective in terms of a DGM's latent variable $\mathbf{z}$ produces a plausible digit with a corrected lower portion.} \label{fig:CLUE_vs_sens_adversarial} \end{center} \vskip -0.1in \end{figure} \vspace{-0.15cm} \subsection{Counterfactual Explanations}\label{sec:related_work_counterfactual} The term ``counterfactual'' captures notions of what would have happened if something had been different. Two meanings have been used by ML subcommunities. 
1) Those in causal inference make causal assumptions about interdependencies among variables and use these assumptions to incorporate consequential adjustments when particular variables are set to new values~\citep{counterfact_fair,Pearl2019causal}. 2) In contrast, the interpretability community recently used ``counterfactual explanations'' to explore how input variables must be modified to change a model's output \textit{without} making explicit causal assumptions~\citep{wachter2018counterfactual}. As such, counterfactual explanations can be seen as a case of contrastive explanations~\citep{dhurandhar2018explanations,byrne2019counterfactuals}. In this work, we use ``counterfactual'' in a sense similar to 2): we seek to make small changes to an input in order to reduce the uncertainty assigned to it by our model, without explicit causal assumptions. Multiple counterfactual explanations can exist for any given input, as the functions we are interested in explaining are often non-injective \citep{diverse_counterfactuals}. We are concerned with counterfactual input configurations that are close to the original input $\mathbf{x}_{0}$ according to some pairwise distance metric $d(\cdot, \cdot)$. Given a desired outcome $c$ different from the original one $\mathbf{y}_{0}$ produced by predictor $p_{I}$, counterfactual explanations $\mathbf{x}_{c}$ are usually generated by solving an optimization problem that resembles: \begin{gather}\label{eq:c2_generic_counterfactual} \mathbf{x}_{c} = \textstyle{\argmax_{\mathbf{x}}} \left( p_{I}(\mathbf{y}{=}c | \mathbf{x}) - d(\mathbf{x}, \mathbf{x}_{0})\right)\;\; \text{s.t.} \;\; \mathbf{y}_{0}{\neq}c . \end{gather} Naively optimizing \Cref{eq:c2_generic_counterfactual} in high-dimensional input spaces may result in the creation of adversarial inputs which are not actionable \citep{adversarial_goodfellow}. Telling a person that they would have been approved for a loan had their age been ${-10}$ is of very little use. To address this, recent works define linear constraints on explanations \citep{actionable_recourse,certifAI}. An alternative, more amenable to high-dimensional data, is to leverage DGMs (which we dub \textit{auxiliary DGMs}) to ensure explanations are in-distribution \citep{dhurandhar2018explanations,joshi2018xgems,duvenaud_counterfactual,booth2020bayes,tripp2020sampleefficient}. CLUE avoids the above issues by searching for counterfactuals in the lower-dimensional latent space of an auxiliary DGM. This choice is well suited to explaining uncertainty, as the DGM constrains CLUE's search space to the data manifold. When faced with an OOD input, CLUE returns the nearest in-distribution analog, as shown in \Cref{fig:CLUE_vs_sens_adversarial}. \section{Proposed Method}\label{sec:propose_CLUE} Without loss of generality, we use $\mathcal{H}$ to refer to any differentiable estimate of uncertainty ($\sigma$ or $H$). We introduce an auxiliary latent variable DGM: $p_{\theta}(\mathbf{x}) = \int p_{\theta}(\mathbf{x} | \mathbf{z}) p(\mathbf{z})\,d\mathbf{z}$. In the rest of this paper, we will use the decoder from a variational autoencoder (VAE). Its encoder is denoted as $q_{\phi}(\mathbf{z} | \mathbf{x})$. We write these models' predictive means as $\EX_{p_{\theta}(\mathbf{x} | \mathbf{z})}[\mathbf{x}]{=}\mu_{\theta}(\mathbf{x} | \mathbf{z})$ and $\EX_{q_{\phi}(\mathbf{z} | \mathbf{x})}[\mathbf{z}]{=}\mu_{\phi}(\mathbf{z} | \mathbf{x})$ respectively.
CLUE aims to find points in latent space which generate inputs similar to an original observation $\mathbf{x}_{0}$ but are assigned low uncertainty. This is achieved by minimizing \Cref{eq:c3_clue_objective}. CLUEs are then decoded according to \Cref{eq:c3_CLUE_counterfact}. \begin{gather}\label{eq:c3_clue_objective} \mathcal{L}(\mathbf{z}) = \mathcal{H}(\mathbf{y} | \mu_{\theta}(\mathbf{x} | \mathbf{z})) + d(\mu_{\theta}(\mathbf{x} | \mathbf{z}), \mathbf{x}_{0}) ,\\ \mathbf{x}_{\text{CLUE}} = \mu_{\theta}(\mathbf{x} | \mathbf{z}_{\text{CLUE}}) \;\; \text{where} \;\; \mathbf{z}_{\text{CLUE}} = \textstyle{\argmin_{\mathbf{z}}} \mathcal{L}(\mathbf{z}) .\label{eq:c3_CLUE_counterfact} \end{gather} The pairwise distance metric takes the form $d(\mathbf{x}, \mathbf{x}_{0})\,{=}\,\lambda_{x} d_{x}(\mathbf{x}, \mathbf{x}_{0}) + \lambda_{y} d_{y}(f(\mathbf{x}), f(\mathbf{x}_{0}))$ such that we can enforce similarity between uncertain points and CLUEs in both input and prediction space. The hyperparameters $(\lambda_{x}, \lambda_{y})$ control the trade-off between producing low uncertainty CLUEs and CLUEs which are close to the original inputs. In this work, we take $d_{x}(\mathbf{x}, \mathbf{x}_{0})\,{=}\,\norm{\mathbf{x} - \mathbf{x}_{0}}_{1}$ to encourage sparse explanations. For regression, $d_{y}(f(\mathbf{x}), f(\mathbf{x}_{0}))$ is mean squared error. For classification, we use cross-entropy. Note that the best choice for $d(\cdot,\cdot)$ will be task-specific. \begin{figure}[t] \vspace{-0.15in} \begin{minipage}[]{0.47\textwidth} \begin{algorithm}[H] \SetAlgoLined \KwData{original datapoint $\mathbf{x}_{0}$, distance function $d(\cdot, \cdot)$, uncertainty estimator $\mathcal{H}$, DGM decoder $\mu_{\theta}(\cdot)$, DGM encoder $\mu_{\phi}(\cdot)$} Set initial value of $\mathbf{z} = \mu_{\phi}(\mathbf{z} | \mathbf{x}_{0})$\; \While{loss $\mathcal{L}$ is not converged}{ Decode: $\mathbf{x} = \mu_{\theta}(\mathbf{x} | \mathbf{z})$\; Use predictor to obtain $\mathcal{H}(\mathbf{y} | \mathbf{x})$ \; $\mathcal{L} = \mathcal{H}(\mathbf{y} | \mathbf{x}) + d(\mathbf{x}, \mathbf{x}_{0})$\; Update $\mathbf{z}$ with $\nabla_{\mathbf{z}} \mathcal{L}$\; } Decode explanation: $\mathbf{x}_{\text{CLUE}} = \mu_{\theta}(\mathbf{x} | \mathbf{z})$\; \KwResult{Uncertainty counterfactual $\mathbf{x}_{\text{CLUE}}$} \caption{CLUE} \label{alg:CLUE_algorithm} \end{algorithm} \end{minipage} ~ \begin{minipage}[]{0.52\textwidth} \centerline{\includegraphics[width=0.8\linewidth]{images/optimisation_loop.png}} \captionof{figure}{\label{fig:c3_CLUE_optimisation_loop}Latent codes are decoded into inputs for which a BNN generates uncertainty estimates; their gradients are backpropagated to latent space.} \end{minipage} \vspace{-0.1in} \end{figure} The CLUE algorithm and a diagram of our procedure are provided in \Cref{alg:CLUE_algorithm} and \Cref{fig:c3_CLUE_optimisation_loop}, respectively. The hyperparameter $\lambda_{x}$ is selected by cross-validation for each dataset such that both terms in \Cref{eq:c3_clue_objective} are of similar magnitude. We set $\lambda_{y}$ to $0$ for our main experiments, but explore different values in \Cref{app:more_ablation}. We minimize \Cref{eq:c3_clue_objective} with Adam by differentiating through both our BNN and VAE decoder. To facilitate optimization, the initial value of $\mathbf{z}$ is chosen to be $\mathbf{z}_{0}{=}\mu_{\phi}(\mathbf{z} | \mathbf{x}_{0})$. Optimization runs for a minimum of three iterations and a maximum of $35$ iterations, with a learning rate of $0.1$.
If the decrease in $\mathcal{L}(\mathbf{z})$ is smaller than $\nicefrac{\mathcal{L}(\mathbf{z}_{0})}{100}$ for three consecutive iterations, we apply early stopping. CLUE can be applied to batches of inputs simultaneously, allowing us to leverage GPU-accelerated matrix computation. Our implementation is detailed in full in \Cref{app:implementation}. As noted by \citet{wachter2018counterfactual}, individual counterfactuals may not shed light on all important features. Fortunately, we can exploit the non-convexity of CLUE's objective to address this. We initialize CLUE with $\mathbf{z}_{0}\,{=}\,\mu_{\phi}(\mathbf{z} | \mathbf{x}_{0}) + \epsilon$, where $\epsilon\,{\sim}\,\mathcal{N}(\mathbf{0}, \sigma_{0}\mathbf{I})$, and perform \Cref{alg:CLUE_algorithm} multiple times to obtain different CLUEs. We find $\sigma_{0}\,{=}\,0.15$ to give a good trade-off between optimization speed and CLUE diversity. \Cref{app:multiplicity} shows examples of different CLUEs obtained for the~same~inputs. \begin{wrapfigure}{R}{0.50\textwidth} \vspace{-0.2in} \begin{center} \begin{subfigure}{0.22\textwidth} \includegraphics[width=\linewidth]{images/intext_examples_MNIST.pdf} \caption{MNIST} \end{subfigure} \begin{subfigure}{0.27\textwidth} \includegraphics[width=\linewidth]{images/lsat_ex1.pdf} \vspace{0px} \caption{LSAT} \end{subfigure} \end{center} \vspace{-0.1in} \caption{\label{fig:showing_CLUES}Example image and tabular CLUEs.} \vspace{-0.05in} \end{wrapfigure} We want to ensure that noise from auxiliary DGM reconstruction does not affect CLUE visualization. For tabular data, we use the change in percentile of each input feature with respect to the training distribution as a measure of importance. We only highlight continuous variables for which CLUEs are separated by $15$ percentile points or more from their original inputs. All changes to discrete variables are highlighted. For images, we report changes in pixel values by applying a sign-preserving quadratic function to the difference between CLUEs and original samples: $\Delta{\text{CLUE}}{=}\abs{\Delta{\mathbf{x}}}{\cdot}\Delta{\mathbf{x}}$ with $\Delta{\mathbf{x}}{=}\mathbf{x}_{\text{CLUE}}{-}\mathbf{x}_{0}$. This is showcased in \Cref{fig:showing_CLUES} and in \Cref{app:additional_examples}. It is common for approaches to generating saliency maps to employ constraints that encourage the contiguity of highlighted pixels \citep{duvenaud_counterfactual,Dabkowski2017saliency}. We do not employ such constraints, but we note they might prove useful when applying CLUE to natural images. \section{A Framework for Evaluating Counterfactual Explanations of Uncertainty Computationally}\label{sec:functional_framework} Evaluating explanations quantitatively (without resorting to expensive user studies) is a difficult but important task \citep{towards_rigorous_interpretable_machine_learning,weller2019transparency}. We put forth a computational framework to evaluate counterfactual explanations of uncertainty. In the spirit of \citet{bhatt2020evaluating}, we desire counterfactuals that are \textit{1) informative}: they should highlight features which affect our BNN's uncertainty, and \textit{2) relevant}: counterfactuals should lie close to the original inputs and represent plausible input configurations, lying close to the data manifold. Recall, from \Cref{fig:CLUE_vs_sens_adversarial}, that inputs for which our BNN is certain can be constructed by applying adversarial perturbations to uncertain ones.
Alas, evaluating these criteria requires access to the generative process of the data. To evaluate the above requirements, we introduce an additional DGM that will act as a ``ground truth'' data generating process (g.t.\ DGM). Specifically, we use a variational autoencoder with arbitrary conditioning \citep{VAEAC} (g.t.\ VAEAC). \begin{figure}[t] \vskip -0.2in \begin{center} \centerline{\includegraphics[width=1\linewidth]{images/functional_eval_color.pdf}} \vskip -0.02in \caption{Pipeline for computational evaluation of counterfactual explanations of uncertainty. The VAEAC which we treat as a data generating process is colored in green. Colored in orange is the auxiliary DGM used by the approach being evaluated. For approaches that do not use an auxiliary DGM, like Uncertainty Sensitivity Analysis, the orange element will not be present.} \label{fig:functional_framework} \end{center} \vskip -0.2in \end{figure} It jointly models inputs and targets $p_{\text{gt}}(\mathbf{x}, \mathbf{y})$. Measuring counterfactuals' log density under this model $\log p_{\text{gt}}(\mathbf{x}_{c})$ allows us to evaluate if they are in-distribution. The g.t. VAEAC also allows us to query the conditional distribution over targets given inputs, $p_{\text{gt}}(\mathbf{y} | \mathbf{x})$. From this distribution, we can compute an input's true uncertainty $\mathcal{H}_{\text{gt}}$, as given by the generative process of the data. This allows us to evaluate if counterfactuals address the true sources of uncertainty in the data, as opposed to exploiting adversarial vulnerabilities in the BNN. The evaluation procedure, shown in \Cref{fig:functional_framework}, is as follows: \begin{enumerate} \item Train a g.t. VAEAC on a real dataset to obtain $p_{\text{gt}}(\mathbf{x}, \mathbf{y})$ as well as conditionals $p_{\text{gt}}(\mathbf{y} | \mathbf{x})$. \item Sample artificial data $(\bar{\mathbf{x}}, \bar{\mathbf{y}})\,{\sim}\,p_{\text{gt}}(\mathbf{x}, \mathbf{y})$. Use them to train a BNN and an auxiliary DGM. \item Sample more artificial data. Generate counterfactual explanations $\bar{\mathbf{x}}_{c}$ for uncertain samples. \item Use the g.t. VAEAC to obtain the conditional distribution over targets given counterfactual inputs $p_{\text{gt}}(\mathbf{y}|\bar{\mathbf{x}}_{c})$ and $\mathcal{H}_{\text{gt}}$. Evaluate if counterfactuals are on-manifold through $\log p_{\text{gt}}(\bar{\mathbf{x}}_{c})$. \end{enumerate} Given an uncertain artificially generated test point $\bar{\mathbf{x}}_{0} \sim p_{\text{gt}}$ and its corresponding counterfactual explanation $\bar{\mathbf{x}}_{c}$, we quantify \textit{informativeness} as the amount of uncertainty that has been explained away. The variance (or entropy) of $p_{\text{gt}}(\mathbf{y}|\mathbf{x})$ reflects the ground truth aleatoric uncertainty associated with $\mathbf{x}$. Hence, for aleatoric uncertainty, we quantify \textit{informativeness} as $\Delta \mathcal{H}_{\text{gt}}\,{=}\,\EX_{p_{\text{gt}}}[\mathcal{H}_{\text{gt}}(\mathbf{y} | \bar{\mathbf{x}}_{0}) - \mathcal{H}_{\text{gt}}(\mathbf{y} | \bar{\mathbf{x}}_{c})]$. Epistemic uncertainty only depends on our BNN. It cannot be directly computed from $p_{\text{gt}}(\mathbf{y}|\mathbf{x})$. However, its reduction can be measured implicitly through the reduction in the BNN's prediction error with respect to the labels outputted by the g.t. VAEAC: $\Delta \mathit{err}_{\text{gt}}\,{=}\,\EX_{p_{gt}}[\mathit{err}_{\text{gt}}(\bar{\mathbf{x}}_{0}) - \mathit{err}_{\text{gt}}(\bar{\mathbf{x}}_{c})]$. 
Here, $\mathit{err}_{\text{gt}}(\mathbf{x})\,{=}\,d_{y}(p(y|\mathbf{x}), \argmax_{y}p_{\text{gt}}(y|\mathbf{x}))$. Approaches that exploit adversarial weaknesses in the BNN will not transfer to the g.t. VAEAC, failing to reduce uncertainty or error. We assess the \textit{relevance} of counterfactuals through their likelihood under the g.t. VAEAC $\log p_{gt}(\bar{\mathbf{x}}_{c})$ and through their $\ell_1$ distance to the original inputs ${\norm{\Delta \bar{\mathbf{x}}}_{1}\,{=}\,\norm{\bar{\mathbf{x}}_{0} - \bar{\mathbf{x}}_{c}}_{1}}$. We refer to~\Cref{app:additional_details_functional} for a detailed discussion on g.t. VAEACs and their use for comparing counterfactual generations. \section{Experiments}\label{sec:experiments} We validate CLUE on LSAT academic performance regression \citep{LSAT}, UCI Wine quality regression, UCI Credit classification \citep{UCI_repo}, a $7$-feature variant of COMPAS recidivism classification \citep{COMPAS}, and MNIST image classification \citep{MNIST}. For each, we select roughly the $20\%$ most uncertain test points as those for which we reject our BNNs' decisions. We only generate CLUEs for ``rejected'' points. Rejection thresholds, architectures, and hyperparameters are in~\Cref{app:implementation}. Experiments with non-Bayesian NNs are in~\Cref{app:more_ablation}. As a baseline, we introduce a localized version of Uncertainty Sensitivity Analysis \citep{uncertainty_sensitivity}. It produces counterfactuals by taking a single step of size $\eta$ against the gradient of an input's uncertainty estimate: $\mathbf{x}_{c}\,{=}\,\mathbf{x}_{0} - \eta \nabla_{\mathbf{x}}\mathcal{H}(\mathbf{y}|\mathbf{x}_{0})$. Averaging $\abs{\mathbf{x}_{0}-\mathbf{x}_{c}}$ across a test set, we recover~\Cref{eq:c2_uncertainty_sensitivity}. As a second baseline, we adapt FIDO \citep{duvenaud_counterfactual}, a counterfactual feature importance method, to minimize uncertainty. We dub this approach U-FIDO. This method places a binary mask $\mathbf{b}$ over the set of input variables $\mathbf{x}_{U}$. The mask is modeled by a product of Bernoulli random variables: $p_{\boldsymbol{\rho}}(\mathbf{b})\,{=}\,\prod_{u \in U} \mathrm{Bern}(b_{u}; \rho_{u})$. The set of masked inputs $\mathbf{x}_{B}$ is substituted by its expectation under an auxiliary conditional generative model $p(\mathbf{x}_{B} | \mathbf{x}_{U\setminus B})$. We use a VAEAC. U-FIDO finds the masking parameters $\boldsymbol{\rho}$ which minimize \Cref{eq:ufido-optimisation}: \begin{align}\label{eq:ufido-optimisation} \mathcal{L}(\boldsymbol{\rho}) &= \EX_{p_{\boldsymbol{\rho}}(\mathbf{b})}[ \mathcal{H}(\mathbf{y}|\mathbf{x}_{c}(\mathbf{b})) + \lambda_{b} \norm{\mathbf{b}}_{1}],\\ \label{eq:ufido-generation} \mathbf{x}_{c}(\mathbf{b}) &= \mathbf{b} \odot \mathbf{x}_{0} + (1-\mathbf{b}) \odot \EX_{p(\mathbf{x}_{B} | \mathbf{x}_{U\setminus B})}[\mathbf{x}_{B}]. \end{align} Counterfactuals are generated by \Cref{eq:ufido-generation}, where $\odot$ is the Hadamard product. We compare CLUE to feature importance methods \citep{LIME_interpretability,shap} in \Cref{app:feature_importance}. \subsection{Computational Evaluation}\label{sec:functional_evaluation} We compare CLUE, Local Sensitivity, and U-FIDO using the evaluation framework put forth in \Cref{sec:functional_framework}. We would like counterfactuals to explain away as much uncertainty as possible while staying as close to the original inputs as possible.
We manage this \textit{informativeness} (large $\Delta \mathcal{H}_{\text{gt}}$) to \textit{relevance} (small $\norm{\Delta \bar{\mathbf{x}}}_{1}$) trade-off with the hyperparameters $\eta$, $\lambda_{x}$, and $\lambda_{b}$ for Local Sensitivity, CLUE, and U-FIDO, respectively. We perform a logarithmic grid search over hyperparameters and plot Pareto-like curves. Our two metrics of interest take minimum values of $0$ but their maximum is dataset and method dependent. For Sensitivity, $\norm{\Delta \bar{\mathbf{x}}}_{1}$ grows linearly with $\eta$. For CLUE and U-FIDO, these metrics saturate for large and small values of $\lambda_{x}$ (or $\lambda_{b}$). As a result, the values obtained by these methods do not overlap. As shown in \Cref{fig:MNIST_dx_kneepoint}, CLUE is able to explain away more uncertainty ($\Delta \mathcal{H}_{\text{gt}}$) than U-FIDO, and U-FIDO always obtains smaller values of $\norm{\Delta \bar{\mathbf{x}}}_{1}$ than CLUE. \begin{minipage}[]{0.68\textwidth} \vspace{-0.05in} \centering \captionof{table}{\label{table:dx_kneepoint} $\Delta \mathcal{H}_{\text{gt}}$ vs $\norm{\Delta \bar{\mathbf{x}}}_{1}$ measure obtained by all methods on all datasets under consideration. Lower is better. The dimensionality of each dataset is listed next to their names. \textit{e} and \textit{a} indicate results for epistemic ($\Delta \mathit{err}_{\text{gt}}$) and aleatoric ($\Delta \mathcal{H}_{\text{gt}}$) uncertainty respectively.} \vspace{-0.07in} \resizebox{\textwidth}{!}{ \begin{tabular}{@{}c|cccccccccc@{}} \toprule Method & \multicolumn{2}{c}{LSAT (4)} & \multicolumn{2}{c}{COMPAS (7)} & \multicolumn{2}{c}{Wine (11)} & \multicolumn{2}{c}{Credit (23)} & \multicolumn{2}{c}{MNIST (784)} \\ & e & a & e & a & e & a & e & a & e & a \\ \midrule Sensitivity & 0.70 & 0.67 & \textbf{0.71} & \textbf{0.13} & 0.69 & 0.03 & 0.63 & 0.50 & 0.66 & 0.68 \\ CLUE & 0.52 & 0.64 & \textbf{0.71} & 0.18 & \textbf{0.01} & 0.14 & 0.52 & \textbf{0.29} & \textbf{0.26} & \textbf{0.27} \\ U-FIDO & \textbf{0.36} & \textbf{0.51} & \textbf{0.71} & 0.31 & 0.22 & \textbf{0.02} & \textbf{0.45} & 0.63 & 0.38 & 0.50 \\ \bottomrule \end{tabular} } \end{minipage} \begin{minipage}[]{0.3\textwidth} \vspace{0pt} \centerline{\includegraphics[width=\linewidth]{images/knee_pointMNIST_maintext_use.pdf}} \vspace{-0.07in} \captionof{figure}{\label{fig:MNIST_dx_kneepoint}MNIST knee-points.} \end{minipage} \vspace{-0.01in} To construct a single performance metric, we scale all measurements by the maximum values obtained between U-FIDO or CLUE, e.g. $(\sqrt{2}\cdot\max(\Delta \mathcal{H}_{\text{gt U-FIDO}}, \Delta \mathcal{H}_{\text{gt CLUE}}))^{-1}$, linearly mapping them to $[0, \nicefrac{1}{\sqrt{2}}]$. We then negate $\Delta \mathcal{H}_{\text{gt}}$, making its optimum value $0$. We consider each method's best performing hyperparameter configuration, as determined by its curve's point nearest the origin, or \textit{knee-point}. The euclidean distance from each method's knee-point to the origin acts as a metric of relative performance. The best value is $0$ and the worst is $1$. Knee-point distances, computed across three runs, are shown for both uncertainty types in \Cref{table:dx_kneepoint}. Local Sensitivity performs poorly on all datasets except COMPAS. We attribute this to the implicit low dimensionality of COMPAS: only two features are necessary to accurately predict targets \citep{COMPAS_2feature_citation}. U-FIDO's masking mechanism allows for counterfactuals that leave features unchanged. 
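For concreteness, the knee-point score can be computed as in the schematic Python snippet below; \texttt{dH} and \texttt{dx} denote the $(\Delta \mathcal{H}_{\text{gt}}, \norm{\Delta \bar{\mathbf{x}}}_{1})$ pairs traced out by one method's hyperparameter grid, and \texttt{dH\_max}, \texttt{dx\_max} the per-dataset maxima taken over U-FIDO and CLUE (variable names are ours).
\begin{verbatim}
import numpy as np

def knee_point_distance(dH, dx, dH_max, dx_max):
    # Scale both metrics to [0, 1/sqrt(2)] ...
    dH = np.asarray(dH, float) / (np.sqrt(2.0) * dH_max)
    dx = np.asarray(dx, float) / (np.sqrt(2.0) * dx_max)
    # ... negate dH, so that 0 becomes its optimal value ...
    bad_H = 1.0 / np.sqrt(2.0) - dH
    # ... and take the curve's smallest distance to the origin.
    return float(np.min(np.sqrt(bad_H ** 2 + dx ** 2)))
\end{verbatim}
The factor $\sqrt{2}$ makes the worst attainable knee-point distance exactly $1$, so scores are comparable across datasets.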
U-FIDO performs well in low-dimensional problems but suffers from variance as dimensionality grows. We conjecture that optimization in latent (instead of input) space makes CLUE robust to data complexity. We perform an analogous experiment where \textit{relevance} is quantified as proximity to the data manifold: $\Delta \log p_{\text{gt}} \!= \!\min(0, \log p_{\text{gt}}(\bar{\mathbf{x}}_{c}) - \log p_{\text{gt}}(\bar{\mathbf{x}}_{0}))$. Here, $\log p_{\text{gt}}(\bar{\mathbf{x}}_{0})$ refers to the log-likelihood of the artificial data for which counterfactuals are generated. Results are in \Cref{tab:px_kneepoint}, where CLUE performs best in $8/10$ tasks. Generating counterfactuals with a VAE ensures that CLUEs are \textit{relevant}. In \Cref{app:verifying_computational}, we perform an analogous \textit{informativeness} vs \textit{relevance} experiment on real data. We obtain results similar to those from our computational evaluation framework, validating~its~reliability. \begin{table}[h] \centering \caption{$\Delta \log p_{\text{gt}}$ vs $\norm{\Delta \bar{\mathbf{x}}}_{1}$ measure obtained by all methods on all datasets under consideration. Lower is better. \textit{e} and \textit{a} indicate epistemic ($\Delta \mathit{err}_{\text{gt}}$) and aleatoric ($\Delta \mathcal{H}_{\text{gt}}$) uncertainty respectively.} \label{tab:px_kneepoint} \vspace{-0.07in} \resizebox{0.75\textwidth}{!}{% \begin{tabular}{@{}c|cccccccccc@{}} \toprule Method & \multicolumn{2}{c}{LSAT (4)} & \multicolumn{2}{c}{COMPAS (7)} & \multicolumn{2}{c}{Wine (11)} & \multicolumn{2}{c}{Credit (23)} & \multicolumn{2}{c}{MNIST (784)} \\ & e & a & e & a & e & a & e & a & e & a \\ \midrule Sensitivity & 0.697 & 0.672 & \textbf{0.707} & 0.122 & 0.691 & \textbf{0.001} & 0.623 & 0.454 & 0.682 & 0.698 \\ CLUE & 0.419 & \textbf{0.070} & \textbf{0.707} & \textbf{0.044} & \textbf{0} & 0.128 & \textbf{0} & \textbf{0.009} & \textbf{0.273} & \textbf{0.146} \\ U-FIDO & \textbf{0} & \textbf{0} & \textbf{0.707} & 0.303 & 0.224 & \textbf{0} & 0.233 & 0.628 & 0.450 & 0.516 \\ \bottomrule \end{tabular} } \end{table} \input{user_study.tex} \subsection{Analysis of CLUE's Auxiliary Deep Generative Model}\label{sec:ablation} We study CLUE's reliance on its auxiliary DGM. Further ablative analysis is found in \Cref{app:more_experiments}. \textbf{Initialization Strategy:} We compare \Cref{alg:CLUE_algorithm}'s encoder-based initialization $\mathbf{z}_{0}\,{=}\,\mu_{\phi}(\mathbf{z} | \mathbf{x}_{0})$ with $\mathbf{z}_{0}\,{=}\,\mathbf{0}$. As shown in \Cref{fig:maintext_ablation}, for high-dimensional datasets, like MNIST, initializing $\mathbf{z}$ with the encoder's mean leads to CLUEs that require smaller changes in input space to explain away similar amounts of uncertainty (i.e., more \textit{relevant}). In \Cref{app:more_ablation}, similar behavior is observed for Credit, our second-highest-dimensional dataset. On other datasets, both approaches yield indistinguishable results. CLUEs could plausibly be generated with differentiable DGMs that lack an encoding mechanism, such as GANs. However, an appropriate initialization strategy should be employed.% \textbf{Capacity of CLUE's DGM:} \Cref{fig:maintext_ablation} shows how auto-encoding uncertain MNIST samples with low-capacity VAEs significantly reduces these points' predictive entropy. CLUEs generated with these VAEs highlight features that the VAEs are unable to reproduce, rather than features that reflect our BNN's uncertainty. 
This results in large values of $\norm{\Delta{\mathbf{x}}}_{1}$; although the counterfactual examples are indeed more certain than the original samples, they contain unnecessary changes. As our auxiliary DGMs' capacity increases, the amount of uncertainty preserved when auto-encoding inputs increases as well. $\norm{\Delta{\mathbf{x}}}_{1}$ decreases while the predictive entropy of our CLUEs stays the same. More expressive DGMs allow for generating sparser, more \textit{relevant}, CLUEs. Fortunately, even in scenarios where our predictor's training dataset is limited, we can train powerful DGMs by leveraging unlabeled~data. \begin{figure*}[t] \vskip -0.05in \centering \begin{subfigure}{0.55\textwidth} \centering \includegraphics[width=0.95\linewidth]{images/no_encoder_MNIST_maintext.pdf} \end{subfigure}% ~ \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=1\linewidth]{images/vae_scan_MNIST_maintext.pdf} \end{subfigure} \vskip -0.05in \caption{Left: CLUEs are similarly \textit{informative} under encoder-based and encoder-free initializations. The colorbar indicates the original samples' uncertainty. Its horizontal blue line denotes our rejection threshold. Right: Auxiliary DGMs with more capacity result in more \textit{relevant} CLUEs.} \label{fig:maintext_ablation} \vskip -0.15in \end{figure*} \section{Conclusion}\label{sec:conclusion} With the widespread adoption of data-driven decision making has come a need for the development of ML tools that can be trusted by their users. This has spawned a subfield of ML dedicated to interpreting deep learning systems' predictions. A transparent deep learning method should also inform stakeholders when it does not know the correct prediction \citep{bhatt2021uncertainty}. In turn, this creates a need for being able to interpret why deep learning methods are uncertain. We address this issue with Counterfactual Latent Uncertainty Explanations (CLUE), a method that reveals which input features can be changed to reduce the uncertainty of a probabilistic model, like a BNN. We then turn to assessing the utility, to stakeholders, of counterfactual explanations of predictive uncertainty. We put forth a framework for computational evaluation of these types of explanations. Quantitatively, CLUE outperforms simple baselines. Finally, we perform a user study. It finds that users are better able to predict their models' behavior after being exposed to~CLUEs. \section*{Acknowledgements} JA acknowledges support from Microsoft Research through its PhD Scholarship Program. UB acknowledges support from DeepMind and the Leverhulme Trust via the Leverhulme Centre for the Future of Intelligence (CFI) and from the Mozilla Foundation. AW acknowledges support from a Turing AI Fellowship under grant EP/V025379/1, The Alan Turing Institute under EPSRC grant EP/N510129/1 \& TU/B/000074, and the Leverhulme Trust via CFI. \subsection{User Study}\label{sec:user_study} Human-based evaluation is a key step in validating the utility of tools for ML explainability \citep{hoffman2018metrics}. We want to assess the extent to which CLUEs help machine learning practitioners identify sources of uncertainty in ML models compared to using simple linear approximations (Local Sensitivity) or human intuition. To do this, we propose a forward-simulation task \citep{towards_rigorous_interpretable_machine_learning}, focusing on an appropriate local test to evaluate CLUEs. We show practitioners one datapoint below our ``rejection'' threshold and one datapoint above. 
The former is labeled as ``certain'' and the latter as ``uncertain'';~we refer to these as \textit{context points}. The certain \textit{context point} serves as a local counterfactual explanation for the uncertain \textit{context point}. Using both \textit{context points} for reference, practitioners are asked to predict whether a new test point will be above or below our threshold (i.e., will our BNN's uncertainty be high or low for the new point). Our survey compares the utility of the certain \textit{context points} generated by CLUE relative to those from baselines. \begin{figure}[] \vspace{-0.05in} \centering \includegraphics[width=.95\linewidth]{images/workflow_final.pdf} \caption{Experimental workflow for our tabular data user study.} \label{fig:workflow_human_final} \vspace{-0.05in} \end{figure} \begin{figure}[] \centering \includegraphics[width=.9\linewidth]{images/compas_ex1.pdf} \caption{Example question shown to main survey participants for the COMPAS dataset: \textit{Given the uncertain example on the left and the certain example in the middle, will the model be certain on the test example on the right?} The red text highlights the features that differ between \textit{context points}.} \label{fig:compas_ex_main} \vspace{-0.1in} \end{figure} In our survey, we compare four different methods, varying how we select certain \textit{context points}. We either 1) select a certain point at random from the test set as a control, 2) generate a counterfactual certain point with Local Sensitivity, 3) generate one with CLUE, or 4) display a human-selected certain point (\textit{Human CLUE}). To generate a \textit{Human CLUE}, we ask participants (who will not take the main survey) to pair uncertain \textit{context points} with similar certain points. We select the points used in our main survey with a pilot procedure similar to that of \citet{grgic2018human}. This procedure, shown in~\Cref{fig:workflow_human_final}, prevents us from injecting biases into point selection and ensures \textit{context points} are relevant to test points. In our procedure, a participant is shown a pool of randomly selected certain and uncertain points. We ask this participant to select points from this pool: these will be test points. We then ask the participant to map each selected test point to a similar uncertain point without replacement. In this way, we obtain uncertain \textit{context points} that are relevant to test points. We use the LSAT and COMPAS datasets in our user study. Ten different participants take each variant of the main survey: our participants are ML graduate students, who serve as proxies for practitioners in industry. The main survey consists of $18$ questions, $9$ per dataset. An example question is shown in \Cref{fig:compas_ex_main}. The average participant accuracy by variant is: CLUE ($82.22\%$), \textit{Human CLUE} ($62.22\%$), Random ($61.67\%$), and Local Sensitivity ($52.78\%$). We measure the statistical significance of CLUE's superiority with unpaired Wilcoxon rank-sum tests \citep{demvsar2006statistical} of CLUE vs each baseline. We obtain the following p-values: Human-CLUE ($2.34e{-}5$), Random ($1.47e{-}5$), and Sensitivity ($2.60e{-}9$).\footnote{Our between-group experimental design could potentially result in dependent observations, which violates the tests' assumptions. However, the p-values obtained are extremely low, providing confidence in rejecting the null hypothesis of equal performance among approaches.} Additional analysis is included in~\Cref{add_user}. 
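For reference, the significance computation amounts to an unpaired two-sample rank test per baseline; the Python snippet below illustrates it on placeholder per-participant accuracies (illustrative values only, not our study data).
\begin{verbatim}
from scipy.stats import ranksums

# Placeholder per-participant accuracies (fractions of 18 questions).
clue = [0.94, 0.89, 0.83, 0.83, 0.78, 0.78, 0.89, 0.72, 0.83, 0.72]
base = [0.56, 0.50, 0.61, 0.44, 0.56, 0.50, 0.61, 0.44, 0.50, 0.56]

# One-sided Wilcoxon rank-sum test of "CLUE > baseline".
stat, p = ranksums(clue, base, alternative="greater")
print(f"rank-sum statistic = {stat:.2f}, one-sided p = {p:.2g}")
\end{verbatim}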
We find that linear explanations (Local Sensitivity) of a non-linear function (BNN) mislead practitioners and perform worse than random. While \textit{Human CLUE} explanations are real datapoints, CLUE generates explanations from a VAE. We conjecture that CLUE's increased flexibility produces relevant explanations in a broader range of cases. In our tabular data user study, we only show one pair of \textit{context points} per test point. We find that showing more pairs makes the survey difficult for practitioners to follow, owing to their lack of domain expertise in college admissions or criminal justice. Using MNIST, we run a smaller-scale study, wherein we show participants larger sets of \textit{context points}. Results are in~\Cref{app:mnist_user}.%
\section{Introduction} Finitary birepresentation theory of finite type Soergel bimodules in characteristic zero has been a topic of intensive study, with many interesting results, in the last couple of years~\cite{kmmz,mackaay-mazorchuk,mmmtz2019,mackaay-tubbenhauer,zimmermann}. In this paper, we initiate the study of a class of finitary and triangulated birepresentations of affine type $A$ Soergel bimodules. The bicategories of these Soergel bimodules are no longer finitary and, therefore, new phenomena show up in their birepresentation theory. For example, there are no known interesting triangulated birepresentations in finite type, whereas we do give examples of such birepresentations in affine type $A$. To describe these, let us briefly recall the decategorified setting first. In type $A$, as is well-known, there are evaluation maps from the affine Hecke algebra to the finite type Hecke algebra. These are homomorphisms of algebras, so any representation of the latter algebra can be pulled back to a representation of the former algebra through such a map. These so-called {\em evaluation representations} form an important and well-studied class of finite-dimensional representations of affine type $A$ Hecke algebras, see e.g.~\cite{chari-pressley, dufu, lnt} and references therein. Several authors (\cite[Introduction]{mackaay-thiel} and \cite[Section 1.6]{elias2018}) have conjectured that these evaluation maps can be categorified by monoidal {\em evaluation functors} from affine type $A$ Soergel bimodules to the homotopy category of bounded complexes in finite type $A$ Soergel bimodules. In this paper, we indeed define such functors and use them to categorify the aforementioned evaluation representations in the form of triangulated birepresentations, obtained by pulling back the triangulated birepresentations induced by finitary birepresentations of finite type $A$ Soergel bimodules through these functors. Moreover, in case the original finitary birepresentation is simple transitive, we show that the evaluation birepresentation admits a {\em finitary cover}, i.e., a finitary birepresentation together with an essentially surjective and epimorphic morphism of additive birepresentations from that cover to the evaluation birepresentation. This categorifies the well-known fact that the corresponding evaluation representations are quotients of certain cell representations defined by Graham and Lehrer~\cite{Graham-Lehrer}. Let us finish this introduction with a disclaimer. We do not present a theory of triangulated birepresentations in this paper. Some ingredients for such a theory can already be found in the literature, e.g.~\cite{elias2018, elias-hogancamp, hogancamp, laugwitz-miemietz, stevenson}, but many foundational results are still missing. For a start, it is not clear which parts of finitary birepresentation theory, e.g. the notion of simple transitive birepresentation, the categorical (weak) Jordan-H\"older theorem, the relation with (co)algebra $1$-morphisms, the double-centralizer theorem (see~\cite{mmmtz2020} and references therein), generalize to the triangulated setting and/or in which form exactly. These questions need to be answered first, before one can even think of categorifying the induction product of evaluation representations from~\cite[Section 2.5]{lnt}. Finally, all of this is just for affine type $A$. 
Hecke algebras of other affine Coxeter types also have interesting finite-dimensional representations, but there are no evaluation morphisms in those cases, so other ideas will be needed to categorify those representations. In other words, the results in this paper are (hopefully) just the tip of a (tricky) triangulated iceberg. \subsubsection*{Plan of the paper} In Section~\ref{sec:decatreminders}, we recall the basics of extended and non-extended affine Hecke algebras of affine type $A$, the evaluation maps, the Graham-Lehrer cell modules and the evaluation representations. Everything in this section is well-documented in the literature and we only recall the details that are needed in the rest of this paper. In Section~\ref{sec:soergel}, we briefly recall Soergel calculus in finite and affine type $A$, the latter both in the non-extended and the extended version. Again, nothing new is presented, so specialists can skip this section and move on to the next one. Of course, in the remainder we often refer to the diagrammatic equations in this section, which is exactly why we recall them. In Section~\ref{sec:Rouquier}, we first recall some basic results on Rouquier complexes in finite type $A$ and then focus on a special type of Rouquier complex, which is fundamental for the definition of the evaluation functors in the next section. In particular, we develop a mixed diagrammatic calculus for morphisms between products of Bott-Samelson bimodules and these special Rouquier complexes, all in finite type $A$. To the best of our knowledge, this extension of the usual Soergel calculus is new. In Section~\ref{sec:evaluationfunctors}, we define the evaluation functors by assigning a bounded complex of finite type $A$ Soergel bimodules (or, more precisely, of finite type $A$ Bott-Samelson bimodules) to each extended affine type $A$ Bott-Samelson bimodule and a map between such complexes to each generating extended affine type $A$ Soergel calculus diagram. The main result of this section, and of this paper, is that this assignment is well-defined up to homotopy equivalence. In Section~\ref{sec:bireps}, we first introduce the notion of a triangulated birepresentation of an additive bicategory and define evaluation birepresentations of Soergel bimodules in extended affine type $A$, which are important examples. We then prove that each evaluation birepresentation has a (possibly non-unique) finitary cover. Finally, we study in detail the simplest non-trivial evaluation birepresentations, which are the ones induced by cell birepresentations of finite type $A$ with subregular apex. As we show, these admit a simple transitive finitary cover whose underlying algebra is a signed version of the zigzag algebra of affine type $A$. \subsubsection*{Acknowledgments} M.M. was supported in part by Funda\c{c}{\~a}o para a Ci\^{e}ncia e a Tecnologia (Portugal), projects UID/MAT/04459/2013 (Center for Mathematical Analysis, Geometry and Dynamical Systems - CAMGSD) and PTDC/MAT-PUR/31089/2017 (Higher Structures and Applications). V.M. is partially supported by EPSRC grant EP/S017216/1. P.V. was supported by the Fonds de la Recherche Scientifique - FNRS under Grant no. MIS-F.4536.19. \section{The decategorified story}\label{sec:decatreminders} Fix $d\in\mathbb{N}_{\geq 3}$ and let $\widehat{\mathfrak{S}}_d$ be the affine Weyl group of type $\widehat{A}_{d-1}$. 
It is generated by $s_0,\dotsc, s_{d-1}$, subject to relations \begin{equation*} s_i^2 = 1, \mspace{40mu} s_is_j =s_js_i \mspace{10mu}\text{if}\mspace{10mu} \vert i-j\vert>1, \mspace{40mu} s_is_{i+1}s_i = s_{i+1}s_is_{i+1}, \end{equation*} for $i=0,\ldots, d-1$, with indices taken modulo $d$. The {\em extended} affine Weyl group $\widehat{\mathfrak{S}}_d^{\mathrm{ext}}$ is the semidirect product \[ \langle \rho \rangle \ltimes \widehat{\mathfrak{S}}_d, \] where $\langle \rho \rangle$ is an infinite cyclic group generated by $\rho$ and \[ \rho s_i \rho^{-1} =s_{i+1}. \] The finite Weyl group of type $A_{d-1}$ is the symmetric group on $d$ letters, $\mathfrak{S}_d$, corresponding to the subgroup of $\widehat{\mathfrak{S}}_d$ generated by $s_i$ for $1\leq i\leq d-1$. \subsection{Hecke algebras}\label{sec:hecke} Let $\Bbbk=\mathbb{C}(q)$, where $q$ is a formal parameter. The \emph{extended affine Hecke} algebra $\widehat{H}^{ext}_d$ is the $\Bbbk$-algebra generated by $T_0,\dotsc ,T_{d-1}$ and $\rho^{\pm 1}$, with relations \begin{gather} (T_i+q)(T_i-q^{-1}) = 0, \mspace{40mu} T_iT_j =T_jT_i \mspace{10mu}\text{if}\mspace{10mu} \vert i-j\vert>1, \mspace{40mu} T_iT_{i+1}T_i = T_{i+1}T_iT_{i+1}, \label{eq:affHeckeSnrels} \\ \rho\rho^{-1}=1=\rho^{-1}\rho,\mspace{40mu} \rho T_i \rho^{-1} = T_{i+1}, \label{eq:affHeckeRrels} \end{gather} for $i,j=0,\ldots, d-1$ (with indices taken modulo $d$ again). Note that $T_i$ is invertible for every $i=0,\ldots, d-1$ with \[ T_i^{-1}=T_i+q-q^{-1}. \] As is well-known, $\widehat{H}^{ext}_d$ is a $q$-deformation of the group algebra $\mathbb{C}[\widehat{\mathfrak{S}}_d^{\mathrm{ext}}]$ with basis (the {\em regular basis}) given by $\{\rho^m T_w\mid m\in \mathbb{Z}, w\in \widehat{\mathfrak{S}}_d\}$, where $T_w:=T_{i_1} \cdots T_{i_{\ell}}$ for any {\em reduced expression} (rex) $s_{i_1}\cdots s_{i_{\ell}}$ of $w$. \smallskip Another presentation is given in terms of the \emph{Kazhdan--Lusztig generators} $b_i:=T_i+q$, for $i=0,\ldots, d-1$, and $\rho^{\pm 1}$, subject to relations \begin{gather} b_i^{2}= [2] b_i, \mspace{40mu} b_ib_j =b_jb_i \mspace{10mu}\text{if}\mspace{10mu} \vert i-j\vert>1, \mspace{40mu} b_ib_{i+1}b_i + b_{i+1} = b_{i+1}b_ib_{i+1} + b_{i}, \label{eq:fdHeckebrels} \\ \rho\rho^{-1}=1=\rho^{-1}\rho,\mspace{40mu} \rho b_i \rho^{-1} = b_{i+1}, \label{eq:affHeckebrels} \end{gather} for $i=0,\ldots, d-1$, where $[2]:=q+q^{-1}$. Note that $T_i=b_i-q$ and $T_i^{-1}=b_i-q^{-1}$, for every $i=0,\ldots, d-1$. The {\em Kazhdan--Lusztig basis} is given by $\{\rho^m b_w \mid m\in \mathbb{Z}, w \in \widehat{\mathfrak{S}}_d \}$, where $b_w$ is defined for an arbitrary rex of $w$ (and is independent of that choice). \smallskip The (non-extended) \emph{affine Hecke algebra} $\widehat{H}_d$ is the subalgebra of $\widehat{H}^{ext}_d$ generated by either $T_0, T_1,\dotsc, T_{d-1}$ subject to relations~\eqref{eq:affHeckeSnrels}, or $b_0,b_1,\dotsc ,b_{d-1}$ subject to relations~\eqref{eq:fdHeckebrels}. \smallskip The \emph{finite Hecke algebra} $H_d$ is the $\Bbbk$-subalgebra of $\widehat{H}_d$ generated by either $T_1,\dotsc ,T_{d-1}$ subject to relations~\eqref{eq:affHeckeSnrels}, or $b_1,\dotsc ,b_{d-1}$ subject to relations~\eqref{eq:fdHeckebrels}. \subsection{Evaluation maps} \begin{defn}\label{defn:evaluationmap} For any $a\in\mathbbm{k}^{\times}$, there are two \emph{evaluation maps} $\ev_a, \ev'_a\colon\widehat{H}^{ext}_d\to H_d $. 
These are defined as the homomorphisms of $\mathbbm{k}$-algebras determined by \begin{align*} \ev_a(T_i) &= T_i, \quad \mathrm{for}\;1\leq i \leq d-1, \\ \ev_a(\rho) &= a T_1^{-1}\dotsm T_{d-1}^{-1} \end{align*} and \begin{align*} \ev'_a(T_i) &= T_i, \quad \mathrm{for}\;1\leq i \leq d-1, \\ \ev'_a(\rho) &= a T_1\dotsm T_{d-1}, \end{align*} respectively. \end{defn} The definition implies that \begin{equation*} \ev_a(T_0) = \ev_a(\rho^{-1}T_1\rho) = T_{d-1}\dotsm T_2T_1 T_2^{-1} \dotsm T_{d-1}^{-1} \end{equation*} and \begin{equation*} \ev'_a(T_0) = \ev'_a(\rho^{-1}T_1\rho) = T^{-1}_{d-1}\dotsm T^{-1}_2T_1 T_2 \dotsm T_{d-1}, \end{equation*} so the restrictions of $\ev_a$ and $\ev'_a$ to $\widehat{H}_d$ do not depend on $a$. In terms of the Kazhdan--Lusztig generators we have \begin{align*} \ev_a(b_i) &= b_i,\quad \mathrm{for}\; 1\leq i \leq d-1,\\ \ev_a(b_0) &= \ev_a(\rho^{-1}b_1\rho) = (b_{d-1}-q)\dotsm(b_{1}-q)b_1(b_1-q^{-1})\dotsm (b_{d-1}-q^{-1}) \end{align*} and \begin{align*} \ev'_a(b_i) &= b_i,\quad \mathrm{for}\; 1\leq i \leq d-1,\\ \ev'_a(b_0) &= \ev'_a(\rho^{-1}b_1\rho) = (b_{d-1}-q^{-1})\dotsm(b_{1}-q^{-1})b_1(b_1-q)\dotsm (b_{d-1}-q). \end{align*} Another way of saying this is that the evaluation maps do not preserve the bar involution, but rather satisfy \begin{equation}\label{eq:rel-evaluation-maps} \overline{\ev_a(x)}=\ev'_{\overline{a}}(\overline{x}), \end{equation} for any $x\in \widehat{H}^{ext}_d$ and $a=a(q)\in \mathbbm{k}^{\times}$. One can also define $\ev_a$ and $\ev'_a$ using a third presentation of $\widehat{H}^{ext}_d$, called the \emph{Bernstein presentation}. In that presentation, $\widehat{H}^{ext}_d$ is defined as some sort of semidirect product of $H_d$ and $\mathbbm{k}[Y_1^{\pm 1}\dotsc ,Y_d^{\pm 1}]$. However, there are several possible choices for the algebra of Laurent polynomials. In~\cite{elias2018}, two such choices are given with different variables: $y_1,\dotsc , y_d$ and $y_1^*,\dotsc ,y_d^*$ respectively. The interaction of $H_d$ and these polynomial algebras is defined by \begin{align*} T_i^{-1}y_iT_i^{-1} &= y_{i+1} \intertext{and} T_i y_i^* T_i &= y_{i+1}^*, \end{align*} respectively, for $i=1,\ldots, d-2$. The relation between these two Bernstein presentations and our first presentation of $\widehat{H}^{ext}_d$ is given by \begin{align*} y_1 &= \rho T_{d-1}\dotsm T_{2}T_1 , \\ y_i &= T_{i-1}^{-1}\dotsm T_2^{-1}T_1^{-1}\rho T_{d-1}\dotsm T_{i+1}T_i, \quad i=2, \ldots, d-1, \intertext{resp.} y_1^* &= \rho T_{d-1}^{-1}\dotsm T_{2}^{-1}T_1^{-1} , \\ y_i^* &= T_{i-1}\dotsm T_2T_1\rho T_{d-1}^{-1}\dotsm T_{i+1}^{-1}T_i^{-1}, \quad i=2,\ldots,d-1. \end{align*} It follows that the evaluation map $\ev_a\colon \widehat{H}^{ext}_d\to H_d$ is the unique homomorphism of algebras sending $T_i$ to $T_i$, for $i=1,\ldots, d-1$, and $y_1$ to $a$, while $\ev'_a\colon \widehat{H}^{ext}_d\to H_d$ is the unique homomorphism of algebras sending $T_i$ to $T_i$, for $i=1,\ldots, d-1$, and $y_1^*$ to $a$. The latter coincides with the flattening map $\flat$ in~\cite[\S 2.6]{elias2018} for $a=1$. We will categorify the evaluation map $\ev_a$ in~\fullref{sec:defevalfunctor}. The categorification of $\ev'_a$ is very similar and the relation between the two evaluation maps in~\eqref{eq:rel-evaluation-maps} also categorifies, since the categorification of the bar-involution is given by flipping diagrams upside-down, inverting the orientation of the differentials in complexes and changing the sign of homological and grading shifts. 
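As a small sanity check (not needed in the sequel), note that the quadratic relation gives $T_1b_1=(b_1-q)b_1=([2]-q)b_1=q^{-1}b_1$ and $b_1T_1^{-1}=b_1(b_1-q^{-1})=qb_1$, so $T_1b_1T_1^{-1}=b_1$ and the innermost conjugation in the displayed formula for $\ev_a(b_0)$ contributes no extra factor. Moreover, for $d=3$ the braid relation, rewritten as $T_2^{-1}T_1T_2=T_1T_2T_1^{-1}$, shows that $\ev_a$ is compatible with $\rho T_1\rho^{-1}=T_2$ for any $a\in\mathbbm{k}^{\times}$:
\[
\ev_a(\rho)\,T_1\,\ev_a(\rho)^{-1}
= T_1^{-1}T_2^{-1}\,T_1\,T_2T_1
= T_1^{-1}\bigl(T_1T_2T_1^{-1}\bigr)T_1
= T_2 = \ev_a(T_2),
\]
the scalars $a^{\pm 1}$ cancelling.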
\begin{rem} Some remarks about the various conventions in the literature are in order. We try to follow conventions close to those in~\cite{elias2018}. Our presentation of the extended affine Hecke algebra in~\fullref{sec:hecke} agrees with~\cite{elias2018}, as does the relation between the standard generators and the Kazhdan--Lusztig generators. Some authors use the inverse of $\rho$ in~\eqref{eq:affHeckeRrels}. Our choice of conventions implies the absence of certain powers of $q$ in the definition of the evaluation maps, in comparison with some of the sources in the literature. For more information on evaluation maps, see e.g.~\cite[\S5.1]{chari-pressley} and \cite[(5.0.2)]{dufu}. There are more possible evaluation maps, but we only consider these two in this paper. \end{rem} \subsection{Graham-Lehrer cell modules} Consider the $\widehat{A}_{d-1}$ Coxeter diagram $\widehat{\Gamma}_{d-1}$ with its vertices ordered as indicated (shown here for $d=8$): \[ \dynkin [ordering=Kac,label,scale=2] A[1]7 \] For any $z\in \mathbbm{k}^{\times}$, the {\em Graham-Lehrer cell module} $\widehat{M}_z$ of $\widehat{H}_d$ corresponding to $z$ and the partition $(d-1,1)$ has underlying vector space \[ \widehat{M}_z:=\mathrm{Span}_{\mathbbm{k}}\left\{m_i \mid i=0,\ldots, d-1\right\}, \] where the indices of the $m_i$ have to be taken modulo $d$ by convention, and the action of $\widehat{H}_d$ on $\widehat{M}_z$ is given by \begin{equation}\label{eq:GL} b_i m_j = \begin{cases} [2] m_i, & \text{if}\; j\equiv i \bmod d;\\ z m_1, & \text{if}\; i-1\equiv 0\equiv j \bmod d;\\ z^{-1} m_0, & \text{if}\; i\equiv 0\equiv j-1 \bmod d;\\ m_i, &\text{if}\; i\equiv j\pm 1\bmod d,\;\text{but none of the above};\\ 0, & \text{else}. \end{cases} \end{equation} It is easy to see that $\widehat{M}_z$ is isomorphic to $W_{d-2,\pm \sqrt{z}}(d)$ in~\cite[Definition 2.6]{Graham-Lehrer}, where $m_i$ is identified with the cup diagram on a cylinder with $d-2$ endpoints at the bottom, $d$ endpoints at the top and with only one cup, whose endpoints are $i$ and $i+1$, and further only straight lines. When $i\ne 0$, the whole diagram corresponding to $m_i$ lives on the front part of the cylinder, but when $i=0$, the cup of $m_0$ goes around the back of the cylinder. Note that we have used $\delta=[2]$, rather than $\delta=-[2]$. As remarked in~\cite[text above Corollary 2.9.1]{Graham-Lehrer}, $W_{d-2,\sqrt{z}}(d)$ and $W_{d-2, -\sqrt{z}}(d)$ are isomorphic, which is clear from the fact that both are isomorphic to $\widehat{M}_z$. The Graham-Lehrer cell module $\widehat{M}_z$ can be made into an $\widehat{H}^{ext}_d$-module, but not in a unique way. As a matter of fact, for each $\lambda\in \mathbbm{k}^{\times}$, we can define \begin{equation}\label{eq:GLext} \rho\, m_j = \lambda z^{\delta_{j,0}}m_{j+1}, \end{equation} for $j=0,\ldots,d-1$. It is easy to verify that this gives a well-defined action and we denote the corresponding Graham-Lehrer cell module of $\widehat{H}^{ext}_d$ by $\widehat{M}_{z,\lambda}$. Note that the restriction of $\widehat{M}_{z,\lambda}$ to $\widehat{H}_d$ is equal to $\widehat{M}_z$, for all $\lambda\in \mathbbm{k}^{\times}$, and that the action of $\rho^{d}$ on $\widehat{M}_{z,\lambda}$ is simply multiplication by $\lambda^d z$. 
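The relations~\eqref{eq:fdHeckebrels} and~\eqref{eq:affHeckebrels} can be checked numerically from~\eqref{eq:GL} and~\eqref{eq:GLext} for small $d$. The following Python sketch (our notation; generic numeric values for $q$, $z$ and $\lambda$) does this for $d=4$:
\begin{verbatim}
import numpy as np

d, q, z, lam = 4, 1.7, 0.6, 1.3
two = q + 1 / q                       # the quantum integer [2]

def B(i):
    # Matrix of b_i on M_z in the basis m_0, ..., m_{d-1} (eq:GL).
    M = np.zeros((d, d))
    for j in range(d):
        if j == i:
            M[i, j] = two
        elif i == 1 and j == 0:
            M[i, j] = z
        elif i == 0 and j == 1:
            M[i, j] = 1 / z
        elif (i - j) % d in (1, d - 1):   # i = j +- 1 (mod d)
            M[i, j] = 1.0
    return M

# rho m_j = lam * z^{delta_{j,0}} m_{j+1} (eq:GLext):
P = np.zeros((d, d))
for j in range(d):
    P[(j + 1) % d, j] = lam * (z if j == 0 else 1.0)

for i in range(d):
    j = (i + 1) % d                   # the neighbour of i
    assert np.allclose(B(i) @ B(i), two * B(i))          # quadratic
    assert np.allclose(B(i) @ B((i + 2) % d),            # distant
                       B((i + 2) % d) @ B(i))            # generators commute
    lhs = B(i) @ B(j) @ B(i) + B(j)
    rhs = B(j) @ B(i) @ B(j) + B(i)
    assert np.allclose(lhs, rhs)                         # braid-type relation
    assert np.allclose(P @ B(i) @ np.linalg.inv(P), B(j))   # rho-shift
\end{verbatim}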
Graham and Lehrer~\cite[Theorem 2.8]{Graham-Lehrer} defined a $\mathbbm{k}$-bilinear form \[ \langle \cdot , \cdot \rangle \colon \widehat{M}_z \otimes \widehat{M}_{z^{-1}} \to \mathbbm{k}, \] which in our notation is determined by \[ \langle m_i, m_j\rangle = \begin{cases} [2], & \text{if}\; j \equiv i \bmod d;\\ z, & \text{if}\; i\equiv 0\equiv j-1 \bmod d;\\ z^{-1}, & \text{if}\; i-1\equiv 0\equiv j \bmod d;\\ 1, &\text{if}\; i\equiv j\pm 1\bmod d,\;\text{but none of the above};\\ 0, & \text{else}. \end{cases} \] This induces a $\mathbbm{k}$-bilinear form on $\widehat{M}_{z,\lambda}\otimes \widehat{M}_{z^{-1},\lambda^{-1}}$, satisfying $\langle \rho^n b_w m_j, m_k\rangle = \langle m_j, b_w^{\star}\rho^{-n} m_k\rangle$, for any $w\in \widehat{W}$, $n\in \mathbb{Z}$ and $j,k=0,1,\ldots, d-1$, where $b_w^{\star}=b_{w^{-1}}$ is the dual Kazhdan-Lusztig basis element. Therefore, the radical of the bilinear form \[ \mathrm{rad}(\langle \cdot, \cdot\rangle)= \left\{m\in \widehat{M}_{z,\lambda}\mid \langle m,m'\rangle =0,\; \forall m'\in \widehat{M}_{z^{-1}, \lambda^{-1}} \right\} \] is an $\widehat{H}^{ext}_d$-submodule of $\widehat{M}_{z,\lambda}$. Graham and Lehrer~\cite[Theorem 2.8]{Graham-Lehrer} proved that the quotient module $\widehat{M}_{z}/\mathrm{rad}(\langle \cdot , \cdot \rangle)$ of $\widehat{H}_d$ is simple, and the same holds for the quotient module $\widehat{M}_{z,\lambda}/\mathrm{rad}(\langle \cdot , \cdot \rangle)$ of $\widehat{H}^{ext}_d$, of course. A straightforward calculation shows that the radical of the bilinear form on $\widehat{M}_{z,\lambda}$ is zero unless $z=(-q)^{\pm d}$ (independently of $\lambda$), in which case it has dimension one and is generated by \[ n_{\pm}:=\sum_{k=1}^{d} (-q)^{\mp k} m_k, \] where $m_d:=m_0$ by convention. Note that, when $z=(-q)^{\pm d}$, we have $\rho\, n_{\pm} =\lambda (-q)^{\pm 1} n_{\pm} $ and $b_i n_{\pm }=0$ for all $i=0,\ldots,d-1$. When $z=(-q)^{\pm d}$, put $\widehat{M}_{d, \lambda}^{\pm}:= \widehat{M}_{(-q)^{\pm d}, \lambda^{\pm 1}}$ and let \begin{equation}\label{eq:Lplusminus} \widehat{L}_{d, \lambda}^{\pm} :=\widehat{M}_{d, \lambda}^{\pm} /\langle n_{\pm} \rangle \end{equation} be the simple quotient $\widehat{H}^{ext}_d$-modules of dimension $d-1$. Finally, denote the restriction of these simple modules to $\widehat{H}_d$ by \begin{equation}\label{eq:Lplusminusnonext} \widehat{L}_{d}^{\pm} :=\widehat{M}_{d}^{\pm} /\langle n_{\pm} \rangle. \end{equation} As explained above, these restrictions do not depend on $\lambda\in \mathbbm{k}^{\times}$. \subsection{Evaluation modules} Let $M$ be a finite-dimensional $H_d$-module (over $\mathbbm{k}$). Recall that, for any $a\in \mathbbm{k}^{\times}$, there are two evaluation maps $\ev_a, \ev'_a\colon \widehat{H}^{ext}_d\to H_d$ (see Definition~\ref{defn:evaluationmap}). \begin{defn}\label{defn:evaluationmodule} For any $a\in \mathbbm{k}^{\times}$, the {\em evaluation modules} $M^{\mathrm{ev}_a}$ and $M^{\mathrm{ev}'_a}$ of $\widehat{H}^{ext}_d$ are the pull-backs of $M$ through $\ev_a$ and $\ev'_a$, respectively. \end{defn} The actions of $\widehat{H}^{ext}_d$ on $M^{\ev_a}$ and $M^{\ev'_a}$ can be computed using the explicit formulas in Definition~\ref{defn:evaluationmap} and below. In this paper, we only consider the case when $M:=M_d$ is the simple $H_d$-module corresponding to the partition $(d-1,1)$. There are several ways to define $M_d$ explicitly and the definition we choose here is tailor-made for categorification. 
Take $M_d:=\mathrm{span}_{\mathbbm{k}}\{m_1,\ldots,m_{d-1}\}$, with the action of $H_d$ being given by \begin{equation}\label{eq:M} b_i m_j = \begin{cases} [2] m_i, & \text{if}\; j=i;\\ m_i, & \text{if}\; j=i\pm 1;\\ 0, & \text{else}, \end{cases} \end{equation} for $i,j=1,\ldots, d-1$. It is easy to show that $M_d$ is simple, but this is well-known, so we leave it as an exercise to the reader. The action of the $T_i^{\pm 1}=b_i-q^{\pm 1}$ is also easy to give explicitly: \[ T_i^{\pm 1} m_j= \begin{cases} q^{\mp 1} m_i, & \text{if}\; j=i;\\ m_i - q^{\pm 1} m_j, & \text{if}\; j=i\pm 1;\\ -q^{\pm 1} m_j, & \text{else}. \end{cases} \] Note that, as vector spaces, $M^{\ev_a}_d=M^{\ev'_a}_d=M_d$, and the action of $b_i\in \widehat{H}^{ext}_d$, for $i=1,\ldots,d-1$, is the same as above because $\ev_a(b_i)=b_i$. A simple calculation now shows that \begin{equation}\label{eq:action-rho} \ev_a(\rho) m_j= aT_1^{-1} \cdots T_{d-1}^{-1} m_j = \begin{cases} a(-q)^{2-d} m_{j+1}, & \text{if}\; j=1,\ldots, d-2;\\ aq \sum_{k=1}^{d-1} (-q)^{1-k} m_{k}, & \text{if}\; j=d-1, \end{cases} \end{equation} and \begin{equation}\label{eq:action-prime-rho} \ev'_a(\rho) m_j= aT_1 \cdots T_{d-1} m_j = \begin{cases} a(-q)^{d-2} m_{j+1}, & \text{if}\; j=1,\ldots, d-2;\\ aq^{-1} \sum_{k=1}^{d-1} (-q)^{k-1} m_{k}, & \text{if}\; j=d-1. \end{cases} \end{equation} The actions of $b_0$ can then be computed using the equation $b_0=\rho^{-1} b_1 \rho$, but we omit the calculation because we will not need the result. Recall the simple quotients $\widehat{L}_{d, \lambda}^{\pm}$ of the Graham-Lehrer cell modules $\widehat{M}_{d, \lambda}^{\pm}$, defined in~\eqref{eq:Lplusminus}. We claim that $\widehat{L}_{d, \lambda}^{+} \cong M^{\mathrm{ev}_a}_d$ for $a=\lambda (-q)^{d-2}$. To show this, it suffices to compute the action of $\rho$ on $\widehat{L}_{d, \lambda}^{+}$ and compare it to~\eqref{eq:action-rho}. Let $\overline{m}_k$ be the image of $m_k$ under the projection $\widehat{M}_{d, \lambda}^{+}\to \widehat{L}_{d, \lambda}^{+}$, for $k=0,\ldots, d-1$. Then $\left\{\overline{m}_1,\ldots, \overline{m}_{d-1}\right\}$ is a basis of $\widehat{L}_{d, \lambda}^{+}$, because $\overline{m}_0= -\sum_{k=1}^{d-1} (-q)^{k} \overline{m}_{d-k}$. This implies that in $\widehat{L}_{d, \lambda}^{+}$ we have \[ \rho\, \overline{m}_j= \begin{cases} \lambda \overline{m}_{j+1}, & \text{if}\, j=1,\ldots,d-2; \\ - \lambda \sum_{k=1}^{d-1} (-q)^{k} \overline{m}_{d-k},& \text{if}\, j=d-1. \end{cases} \] This is indeed the same as in \eqref{eq:action-rho} because $aq=\lambda (-q)^{d-2}q =-\lambda (-q)^{d-1}$. Similarly, $\widehat{L}_{d, \lambda}^{-}\cong M^{\mathrm{ev}'_a}_d$ for $a=\lambda^{-1} (-q)^{2-d}$, as in $\widehat{L}_{d,\lambda}^{-}$ we have $\overline{m}_0= -\sum_{k=1}^{d-1} (-q)^{-k} \overline{m}_{d-k}$, so \[ \rho\, \overline{m}_j= \begin{cases} \lambda^{-1} \overline{m}_{j+1}, & \text{if}\, j=1,\ldots,d-2; \\ - \lambda^{-1} \sum_{k=1}^{d-1} (-q)^{-k} \overline{m}_{d-k},& \text{if}\, j=d-1, \end{cases} \] which is the same as in \eqref{eq:action-prime-rho} because $aq^{-1}= \lambda^{-1} (-q)^{2-d}q^{-1}=-\lambda^{-1}(-q)^{1-d}$. The two $\widehat{H}^{ext}_d$-modules $\widehat{L}_{d, \lambda}^{+}$ and $\widehat{L}_{d, \lambda}^{-}$ are actually {\em dual} to each other. 
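Before making this duality precise, we note that the identifications above are easy to sanity-check numerically. In the Python sketch below (our notation; generic numeric values for $q$ and $a$), we realize~\eqref{eq:M} as matrices for $d=4$, form $\ev_a(\rho)$, and confirm the first case of~\eqref{eq:action-rho} together with $\ev_a(\rho)\ev_a(b_i)\ev_a(\rho)^{-1}=\ev_a(b_{i+1})$ for $i\leq d-2$:
\begin{verbatim}
import numpy as np

d, q, a = 4, 1.7, 0.9
two = q + 1 / q
I = np.eye(d - 1)

def B(i):
    # b_i on M_d in the basis m_1, ..., m_{d-1} (eq:M).
    M = np.zeros((d - 1, d - 1))
    for j in range(1, d):
        if abs(i - j) <= 1:
            M[i - 1, j - 1] = two if j == i else 1.0
    return M

Tinv = [B(i) - I / q for i in range(1, d)]   # T_i^{-1} = b_i - q^{-1}
P = a * np.linalg.multi_dot(Tinv)            # ev_a(rho)

# ev_a(rho) m_j = a (-q)^{2-d} m_{j+1} for j = 1, ..., d-2
# (the j = d-1 column is the alternating sum in eq:action-rho):
for j in range(1, d - 1):
    assert np.allclose(P @ I[j - 1], a * (-q) ** (2 - d) * I[j])

# Conjugation by ev_a(rho) shifts the Kazhdan--Lusztig generators:
for i in range(1, d - 1):
    assert np.allclose(P @ B(i) @ np.linalg.inv(P), B(i + 1))
\end{verbatim}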
Note that we could also consider the radical defined by \[ \mathrm{rad}'(\langle \cdot, \cdot\rangle)= \left\{m'\in \widehat{M}_{z^{-1}, \lambda^{-1}}\mid \langle m,m'\rangle =0,\; \forall m\in \widehat{M}_{z, \lambda} \right\}, \] which is an $\widehat{H}^{ext}_d$-submodule of $\widehat{M}_{z^{-1}, \lambda^{-1}}$. As before, this radical is zero unless $z=(-q)^{\pm d}$. For these two values of $z$ and any value of $\lambda\in \mathbbm{k}^{\times}$, the two simple quotients of $\widehat{M}_{z^{-1},\lambda^{-1}}$ are isomorphic to $\widehat{L}_{d,\lambda}^{\mp}$ and the bilinear form descends to a perfect pairing \[ \widehat{L}_{d,\lambda}^{+} \otimes \widehat{L}_{d,\lambda}^{-} \to \mathbbm{k}. \] By the above, this is equivalent to a perfect pairing \[ M_d^{\ev_a}\otimes M_d^{\ev'_{a^{-1}}}\to \mathbbm{k}, \] for $a=\lambda (-q)^{d-2}$. \section{Rouquier complexes}\label{sec:Rouquier} For $\EuScript{A}$ a $\mathbb{C}$-linear, additive category, we write ${\EuScript{K}^b}(\EuScript{A})$ for the homotopy category of bounded complexes in $\EuScript{A}$. If $\EuScript{A}$ is monoidal, then the usual monoidal product of chain complexes equips ${\EuScript{K}^b}(\EuScript{A})$ with a monoidal structure as well. If $\EuScript{A}$ is graded, then ${\EuScript{K}^b}(\EuScript{A})$ is bigraded and we denote the shift inherited from $\EuScript{A}$ by $\langle \cdot \rangle$ and the homological shift by $[\cdot ]$. \begin{rem}\label{rem:shifts-etc2} Throughout this section, we sometimes state and prove diagrammatic equations in ${\EuScript{K}^b}(\EuScript{BS}_d)$, instead of ${\EuScript{K}^b}(\EuScript{S}_d)$. This makes no real difference in our case, as the differentials of the complexes in ${\EuScript{K}^b}(\EuScript{BS}_d)$ below are always given by matrices of homogeneous diagrams of the same degree, so they always give rise to objects in ${\EuScript{K}^b}(\EuScript{S}_d)$. See also Remark~\ref{rem:shifts-etc1}. \end{rem} Let $\EuScript{C}=\EuScript{S}_d$. For the simple reflection $s_i\in W$ the \emph{Rouquier complex} $\mathrm{T}_i:=\mathrm{T}_{s_i}\in {\EuScript{K}^b}(\EuScript{S}_d)$ is defined by \begin{equation*} \textcolor{blue}{\rT_i} := \underline{\textcolor{blue}{\rB_i}}\xra{ \mspace{15mu} \xy (0,0)*{ \tikzdiagc[yscale=-.3,xscale=.25]{ \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=0, tikzdot]{}; }}\endxy \mspace{15mu} } R\langle 1\rangle , \end{equation*} with $\mathrm{B}_i$ placed in homological degree zero (we always underline terms in homological degree zero). This complex is invertible in ${\EuScript{K}^b}(\EuScript{S}_d)$, with inverse given by \begin{equation*} \textcolor{blue}{\rT_i^{-1}} := R \langle -1 \rangle \xra{ \mspace{15mu} \xy (0,0)*{ \tikzdiagc[yscale=.3,xscale=.25]{ \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=0, tikzdot]{}; }}\endxy \mspace{15mu} }\underline{\textcolor{blue}{\rB_i}}, \end{equation*} as follows from the homotopy equivalences which we recall below. These complexes were introduced in~\cite{rouquier-braid} and categorify the usual generators of the braid group; in particular, they satisfy the braid relations up to homotopy equivalence~\cite[Theorem 3.2]{rouquier-braid}. By Matsumoto's theorem, this implies that, for any $w\in \mathfrak{S}_d$, the complex $\mathrm{T}_w$ can be defined as \[ \mathrm{T}_{w}:=\mathrm{T}_{i_1}\cdots \mathrm{T}_{i_{\ell}}, \] where $\underline{w}=s_{i_1}\cdots s_{i_{\ell}}$ is any rex of $w$ (i.e., up to homotopy equivalence, the complex does not depend on the choice of rex). 
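On the level of Grothendieck groups, invertibility is mirrored by a one-line computation, which we record as a reality check: with the usual convention that $\langle 1\rangle$ decategorifies to multiplication by $q$ and odd homological degrees contribute a sign, the classes of the two complexes above are $[\mathrm{T}_i]=b_i-q=T_i$ and $[\mathrm{T}_i^{-1}]=b_i-q^{-1}=T_i^{-1}$, and indeed
\[
(b_i-q)(b_i-q^{-1}) \;=\; b_i^2-[2]b_i+1 \;=\; 1
\]
in $H_d$, by the quadratic relation $b_i^2=[2]b_i$.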
In subsection~\ref{sec:diagshortcutsI}, we briefly recall the results on Rouquier complexes that are relevant for the definition of the evaluation functor. For more details, see~\cite{rouquier-braid}, \cite[\S3]{elias-krasner} and \cite[Chapter 19]{e-m-t-w}. In Subsection~\ref{sec:diagshortcutsII}, we introduce a special Rouquier complex, denoted $\mathrm{T}_{\rho}$, and develop a diagrammatic calculus for morphisms in ${\EuScript{K}^b}(\EuScript{S}_d)$ whose source and/or target contain tensor powers of $\mathrm{T}_{\rho}$ and $\mathrm{T}_{\rho}^{-1}$. To the best of our knowledge, this extension of Soergel calculus has not appeared in the literature before. \subsection{Some diagrammatic shortcuts I: general Rouquier complexes}\label{sec:diagshortcutsI} For $i\in \{1,\ldots, d-1\}$, let $\phi_i\colon\mathrm{T}_i^{-1}\mathrm{T}_i\to R$ denote the homotopy equivalence (where $1$ stands for the identity map) \begin{equation}\label{eq:phiRRm} \begin{tikzpicture}[anchorbase,scale=0.7] \node at (0,-3.5) {$\underline{R}$}; \draw[thick,crossline,<->] (0,-3.0) to (0,-1.5);\node at (.2,-2.25) {\tiny $1$}; \draw[thick,->] (-.5,-3.0) .. controls (-1,-2.25) and (-1.25,-.25) .. (-0.65,.5); \draw[thick,<-] ( .5,-3.0) .. controls ( 1,-2.25) and ( 1.25,-.25) .. ( 0.65,.5); \node at (-4,0) {$\textcolor{blue}{\rB_i}\langle -1\rangle$}; \draw[thick,->] (-3, .45) to (-1.25, 1.2); \draw[thick,<-] (-3, .15) to (-1.25, .9); \draw[thick,crossline,->] (-3,-.3) to (-0.5,-1); \node at (0, 1) {$\underline{\textcolor{blue}{\rB_i} \textcolor{blue}{\rB_i}}$}; \node at (0,0) {$\oplus$}; \node at (0,-1) {$\underline{R}$}; \draw[thick,->] (1.25, 1.2) to (3.25, .45); \draw[thick,<-] (1.25, .9) to (3.25, .15); \draw[thick,crossline,->] ( .5,-1) to (3.25,-.3); \node at (4,0) {$\textcolor{blue}{\rB_i}\langle 1\rangle$}; \draw[ultra thick,blue] (-2,-.9) -- (-2, -1.4)node[pos=0, tikzdot]{}; \node at (1.75,-1.225) {-};\draw[ultra thick,blue] (2,-.9) -- (2, -1.4)node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (-2.5,1.35) -- (-2.5, 1.75)node[pos=0, tikzdot]{};\draw[ultra thick,blue] (-2.1,1) -- (-2.1, 1.75); \draw[ultra thick,blue] (2.1,1) -- (2.1, 1.75);\draw[ultra thick,blue] (2.5,1) -- (2.5, 1.4)node[pos=1, tikzdot]{}; \begin{scope}[shift={(.2,.2)},yscale=-1] \draw[ultra thick,blue] (-2,0) -- (-2, -.25);\draw[ultra thick,blue] (-2,0) -- (-2.25, .2);\draw[ultra thick,blue] (-2,0) -- (-1.75, .2); \end{scope} \begin{scope}[shift={(4,.2)}] \draw[ultra thick,blue] (-2,0) -- (-2, -.25);\draw[ultra thick,blue] (-2,0) -- (-2.25, .2);\draw[ultra thick,blue] (-2,0) -- (-1.75, .2); \end{scope} \node at (1.4,-2) { -\! \xy (0,1)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,blue] ( .9,-1.2) .. controls (1.2,-.35) and (1.8,-.35) .. (2.1,-1.2); }}\endxy }; \node at (-1.4,-2.2) { \xy (0,0)*{ \tikzdiagc[yscale=-0.6,xscale=.35]{ \draw[ultra thick,blue] ( .9,-1.2) .. controls (1.2,-.35) and (1.8,-.35) .. (2.1,-1.2); }}\endxy }; \end{tikzpicture} \end{equation} and $\psi_i\colon \mathrm{T}_i\mathrm{T}_i^{-1}\to R$ the analogous homotopy equivalence \begin{equation}\label{eq:psiRRm} \begin{tikzpicture}[anchorbase,scale=0.7] \node at (0,-3.5) {$\underline{R}$}; \draw[thick,crossline,<->] (0,-3.0) to (0,-1.5);\node at (.2,-2.25) {\tiny $1$}; \draw[thick,->] (-.5,-3.0) .. controls (-1,-2.25) and (-1.25,-.25) .. (-0.65,.5); \draw[thick,<-] ( .5,-3.0) .. controls ( 1,-2.25) and ( 1.25,-.25) .. 
( 0.65,.5); \node at (-4,0) {$\textcolor{blue}{\rB_i}\langle -1\rangle$}; \draw[thick,->] (-3, .45) to (-1.25, 1.2); \draw[thick,<-] (-3, .15) to (-1.25, .9); \draw[thick,crossline,->] (-3,-.3) to (-0.5,-1); \node at (0, 1) {$\underline{\textcolor{blue}{\rB_i} \textcolor{blue}{\rB_i}}$}; \node at (0,0) {$\oplus$}; \node at (0,-1) {$\underline{R}$}; \draw[thick,->] (1.25, 1.2) to (3.25, .45); \draw[thick,<-] (1.25, .9) to (3.25, .15); \draw[thick,crossline,->] ( .5,-1) to (3.25,-.3); \node at (4,0) {$\textcolor{blue}{\rB_i}\langle 1\rangle$}; \draw[ultra thick,blue] (-2,-.9) -- (-2, -1.4)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (2,-.9) -- (2, -1.4)node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (-2.1,1.35) -- (-2.1, 1.75)node[pos=0, tikzdot]{};\draw[ultra thick,blue] (-2.5,1) -- (-2.5, 1.75); \node at (1.75,1.225) {-}; \draw[ultra thick,blue] (2.5,1) -- (2.5, 1.75); \draw[ultra thick,blue] (2.1,1) -- (2.1, 1.4)node[pos=1, tikzdot]{}; \begin{scope}[shift={(.2,.2)},yscale=-1] \draw[ultra thick,blue] (-2,0) -- (-2, -.25);\draw[ultra thick,blue] (-2,0) -- (-2.25, .2);\draw[ultra thick,blue] (-2,0) -- (-1.75, .2); \end{scope} \begin{scope}[shift={(3.9,.2)}] \node at (-2.4,-.125) {-}; \draw[ultra thick,blue] (-2,0) -- (-2, -.25);\draw[ultra thick,blue] (-2,0) -- (-2.25, .2);\draw[ultra thick,blue] (-2,0) -- (-1.75, .2); \end{scope} \node at (1.4,-2) {-\! \xy (0,1)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,blue] ( .9,-1.2) .. controls (1.2,-.35) and (1.8,-.35) .. (2.1,-1.2); }}\endxy }; \node at (-1.4,-2.2) { \xy (0,0)*{ \tikzdiagc[yscale=-0.6,xscale=.35]{ \draw[ultra thick,blue] ( .9,-1.2) .. controls (1.2,-.35) and (1.8,-.35) .. (2.1,-1.2); }}\endxy }; \end{tikzpicture} \end{equation} in ${\EuScript{K}^b}(\EuScript{S}_d)$. These maps are well-known, see e.g.~\cite[\S3]{elias-krasner}. Let further $\eta_{i,\pm}\colon\mathrm{T}_i^{\pm 1}\mathrm{T}_i^{\mp 1}\to \mathrm{T}_i^{\pm 1} R \mathrm{T}_i^{\mp 1}$ be the canonical isomorphisms ${\EuScript{K}^b}(\EuScript{S}_d)$, for any $i\in \{1,\ldots, d-1\}$, both given by $a b\mapsto a 1 b$. To simplify notation, we write $1^m$ for $1 \dotsm 1$ ($m$ times) in the sequel. The following can be checked directly. 
\begin{lem}\label{lem:snakeR} For any $i\in \{1,\ldots,d-1\}$, the composite maps \begin{gather*} \textcolor{blue}{\rT_i}\xra{a\mapsto 1 a} R \textcolor{blue}{\rT_i}\xra{\psi_i^{-1} \id_{\mathrm{T}_i}} \textcolor{blue}{\rT_i}\textcolor{blue}{\rT_i^{-1}}\textcolor{blue}{\rT_i}\xra{\id_{\mathrm{T}_i} \phi_i} \textcolor{blue}{\rT_i} R \xra{a b\mapsto ab}\textcolor{blue}{\rT_i} , \\ \textcolor{blue}{\rT_i}\xra{a\mapsto a 1} \textcolor{blue}{\rT_i} R \xra{\id_{\mathrm{T}_i}\phi_i^{-1}} \textcolor{blue}{\rT_i}\textcolor{blue}{\rT_i^{-1}}\textcolor{blue}{\rT_i}\xra{\psi_i \id_{\mathrm{T}_i}} R\textcolor{blue}{\rT_i} \xra{b a\mapsto ba}\textcolor{blue}{\rT_i} , \end{gather*} are both equal to $\id_{\mathrm{T}_i}$ in ${\EuScript{K}^b}(\EuScript{S}_d)$, and the composite maps \begin{gather*} \textcolor{blue}{\rT_i^{-1}}\xra{a\mapsto 1 a} R \textcolor{blue}{\rT_i^{-1}}\xra{\phi_i^{-1} \id_{\mathrm{T}_i}^{-1}} \textcolor{blue}{\rT_i^{-1}}\textcolor{blue}{\rT_i}\textcolor{blue}{\rT_i^{-1}}\xra{\id_{\mathrm{T}_i}^{-1} \psi_i} \textcolor{blue}{\rT_i^{-1}} R \xra{a b\mapsto ab}\textcolor{blue}{\rT_i^{-1}} , \\ \textcolor{blue}{\rT_i^{-1}}\xra{a\mapsto a 1} \textcolor{blue}{\rT_i^{-1}} R \xra{\id_{\mathrm{T}_i}^{-1}\psi_i^{-1}} \textcolor{blue}{\rT_i^{-1}}\textcolor{blue}{\rT_i}\textcolor{blue}{\rT_i^{-1}}\xra{\phi_i \id_{\mathrm{T}_i}^{-1}} R\textcolor{blue}{\rT_i^{-1}}\xra{b a\mapsto ba}\textcolor{blue}{\rT_i^{-1}} , \end{gather*} are both equal to $\id_{\mathrm{T}_i}^{-1}$ in ${\EuScript{K}^b}(\EuScript{S}_d)$. \end{lem} We now introduce the diagrammatics for the maps involving $\mathrm{T}_i^{\pm 1}$ that will be needed in the sequel. For any $i\in \{1,\dotsc,d-1\}$, we depict the identity morphisms of $\mathrm{T}_i^{\pm 1}$ as \[ \id_{\textcolor{blue}{\rT_i}} := \xy (0,-2.25)*{ \tikzdiagc[scale=.9]{ \draw[very thick,densely dotted,blue,-to] (.9,-.5)node[below]{\tiny $i$} -- (.9,.5); }}\endxy \mspace{60mu}\text{and}\mspace{60mu} \id_{\textcolor{blue}{\rT_i^{-1}}} := \xy (0,-2.25)*{ \tikzdiagc[scale=.9]{ \draw[very thick,densely dotted,blue,to-] (.9,-.5)node[below]{\tiny $i$} -- (.9,.5); }}\endxy \] The degree zero homotopy equivalences in \eqref{eq:phiRRm} and~\eqref{eq:psiRRm} (which are the units and counits of left and right adjunction of $\textcolor{blue}{\rT_i}$ and $\textcolor{blue}{\rT_i^{-1}}$) are then depicted as \[ \xy (0,1.1)*{ \tikzdiagc[scale=.8]{ \draw[very thick,densely dotted,blue,to-] (-.5,0) to [out=270,in=180] (0,-.7) to [out=0,in=-90] (.5,0)node[above]{\tiny $i$}; }}\endxy ,\mspace{20mu} \xy (0,1.1)*{ \tikzdiagc[scale=.8,xscale=-1]{ \draw[very thick,densely dotted,blue,to-] (-.5,0) to [out=270,in=180] (0,-.7) to [out=0,in=-90] (.5,0)node[above]{\tiny $i$}; }}\endxy \ \ ,\mspace{24mu} \xy (0,-2.5)*{ \tikzdiagc[scale=.8,yscale=-1]{ \draw[very thick,densely dotted,blue,to-] (-.5,0) to [out=270,in=180] (0,-.7) to [out=0,in=-90] (.5,0)node[below]{\tiny $i$}; }}\endxy \mspace{20mu}\text{and}\mspace{20mu} \xy (0,-2.5)*{ \tikzdiagc[scale=.8,xscale=-1,yscale=-1]{ \draw[very thick,densely dotted,blue,to-] (-.5,0) to [out=270,in=180] (0,-.7) to [out=0,in=-90] (.5,0)node[below]{\tiny $i$}; }}\endxy \] and the above remarks translate into the following diagrammatic relations. 
\begin{lem}\label{lem:diagsbraid} For any $i\in \{1,\ldots,d-1\}$, we have the following relations between morphisms of ${\EuScript{K}^b}(\EuScript{S}_d)$: \begingroup\allowdisplaybreaks \begin{gather} \xy (0,0)*{ \tikzdiagc[scale=.8]{ \draw[very thick,densely dotted,blue] (0,0) circle (.65);\draw [very thick,densely dotted,blue,-to] (.65,0) --(.65,0); \node[blue] at (.65,-.65) {\tiny $i$}; }}\endxy \ = 1 =\ \xy (0,-1)*{ \tikzdiagc[scale=.8,xscale=-1]{ \draw[very thick,densely dotted,blue] (0,0) circle (.65);\draw [very thick,densely dotted,blue,-to] (.65,0) --(.65,0); \node[blue] at (-.65,-.65) {\tiny $i$}; }}\endxy \\[1ex] \xy (0,-1)*{ \tikzdiagc[yscale=0.9]{ \draw[very thick,densely dotted,blue,-to] (.5,-.75)node[below]{\tiny $i$} -- (.5,.75); \draw[very thick,densely dotted,blue,to-] (-.5,-.75)node[below]{\tiny $i$} -- (-.5,.75); }}\endxy\ = \xy (0,-2)*{ \tikzdiagc[yscale=0.9]{ \draw[very thick,densely dotted,blue,-to] (1,.75) .. controls (1.2,-.05) and (1.8,-.05) .. (2,.75); \draw[very thick,densely dotted,blue,to-] (1,-.75)node[below]{\tiny $i$} .. controls (1.2,.05) and (1.8,.05) .. (2,.-.75); }}\endxy \mspace{10mu}, \mspace{80mu} \xy (0,-2)*{ \tikzdiagc[yscale=.9,xscale=-1]{ \draw[very thick,densely dotted,blue,-to] (.5,-.75)node[below]{\tiny $i$} -- (.5,.75); \draw[very thick,densely dotted,blue,to-] (-.5,-.75)node[below]{\tiny $i$} -- (-.5,.75); }}\endxy\ = \xy (0,-2)*{ \tikzdiagc[yscale=.9,xscale=-1]{ \draw[very thick,densely dotted,blue,-to] (1,.75) .. controls (1.2,-.05) and (1.8,-.05) .. (2,.75); \draw[very thick,densely dotted,blue,to-] (1,-.75)node[below]{\tiny $i$} .. controls (1.2,.05) and (1.8,.05) .. (2,.-.75); }}\endxy \\[1ex] \xy (0,-2)*{ \begin{tikzpicture}[scale=1.5] \draw[very thick,densely dotted,blue] (0,-.55)node[below]{\tiny $i$} -- (0,0); \draw[very thick,densely dotted,blue,-to] (0,0) to [out=90,in=180] (.25,.5) to [out=0,in=90] (.5,0); \draw[very thick,densely dotted,blue] (.5,0) to [out=270,in=180] (.75,-.5) to [out=0,in=270] (1,0); \draw[very thick,densely dotted,blue] (1,0) -- (1,0.75); \end{tikzpicture} }\endxy = \xy ( 0,-2.5)*{\begin{tikzpicture}[scale=1.5] \draw[very thick,densely dotted,blue,-to] (0,-.55)node[below]{\tiny $i$} to (0,.75); \end{tikzpicture} }\endxy = \xy (0,-2.5)*{ \begin{tikzpicture}[scale=1.5,xscale=-1] \draw[very thick,densely dotted,blue] (0,-.55)node[below]{\tiny $i$} -- (0,0); \draw[very thick,densely dotted,blue,-to] (0,0) to [out=90,in=180] (.25,.5) to [out=0,in=90] (.5,0); \draw[very thick,densely dotted,blue] (.5,0) to [out=270,in=180] (.75,-.5) to [out=0,in=270] (1,0); \draw[very thick,densely dotted,blue] (1,0) -- (1,0.75); \end{tikzpicture} }\endxy \mspace{60mu} \xy (0,-2.5)*{ \begin{tikzpicture}[scale=1.5] \draw[very thick,densely dotted,blue] (0,-.55)node[below]{\tiny $i$} -- (0,0); \draw[very thick,densely dotted,blue,to-] (0,0) to [out=90,in=180] (.25,.5) to [out=0,in=90] (.5,0); \draw[very thick,densely dotted,blue] (.5,0) to [out=270,in=180] (.75,-.5) to [out=0,in=270] (1,0); \draw[very thick,densely dotted,blue] (1,0) -- (1,0.75); \end{tikzpicture} }\endxy = \xy ( 0,-2.5)*{\begin{tikzpicture}[scale=1.5] \draw[very thick,densely dotted,blue,to-] (0,-.55)node[below]{\tiny $i$} to (0,.75); \end{tikzpicture} }\endxy = \xy (0,-2.5)*{ \begin{tikzpicture}[scale=1.5,xscale=-1] \draw[very thick,densely dotted,blue] (0,-.55)node[below]{\tiny $i$} -- (0,0); \draw[very thick,densely dotted,blue,to-] (0,0) to [out=90,in=180] (.25,.5) to [out=0,in=90] (.5,0); \draw[very thick,densely dotted,blue] (.5,0) to [out=270,in=180] 
(.75,-.5) to [out=0,in=270] (1,0); \draw[very thick,densely dotted,blue] (1,0) -- (1,0.75); \end{tikzpicture} }\endxy \end{gather} \endgroup \end{lem} \begin{rem}\label{rem:Hquadratic} Just for the record, we give some further results about Rouquier complexes, all well-known to experts. \begin{itemize} \item For any $i\in \{1,\ldots,d-2\}$, the isomorphism between $\mathrm{T}_i\mathrm{T}_{i+1}\mathrm{T}_i$ and $\mathrm{T}_{i+1}\mathrm{T}_i\mathrm{T}_{i+1}$ in ${\EuScript{K}^b}(\EuScript{S}_d)$ (see~\cite[\S3]{elias-krasner} for the maps) can be represented by the degree zero diagrams \[ \xy (0,0)*{ \tikzdiagc[scale=.55]{ \draw[very thick,densely dotted,blue,-to] (-1,-1)node[below]{\tiny $i$} -- (-.5,-.5);\draw[very thick,densely dotted,blue] (-.5,-.5) -- (0,0); \draw[very thick,densely dotted,blue,-to] ( 1,-1)node[below]{\tiny $i$} -- ( .5,-.5);\draw[very thick,densely dotted,blue] ( .5,-.5) -- (0,0); \draw[very thick,densely dotted,myred,-to] (0,-1)node[below]{\tiny $i+1$} -- (0,-.5);\draw[very thick,densely dotted,myred] (0,-.5) -- (0,0); \draw[very thick,densely dotted,myred,-to] (0,0) -- (-1,1); \draw[very thick,densely dotted,myred,-to] (0,0) -- ( 1,1); \draw[very thick,densely dotted,blue,-to] (0,0) -- (0,1); }}\endxy \mspace{60mu} \xy (0,0)*{ \tikzdiagc[scale=.55]{ \draw[very thick,densely dotted,myred,-to] (-1,-1)node[below]{\tiny $i+1$} -- (-.5,-.5);\draw[very thick,densely dotted,myred] (-.5,-.5) -- (0,0); \draw[very thick,densely dotted,myred,-to] ( 1,-1)node[below]{\tiny $i+1$} -- ( .5,-.5);\draw[very thick,densely dotted,myred] ( .5,-.5) -- (0,0); \draw[very thick,densely dotted,blue,-to] (0,-1)node[below]{\tiny $i$} -- (0,-.5);\draw[very thick,blue] (0,-.5) -- (0,0); \draw[very thick,densely dotted,blue,-to] (0,0) -- (-1,1); \draw[very thick,densely dotted,blue,-to] (0,0) -- ( 1,1); \draw[very thick,densely dotted,myred,-to] (0,0) -- (0,1); }}\endxy \] satisfying the relations \[ \xy (0,.05)*{ \tikzdiagc[scale=.45,yscale=1]{ \draw[very thick,densely dotted,myred,-to] (-1,-2)node[below]{\tiny $i+1$} -- (-1,2)node[above]{\phantom{\tiny $i+1$}}; \draw[very thick,densely dotted,myred,-to] ( 1,-2)node[below]{\tiny $\ i+1$} -- ( 1,2); \draw[very thick,densely dotted,blue,-to] (0,-2)node[below]{\tiny $i$} -- (0,2); }}\endxy = \xy (0,.05)*{ \tikzdiagc[scale=.45,yscale=1]{ \draw[very thick,densely dotted,myred] (0,-1) -- (0,1); \draw[very thick,densely dotted,myred,-to] (0,1) -- (-1,2)node[above]{\phantom{\tiny $i+1$}}; \draw[very thick,densely dotted,myred,-to] (0,1) -- (1,2); \draw[very thick,densely dotted,myred] (0,-1) -- (-1,-2)node[below]{\tiny $i+1$}; \draw[very thick,densely dotted,myred] (0,-1) -- (1,-2)node[below]{\tiny $\ i+1$} ; \draw[very thick,densely dotted,blue,-to] (0,1) -- (0,2); \draw[very thick,densely dotted,blue] (0,-1) -- (0,-2)node[below]{\tiny $i$}; \draw[very thick,densely dotted,blue] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[very thick,densely dotted,blue] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. 
(0,-1); }}\endxy \mspace{70mu} \xy (0,.05)*{ \tikzdiagc[scale=.45,yscale=1]{ \draw[very thick,densely dotted,blue,-to] (-1,-2)node[below]{\tiny $i$} -- (-1,2)node[above]{\phantom{\tiny $i$}}; \draw[very thick,densely dotted,blue,-to] ( 1,-2)node[below]{\tiny $i$} -- ( 1,2); \draw[very thick,densely dotted,myred,-to] (0,-2)node[below]{\tiny $i+1$} -- (0,2); }}\endxy = \xy (0,.05)*{ \tikzdiagc[scale=.45,yscale=1]{ \draw[very thick,densely dotted,blue] (0,-1) -- (0,1); \draw[very thick,densely dotted,blue,-to] (0,1) -- (-1,2)node[above]{\phantom{\tiny $i$}}; \draw[very thick,densely dotted,blue,-to] (0,1) -- (1,2); \draw[very thick,densely dotted,blue] (0,-1) -- (-1,-2)node[below]{\tiny $i$}; \draw[very thick,densely dotted,blue] (0,-1) -- (1,-2)node[below]{\tiny $i$}; \draw[very thick,densely dotted,myred,-to] (0,1) -- (0,2); \draw[very thick,densely dotted,myred] (0,-1) -- (0,-2)node[below]{\tiny $i+1$}; \draw[very thick,densely dotted,myred] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[very thick,densely dotted,myred] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); }}\endxy \]
There are similar diagrams and relations for braid moves involving the inverses of Rouquier complexes, see e.g.~\cite[\S5]{ew-braids}. In~\fullref{rem:RRmRR} below, we introduce some new diagrams.
\item For any $i\in \{1,\ldots,d-1\}$, the cone of the map $f\colon \mathrm{T}_i\to \mathrm{T}_i^{-1}$, which is the identity on $\mathrm{B}_i$ and zero everywhere else, is isomorphic to \[ \underline{R\langle -1\rangle} \xra{ \mspace{25mu} \xy (0,0)*{ \tikzdiagc[yscale=.3,xscale=.25]{ \draw[ultra thick,blue] (-1,-.1) -- (-1, 1.1)node[pos=0,tikzdot]{}node[pos=1,tikzdot]{}; \node[blue] at (-.25,0) {\tiny $i$}; }}\endxy \mspace{10mu} } R\langle 1\rangle \] in ${\EuScript{K}^b}(\EuScript{S}_d)$. The distinguished triangle \[ \textcolor{blue}{\rT_i} \xra{\, f\,} \textcolor{blue}{\rT_i^{-1}} \to {\sf Cone}(f) \to \textcolor{blue}{\rT_i}[1] \] categorifies the quadratic relation in the Hecke algebra $H_d$ (see the decategorified check below). \end{itemize} \end{rem}
The remaining lemmas of this subsection are probably all known to experts, but we were not able to find any concrete references for them, so we decided to give all relevant homotopy equivalences explicitly. Further, to keep the notation as simple as possible, we state some equations in ${\EuScript{K}^b}(\EuScript{BS}_d)$. Being homogeneous, they also give rise to equations between morphisms in ${\EuScript{K}^b}(\EuScript{S}_d)$, as explained in Remark~\ref{rem:shifts-etc2}.
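As a quick decategorified sanity check of the last item in \fullref{rem:Hquadratic} (a sketch, under the assumed normalization $[M\langle 1\rangle]=v[M]$ for classes in the split Grothendieck group, so that taking Euler characteristics gives $[\textcolor{blue}{\rT_i}]=b_i-v$ and $[\textcolor{blue}{\rT_i^{-1}}]=b_i-v^{-1}$, where $b_i:=[\textcolor{blue}{\rB_i}]$), we have
\[
[\textcolor{blue}{\rT_i}]\,[\textcolor{blue}{\rT_i^{-1}}]=(b_i-v)(b_i-v^{-1})=b_i^2-(v+v^{-1})b_i+1=1,
\]
using $b_i^2=(v+v^{-1})b_i$, the decategorification of $\mathrm{B}_i\mathrm{B}_i\cong \mathrm{B}_i\langle 1\rangle\oplus\mathrm{B}_i\langle -1\rangle$; equivalently, $[\textcolor{blue}{\rT_i}]$ satisfies the quadratic relation of $H_d$.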
\begin{lem}\label{lem:dumbbell-slide} For any $i,j\in \{1,\ldots, d-1\}$ such that $j=i\pm 1$, the following {\em dumbbell-slide} relations hold in ${\EuScript{K}^b}(\EuScript{BS}_d)$: \begin{gather*} \xy (0,-1)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted, blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \,=\, -\, \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted, blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \\ \xy (0,-1)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,myred] (0,-.35)node[below]{\tiny $j$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted,blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \,=\, \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,myred] (0,-.35)node[below]{\tiny $j$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted,blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \,+\, \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted,blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \end{gather*} \end{lem} \begin{proof} We are actually going to prove the equations in ${\EuScript{K}^b}(\EuScript{S}_d)$, fixing the shifts of the objects. For the first equation, consider \[ \begin{tikzpicture}[anchorbase,scale=0.7] \node at (-1.5,0) {\small $\textcolor{blue}{\rT_i} \colon$}; \draw[thick, ->] (-1.5,0.5) -- (-1.5,4.5); \node at (1.5,0) {\small $\underline{\textcolor{blue}{\rB_i}}$}; \node at (8.5,0) {\small $R\langle 1\rangle$}; \draw[thick,->] (2.5,0) to (7.5,0); \node at (4.85,0.5) { \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.85)node[pos=1, tikzdot]{}; }}; \draw[thick,->] (1.5,.5) to (1.5,4.5);\node at (0.25,2.4) {\xy (0,1)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy \ $+$\ \xy (0,1)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy}; \draw[thick,->] (8.7,0.5) to (8.7,4.5);\node at (9.25,2.4) {$2\ \xy (0,0)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.85)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; } }\endxy$}; \draw[thick,->] (8,1.1) to (2,4.5);\node at (5.2,3.3) {$2\ \xy (0,0)*{ \tikzdiagc[yscale=0.55,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.85)node[pos=0, tikzdot]{}; } }\endxy$}; \begin{scope}[shift={(0,5)}] \node at (-1.5,0) {\small $\textcolor{blue}{\rT_i}\langle 2\rangle \colon$}; \node at (1.5,0) {\small $\underline{\textcolor{blue}{\rB_i}\langle 2\rangle}$}; \node at (8.5,0) {\small $\underline{R\langle 3\rangle}$}; \draw[thick,->] (2.5,0) to (7.5,0); \node at (4.85,0.55) {\tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.85)node[pos=1, tikzdot]{}; }}; \end{scope} \end{tikzpicture} \] The vertical arrows correspond to the map of complexes represented by \begin{gather*} \xy (0,-1)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] 
(0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted, blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \,+\, \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted, blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \end{gather*} and the diagonal arrow is a homotopy. Using~\eqref{eq:relHatSlast}, we see that the map of complexes is null-homotopic. For the second equation, consider \[ \begin{tikzpicture}[anchorbase,scale=0.7] \node at (-3,0) {\small $\textcolor{myred}{\mathrm{T}_{j}} \colon$}; \draw[thick, ->] (-3,0.5) -- (-3,4.5); \node at (1.5,0) {\small $\underline{\textcolor{myred}{\mathrm{B}_{j}}}$}; \node at (8.5,0) {\small $R\langle 1\rangle$}; \draw[thick,->] (2.5,0) to (7.5,0); \node at (4.85,0.5) { \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.85)node[pos=1, tikzdot]{}; }}; \draw[thick,->] (1.5,.5) to (1.5,4.5);\node at (-0.5,2.4) {\xy (0,1)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,myred] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy \ $-$ \ \xy (0,1)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,myred] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy \ $-$ \ \xy (0,1)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy }; \draw[thick,->] (8.7,0.5) to (8.7,4.5);\node at (9.25,2.4) {$-\ \tikzdiagc[yscale=0.55,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.85)node[pos=0, tikzdot]{} node[pos=1,tikzdot]{}; \node at (1.5,-.2) {}; }$}; \draw[thick,->] (8,1.1) to (2,4.5);\node at (5,3.3) {$- \tikzdiagc[yscale=0.55,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.85)node[pos=0, tikzdot]{}; \node at (1.5,-.2) {}; } $}; \begin{scope}[shift={(0,5)}] \node at (-3,0) {\small $\textcolor{myred}{\mathrm{T}_{j}}\langle 2\rangle \colon$}; \node at (1.5,0) {\small $\underline{\textcolor{myred}{\mathrm{B}_{j}}\langle 2\rangle}$}; \node at (8.5,0) {\small $\underline{R\langle 3\rangle}$}; \draw[thick,->] (2.5,0) to (7.5,0); \node at (4.85,0.55) {\tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.85)node[pos=1, tikzdot]{}; }}; \end{scope} \end{tikzpicture} \] The vertical arrows correspond to the map of complexes represented by \begin{gather*} \xy (0,-1.5)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,myred] (0,-.35)node[below]{\tiny $j$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted,blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \,-\, \xy (0,-2.5)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,myred] (0,-.35)node[below]{\tiny $j$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted,blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \,-\, \xy (0,-2.5)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,densely dotted,blue,-to] (.6,-1)node[below]{\tiny $i$}-- (.6, 1); }}\endxy \end{gather*} and the diagonal arrow is a homotopy. 
Using~\eqref{eq:forcingdumbel-i-iminus}, we see that the map of complexes is null-homotopic. \end{proof} \begin{lem} There is an isomorphism \begin{equation*} \textcolor{blue}{\rT_i^{\pm 1}} \textcolor{blue}{\rB_i}\textcolor{blue}{\rT_i^{\mp 1}} \cong \textcolor{blue}{\rB_i} \end{equation*} in ${\EuScript{K}^b}(\EuScript{S}_d)$. \end{lem} \begin{proof} Recall that $\mathrm{B}_i\mathrm{B}_i\cong \mathrm{B}_i\langle 1\rangle \oplus \mathrm{B}_i\langle -1\rangle$ in $\EuScript{S}_d$. Using that isomorphism, it is easy to see that $\mathrm{T}_i \mathrm{B}_i\cong \mathrm{B}_i\langle -1\rangle$ in ${\EuScript{K}^b}(\EuScript{S}_d)$, with the homotopy equivalence between the complexes being given by \[ \begin{tikzpicture}[anchorbase,scale=0.7] \node at (-4,0) {\small $\textcolor{blue}{\rT_i} \textcolor{blue}{\rB_i} \colon$}; \draw[thick, ->] (-4.25,-0.5) -- (-4.25,-2.5); \draw[thick, ->] (-3.75,-2.5) -- (-3.75,-0.5); \node at (1.5,0) {\small $\underline{\textcolor{blue}{\rB_i} \textcolor{blue}{\rB_i}}$}; \draw[thick,->] (2.5,0.25) to (5,0.25); \draw[thick,->] (5,-0.25) to (2.5,-0.25); \node at (3.5,.75) {\xy (4,.25)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5, 0.1)node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55);} } \endxy }; \node at (3.5,-0.75) {\xy (4,.25)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5, 0); \draw[ultra thick,blue] (1.5,0) -- (1.75, 0.45); \draw[ultra thick,blue] (1.5,0) -- (1.25, 0.45);} } \endxy }; \node at (6,0) {\small $\underline{\textcolor{blue}{\rB_i}\langle 1\rangle}$}; \draw[thick,->] (1.75,-0.5) to (1.75,-2.5); \node at (2.25,-1.5) {\xy (4,.25)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.75,-.45) -- (1.5, 0); \draw[ultra thick,blue] (1.25,-.45) -- (1.5, 0); \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.45);} } \endxy }; \draw[thick,->] (1.25,-2.5) to (1.25,-0.5); \node at (-0.75,-1.5) {$\frac{1}{2}$\xy (0,0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,blue] (1.5,-2.2) -- (1.5, -1.4); }}\endxy $- \frac{1}{2}$\xy (0,0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,blue] (1,-1) -- (1,0)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,blue] (1.5,-2.2) -- (1.5, -1.4); }}\endxy }; \begin{scope}[shift={(0,-3)}] \node at (-4,0) {\small $\textcolor{blue}{\rB_i}\langle -1\rangle \colon$}; \node at (1.5,0) {\small $\underline{\textcolor{blue}{\rB_i}\langle -1\rangle}$}; \end{scope} \end{tikzpicture} \] An analogous homotopy equivalence shows that $\mathrm{B}_i\mathrm{T}_i \cong \mathrm{B}_i\langle -1\rangle$ in ${\EuScript{K}^b}(\EuScript{S}_d)$ and thus that $\mathrm{T}_i\mathrm{B}_i\cong \mathrm{B}_i\mathrm{T}_i$ in ${\EuScript{K}^b}(\EuScript{S}_d)$. Of course, the above also implies that $\mathrm{B}_i \mathrm{T}_i^{-1}\cong \mathrm{B}_i\langle 1\rangle \cong \mathrm{T}_i^{-1}\mathrm{B}_i$ in ${\EuScript{K}^b}(\EuScript{S}_d)$. 
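On the level of Grothendieck groups, these equivalences are consistent with the Hecke algebra: under the same assumed normalization $[M\langle 1\rangle]=v[M]$ as before, so that $[\textcolor{blue}{\rT_i}]=b_i-v$ with $b_i=[\textcolor{blue}{\rB_i}]$, we have
\[
[\textcolor{blue}{\rT_i}]\,b_i=(b_i-v)\,b_i=(v+v^{-1})\,b_i-v\,b_i=v^{-1}b_i=[\textcolor{blue}{\rB_i}\langle -1\rangle].
\]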
\end{proof} \begin{lem}\label{lem:RBR} For each $1\leq i\leq d-2$, there are isomorphisms \begin{equation*} f_{i,\pm}\colon \textcolor{myred}{\rT_{i+1}^{\pm 1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rT_{i+1}^{\mp 1}} \to \textcolor{blue}{\rT_i^{\mp 1}} \textcolor{myred}{\mathrm{B}_{i+1}}\textcolor{blue}{\rT_i^{\pm 1}} \end{equation*} in ${\EuScript{K}^b}(\EuScript{S}_d)$. \end{lem} \begin{proof} In this case, the complexes are actually isomorphic, not just homotopy equivalent. In the following figure, we exhibit the isomorphism $f_{i,-}\colon \mathrm{T}_{i+1}^{-1}\mathrm{B}_i \mathrm{T}_{i+1}\to\mathrm{T}_i\mathrm{B}_{i+1}\mathrm{T}_i^{-1}$ and its inverse $g_{i,-}$ (to avoid cluttering, we do not write labels in diagrams if they are clear from context): \[ \begin{tikzpicture}[anchorbase,scale=0.7] \node at (-3,0) {\small $\textcolor{myred}{\rT_{i+1}^{- 1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rT_{i+1}}\colon$}; \draw[thick, ->] (-3.25,-0.5) -- (-3.25,-4.5); \draw[thick, ->] (-2.75,-4.5) -- (-2.75,-0.5); \node at (1.5,0) {\small ${\textcolor{blue}{\rB_i} \textcolor{myred}{\mathrm{B}_{i+1}}\langle -1\rangle}$}; \node at (8.5,0) {\small $\begin{pmatrix} \underline{ \textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\mathrm{B}_{i+1}}} \\[1.5ex] \underline{\textcolor{blue}{\rB_i}}\end{pmatrix}$}; \node at (16,0) {\small $\textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{blue}{\rB_i}\langle 1\rangle$}; \draw[thick,->] (3.3,0) to (5.9,0); \node at (4.3,1.15) {$ \begin{pmatrix} \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55); \draw[ultra thick,myred] (2,-.45) -- (2, 0.55); } \\[1.5ex] \tikzdiagc[yscale=-0.5,xscale=-1]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55); } \end{pmatrix}$}; \draw[thick,->] (11.1,0) to (14.5,0); \node at (12.75,.75) {$ \Bigl( \xy (0,0.25)*{ \tikzdiagc[yscale=-0.5,xscale=-1]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55); \draw[ultra thick,myred] (2,-.45) -- (2, 0.55);} }\endxy \ , -\;\xy (0,.25)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55);} }\endxy \Bigr)$}; \draw[thick,<->] (1.5,-.5) to (1.5,-4.5);\node at (2,-2.4) {$-1$}; \draw[thick,<->] (16,-.5) to (16,-4.5);\node at (16.25,-2.4) {$1$}; \draw[thick,<-] (8.25,-1.1) to (8.25,-3.75);\node at (7.75,-2.4) {$\overline g_{i,-}$}; \draw[thick,->] (9,-1.1) to (9,-3.75);\node at (9.6,-2.4) {$\overline f_{i,-}$}; \begin{scope}[shift={(0,-5)}] \node at (-3,0) {\small $\textcolor{blue}{\rT_i} \textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{blue}{\rT_i^{-1}}\colon$}; \node at (1.5,0) {\small ${\textcolor{blue}{\rB_i} \textcolor{myred}{\mathrm{B}_{i+1}}\langle -1\rangle}$}; \node at (8.5,0) {\small $\begin{pmatrix} \underline{\textcolor{blue}{\rB_i} \textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{blue}{\rB_i}} \\[1.5ex] \underline{ \textcolor{myred}{\mathrm{B}_{i+1}}}\end{pmatrix}$}; \node at (16,0) {\small $\textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{blue}{\rB_i}\langle 1\rangle$}; \draw[thick,->] (3.3,0) to (5.9,0); \node at (4.3,1.15) {$ \begin{pmatrix} \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); \draw[ultra thick,blue] (2,-.45) 
-- (2, 0.55); } \\[1.5ex] \tikzdiagc[yscale=-0.5,xscale=1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); } \end{pmatrix}$}; \draw[thick,->] (11.1,0) to (14.5,0); \node at (12.75,.75) {$ \Bigl( -\xy (0,0.25)*{ \tikzdiagc[yscale=-0.5,xscale=1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); \draw[ultra thick,blue] (2,-.45) -- (2, 0.55);} }\endxy \; , \;\xy (0,.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55);} }\endxy \Bigr)$}; \end{scope} \end{tikzpicture} \] Here $\overline f_{i,-}$ and $\overline g_{i,-}$ are, respectively, \[ \overline f_{i,-} = \begin{pmatrix} -\xy (0,0)*{ \tikzdiagc[yscale=-0.35,xscale=.35]{ \draw[ultra thick,myred] (0,-1) -- (0,0);\draw[ultra thick,myred] (0,0) -- (-1, 1);\draw[ultra thick,myred] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); }}\endxy & \xy (0,0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,myred] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,blue] (1.5,-2.2) -- (1.5, -1.4); }}\endxy \\[1.5ex] -\xy (0,1.2)*{ \tikzdiagc[yscale=-0.25]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,myred] (1.5,-2.2) -- (1.5, -1.4); }}\endxy & 0 \end{pmatrix} ,\mspace{40mu} \overline g_{i,-} = \begin{pmatrix} -\xy (0,0)*{ \tikzdiagc[yscale=0.35,xscale=.35]{ \draw[ultra thick,myred] (0,-1) -- (0,0);\draw[ultra thick,myred] (0,0) -- (-1, 1);\draw[ultra thick,myred] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); }}\endxy & \xy (0,0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,myred] (1.5,-2.2) -- (1.5, -1.4); }}\endxy \\[1.5ex] -\xy (0,1.2)*{ \tikzdiagc[yscale=-0.25]{ \draw[ultra thick,myred] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,blue] (1.5,-2.2) -- (1.5, -1.4); }}\endxy & 0 \end{pmatrix} . \] The maps $f_{i,-}=(1,\overline f_{i,-},1)$ and $g_{i,-}=(1,\overline g_{i,-},1)$ are mutual inverses and a pleasant exercise, using the relation in~\eqref{eq:6vertexdot}, shows that both of them are chain maps. The complexes $\mathrm{T}_{i+1}\mathrm{B}_i\mathrm{T}_{i+1}^{-1}$ and $\mathrm{T}_i^{-1}\mathrm{B}_{i+1}\mathrm{T}_i$ are isomorphic too, as they are adjoint to $\mathrm{T}_{i+1}^{-1}\mathrm{B}_i\mathrm{T}_{i+1}$ and $\mathrm{T}_i\mathrm{B}_{i+1}\mathrm{T}_i^{-1}$, respectively. 
Using the units and counits of adjunction, we obtain the isomorphism $f_{i,+}\colon \mathrm{T}_{i+1}\mathrm{B}_i\mathrm{T}_{i+1}^{-1}\to \mathrm{T}_i^{-1}\mathrm{B}_{i+1}\mathrm{T}_i$ and its inverse $g_{i,+}$. \end{proof}
Recall the homotopy equivalences $\phi_i\colon\mathrm{T}_i^{-1}\mathrm{T}_i\to R$ and $\psi_i\colon \mathrm{T}_i\mathrm{T}_i^{-1}\to R$ and put $\delta_{i,+}:=\phi_i^{-1}\circ\psi_{i+1}$ and $\delta_{i,-}:=\psi_i^{-1}\circ\phi_{i+1}$ (we suppress the maps $\eta_{i,\pm}$ whenever we use the diagrams $\xy (0,0)*{\tikzdiagc[scale=0.25,yscale=1]{ \draw[ultra thick,blue] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; }}\endxy$ and $\xy (0,0)*{\tikzdiagc[scale=0.25,yscale=-1]{ \draw[ultra thick,blue] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; }}\endxy$). Below, we keep the notation from~\fullref{lem:RBR}.
\begin{lem}\label{lem:dotRBR} For each $1\leq i\leq d-2$, the following maps are equal to zero in ${\EuScript{K}^b}(\EuScript{S}_d)$: \begingroup\allowdisplaybreaks \begin{align}\label{eq:dotRBR-first} f_{i,\pm}\circ\bigl( \id_{\mathrm{T}_{i+1}^{\pm 1}} \xy (0,-1)*{ \tikzdiagc[scale=0.25]{ \draw[ultra thick,blue] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; \node[blue] at (0,-2.2) {\tiny $i$}; }}\endxy \id_{\mathrm{T}_{i+1}^{\mp 1}} \bigr) - \bigl( \id_{\mathrm{T}_{i}^{\mp 1}} \xy (0,-1)*{ \tikzdiagc[scale=0.25]{ \draw[ultra thick,myred] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; \node[myred] at (0,-2.2) {\tiny $i+1$}; }}\endxy \id_{\mathrm{T}_{i}^{\pm 1}} \bigr) \circ \delta_{i,\pm} \colon & \\ \notag \textcolor{myred}{\rT_{i+1}^{\pm 1}}\textcolor{myred}{\rT_{i+1}^{\mp 1}} \to& \textcolor{blue}{\rT_i^{\mp 1}} \textcolor{myred}{\mathrm{B}_{i+1}}\textcolor{blue}{\rT_i^{\pm 1}}\langle 1\rangle , \\[1.5ex] \label{eq:dotRBR-second} \bigl( \id_{\mathrm{T}_{i}^{\mp 1}} \xy (0,-1)*{ \tikzdiagc[scale=0.25,yscale=-1]{ \draw[ultra thick,myred] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; \node[myred] at (0,.6) {\tiny $i+1$}; }}\endxy \id_{\mathrm{T}_{i}^{\pm 1}} \bigr) \circ f_{i,\pm}- \delta_{i,\pm} \circ \bigl( \id_{\mathrm{T}_{i+1}^{\pm 1}} \xy (0,-1)*{ \tikzdiagc[scale=0.25,yscale=-1]{ \draw[ultra thick,blue] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; \node[blue] at (0,.6) {\tiny $i$}; }}\endxy \id_{\mathrm{T}_{i+1}^{\mp 1}} \bigr) \colon& \\ \notag \textcolor{myred}{\rT_{i+1}^{\pm 1}}\textcolor{blue}{\rB_i}\textcolor{myred}{\rT_{i+1}^{\mp 1}}\langle -1\rangle \to& \textcolor{blue}{\rT_i^{\mp 1}}\textcolor{blue}{\rT_i^{\pm 1}} , \\[1.5ex] \label{eq:dotRBR-third} f^{-1}_{i,\pm}\circ\bigl( \id_{\mathrm{T}_{i}^{\mp 1}} \xy (0,-1)*{ \tikzdiagc[scale=0.25]{ \draw[ultra thick,myred] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; \node[myred] at (0,-2.2) {\tiny $i+1$}; }}\endxy \id_{\mathrm{T}_{i}^{\pm 1}} \bigr) - \bigl( \id_{\mathrm{T}_{i+1}^{\pm 1}} \xy (0,-1)*{ \tikzdiagc[scale=0.25]{ \draw[ultra thick,blue] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; \node[blue] at (0,-2.2) {\tiny $i$}; }}\endxy \id_{\mathrm{T}_{i+1}^{\mp 1}} \bigr) \circ \delta^{-1}_{i,\pm} \colon & \\ \notag \textcolor{blue}{\rT_i^{\mp 1}}\textcolor{blue}{\rT_i^{\pm 1}} \to & \textcolor{myred}{\rT_{i+1}^{\pm 1}} \textcolor{blue}{\rB_i}\textcolor{myred}{\rT_{i+1}^{\mp 1}} \langle 1\rangle, \\[1.5ex] \label{eq:dotRBR-fourth} \bigl( \id_{\mathrm{T}_{i+1}^{\pm 1}} \xy (0,-1)*{ \tikzdiagc[scale=0.25,yscale=-1]{ \draw[ultra thick,blue] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; \node[blue] at (0,.6) {\tiny $i$}; }}\endxy \id_{\mathrm{T}_{i+1}^{\mp 1}} \bigr) \circ f^{-1}_{i,\pm}- \delta^{-1}_{i,\pm} \circ \bigl( \id_{\mathrm{T}_{i}^{\mp 1}} \xy (0,-1)*{
\tikzdiagc[scale=0.25,yscale=-1]{ \draw[ultra thick,myred] (0,-1.25) -- (0,0)node[pos=0, tikzdot]{}; \node[myred] at (0,.6) {\tiny $i+1$}; }}\endxy \id_{\mathrm{T}_{i}^{\pm 1}} \bigr) \colon& \\ \notag \textcolor{blue}{\rT_i^{\mp 1}}\textcolor{myred}{\mathrm{B}_{i+1}}\textcolor{blue}{\rT_i^{\pm 1}}\langle -1\rangle \to& \textcolor{myred}{\rT_{i+1}^{\pm 1}}\textcolor{myred}{\rT_{i+1}^{\mp 1}} . \end{align}\endgroup \end{lem} \begin{proof} We only need to prove that the maps in~\eqref{eq:dotRBR-first} and~\eqref{eq:dotRBR-second} are null-homotopic for $f_{i,-}$, the case of $f_{i,+}$ following by adjunction. Pre- and post-composing those two maps with the appropriate isomorphisms proves the analogous statement for the maps in~\eqref{eq:dotRBR-third} and~\eqref{eq:dotRBR-fourth} as well. It is not hard to compute that the map of complexes in~\eqref{eq:dotRBR-first} is given by the vertical arrows in the diagram below: \[ \begin{tikzpicture}[anchorbase,scale=0.7] \node at (-3,0) {\small $\textcolor{myred}{\rT_{i+1}^{- 1}}\textcolor{myred}{\rT_{i+1}}\colon$}; \draw[thick, ->] (-2.75,0.5) -- (-2.75,4.5); \node at (1.5,0) {\small $\textcolor{myred}{\mathrm{B}_{i+1}}\langle -1\rangle$}; \node at (8.5,0) {\small $\begin{pmatrix} \underline{ \textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{myred}{\mathrm{B}_{i+1}}} \\[1.5ex] \underline{R}\end{pmatrix}$}; \node at (16,0) {\small $\textcolor{myred}{\mathrm{B}_{i+1}}\langle 1\rangle$}; \draw[thick,->] (2.7,0) to (6.3,0); \node at (4.3,1.15) {$ \begin{pmatrix} \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); } \\[1.5ex] \tikzdiagc[yscale=-0.5,xscale=-1]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; } \end{pmatrix}$}; \draw[thick,->] (10.7,0) to (14.9,0); \node at (12.75,.75) {$ \Bigl( \xy (0,0.25)*{ \tikzdiagc[yscale=-0.5,xscale=-1]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); }}\endxy \ , -\;\xy (0,.25)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; }}\endxy \Bigr)$}; \draw[thick,->] (1.5,.5) to (1.5,4.5);\node at (0.75,2.4) {$-\tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); }$}; \draw[thick,->] (16,.5) to (16,4.5);\node at (16.5,2.4) {$\tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); }$}; \draw[thick,->] (8.7,1.1) to (8.7,3.75);\node at (9,2.4) {$g$}; \draw[thick,->] (8,1.1) to (2,4.5);\node at (5,3.3) {$H_0$}; \draw[thick,->] (15.5,.6) to (9.5,4);\node at (12,3.3) {$H_1$}; \begin{scope}[shift={(0,5)}] \node at (-3,0) {\small $\textcolor{blue}{\rT_i} \textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{blue}{\rT_i^{-1}} \langle 1\rangle \colon$}; \node at (1.5,0) {\small ${\textcolor{blue}{\rB_i} \textcolor{myred}{\mathrm{B}_{i+1}}}$}; \node at (8.5,0) {\small $\begin{pmatrix} \underline{\textcolor{blue}{\rB_i} \textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{blue}{\rB_i}\langle 1\rangle} \\[1.5ex] \underline{ \textcolor{myred}{\mathrm{B}_{i+1}}\langle 1\rangle}\end{pmatrix}$}; \node at (16,0) {\small $\textcolor{myred}{\mathrm{B}_{i+1}} \textcolor{blue}{\rB_i}\langle 2\rangle$}; \draw[thick,->] (3,0) to (5.9,0); \node at (4.3,1.15) {$ \begin{pmatrix} \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra 
thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); \draw[ultra thick,blue] (2,-.45) -- (2, 0.55); } \\[1.5ex] \tikzdiagc[yscale=-0.5,xscale=1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); } \end{pmatrix}$}; \draw[thick,->] (11.1,0) to (14.4,0); \node at (12.7,.75) {$ \Bigl( -\xy (0,0.25)*{ \tikzdiagc[yscale=-0.5,xscale=1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); \draw[ultra thick,blue] (2,-.45) -- (2, 0.55);} }\endxy \; , \;\xy (0,.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55);} }\endxy \Bigr)$}; \end{scope} \end{tikzpicture} \] where \[ g= \begin{pmatrix} -\xy (0,1)*{ \tikzdiagc[yscale=0.28,xscale=-1]{ \draw[ultra thick,blue] (1.75,-.2) -- (1.75, 1); \draw[ultra thick,blue] (1.5,-1.5) -- (1.75, -.2); \draw[ultra thick,blue] (2,-1.5) -- (1.75, -.2); \draw[ultra thick,myred] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.4,0) -- (1.4, 1)node[pos=0, tikzdot]{}; }}\endxy & 0 \\[1.5ex] -\xy (0,0)*{ \tikzdiagc[yscale=-0.25]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,myred] (1.5,-2) -- (1.5, -1.4); }}\endxy +\xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,myred] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. (2.1,-.5); \draw[ultra thick,myred] (1.5,.2) -- (1.5, .6)node[pos=0, tikzdot]{}; }}\endxy & -\xy (0,.75)*{ \tikzdiagc[yscale=0.35,xscale=1]{ \draw[ultra thick,myred] (2.1,0) -- (2.1,1)node[pos=0, tikzdot]{};} }\endxy \end{pmatrix} . \] It is also easy to check that this map is null-homotopic, with homotopies \[ H_{0} = \Bigl( -\,\xy (0,0)*{ \tikzdiagc[yscale=0.28,xscale=-1]{ \draw[ultra thick,myred] (1.75,-.2) -- (1.75, 1); \draw[ultra thick,myred] (1.5,-1.5) -- (1.75, -.2); \draw[ultra thick,myred] (2,-1.5) -- (1.75, -.2); \draw[ultra thick,blue] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{}; }}\endxy \; , \; 0 \Bigr) , \mspace{80mu} H_{1} = \begin{pmatrix} 0 \\ \xy (0,0)*{ \tikzdiagc[yscale=.6]{ \draw[ultra thick,myred] (0,0) -- (0, 1); }}\endxy \end{pmatrix} . \] This establishes~\eqref{eq:dotRBR-first}. The proof of~\eqref{eq:dotRBR-second} can be obtained by a vertical reflexion of the diagrams above and exchanging the labels $i$ and $i+1$. 
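For the reader's convenience, we spell out what the displayed homotopies verify (a standard unraveling of the definition): a map of complexes $\varphi\colon (C^\bullet,d_C)\to (D^\bullet,d_D)$ is null-homotopic if there exist maps $H^n\colon C^n\to D^{n-1}$ such that
\[
\varphi^n = d_D\circ H^n + H^{n+1}\circ d_C \qquad \text{for all $n$,}
\]
and $H_0$, $H_1$ above are the only non-zero such maps in our situation.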
\end{proof} \begin{rem} The isomorphisms in~\fullref{lem:RBR} have a diagrammatic interpretation in terms of degree zero generators in ${\EuScript{K}^b}(\EuScript{BS}_d)$ \[ \xy (0,0)*{ \tikzdiagc[scale=.55,yscale=-1,xscale=-1]{ \draw[ultra thick,myred] (0,-1) -- (0,0); \draw[very thick,densely dotted,myred,to-] (-.5,.5) -- (-1,1)node[below]{\tiny $i+1$}; \draw[very thick,densely dotted,myred] (0,0) -- (-.5,.5); \draw[very thick,densely dotted,myred,-to] (0,0) -- ( 1,1)node[below]{\tiny $i+1$}; \draw[ultra thick,blue] (0,0) -- (0,1)node[below]{\tiny $i$}; \draw[very thick,densely dotted,blue,-to] (-1,-1) -- (-.5,-.5);\draw[very thick,densely dotted,blue] (-.5,-.5) -- (0,0); \draw[very thick,densely dotted,blue,to-] ( 1,-1) -- (0,0); }}\endxy \mspace{60mu} \xy (0,0)*{ \tikzdiagc[scale=.55,xscale=1]{ \draw[ultra thick,myred] (0,-1)node[below]{\tiny $i+1$} -- (0,0); \draw[very thick,densely dotted,myred,to-] (-.5,.5) -- (-1,1); \draw[very thick,densely dotted,myred] (0,0) -- (-.5,.5); \draw[very thick,densely dotted,myred,-to] (0,0) -- ( 1,1); \draw[ultra thick,blue] (0,0) -- (0,1); \draw[very thick,densely dotted,blue,-to] (-1,-1)node[below]{\tiny $i$} -- (-.5,-.5);\draw[very thick,densely dotted,blue] (-.5,-.5) -- (0,0); \draw[very thick,densely dotted,blue,to-] ( 1,-1)node[below]{\tiny $i$} -- (0,0); }}\endxy \] and relations \[ \xy (0,-2.5)*{ \tikzdiagc[scale=.45,yscale=1,xscale=-1]{ \draw[very thick,densely dotted,myred,-to] (-1,-2)node[below]{\tiny $i+1$} -- (-1,2); \draw[very thick,densely dotted,myred,to-] ( 1,-2) -- ( 1,2); \draw[ultra thick,blue] (0,-2)node[below]{\tiny $i$} -- (0,2); }}\endxy \!\!=\, \xy (0,-2.5)*{ \tikzdiagc[scale=0.45,xscale=-1]{ \draw[ultra thick,myred] (0,-1) -- (0,1); \draw[very thick,densely dotted,myred,-to] (0,1) -- (-1,2); \draw[very thick,densely dotted,myred] (0,1) -- (1,2); \draw[very thick,densely dotted,myred] (0,-1) -- (-1,-2)node[below]{\tiny $i+1$}; \draw[very thick,densely dotted,myred,-to] (0,-1) -- (1,-2); \draw[ultra thick,blue] (0,1) -- (0,2); \draw[ultra thick,blue] (0,-1) -- (0,-2)node[below]{\tiny $i$}; \draw[very thick,densely dotted,blue] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[very thick,densely dotted,blue] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); }}\endxy \mspace{80mu} \xy (0,-2.5)*{ \tikzdiagc[scale=.45,yscale=1,xscale=1]{ \draw[very thick,densely dotted,blue,-to] (-1,-2)node[below]{\tiny $i$} -- (-1,2); \draw[very thick,densely dotted,blue,to-] ( 1,-2) -- ( 1,2); \draw[ultra thick,myred] (0,-2)node[below]{\tiny $i+1$} -- (0,2); }}\endxy \,\ =\! \xy (0,-2.5)*{ \tikzdiagc[scale=0.45,yscale=1,xscale=1]{ \draw[ultra thick,blue] (0,-1) -- (0,1); \draw[very thick,densely dotted,blue,-to] (0,1) -- (-1,2); \draw[very thick,densely dotted,blue] (0,1) -- (1,2); \draw[very thick,densely dotted,blue] (0,-1) -- (-1,-2)node[below]{\tiny $i$}; \draw[very thick,densely dotted,blue,-to] (0,-1) -- (1,-2); \draw[ultra thick,myred] (0,1) -- (0,2); \draw[ultra thick,myred] (0,-1) -- (0,-2)node[below]{\tiny $i+1$}; \draw[very thick,densely dotted,myred] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[very thick,densely dotted,myred] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. 
(0,-1); }}\endxy \] Using these diagrams, \fullref{lem:dotRBR} translates into \begingroup\allowdisplaybreaks \begin{gather*} \xy (0,-1.5)*{ \tikzdiagc[scale=.6,yscale=-1,xscale=-1]{ \draw[ultra thick,myred] (0,-1) -- (0,0); \draw[very thick,densely dotted,myred,to-] (-.5,.5) -- (-1,1)node[below]{\tiny $i+1$}; \draw[very thick,densely dotted,myred] (0,0) -- (-.5,.5); \draw[very thick,densely dotted,myred,-to] (0,0) -- ( 1,1); \draw[ultra thick,blue] (0,0) -- (0,.75)node[pos=1, tikzdot]{}node[below]{\tiny $i$}; \draw[very thick,densely dotted,blue,-to] (-1,-1) -- (-.5,-.5);\draw[very thick,densely dotted,blue] (-.5,-.5) -- (0,0); \draw[very thick,densely dotted,blue,to-] ( 1,-1) -- (0,0); }}\endxy \!\! =\ \xy (0,-2.5)*{ \tikzdiagc[scale=.6,yscale=-1,xscale=-1]{ \draw[ultra thick,myred] (0,-.5) -- (0,-1)node[pos=0, tikzdot]{}; \draw[very thick,densely dotted,myred,-to] (-1,1)node[below]{\tiny $i+1$} to [out=-90,in=-180] (0,.12) to [out=0,in=-90] (1,1); \draw[very thick,densely dotted,blue,-to] (-1,-1) to [out=90,in=180] (0,-.12) to [out=0,in=90] (1,-1); }}\endxy \mspace{60mu} \xy (0,-2)*{ \tikzdiagc[scale=.6,yscale=-1,xscale=1]{ \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[very thick,densely dotted,blue,to-] (-.5,.5) -- (-1,1); \draw[very thick,densely dotted,blue] (0,0) -- (-.5,.5); \draw[very thick,densely dotted,blue,-to] (0,0) -- ( 1,1)node[below]{\tiny $i$}; \draw[ultra thick,myred] (0,0) -- (0,.75)node[pos=1, tikzdot]{}node[below]{\tiny $i+1$}; \draw[very thick,densely dotted,myred,-to] (-1,-1) -- (-.5,-.5);\draw[very thick,densely dotted,myred] (-.5,-.5) -- (0,0); \draw[very thick,densely dotted,myred,to-] ( 1,-1) -- (0,0); }}\endxy =\, \xy (0,-2)*{ \tikzdiagc[scale=.6,yscale=-1,xscale=1]{ \draw[ultra thick,blue] (0,-.5) -- (0,-1)node[pos=0, tikzdot]{}; \draw[very thick,densely dotted,blue,-to] (-1,1) to [out=-90,in=-180] (0,.12) to [out=0,in=-90] (1,1)node[below]{\tiny $i$}; \draw[very thick,densely dotted,myred,-to] (-1,-1) to [out=90,in=180] (0,-.12) to [out=0,in=90] (1,-1); }}\endxy \\ \xy (0,-2)*{ \tikzdiagc[scale=.6,yscale=1,xscale=-1]{ \draw[ultra thick,blue] (0,-1)node[below]{\tiny $i$} -- (0,0); \draw[very thick,densely dotted,blue,to-] (-.5,.5) -- (-1,1); \draw[very thick,densely dotted,blue] (0,0) -- (-.5,.5); \draw[very thick,densely dotted,blue,-to] (0,0) -- ( 1,1); \draw[ultra thick,myred] (0,0) -- (0,.75)node[pos=1, tikzdot]{}; \draw[very thick,densely dotted,myred,-to] (-1,-1)node[below]{\tiny $i+1$} -- (-.5,-.5); \draw[very thick,densely dotted,myred] (-.5,-.5) -- (0,0); \draw[very thick,densely dotted,myred,to-] ( 1,-1) -- (0,0); }}\endxy \!\! 
=\ \xy (0,-2.5)*{ \tikzdiagc[scale=.6,yscale=1,xscale=-1]{ \draw[ultra thick,blue] (0,-.5) -- (0,-1)node[pos=0, tikzdot]{}node[below]{\tiny $i$}; \draw[very thick,densely dotted,blue,-to] (-1,1) to [out=-90,in=-180] (0,.12) to [out=0,in=-90] (1,1); \draw[very thick,densely dotted,myred,-to] (-1,-1)node[below]{\tiny $i+1$} to [out=90,in=180] (0,-.12) to [out=0,in=90] (1,-1); }}\endxy \mspace{60mu} \xy (0,-2.5)*{ \tikzdiagc[scale=.6,xscale=1]{ \draw[ultra thick,myred] (0,-1)node[below]{\tiny $i+1$} -- (0,0); \draw[very thick,densely dotted,myred,to-] (-.5,.5) -- (-1,1); \draw[very thick,densely dotted,myred] (0,0) -- (-.5,.5); \draw[very thick,densely dotted,myred,-to] (0,0) -- ( 1,1); \draw[ultra thick,blue] (0,0) -- (0,.75)node[pos=1, tikzdot]{}; \draw[very thick,densely dotted,blue,-to] (-1,-1) -- (-.5,-.5); \draw[very thick,densely dotted,blue] (-.5,-.5) -- (0,0); \draw[very thick,densely dotted,blue,to-] ( 1,-1)node[below]{\tiny $i$} -- (0,0); }}\endxy =\, \xy (0,-2.5)*{ \tikzdiagc[scale=.6,xscale=1]{ \draw[ultra thick,myred] (0,-.5) -- (0,-1)node[pos=0, tikzdot]{}node[below]{\tiny $i+1$}; \draw[very thick,densely dotted,myred,-to] (-1,1) to [out=-90,in=-180] (0,.12) to [out=0,in=-90] (1,1); \draw[very thick,densely dotted,blue,-to] (-1,-1) to [out=90,in=180] (0,-.12) to [out=0,in=90] (1,-1)node[below]{\tiny $i$}; }}\endxy \end{gather*} \endgroup There are also analogous diagrams and relations with reversed orientation of the oriented strands. \end{rem} \begin{lem}\label{lem:RRB} There exist the following isomorphisms in ${\EuScript{K}^b}(\EuScript{S}_d)$: \begin{align} \textcolor{blue}{\rT_i^{-1}}\textcolor{myred}{\rT_{i-1}^{-1}} \textcolor{blue}{\rB_i} &\cong \textcolor{myred}{\rB_{i-1}}\textcolor{blue}{\rT_i^{-1}}\textcolor{myred}{\rT_{i-1}^{-1}}, & 1 < i \leq d-1, \label{eq:RRBi} \\ \textcolor{myred}{\rT_{i-1}}\textcolor{blue}{\rT_i} \textcolor{myred}{\rB_{i-1}} &\cong \textcolor{blue}{\rB_i}\textcolor{myred}{\rT_{i-1}}\textcolor{blue}{\rT_i} , & 1 < i \leq d-1, \label{eq:RRBinv} \\ \textcolor{blue}{\rT_i^{\pm 1}}\textcolor{red}{\mathrm{T}_{i-1}^{\pm 1}}\textcolor{orange}{\mathrm{B}_j} &\cong \textcolor{orange}{\mathrm{B}_j}\textcolor{blue}{\rT_i^{\pm 1}}\textcolor{red}{\mathrm{T}_{i-1}^{\pm 1}}, & j\notin \{i-2,i-1,i,i+1\} . \label{eq:RRBi-j} \end{align} \end{lem} \begin{proof} We start with~\eqref{eq:RRBi}, proving that both chain complexes have retractions that are homotopy equivalent to each other. 
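(As a decategorified sanity check for~\eqref{eq:RRBi} and~\eqref{eq:RRBinv}, a sketch under the assumed normalization $b_j=\delta_j+v$, where $\delta_j:=[\mathrm{T}_j]$ and $b_j:=[\mathrm{B}_j]$: the braid relation yields
\[
\delta_i^{-1}\delta_{i-1}^{-1}\,\delta_i\,\delta_{i-1}\,\delta_i
=\delta_i^{-1}\delta_{i-1}^{-1}\,\delta_{i-1}\,\delta_i\,\delta_{i-1}
=\delta_{i-1},
\]
so conjugation by $\delta_{i-1}\delta_i$ takes $b_i=\delta_i+v$ to $b_{i-1}=\delta_{i-1}+v$, which is exactly the image of~\eqref{eq:RRBi}, and equivalently of~\eqref{eq:RRBinv}, in $H_d$.)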
Here is a homotopy equivalence between the complex $\mathrm{T}_i^{-1}\mathrm{T}_{i-1}^{-1}\mathrm{B}_i$ and its retraction $(\mathrm{T}_i^{-1}\mathrm{T}_{i-1}^{-1}\mathrm{B}_i)_{\mathrm{retr}}$: \[ \begin{tikzpicture}[anchorbase,scale=0.7] \node at (-3.5,0) {\small $\textcolor{blue}{\rT_i^{-1}}\textcolor{myred}{\rT_{i-1}^{-1}} \textcolor{blue}{\rB_i}\colon$}; \node at (0,0) {\small ${\textcolor{blue}{\rB_i} \langle -2\rangle}$}; \node at (6.5,0) {\small $\begin{pmatrix}\textcolor{blue}{\rB_i} \textcolor{blue}{\rB_i} \langle -1\rangle \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \langle -1\rangle \end{pmatrix}$}; \node at (15.5,0) {\small $\underline{\textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i}}$}; \draw[thick,->] (1,0) to (4.4,0); \node at (2.5,1.15) {$ \begin{pmatrix} \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55); } \\[1.5ex] \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55); } \end{pmatrix}$}; \draw[thick,->] (8.75,0) to (13.5,0); \node at (11,.75) {$ \Bigl( \xy (0,0.25)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.25,-.45) -- (1.25, 0.55); \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55);} }\endxy \ , \xy (0,.25)*{ \tikzdiagc[yscale=0.5]{ \node at (1.15,0) {\bf $-$}; \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); \draw[ultra thick,blue] (2,-.45) -- (2, 0.55);} }\endxy \Bigr)$}; \draw[thick,<->] (0,-.5) to (0,-4.5);\node at (0.25,-2.4) {$1$}; \draw[thick,<->] (15.5,-.5) to (15.5,-4.5);\node at (15.75,-2.4) {$1$}; \draw[thick,<-] (6.25,-1) to (6.25,-3.25);\node at (5.9,-2) {$g_l$}; \draw[thick,->] (7,-1) to (7,-3.25);\node at (7.35,-2) {$f_l$}; \begin{scope}[shift={(0,-5)}] \node at (0,0) {\small ${\textcolor{blue}{\rB_i}\langle -2\rangle}$}; \node at (6.5,0) {\small $\begin{pmatrix}\textcolor{blue}{\rB_i} \langle -2\rangle \\[1.5ex] \textcolor{blue}{\rB_i} \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \langle -1\rangle \end{pmatrix}$}; \node at (15.5,0) {\small $\underline{\textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i}}$}; \draw[thick,->] (1,.2) to (4.4,.2);\node at (2.75,2.1) {$ \begin{pmatrix} \tfrac{1}{2}\; \xy (0,0)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.75,-.6) -- (1.75, 0.6); }}\endxy \\[1.5ex] \tfrac{1}{2}\; \xy (0,0)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,-.3) -- (1.5, 0.3)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.6) -- (1.75, 0.6); }}\endxy \\[1.5ex] \xy (0,0)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.6) -- (1.75, 0.6); }}\endxy \end{pmatrix} $}; \draw[thick,<-] (1,-.2) to (4.4,-.2);\node at (2.8,-.7) {$h = \bigr(2\cdot , 0 , 0 \bigl)$}; \draw[thick,->] (8.7,0) to (13.5,0); \node at (11.05,1.15) {$ \Biggl( \! \xy (0,0)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,.1) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-.45) -- (1.5, -.95)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.1,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.9,-1) and (1.75,0.4) .. 
(1.75, .6); \draw[ultra thick,blue] (1.5,-2) -- (1.5, -1.4); }}\endxy ,\! \xy (0,0.0)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.1,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.9,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,blue] (1.5,-2) -- (1.5, -1.4); }}\endxy ,\! \xy (0,.0)*{ \tikzdiagc[yscale=0.5]{ \node at (1.25,0) {\bf $-$}; \draw[ultra thick,blue] (1.5,-1) -- (1.5, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-2) -- (1.75, .6); \draw[ultra thick,blue] (2,-2) -- (2, .6);} }\endxy \Biggr)$}; \end{scope} \begin{scope}[shift={(0,-5)}] \draw[thick,<->] (15,-.5) to (15,-4.5);\node at (15.25,-2.4) {$1$}; \draw[thick,<-] (6.25,-1.5) to (6.25,-3.8);\node at (5.9,-2.6) {$g_l'$}; \draw[thick,->] (7,-1.5) to (7,-3.8);\node at (7.35,-2.6) {$f_l'$}; \end{scope} \node at (-2.5,-10) {\small $(\textcolor{blue}{\rT_i^{-1}}\textcolor{myred}{\rT_{i-1}^{-1}} \textcolor{blue}{\rB_i})_{\mathrm{retr}}\colon$}; \begin{scope}[shift={(0,-10)}] \node at (6.5,0) {\small $\begin{pmatrix} \textcolor{blue}{\rB_i} \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \langle -1\rangle \end{pmatrix}$}; \node at (15.5,0) {\small $\underline{\textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i}}$}; \draw[thick,->] (8.75,0) to (13.5,0); \end{scope} \draw[thick,<-] (-2.95,-.5) to (-2.95,-9.5); \draw[thick,->] (-3.7,-.5) to (-3.7,-9.5); \node at (11,-9.1) {$ \biggl( \! \xy (0,0.0)*{ \tikzdiagc[yscale=0.3]{ \draw[ultra thick,myred] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,blue] (1.5,-2) -- (1.5, -1.4); }}\endxy ,\! \xy (0,.0)*{ \tikzdiagc[yscale=0.3]{ \node at (1.25,0) {\bf $-$}; \draw[ultra thick,blue] (1.5,-1) -- (1.5, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-2) -- (1.75, .6); \draw[ultra thick,blue] (2,-2) -- (2, .6);} }\endxy \biggr)$}; \end{tikzpicture} \] The upper vertical arrows correspond to the mutually inverse maps $(1,f_l,1)$ and $(1,g_l,1)$ induced by the isomorphism $\mathrm{B}_i\mathrm{B}_i\cong \mathrm{B}_i \langle -1\rangle \oplus \mathrm{B}_i \langle 1\rangle$. The maps $f_l$ and $g_l$ are \begin{equation*} f_l = \begin{pmatrix} \tfrac{1}{2} \xy (0,1)*{ \tikzdiagc[yscale=-0.3]{ \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-.5) and (1.25,0.5) .. (1.25, .7); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-.5) and (1.75,0.5) .. (1.75, .7); \draw[ultra thick,blue] (1.5,-2.5) -- (1.5, -1.4); }}\endxy & 0 \\[1.5ex] \tfrac{1}{2} \xy (0,1)*{ \tikzdiagc[yscale=-0.3]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.5) .. (1.25, .9); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.5) .. (1.75, .9); \draw[ultra thick,blue] (1.5,-2.5) -- (1.5, -1.4); }}\endxy & 0 \\[1.5ex] 0 & \xy (0,1.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy\; \end{pmatrix} , \mspace{60mu} g_l = \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=0.3]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. 
controls (1.15,-1) and (1.25,0.5) .. (1.25, .9); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.5) .. (1.75, .9); \draw[ultra thick,blue] (1.5,-2.5) -- (1.5, -1.4); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=0.3]{ \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-.5) and (1.25,0.5) .. (1.25, .7); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-.5) and (1.75,0.5) .. (1.75, .7); \draw[ultra thick,blue] (1.5,-2.5) -- (1.5, -1.4); }}\endxy & 0 \\[1.5ex] 0 & 0 & \xy (0,1.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy\; \end{pmatrix} . \end{equation*} The lower vertical maps are the mutually up-to-homotopy inverse maps $(f_l',1)$ and $(g_l',1)$ given below. \begin{equation*} f_l' = \begin{pmatrix} -\;\xy (0,0)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,blue] (1.5,-.3) -- (1.5, 0.3)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.6) -- (1.75, 0.6); }}\endxy & \xy (0,1.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); }}\endxy & 0 \\[1.5ex] -2\; \xy (0,1)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.6) -- (1.75, 0.6); }}\endxy & 0 & \xy (0,1.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy\; \end{pmatrix} , \mspace{60mu} g_l' = \begin{pmatrix} 0 & 0 \\[1ex] \xy (0,1.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); }}\endxy & 0 \\[1ex] 0 & \xy (0,1.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy\; \end{pmatrix} . \end{equation*} The fact that they define a homotopy equivalence uses the homotopy $h$ (whose only non-zero entry is multiplication by 2) in the complex in the middle. We leave the details to the reader. 
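Concretely, the details left to the reader amount to the standard deformation retract equations: one checks directly from the matrices above that
\[
(f_l',1)\circ(g_l',1)=\mathrm{id}, \qquad \mathrm{id}-(g_l',1)\circ(f_l',1)=d\circ h+h\circ d,
\]
where $d$ denotes the differential of the complex in the middle.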
The following diagram gives a homotopy equivalence between the complex $\mathrm{B}_{i-1}\mathrm{T}_i^{-1}\mathrm{T}_{i-1}^{-1}$ and its retraction $(\mathrm{B}_{i-1}\mathrm{T}_i^{-1}\mathrm{T}_{i-1}^{-1})_{\mathrm{retr}}$: \[ \begin{tikzpicture}[anchorbase,scale=0.7] \node at (-3.25,0) {\small $\textcolor{myred}{\rB_{i-1}}\textcolor{blue}{\rT_i^{-1}}\textcolor{myred}{\rT_{i-1}^{-1}}\colon$}; \node at (0.5,0) {\small ${\textcolor{myred}{\rB_{i-1}} \langle -2\rangle}$}; \node at (6.5,0) {\small $\begin{pmatrix}\textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \langle -1\rangle \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \textcolor{myred}{\rB_{i-1}} \langle -1 \rangle \end{pmatrix}$}; \node at (15.5,0) {\small $\underline{\textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}}$}; \draw[thick,->] (1.5,0) to (4.2,0); \node at (2.5,1.15) {$ \begin{pmatrix} \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); } \\[1.5ex] \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55); } \end{pmatrix}$}; \draw[thick,->] (9,0) to (13.25,0); \node at (11,.75) {$ \Bigl( \xy (0,.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,myred] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-.45) -- (1.75, 0.55); \draw[ultra thick,myred] (2,-.45) -- (2, 0.55);} }\endxy \ , -\xy (0,0.25)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.25,-.45) -- (1.25, 0.55); \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.55)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.45) -- (1.75, 0.55);} }\endxy \Bigr)$}; \draw[thick,<->] (0,-.5) to (0,-4.5);\node at (0.25,-2.4) {$1$}; \draw[thick,<->] (15.5,-.5) to (15.5,-4.5);\node at (15.75,-2.4) {$1$}; \draw[thick,<-] (6.25,-1) to (6.25,-3.25);\node at (5.9,-2) {$g_r$}; \draw[thick,->] (7,-1) to (7,-3.25);\node at (7.35,-2) {$f_r$}; \begin{scope}[shift={(0,-5)}] \node at (.5,0) {\small ${\textcolor{myred}{\rB_{i-1}} \langle -2\rangle}$}; \node at (6.5,0) {\small $\begin{pmatrix} \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \langle -1\rangle \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \langle -2\rangle \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \end{pmatrix}$}; \node at (15.5,0) {\small $\underline{\textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}}$}; \draw[thick,->] (1.6,.2) to (4.4,.2);\node at (2.75,2.1) {$ \begin{pmatrix} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy \\[1.5ex] \tfrac{1}{2}\; \xy (0,0)*{ \tikzdiagc[yscale=0.5]{ \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy \\[1.5ex] \tfrac{1}{2}\; \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,myred] (1.5,-.3) -- (1.5, 0.3)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy \end{pmatrix} $}; \draw[thick,<-] (1.6,-.2) to (4.4,-.2);\node at (2.9,-.7) {$h = \bigr(0, 2\cdot , 0 \bigl)$}; \draw[thick,->] (8.7,0) to (13.25,0); \node at (11.15,1.25) {$ \Biggl( \, \xy (0,.0)*{ \tikzdiagc[yscale=0.5,xscale=-1,scale=.9]{ \draw[ultra thick,myred] (1.5,-1) -- (1.5, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-2) -- (1.75, .6); \draw[ultra thick,myred] (2,-2) -- (2, .6);} }\endxy\, ,\! 
- \xy (0,0)*{ \tikzdiagc[yscale=0.5,scale=.9]{ \draw[ultra thick,blue] (1.5,.1) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-.45) -- (1.5, -.95)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.1,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.9,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,myred] (1.5,-2) -- (1.5, -1.4); }}\endxy ,\! - \xy (0,0.0)*{ \tikzdiagc[yscale=0.5,scale=.9]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.1,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.9,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,myred] (1.5,-2) -- (1.5, -1.4); }}\endxy\! \Biggr)$}; \end{scope} \begin{scope}[shift={(0,-5)}] \draw[thick,<->] (15,-.5) to (15,-4.5);\node at (15.25,-2.4) {$1$}; \draw[thick,<-] (6.25,-1.5) to (6.25,-3.8);\node at (5.9,-2.6) {$g_r'$}; \draw[thick,->] (7,-1.5) to (7,-3.8);\node at (7.35,-2.6) {$f_r'$}; \end{scope} \node at (-2.5,-10) {\small $(\textcolor{myred}{\rB_{i-1}}\textcolor{blue}{\rT_i^{-1}}\textcolor{myred}{\rT_{i-1}^{-1}})_{\mathrm{retr}}\colon$}; \begin{scope}[shift={(0,-10)}] \node at (0,0) {}; \node at (6.5,0) {\small $\begin{pmatrix} \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \langle -1\rangle \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \end{pmatrix}$}; \node at (15.5,0) {\small $\underline{\textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}}$}; \draw[thick,->] (8.75,0) to (13.25,0); \end{scope} \draw[thick,<-] (-2.95,-.5) to (-2.95,-9.5); \draw[thick,->] (-3.7,-.5) to (-3.7,-9.5); \node at (11,-9.1) {$ \biggl( \, \xy (0,0)*{ \tikzdiagc[yscale=0.3,xscale=-1]{ \draw[ultra thick,myred] (1.5,-1) -- (1.5, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-2) -- (1.75, .6); \draw[ultra thick,myred] (2,-2) -- (2, .6);} }\endxy\, , - \xy (0,0)*{ \tikzdiagc[yscale=0.3]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,myred] (1.5,-2) -- (1.5, -1.4); }}\endxy\! \biggr)$}; \end{tikzpicture} \] The upper vertical arrows correspond to the mutually inverse maps $(1,f_r,1)$ and $(1,g_r,1)$, with $f_r$ and $g_r$ being \begin{equation*} f_r = \begin{pmatrix} \; \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy & 0 \\[1.5ex] 0 & \tfrac{1}{2} \xy (0,1)*{ \tikzdiagc[yscale=-0.3]{ \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-.5) and (1.25,0.5) .. (1.25, .7); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-.5) and (1.75,0.5) .. (1.75, .7); \draw[ultra thick,myred] (1.5,-2.5) -- (1.5, -1.4); }}\endxy \\[1.5ex] 0& \tfrac{1}{2} \xy (0,1)*{ \tikzdiagc[yscale=-0.3]{ \draw[ultra thick,myred] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.5) .. (1.25, .9); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.5) .. 
(1.75, .9); \draw[ultra thick,myred] (1.5,-2.5) -- (1.5, -1.4); }}\endxy \end{pmatrix} , \mspace{60mu} g_r = \begin{pmatrix} \; \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy & 0&0 \\[1.5ex] 0& \xy (0,1)*{ \tikzdiagc[yscale=0.3]{ \draw[ultra thick,myred] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.5) .. (1.25, .9); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.5) .. (1.75, .9); \draw[ultra thick,myred] (1.5,-2.5) -- (1.5, -1.4); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=0.3]{ \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-.5) and (1.25,0.5) .. (1.25, .7); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-.5) and (1.75,0.5) .. (1.75, .7); \draw[ultra thick,myred] (1.5,-2.5) -- (1.5, -1.4); }}\endxy \end{pmatrix} . \end{equation*} The lower vertical arrows correspond to the mutually up-to-homotopy inverse maps $(f_r',1)$ and $(g_r',1)$ given below. \begin{equation*} f_r' = \begin{pmatrix} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy & -2\; \xy (0,-1)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy & 0 \\[1.5ex] 0& -\;\xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,myred] (1.5,-.3) -- (1.5, 0.3)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy \end{pmatrix} , \mspace{60mu} g_r' = \begin{pmatrix} \; \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,blue] (1.5,-.6) -- (1.5, 0.6); \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy & 0 \\[1ex] 0 & 0 \\[1ex] 0 & \xy (0,1.25)*{ \tikzdiagc[yscale=0.5,xscale=-1]{ \draw[ultra thick,myred] (1.75,-.6) -- (1.75, 0.6); }}\endxy \end{pmatrix} . \end{equation*} We leave the details to the reader. The diagram below shows that the complexes $(\mathrm{T}_i^{-1}\mathrm{T}_{i-1}^{-1}\mathrm{B}_i)_{\mathrm{retr}}$ and $(\mathrm{B}_{i-1}\mathrm{T}_i^{-1}\mathrm{T}_{i-1}^{-1})_{\mathrm{retr}}$ are homotopy equivalent. \[ \begin{tikzpicture}[anchorbase,scale=0.7] \node at (0,-10) {\small $(\textcolor{blue}{\rT_i^{-1}}\textcolor{myred}{\rT_{i-1}^{-1}}\textcolor{blue}{\rB_i})_{\mathrm{retr}}\colon$}; \begin{scope}[shift={(0,-10)}] \node at (6.5,0) {\small $\begin{pmatrix} \textcolor{blue}{\rB_i} \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \langle -1\rangle\end{pmatrix}$}; \node at (15.5,0) {\small $\underline{\textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i}}$}; \draw[thick,->] (8.75,.15) to (13.25,.15); \draw[thick,<-] (8.75,-.15) to (13.25,-.15); \end{scope} \node at (11,-8.9) {$ \biggl( \! \xy (0,0.0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,myred] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,blue] (1.5,-2) -- (1.5, -1.4); }}\endxy ,\! 
\xy (0,.0)*{ \tikzdiagc[yscale=0.25]{ \node at (1.25,0) {\bf $-$}; \draw[ultra thick,blue] (1.5,-1) -- (1.5, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.75,-2) -- (1.75, .6); \draw[ultra thick,blue] (2,-2) -- (2, .6);} }\endxy \biggr)$}; \node at (11,-11.3) {$ \begin{pmatrix} \! \xy (0,1)*{ \tikzdiagc[yscale=-0.25]{ \draw[ultra thick,myred] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,blue] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,blue] (1.5,-2) -- (1.5, -1.4); }}\endxy \\[0.5ex] 0 \end{pmatrix}$}; \begin{scope}[shift={(0,-5)}] \node at (0,-10) {\small $(\textcolor{myred}{\rB_{i-1}}\textcolor{blue}{\rT_i^{-1}}\textcolor{myred}{\rT_{i-1}^{-1}})_{\mathrm{retr}}\colon$}; \begin{scope}[shift={(0,-10)}] \node at (0,0) {}; \node at (6.5,0) {\small $\begin{pmatrix} \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \langle -1\rangle \\[1.5ex] \textcolor{myred}{\rB_{i-1}}\end{pmatrix}$}; \node at (15.5,0) {\small $\underline{\textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}}$}; \draw[thick,->] (8.75,.15) to (13.25,.15); \draw[thick,<-] (8.75,-.15) to (13.25,-.15); \end{scope} \node at (11,-8.9) {$ \biggl( \xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,myred] (1.5,-1) -- (1.5, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-2) -- (1.75, .6); \draw[ultra thick,myred] (2,-2) -- (2, .6);} }\endxy\, , - \xy (0,0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,myred] (1.5,-2) -- (1.5, -1.4); }}\endxy\! \biggr)$}; \node at (11,-11.6) {$ \begin{pmatrix} 0 \\[.5ex] - \xy (0,1)*{ \tikzdiagc[yscale=-0.25]{ \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. (1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. 
(1.75, .6); \draw[ultra thick,myred] (1.5,-2) -- (1.5, -1.4); }}\endxy \end{pmatrix}$}; \end{scope} \draw[thick,->] (-.4,-10.5) to (-.4,-14.5); \draw[thick,<-] (.4,-10.5) to (.4,-14.5); \draw[thick,->] (6,-11) to (6,-13.75); \draw[thick,<-] (6.6,-11) to (6.6,-13.75); \draw[thick,->] (15.6,-10.5) to (15.6,-14.5); \draw[thick,<-] (16.2,-10.5) to (16.2,-14.5); \node at (4.6,-12.5) {$ \begin{pmatrix}0 & - \xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1) -- (1.75, 1); \draw[ultra thick,myred] (2,-1) -- (2, 1);} }\endxy \\[1ex] 0 & \xy (0,1.25)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1) -- (1.75, 0)node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (2,-1) -- (2, 1);} }\endxy \end{pmatrix}$}; \node at (8.05,-12.5) { $\begin{pmatrix} \xy (0,1.25)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1) -- (1.75, 1); \draw[ultra thick,myred] (2,-1) -- (2, 0)node[pos=1, tikzdot]{};} }\endxy & 0 \\[1ex] - \xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1) -- (1.75, 1); \draw[ultra thick,myred] (2,-1) -- (2, 1);} }\endxy & 0 \end{pmatrix}$}; \node at (16.85,-12.2) {\xy (0,0)*{ \tikzdiagc[scale=0.3]{ \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[ultra thick,myred] (-1,-1) -- (0,0); \draw[ultra thick,myred] (1,-1) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,1); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- ( 1,1); }}\endxy}; \node at (14.85,-12.2) {\xy (0,0)*{ \tikzdiagc[scale=0.3,yscale=-1]{ \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[ultra thick,myred] (-1,-1) -- (0,0); \draw[ultra thick,myred] (1,-1) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,1); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- ( 1,1); }}\endxy}; \end{tikzpicture} \] This finishes the proof of the existence of the isomorphism in~\eqref{eq:RRBi}. Tensoring both complexes in~\eqref{eq:RRBi} with $\mathrm{T}_{i-1}\mathrm{T}_i$ on the left and on the right yields the isomorphism in~\eqref{eq:RRBinv}. The equivalence in~\eqref{eq:RRBi-j} is clear, because the two complexes are canonically isomorphic. 
\end{proof} \begin{rem} The isomorphisms in~\fullref{lem:RRB} also have a diagrammatic interpretation in terms of degree zero generators in ${\EuScript{K}^b}(\EuScript{BS}_d)$ \[ \xy (0,0)*{ \tikzdiagc[scale=.6,yscale=-1,xscale=-1]{ \draw[very thick,densely dotted,blue,-to] (0,-1) -- (0,-.5); \draw[very thick,densely dotted,blue] (0,-.5) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[very thick,densely dotted,blue,-to] (0,0) -- (1,1)node[below]{\tiny $i-1$}; \draw[very thick,densely dotted,myred,-to] (0,0) -- (0,1)node[below]{\tiny $i$}; \draw[very thick,densely dotted,myred,-to] (-1,-1) -- (-.5,-.5); \draw[very thick,densely dotted,myred] (-.5,-.5) -- (0,0); \draw[ultra thick,myred] ( 1,-1) -- (0,0); }}\endxy \mspace{60mu} \xy (0,0)*{ \tikzdiagc[scale=.6,yscale=1,xscale=-1]{ \draw[very thick,densely dotted,blue,to-] (0,-1)node[below]{\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[very thick,densely dotted,blue] (0,0) -- (.5,.5); \draw[very thick,densely dotted,blue,-to] (1,1) -- (.5,.5); \draw[very thick,densely dotted,myred] (0,0) -- (0,.5); \draw[very thick,densely dotted,myred,to-] (0,.5) -- (0,1); \draw[very thick,densely dotted,myred,to-] (-1,-1) -- (0,0); \draw[ultra thick,myred] ( 1,-1)node[below]{\tiny $i$} -- (0,0); }}\endxy \mspace{60mu} \xy (0,0)*{ \tikzdiagc[scale=.6]{ \draw[very thick,densely dotted,blue,-to] (0,-1)node[below]{\tiny $i-1$} -- (0,-.5); \draw[very thick,densely dotted,blue] (0,-.5) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[very thick,densely dotted,blue] (0,0) -- (-.5,.5); \draw[very thick,densely dotted,blue,-to] (0,0) -- ( 1,1); \draw[very thick,densely dotted,myred,-to] (0,0) -- (0,1); \draw[very thick,densely dotted,myred,-to] (-1,-1) -- (-.5,-.5); \draw[very thick,densely dotted,myred] (-.5,-.5) -- (0,0); \draw[ultra thick,myred] ( 1,-1)node[below]{\tiny $i$} -- (0,0); }}\endxy \mspace{60mu} \xy (0,0)*{ \tikzdiagc[scale=.6,yscale=-1]{ \draw[very thick,densely dotted,blue,to-] (0,-1) -- (0,-.5); \draw[very thick,densely dotted,blue] (0,-.5) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1)node[below]{\tiny $i-1$}; \draw[very thick,densely dotted,blue] (0,0) -- (-.5,.5); \draw[very thick,densely dotted,blue] (0,0) -- ( .5,.5); \draw[very thick,densely dotted,blue,to-] (.5,.5) -- (1,1); \draw[very thick,densely dotted,myred] (0,0) -- (0,.5); \draw[very thick,densely dotted,myred,to-] (0,.5) -- (0,1)node[below]{\tiny $i$}; \draw[very thick,densely dotted,myred,to-] (-1,-1) -- (0,0); \draw[ultra thick,myred] ( 1,-1) -- (0,0); }}\endxy \] and relations \[ \xy (0,-2.5)*{ \tikzdiagc[scale=.45,yscale=-1,xscale=-1]{ \draw[ultra thick,blue] (-1,-2) -- (-1,2)node[below]{\tiny $i-1$}; \draw[very thick,densely dotted,blue,-to] ( 1,-2) -- ( 1,2); \draw[very thick,densely dotted,myred,-to] (0,-2) -- (0,2); }}\endxy = \xy (0,-2.5)*{ \tikzdiagc[scale=0.45,yscale=-1,xscale=-1]{ \draw[very thick,densely dotted,blue,-to] (0,-1) -- (0,.2); \draw[very thick,densely dotted,blue] (0,.2) -- (0,1); \draw[ultra thick,blue] (0,1) -- (-1,2); \draw[very thick,densely dotted,blue,-to] (0,1) -- (1,2); \draw[ultra thick,blue] (0,-1) -- (-1,-2); \draw[very thick,densely dotted,blue] (0,-1) -- (.45,-1.45); \draw[very thick,densely dotted,blue,to-] (.45,-1.45) -- (1,-2); \draw[very thick,densely dotted,myred,-to] (0,1) -- (0,2)node[below]{\tiny $i$}; \draw[very thick,densely dotted,myred] (0,-1) -- (0,-1.5); \draw[very thick,densely dotted,myred,-to] (0,-2) -- (0,-1.5); \draw[very thick,densely dotted,myred] (0,1) ..controls (-.95,.25) and 
(-.95,-.25) .. (0,-1); \draw[very thick,densely dotted,myred,-to] (-.71,.15) -- (-.71,.18); \draw[ultra thick,myred] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); }}\endxy \mspace{60mu} \xy (0,-2.5)*{ \tikzdiagc[scale=.45,yscale=-1]{ \draw[ultra thick,myred] (-1,-2) -- (-1,2)node[below]{\tiny $i$}; \draw[very thick,densely dotted,myred,-to] ( 1,-2) -- ( 1,2); \draw[very thick,densely dotted,blue,-to] (0,-2) -- (0,2); }}\endxy = \xy (0,-2.5)*{ \tikzdiagc[scale=0.45,yscale=-1]{ \draw[very thick,densely dotted,myred,-to] (0,-1) -- (0,.2); \draw[very thick,densely dotted,myred] (0,.2) -- (0,1); \draw[ultra thick,myred] (0,1) -- (-1,2); \draw[very thick,densely dotted,myred,-to] (0,1) -- (1,2); \draw[ultra thick,myred] (0,-1) -- (-1,-2); \draw[very thick,densely dotted,myred] (0,-1) -- (.45,-1.45); \draw[very thick,densely dotted,myred,to-] (.45,-1.45) -- (1,-2); \draw[very thick,densely dotted,blue,-to] (0,1) -- (0,2)node[below]{\tiny $i-1$}; \draw[very thick,densely dotted,blue] (0,-1) -- (0,-1.5); \draw[very thick,densely dotted,blue,-to] (0,-2) -- (0,-1.5); \draw[very thick,densely dotted,blue] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[very thick,densely dotted,blue,-to] (-.71,.15) -- (-.71,.18); \draw[ultra thick,blue] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); }}\endxy \mspace{60mu} \xy (0,-2.5)*{ \tikzdiagc[scale=.45,yscale=1]{ \draw[ultra thick,blue] (-1,-2)node[below]{\tiny $i-1$} -- (-1,2); \draw[very thick,densely dotted,blue,-to] ( 1,-2) -- ( 1,2); \draw[very thick,densely dotted,myred,-to] (0,-2) -- (0,2); }}\endxy = \xy (0,-2.5)*{ \tikzdiagc[scale=0.45,yscale=1]{ \draw[very thick,densely dotted,blue,-to] (0,-1) -- (0,.2); \draw[very thick,densely dotted,blue] (0,.2) -- (0,1); \draw[ultra thick,blue] (0,1) -- (-1,2); \draw[very thick,densely dotted,blue,-to] (0,1) -- (1,2); \draw[ultra thick,blue] (0,-1) -- (-1,-2); \draw[very thick,densely dotted,blue] (0,-1) -- (.45,-1.45); \draw[very thick,densely dotted,blue,to-] (.45,-1.45) -- (1,-2); \draw[very thick,densely dotted,myred,-to] (0,1) -- (0,2); \draw[very thick,densely dotted,myred] (0,-1) -- (0,-1.5); \draw[very thick,densely dotted,myred,-to] (0,-2)node[below]{\tiny $i$} -- (0,-1.5); \draw[very thick,densely dotted,myred] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[very thick,densely dotted,myred,-to] (-.71,.15) -- (-.71,.18); \draw[ultra thick,myred] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); }}\endxy \mspace{60mu} \xy (0,-2.5)*{ \tikzdiagc[scale=.45,xscale=-1]{ \draw[ultra thick,myred] (-1,-2)node[below]{\tiny $i$} -- (-1,2); \draw[very thick,densely dotted,myred,-to] ( 1,-2) -- ( 1,2); \draw[very thick,densely dotted,blue,-to] (0,-2) -- (0,2); }}\endxy = \xy (0,-2.5)*{ \tikzdiagc[scale=0.45,xscale=-1]{ \draw[very thick,densely dotted,myred,-to] (0,-1) -- (0,.2); \draw[very thick,densely dotted,myred] (0,.2) -- (0,1); \draw[ultra thick,myred] (0,1) -- (-1,2); \draw[very thick,densely dotted,myred,-to] (0,1) -- (1,2); \draw[ultra thick,myred] (0,-1) -- (-1,-2); \draw[very thick,densely dotted,myred] (0,-1) -- (.45,-1.45); \draw[very thick,densely dotted,myred,to-] (.45,-1.45) -- (1,-2); \draw[very thick,densely dotted,blue,-to] (0,1) -- (0,2); \draw[very thick,densely dotted,blue] (0,-1) -- (0,-1.5); \draw[very thick,densely dotted,blue,-to] (0,-2)node[below]{\tiny $i-1$} -- (0,-1.5); \draw[very thick,densely dotted,blue] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. 
(0,-1); \draw[very thick,densely dotted,blue,-to] (-.71,.15) -- (-.71,.18); \draw[ultra thick,blue] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); }}\endxy \] \end{rem} \begin{rem}\label{rem:RRmRR} The following will not be used in the sequel. The canonical isomorphisms $\mathrm{B}_i\mathrm{T}_j^{\pm 1}\cong\mathrm{T}_j^{\pm 1}\mathrm{B}_i$, for distant $i$ and $j$, translate into the generators \[ \xy (0,0)*{ \tikzdiagc[scale=.5,yscale=1,xscale=-1]{ \draw[very thick,densely dotted,orange,-to] (-1,-1)node[below]{\tiny $j$} -- (1,1); \draw[ultra thick,blue] ( 1,-1)node[below]{\tiny $i$} -- (-1,1); }}\endxy \mspace{40mu} \xy (0,0)*{ \tikzdiagc[scale=.5,yscale=1,xscale=1]{ \draw[very thick,densely dotted,orange,-to] (-1,-1)node[below]{\tiny $j$} -- (1,1); \draw[ultra thick,blue] ( 1,-1)node[below]{\tiny $i$} -- (-1,1); }}\endxy \mspace{40mu} \xy (0,0)*{ \tikzdiagc[scale=.5,yscale=-1,xscale=1]{ \draw[very thick,densely dotted,orange,-to] (-1,-1) -- (1,1)node[below]{\tiny $j$}; \draw[ultra thick,blue] ( 1,-1) -- (-1,1)node[below]{\tiny $i$}; }}\endxy \mspace{40mu} \xy (0,0)*{ \tikzdiagc[scale=.5,yscale=-1,xscale=-1]{ \draw[very thick,densely dotted,orange,-to] (-1,-1) -- (1,1)node[below]{\tiny $j$}; \draw[ultra thick,blue] ( 1,-1) -- (-1,1)node[below]{\tiny $i$}; }}\endxy \] satisfying the relations \[ \xy (0,-2)*{ \tikzdiagc[yscale=1.7,xscale=1.1]{ \draw[ultra thick,blue] (0,0)node[below]{\tiny $i$} ..controls (0,.25) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.75) .. (0,1); \begin{scope}[shift={(.65,0)}] \draw[very thick,densely dotted,orange,-to] (0,0)node[below]{\tiny $j$} ..controls (0,.25) and (-.65,.25) .. (-.65,.5) ..controls (-.65,.75) and (0,.75) .. (0,1); \end{scope} }}\endxy = \ \xy (0,-2)*{ \tikzdiagc[yscale=1.7,xscale=-1.1]{ \draw[ultra thick, blue] (.65,0)node[below]{\tiny $i$} -- (.65,1); \draw[very thick,densely dotted,orange,-to] (0,0)node[below]{\tiny $j$} -- (0,1); }}\endxy \mspace{60mu} \xy (0,-2)*{ \tikzdiagc[yscale=1.7,xscale=-1.1]{ \draw[ultra thick,blue] (0,0)node[below]{\tiny $i$} ..controls (0,.25) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.75) .. (0,1); \begin{scope}[shift={(.65,0)}] \draw[very thick,densely dotted,orange,-to] (0,0)node[below]{\tiny $j$} ..controls (0,.25) and (-.65,.25) .. (-.65,.5) ..controls (-.65,.75) and (0,.75) .. (0,1); \end{scope} }}\endxy = \ \xy (0,-2)*{ \tikzdiagc[yscale=1.7,xscale=1.1]{ \draw[ultra thick, blue] (.65,0)node[below]{\tiny $i$} -- (.65,1); \draw[very thick,densely dotted,orange,-to] (0,0)node[below]{\tiny $j$} -- (0,1); }}\endxy \mspace{60mu} \xy (0,-2)*{ \tikzdiagc[yscale=-1.7,xscale=1.1]{ \draw[ultra thick,blue] (0,0) ..controls (0,.25) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.75) .. (0,1)node[below]{\tiny $i$}; \begin{scope}[shift={(.65,0)}] \draw[very thick,densely dotted,orange,-to] (0,0) ..controls (0,.25) and (-.65,.25) .. (-.65,.5) ..controls (-.65,.75) and (0,.75) .. (0,1)node[below]{\tiny $j$}; \end{scope} }}\endxy = \ \xy (0,-2)*{ \tikzdiagc[yscale=-1.7,xscale=-1.1]{ \draw[ultra thick, blue] (.65,0) -- (.65,1)node[below]{\tiny $i$}; \draw[very thick,densely dotted,orange,-to] (0,0) -- (0,1)node[below]{\tiny $j$}; }}\endxy \mspace{60mu} \xy (0,-2)*{ \tikzdiagc[yscale=-1.7,xscale=-1.1]{ \draw[ultra thick,blue] (0,0) ..controls (0,.25) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.75) .. (0,1)node[below]{\tiny $i$}; \begin{scope}[shift={(.65,0)}] \draw[very thick,densely dotted,orange,-to] (0,0) ..controls (0,.25) and (-.65,.25) .. 
(-.65,.5) ..controls (-.65,.75) and (0,.75) .. (0,1)node[below]{\tiny $j$}; \end{scope} }}\endxy = \ \xy (0,-2)*{ \tikzdiagc[yscale=-1.7,xscale=1.1]{ \draw[ultra thick, blue] (.65,0) -- (.65,1)node[below]{\tiny $i$}; \draw[very thick,densely dotted,orange,-to] (0,0) -- (0,1)node[below]{\tiny $j$}; }}\endxy \] There are also maps $\mathrm{B}_i\to\mathrm{T}_i^{-1}$, $\mathrm{T}_i\to \mathrm{B}_i$, $R\to\mathrm{T}_i$ and $\mathrm{T}_i^{-1}\to R$ of non-zero degree in ${\EuScript{K}^b}(\EuScript{BS}_d)$, depicted respectively as \[ \xy (0,-2)*{ \tikzdiagc{ \draw[ultra thick,blue] (0,-.5) -- (0,0); \draw[very thick,densely dotted,blue,to-] (0,0) -- (0,.5); }}\endxy \mspace{60mu} \xy (0,-2)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,blue] (0,-.5) -- (0,0); \draw[very thick,densely dotted,blue,to-] (0,0) -- (0,.5); }}\endxy \mspace{60mu} \xy (0,-1)*{ \tikzdiagc{ \draw[ultra thick,white] (0,-.5) -- (0,-.25); \draw[very thick,densely dotted,blue,-to] (0,-.25) -- (0,.3)node[pos=0, tikzdot]{}; }}\endxy \mspace{60mu} \xy (0,-3)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,white] (0,-.5) -- (0,-.25); \draw[very thick,densely dotted,blue,-to] (0,-.25) -- (0,.3)node[pos=0, tikzdot]{}; }}\endxy \] and satisfying certain diagrammatic relations, which are easy to deduce. Note also that the composite \[ \xy (0,0)*{ \tikzdiagc{ \draw[very thick,densely dotted,blue,-to] (0,-1) -- (0,-.5); \draw[ultra thick,blue] (0,-.5) -- (0,0); \draw[very thick,densely dotted,blue,to-] (0,0) -- (0,.5); }}\endxy \ =: \ \xy (0,0)*{ \tikzdiagc{ \draw[very thick,densely dotted,blue,-to] (0,-1) -- (0,-.30); \draw[blue,fill=blue] (0,-.25) circle (.08); \draw[very thick,densely dotted,blue,to-] (0,-.2) -- (0,.5); }}\endxy \] is the map $\mathrm{T}_i^{-1}\to\mathrm{T}_i$ mentioned in~\fullref{rem:Hquadratic}. \end{rem} \subsection{Some diagrammatic shortcuts II: special Rouquier complexes $\mathrm{T}_\rho^{\pm 1}$}\label{sec:diagshortcutsII} In this subsection, we introduce and study a special Rouquier complex, denoted $\mathrm{T}_{\rho}$, which will play an important role in the definition of the evaluation functors. \begin{defn} Define \[ \textcolor{violet}{\rT_{\rho}}:=\mathrm{T}_{1} \dotsm \mathrm{T}_{d-1}\quad\text{and}\quad \textcolor{violet}{\rT_{\rho}^{-1}} :=\mathrm{T}^{-1}_{d-1} \dotsm \mathrm{T}_{1}^{-1} \] in ${\EuScript{K}^b}(\EuScript{S}_d)$. 
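For orientation, here is the smallest case spelled out (a sketch only: we use the normalization of Rouquier complexes visible in the proofs below, where $\mathrm{T}_i^{-1}\simeq\bigl(R\langle -1\rangle\to\underline{\mathrm{B}_i}\bigr)$ with the underlined term in homological degree zero, and hence $\mathrm{T}_i\simeq\bigl(\underline{\mathrm{B}_i}\to R\langle 1\rangle\bigr)$; the differentials, which are built from dot maps, are suppressed). For $d=2$ we simply have $\textcolor{violet}{\rT_{\rho}}=\mathrm{T}_1$, while for $d=3$
\[
\textcolor{violet}{\rT_{\rho}}=\mathrm{T}_1\mathrm{T}_2\simeq \bigl(\,\underline{\mathrm{B}_1\mathrm{B}_2}\to \mathrm{B}_1\langle 1\rangle\oplus\mathrm{B}_2\langle 1\rangle\to R\langle 2\rangle\,\bigr)
\quad\text{and}\quad
\textcolor{violet}{\rT_{\rho}^{-1}}=\mathrm{T}_2^{-1}\mathrm{T}_1^{-1}\simeq \bigl(\,R\langle -2\rangle\to \mathrm{B}_2\langle -1\rangle\oplus\mathrm{B}_1\langle -1\rangle\to \underline{\mathrm{B}_2\mathrm{B}_1}\,\bigr).
\]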
\end{defn} In order to develop a diagrammatic calculus for these special Rouquier complexes, we first picture the {\em identity morphisms} of $\mathrm{T}_{\rho}$ and $\mathrm{T}_{\rho}^{-1}$ as upward and downward oriented arrows, respectively: \[ \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[thick,double,violet,-to] (.9,-.5) -- (.9,.5); }}\endxy\; := \xy (0,0)*{ \tikzdiagc[yscale=0.9]{ \draw[very thick,densely dotted,blue,-to] (0,-.5)node[below]{\tiny $1$} -- (0,.5); \draw[very thick,densely dotted,myred,-to] (.5,-.5)node[below]{\tiny $2$} -- (.5,.5); \node at (1,-.05) {$\cdots$}; \draw[very thick,densely dotted,mygreen,-to] (1.5,-.5)node[below]{\tiny $d-1$} -- (1.5,.5)node[above]{\phantom{\tiny $d-1$}}; }}\endxy \mspace{60mu}\text{and}\mspace{60mu} \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[thick,double,violet,to-] (.9,-.5) -- (.9,.5); }}\endxy\; := \xy (0,0)*{ \tikzdiagc[yscale=0.9,xscale=-1]{ \draw[very thick,densely dotted,blue,to-] (0,-.5)node[below]{\tiny $1$} -- (0,.5); \draw[very thick,densely dotted,myred,to-] (.5,-.5)node[below]{\tiny $2$} -- (.5,.5); \node at (1,-.05) {$\cdots$}; \draw[very thick,densely dotted,mygreen,to-] (1.5,-.5)node[below]{\tiny $d-1$} -- (1.5,.5)node[above]{\phantom{\tiny $d-1$}}; }}\endxy \] Further, we introduce {\em oriented cups and caps} \begingroup\allowdisplaybreaks \begin{align*} \xy (0,1.1)*{ \tikzdiagc[yscale=0.9]{ \draw[thick,double,violet,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy &:=\ \ \ \xy (0,1.1)*{ \tikzdiagc[yscale=0.9]{ \draw[very thick,densely dotted,mygreen,to-] (-.5,0) to [out=270,in=180] (0,-.7) to [out=0,in=-90] (.5,0)node[above]{\tiny $d-1$}; \draw[very thick,densely dotted,myred,to-] (-1.25,0) to [out=270,in=180] (0,-1.45) to [out=0,in=-90] (1.25,0)node[above]{\tiny $2$}; \draw[very thick,densely dotted,blue,to-] (-1.5,0) to [out=270,in=180] (0,-1.7) to [out=0,in=-90] (1.5,0)node[above]{\tiny $1$}; \node at (-.85,-.2) {\tiny $\cdots$};\node at (.85,-.2) {\tiny $\cdots$}; }}\endxy & \xy (0,1.1)*{ \tikzdiagc[yscale=0.9,xscale=-1]{ \draw[thick,double,violet,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy &:= \xy (0,1.1)*{ \tikzdiagc[yscale=0.9,xscale=-1]{ \draw[very thick,densely dotted,blue,to-] (-.5,0) to [out=270,in=180] (0,-.7) to [out=0,in=-90] (.5,0)node[above]{\tiny $1$}; \draw[very thick,densely dotted,myred,to-] (-.75,0) to [out=270,in=180] (0,-.95) to [out=0,in=-90] (.75,0)node[above]{\tiny $2$}; \draw[very thick,densely dotted,mygreen,to-] (-1.5,0) to [out=270,in=180] (0,-1.7) to [out=0,in=-90] (1.5,0)node[above]{\tiny $d-1$}; \node at (-1.125,-.2) {\tiny $\cdots$};\node at (1.125,-.2) {\tiny $\cdots$}; }}\endxy \\[1ex] \xy (0,0)*{ \tikzdiagc[yscale=-.9]{ \draw[thick,double,violet,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy &:=\ \ \ \xy (0,-1.1)*{ \tikzdiagc[yscale=-.9]{ \draw[very thick,densely dotted,blue,to-] (-.5,0) to [out=270,in=180] (0,-.7) to [out=0,in=-90] (.5,0)node[below]{\tiny $1$}; \draw[very thick,densely dotted,myred,to-] (-.75,0) to [out=270,in=180] (0,-.95) to [out=0,in=-90] (.75,0)node[below]{\tiny $2$}; \draw[very thick,densely dotted,mygreen,to-] (-1.5,0) to [out=270,in=180] (0,-1.7) to [out=0,in=-90] (1.5,0)node[below]{\tiny $d-1$}; \node at (-1.125,-.2) {\tiny $\cdots$};\node at (1.125,-.2) {\tiny $\cdots$}; }}\endxy & \xy (0,0)*{ \tikzdiagc[yscale=-.9,xscale=-1]{ \draw[thick,double,violet,to-] (1,.4) .. 
controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy & :=\ \ \ \xy (0,1.1)*{ \tikzdiagc[yscale=-0.9,xscale=-1]{ \draw[very thick,densely dotted,mygreen,to-] (-.5,0) to [out=270,in=180] (0,-.7) to [out=0,in=-90] (.5,0)node[below]{\tiny $d-1$}; \draw[very thick,densely dotted,myred,to-] (-1.25,0) to [out=270,in=180] (0,-1.45) to [out=0,in=-90] (1.25,0)node[below]{\tiny $2$}; \draw[very thick,densely dotted,blue,to-] (-1.5,0) to [out=270,in=180] (0,-1.7) to [out=0,in=-90] (1.5,0)node[below]{\tiny $1$}; \node at (-.85,-.2) {\tiny $\cdots$};\node at (.85,-.2) {\tiny $\cdots$}; }}\endxy \end{align*} \endgroup These correspond to the units and counits of left and right adjunction for $\mathrm{T}_\rho$ and $\mathrm{T}_{\rho}^{-1}$ in ${\EuScript{K}^b}(\EuScript{S}_d)$. Algebraically, they can be expressed in terms of the maps given in~\fullref{sec:diagshortcutsI} \begingroup\allowdisplaybreaks \begin{align*} \xy (0,1.1)*{ \tikzdiagc[yscale=0.9]{ \draw[thick,double,violet,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy &:= (1^{d-2}\psi_{d-1}^{-1} 1^{d-2}) \circ (1^{d-3}\eta_{d-2,+}\psi_{d-2}^{-1} 1^{d-3}) \circ \dotsm \circ ( 1 \eta_{2,+}\psi_{2}^{-1} 1) \\ & \qquad \circ ( \eta_{1,+}\psi_{1}^{-1}) \colon R\to \textcolor{violet}{\rT_{\rho}} \textcolor{violet}{\rT_{\rho}^{-1}} , \\ \xy (0,1.1)*{ \tikzdiagc[yscale=0.9]{ \draw[thick,double,violet,-to] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy &:= (1^{d-2}\phi_1^{-1} 1^{d-2}) \circ (1^{d-3}\eta_{2,-}\phi_2^{-1} 1^{d-3}) \circ \dotsm \circ ( 1 \eta_{d-2,-}\phi_{d-2}^{-1} 1) \\ & \qquad \circ ( \eta_{d-1,-}\phi_{d-1}^{-1}) \colon R\to \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{violet}{\rT_{\rho}} , \\ \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[thick,double,violet,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy &:=(\phi_{1}\eta_{1,-}^{-1}) \circ (1\phi_{2}\eta_{2,-}^{-1} 1) \circ \dotsm \circ ( 1^{d-3} \phi_{2}\eta_{2,-}^{-1} 1^{d-3}) \\ & \qquad \circ (1^{d-2} \phi_{1}1^{d-2}) \colon\textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{violet}{\rT_{\rho}} \to R , \\ \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[thick,double,violet,-to] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy &:=(\psi_{d-1}\eta_{d-1,+}^{-1}) \circ (1\psi_{d-2}\eta_{d-2,+}^{-1} 1) \circ \dotsm \circ ( 1^{d-3} \psi_{2}\eta_{2,+}^{-1} 1^{d-3}) \\ & \qquad \circ (1^{d-2} \psi_{d-1}1^{d-2}) \colon \textcolor{violet}{\rT_{\rho}} \textcolor{violet}{\rT_{\rho}^{-1}} \to R , \end{align*} \endgroup \begin{lem}\label{lem:istpy-Rrho} The oriented cups and caps satisfy the following relations in ${\EuScript{K}^b}(\EuScript{S}_d)$ \begin{equation}\label{eq:loopRrho} \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[thick,double,violet] (0,0) circle (.65);\draw [thick,double,violet,-to] (.65,0) --(.65,0); }}\endxy \ = 1 = \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[thick,double,violet] (0,0) circle (.65);\draw [thick,double,violet,to-] (-.65,0) --(-.65,0); }}\endxy \ \end{equation} \begin{equation}\label{eq:Rrhoinvert} \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[thick,double,violet,-to] (.5,-.75) -- (.5,.75); \draw[thick,double,violet,to-] (-.5,-.75) -- (-.5,.75); }}\endxy\ = \xy (0,0)*{ \tikzdiagc[yscale=0.9]{ \draw[thick,double,violet,-to] (1,.75) .. controls (1.2,-.05) and (1.8,-.05) .. (2,.75); \draw[thick,double,violet,to-] (1,-.75) .. controls (1.2,.05) and (1.8,.05) .. 
(2,.-.75); }}\endxy \mspace{80mu} \xy (0,0)*{ \tikzdiagc[yscale=-0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[thick,double,violet,-to] (.5,-.75) -- (.5,.75); \draw[thick,double,violet,to-] (-.5,-.75) -- (-.5,.75); }}\endxy\ = \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[thick,double,violet,-to] (1,.75) .. controls (1.2,-.05) and (1.8,-.05) .. (2,.75); \draw[thick,double,violet,to-] (1,-.75) .. controls (1.2,.05) and (1.8,.05) .. (2,.-.75); }}\endxy \end{equation} \begin{equation}\label{eq:snake} \xy (0,0)*{ \begin{tikzpicture}[scale=1.5] \draw[thick,double,violet] (0,-.55) -- (0,0); \draw[thick,double,violet,-to] (0,0) to [out=90,in=180] (.25,.5) to [out=0,in=90] (.5,0); \draw[thick,double,violet] (.5,0) to [out=270,in=180] (.75,-.5) to [out=0,in=270] (1,0); \draw[thick,double,violet] (1,0) -- (1,0.75); \end{tikzpicture} }\endxy = \xy ( 0,0)*{\begin{tikzpicture}[scale=1.5] \draw[thick,double,violet,-to] (0,-.55) to (0,.75); \end{tikzpicture} }\endxy = \xy (0,0)*{ \begin{tikzpicture}[scale=1.5,xscale=-1] \draw[thick,double,violet] (0,-.55) -- (0,0); \draw[thick,double,violet,-to] (0,0) to [out=90,in=180] (.25,.5) to [out=0,in=90] (.5,0); \draw[thick,double,violet] (.5,0) to [out=270,in=180] (.75,-.5) to [out=0,in=270] (1,0); \draw[thick,double,violet] (1,0) -- (1,0.75); \end{tikzpicture} }\endxy \mspace{60mu} \xy (0,0)*{ \begin{tikzpicture}[scale=1.5] \draw[thick,double,violet] (0,-.55) -- (0,0); \draw[thick,double,violet,to-] (0,0) to [out=90,in=180] (.25,.5) to [out=0,in=90] (.5,0); \draw[thick,double,violet] (.5,0) to [out=270,in=180] (.75,-.5) to [out=0,in=270] (1,0); \draw[thick,double,violet] (1,0) -- (1,0.75); \end{tikzpicture} }\endxy = \xy ( 0,0)*{\begin{tikzpicture}[scale=1.5] \draw[thick,double,violet,to-] (0,-.55) to (0,.75); \end{tikzpicture} }\endxy = \xy (0,0)*{ \begin{tikzpicture}[scale=1.5,xscale=-1] \draw[thick,double,violet] (0,-.55) -- (0,0); \draw[thick,double,violet,to-] (0,0) to [out=90,in=180] (.25,.5) to [out=0,in=90] (.5,0); \draw[thick,double,violet] (.5,0) to [out=270,in=180] (.75,-.5) to [out=0,in=270] (1,0); \draw[thick,double,violet] (1,0) -- (1,0.75); \end{tikzpicture} }\endxy \end{equation} \end{lem} \begin{proof} The relations in~\eqref{eq:snake} are a consequence of~\fullref{lem:snakeR}. The other relations are immediate. 
\end{proof} The next diagrammatic generators involving oriented strands are the {\em mixed crossings}, which correspond to the following degree-zero isomorphisms in ${\EuScript{K}^b}(\EuScript{S}_d)$, for $1 < i \leq d-1$: \begin{equation}\label{eq:mXdef} \xy (0,.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5)node[below] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5)node[above] {\tiny $i-1$} -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy =\id_{\mathrm{T}_{d-1}^{-1}\dotsm\mathrm{T}_{i+1}^{-1}} F_{i,r} \id_{\mathrm{T}_{i-2}^{-1}\dotsm\mathrm{T}_{1}^{-1}} \colon \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{blue}{\rB_i}\to \textcolor{myred}{\rB_{i-1}}\textcolor{violet}{\rT_{\rho}^{-1}} , \end{equation} where in homological degrees $-2$, $-1$ and $0$, respectively, we define \begin{equation}\label{eq:Fir} F_{i,r} := \left( 0, \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,0) -- (1.75, 1); \draw[ultra thick,blue] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{};} }\endxy & -\xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,myred] (2,-1.5) -- (2, 1);} }\endxy \\[1.5ex] -\xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,blue] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. (2.1,-.5); \draw[ultra thick,myred] (.9,.5) .. controls (1.2,0) and (1.8,0) .. (2.1, .5); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=-0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{};} }\endxy \end{pmatrix}, \xy (0,0)*{ \tikzdiagc[scale=0.3,yscale=-1]{ \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[ultra thick,myred] (-1,-1) -- (0,0); \draw[ultra thick,myred] (1,-1) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,1); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- ( 1,1); }}\endxy \right) . \end{equation} This is the map obtained from the homotopy equivalence in~\fullref{lem:RRB} by tensoring on the left with the identity morphism of $\mathrm{T}_{d-1}^{-1}\dotsm\mathrm{T}_{i+1}^{-1}$ and on the right with the identity morphism of $\mathrm{T}_{i-2}^{-1}\dotsm\mathrm{T}_{1}^{-1}$, and using when necessary the permutation isomorphism between $\mathrm{T}_i^{-1}\mathrm{B}_j$ and $\mathrm{B}_j\mathrm{T}_i^{-1}$ if $\vert i-j\vert\neq 1$.
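To unpack the notation in the smallest case: for $d=3$ and $i=2$ both identity factors in~\eqref{eq:mXdef} are empty products, so the mixed crossing is the map
\[
F_{2,r}\colon \textcolor{violet}{\rT_{\rho}^{-1}}\mathrm{B}_2=\mathrm{T}_2^{-1}\mathrm{T}_1^{-1}\mathrm{B}_2\longrightarrow \mathrm{B}_1\mathrm{T}_2^{-1}\mathrm{T}_1^{-1}=\mathrm{B}_1\textcolor{violet}{\rT_{\rho}^{-1}}
\]
itself, i.e. exactly the homotopy equivalence of~\fullref{lem:RRB}, and no permutation isomorphisms are needed.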
Analogously, \begin{equation*} \xy (0,.75)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[below] {\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.5,.5)node[above] {\tiny $i$} -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy = \id_{\mathrm{T}_{d-1}^{-1}\dotsm\mathrm{T}_{i+1}^{-1}} G_{i,r} \id_{\mathrm{T}_{i-2}^{-1}\dotsm\mathrm{T}_{1}^{-1}} \colon \textcolor{myred}{\rB_{i-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\to\textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{blue}{\rB_i} , \end{equation*} with \begin{equation}\label{eq:Gir} G_{i,r} = \left( 0, \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=-0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,0) -- (1.75, 1); \draw[ultra thick,blue] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{};} }\endxy & -\xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,myred] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. (2.1,-.5); \draw[ultra thick,blue] (.9,.5) .. controls (1.2,0) and (1.8,0) .. (2.1, .5); }}\endxy \\[1.5ex] -\xy (0,0)*{ \tikzdiagc[yscale=-0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,myred] (2,-1.5) -- (2, 1);} }\endxy & \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{};} }\endxy \end{pmatrix}, \xy (0,0)*{ \tikzdiagc[scale=0.3,yscale=1]{ \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[ultra thick,myred] (-1,-1) -- (0,0); \draw[ultra thick,myred] (1,-1) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,1); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- ( 1,1); }}\endxy \right) . \end{equation} Of course, there are also mixed crossings involving $\mathrm{T}_\rho$, which are depicted as \begin{equation*} \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,myred] (.5,-.5)node[below] {\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.5,.5)node[above] {\tiny $i$} -- (0,0); \draw[thick,double,violet,-to] (-.5,-.5) -- (.5,.5); }}\endxy \colon \textcolor{violet}{\rT_{\rho}}\textcolor{myred}{\rB_{i-1}}\to \textcolor{blue}{\rB_i} \textcolor{violet}{\rT_{\rho}} \mspace{40mu}\text{and}\mspace{40mu} \xy (0,0)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,blue] (.5,-.5)node[below] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5)node[above] {\tiny $i-1$} -- (0,0); \draw[thick,double,violet,-to] (-.5,-.5) -- (.5,.5); }}\endxy \colon \textcolor{blue}{\rB_i} \textcolor{violet}{\rT_{\rho}} \to \textcolor{violet}{\rT_{\rho}} \textcolor{myred}{\rB_{i-1}}.
\end{equation*} \begin{lem} For distant colors $i, j=1,\ldots, d-1$, we have \begin{equation}\label{eq:R3violet} \xy (0,0)*{ \tikzdiagc[scale=.6]{ \draw[ultra thick,blue] (.35,-1)node[below]{\tiny $i$} -- (.35,.35); \draw[ultra thick,myred] (.35,.35) -- (.35,1)node[above]{\tiny $i-1$}; \draw[ultra thick,mygreen] ( 1,-1)node[below]{\tiny $j$} -- (0,0); \draw[ultra thick,orange] (0,0) -- (-1,1)node[above]{\tiny $j-1$}; \draw[thick,double,violet,to-] (-1,-1)-- (1,1); }}\endxy = \xy (0,0)*{ \tikzdiagc[scale=.6]{ \draw[ultra thick,blue] (-.35,-1)node[below]{\tiny $i$} -- (-.35,-.35); \draw[ultra thick,myred] (-.35,-.35) -- (-.35,1)node[above,xshift=4pt]{\tiny $i-1$}; \draw[ultra thick,mygreen] ( 1,-1)node[below]{\tiny $j$} -- (0,0); \draw[ultra thick,orange] (0,0) -- (-1,1)node[above, xshift=-4pt]{\tiny $j-1$}; \draw[thick,double,violet,to-] (-1,-1)-- (1,1); }}\endxy \end{equation} in ${\EuScript{K}^b}(\EuScript{S}_d)$. \end{lem} \begin{proof} It is clear that the map in \eqref{eq:mXdef} commutes with the 4-valent crossing for distant colors. \end{proof} The proof of the following lemma is immediate and, therefore, omitted. \begin{lem}\label{lem:mixedRtwo} The mixed crossings in ${\EuScript{K}^b}(\EuScript{S}_d)$ satisfy the relations \begingroup\allowdisplaybreaks \begin{equation}\label{eq:mixedRtwo} \begin{split} \xy (0,1)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick, blue] (1,0)node[below]{\tiny $i$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,myred] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick, blue] (1,1)node[above]{\tiny $i$} .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[thick,double,violet,to-] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); \node[myred] at (-.4,.5) {\tiny $i-1$} }}\endxy = \ \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick, blue] (1,0)node[below]{\tiny $i$} -- (1,1) node[above]{\tiny $i$}; \draw[thick,double,violet,to-] (0,0) -- (0,1); }}\endxy \mspace{80mu} \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=-1.1]{ \draw[ultra thick,myred] (1,0)node[below]{\tiny $i-1$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,blue] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick,myred] (1,1)node[above]{\tiny $i-1$} .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[thick,double,violet,to-] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); \node[blue] at (-.1,.5) {\tiny $i$} }}\endxy = \ \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=-1.1]{ \draw[ultra thick,myred] (1,0)node[below]{\tiny $i-1$} -- (1,1) node[above]{\tiny $i-1$}; \draw[thick,double,violet,to-] (0,0) -- (0,1); }}\endxy \\%\intertext{and} \xy (0,1)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick,myred] (1,0)node[below]{\tiny $i-1$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,blue] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick,myred] (1,1)node[above]{\tiny $i-1$} .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[thick,double,violet,-to] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); \node[blue] at (-.4,.5) {\tiny $i$} }}\endxy = \ \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick,myred] (1,0)node[below]{\tiny $i-1$} -- (1,1) node[above]{\tiny $i-1$}; \draw[thick,double,violet,-to] (0,0) -- (0,1); }}\endxy \mspace{80mu} \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=-1.1]{ \draw[ultra thick,blue] (1,0)node[below]{\tiny $i$} .. 
controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,myred] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick,blue] (1,1)node[above]{\tiny $i$} .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[thick,double,violet,-to] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); \node[myred] at (-.4,.5) {\tiny $i-1$} }}\endxy = \ \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=-1.1]{ \draw[ultra thick,blue] (1,0)node[below]{\tiny $i$} -- (1,1) node[above]{\tiny $i$}; \draw[thick,double,violet,-to] (0,0) -- (0,1); }}\endxy \end{split} \end{equation} \endgroup \end{lem} \begin{lem} The following diagrammatic relations hold in ${\EuScript{K}^b}(\EuScript{BS}_d)$: \begingroup\allowdisplaybreaks \begin{equation}\label{eq:Dslide-Xmixed} \begin{split} \xy (0,1)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.3,-.3)node[below]{\tiny $i$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (-.5,.5)node[above]{\tiny $i-1$} -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5)node[below]{\phantom{\tiny $i$}} -- (.5,.5); }}\endxy = \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,myred] (-.5,.5)node[above]{\tiny $i-1$} -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[thick,double,violet,to-] (-.5,-.5)node[below]{\phantom{\tiny $i$}} -- (.5,.5); }}\endxy \mspace{80mu} \xy (0,-1.25)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,blue] (.3,-.3)node[above]{\tiny $i$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (-.5,.5)node[below]{\tiny $i-1$} -- (0,0); \draw[thick,double,violet,-to] (-.5,-.5) -- (.5,.5)node[above]{\phantom{\tiny $i-1$}}; }}\endxy = \xy (0,-.25)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,myred] (-.5,.5)node[below]{\tiny $i-1$} -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[thick,double,violet,-to] (-.5,-.5)node[above]{\phantom{\tiny $i-1$}} -- (.5,.5); }}\endxy \\[1ex] \xy (0,1)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,blue] (.3,-.3)node[below]{\tiny $i$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (-.5,.5)node[above]{\tiny $i-1$} -- (0,0); \draw[thick,double,violet,-to] (-.5,-.5)node[below]{\phantom{\tiny $i-1$}} -- (.5,.5); }}\endxy = \xy (0,0)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,myred] (-.5,.5)node[above]{\tiny $i-1$} -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[thick,double,violet,-to] (-.5,-.5)node[below]{\phantom{\tiny $i-1$}} -- (.5,.5); }}\endxy \mspace{80mu} \xy (0,-1.25)*{ \tikzdiagc[yscale=-1,xscale=-1]{ \draw[ultra thick,blue] (.3,-.3)node[above]{\tiny $i$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (-.5,.5)node[below]{\tiny $i-1$} -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5)node[below]{\phantom{\tiny $i-1$}} -- (.5,.5); }}\endxy = \xy (0,-.25)*{ \tikzdiagc[yscale=-1,xscale=-1]{ \draw[ultra thick,myred] (-.5,.5)node[below]{\tiny $i-1$} -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[thick,double,violet,to-] (-.5,-.5)node[above]{\phantom{\tiny $i-1$}} -- (.5,.5); }}\endxy \end{split} \end{equation} \endgroup \end{lem} \begin{proof} We prove the first relation in~\eqref{eq:Dslide-Xmixed}, the proofs of the others being similar.
By~\eqref{eq:Fir}, the maps of complexes corresponding to the two sides of~\eqref{eq:Dslide-Xmixed} are \begin{equation*} \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.3,-.3) -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (-.5,.5) -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy =\id_{\mathrm{T}_{d-1}^{-1}\dotsm\mathrm{T}_{i+1}^{-1}} F \id_{\mathrm{T}_{i-2}^{-1}\dotsm\mathrm{T}_{1}^{-1}} , \end{equation*} where in homological degrees $(-2, -1,0)$, respectively, we have \begin{equation*} F = \left( 0, \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,myred] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy & -\xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,blue] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy \\[1.5ex] -\xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,blue] (1.5,-.5) -- (1.5,-.15)node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (.9,.5) .. controls (1.2,.1) and (1.8,.1) .. (2.1, .5); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=-0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2.1,-.7) -- (2.1, .5)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{};} }\endxy \end{pmatrix},\; \xy (0,0)*{ \tikzdiagc[scale=0.4,yscale=-1]{ \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[ultra thick,myred] (-1,-1) -- (0,0); \draw[ultra thick,myred] (1,-1) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,1); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- (.65,.65)node[pos=1, tikzdot]{}; }}\endxy \right) , \end{equation*} and \begin{equation*} \xy (0,.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,myred] (-.5,.5) -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy =\id_{\mathrm{T}_{d-1}^{-1}\dotsm\mathrm{T}_{i+1}^{-1}} G \id_{\mathrm{T}_{i-2}^{-1}\dotsm\mathrm{T}_{1}^{-1}} , \end{equation*} where in the same homological degrees we have \begin{equation*} G = \left( \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,myred] (2,0) -- (2, 1.5)node[pos=0, tikzdot]{};} }\endxy \, , \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,myred] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy & 0 \\[1.5ex] 0 & \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,myred] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,myred] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy\, \end{pmatrix},\; \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,myred] (1.5,-1.5) -- (1.5, 1); \draw[ultra thick,blue] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,myred] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy \right) . 
\end{equation*} The diagram below shows that $F-G$ is zero in ${\EuScript{K}^b}(\EuScript{S}_d)$: \[ \begin{tikzpicture}[anchorbase,scale=0.7] \begin{scope}[shift={(0,0)}] \node at (-4,0) {\small $ \textcolor{myred}{\rB_{i-1}}\textcolor{blue}{\rT_i^{-1}}\mathrm{T}_{i-1}^{-1} \langle 1\rangle\colon$}; \node at (0,0) {\small $\textcolor{myred}{\rB_{i-1}} \langle -1\rangle $}; \node at (6,0) {\small $\begin{pmatrix} \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \\[1.5ex] \textcolor{myred}{\rB_{i-1}} \textcolor{myred}{\rB_{i-1}}\end{pmatrix}$}; \node at (13.9,0) {\small $\underline{\textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}} \langle 1\rangle }$}; \draw[thick,->] (1.5,.05) to (4,.05); \node at (2.5,1.5) {$ \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,blue] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy \\[1ex] \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,myred] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy \end{pmatrix}$}; \draw[thick,->] (8,.05) to (11.5,.05); \node at (9.7,1) {$ \biggl( \xy (0,.0)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,myred] (1.5,-1) -- (1.5, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.75,-2) -- (1.75, .6); \draw[ultra thick,myred] (2,-2) -- (2, .6);} }\endxy\, ,\! \xy (0,.0)*{ \tikzdiagc[yscale=0.25]{ \node at (1.25,-.5) {\bf $-$}; \draw[ultra thick,blue] (1.75,-1) -- (1.75, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-2) -- (1.5, .6); \draw[ultra thick,myred] (2,-2) -- (2, .6);} }\endxy \biggr)$}; \end{scope} \begin{scope}[shift={(0,-5)}] \node at (-3.58,0) {\small $\textcolor{blue}{\rT_i^{-1}}\mathrm{T}_{i-1}^{-1}\colon$}; \node at (0,0) {\small $R \langle -2\rangle$}; \node at (6,0) {\small $\begin{pmatrix} \textcolor{blue}{\rB_i} \langle -1 \rangle\\[1.5ex] \textcolor{myred}{\rB_{i-1}} \langle -1 \rangle \end{pmatrix}$}; \node at (13.9,0) {\small $\underline{\textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}}$}; \draw[thick,->] (1.1,.05) to (4.25,.05); \node at (2,1) {$ \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,blue] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy \\[0.5ex] \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy \end{pmatrix}$}; \draw[thick,->] (8,.05) to (12.3,.05); \node at (9.7,1) {$ \biggl( \xy (0,.0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,myred] (1.75,-1) -- (1.75, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1.5,-2) -- (1.5, .6);} }\endxy\, ,\! -\xy (0,.0)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,-1) -- (1.75, .6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-2) -- (1.5, .6);} }\endxy \biggr)$}; \end{scope} \draw[thick,->] (-4.05,-4.5) to (-4.05,-.5);\node at (-3.3,-2.5) {\tiny $F-G$}; \draw[thick,->] (-.2,-4.5) to (-.2,-.5); \node at (-.8,-2.2) {$-\xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (2,0) -- (2, 1.5)node[pos=0, tikzdot]{};} }\endxy$}; \draw[thick,->] (6,-3.9) to (6,-1); \node at (7.2,-2.2) {\tiny $(F-G)_{-1}$}; \draw[thick,->] (13.8,-4.5) to (13.8,-.5); \node at (14.7,-2.2) {\xy (0,0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,blue] (1,-2) -- (1,-.8)node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (1.5,-.45) -- (1.5,.6)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.15,-1) and (1.25,0.4) .. 
(1.25, .6); \draw[ultra thick,myred] (1.5,-1.4) .. controls (1.85,-1) and (1.75,0.4) .. (1.75, .6); \draw[ultra thick,myred] (1.5,-2) -- (1.5, -1.4); }}\endxy}; \draw[thick,->] (5,-4) to (1,-.5);\node at (3.6,-2.2) {\tiny $H_{-1}$}; \draw[thick,->] (13,-4.5) to (8,-.5);\node at (10.7,-2.2) {\tiny $H_{0}$}; \end{tikzpicture} \] with \[ (F-G)_{-1} = \begin{pmatrix} 0 & -\xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,blue] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy \\[1.5ex] -\xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,blue] (1.5,-.5) -- (1.5,-.15)node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (.9,.5) .. controls (1.2,.1) and (1.8,.1) .. (2.1, .5); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=-0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2.1,-.7) -- (2.1, .5)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{};} }\endxy -\xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,myred] (1.75,-1.5) -- (1.75, 1); \draw[ultra thick,myred] (2,0) -- (2, 1)node[pos=0, tikzdot]{};} }\endxy\, \end{pmatrix} , \mspace{40mu} H_{-1} = \Bigl( 0 , -\,\xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (2,0) -- (2, 2);} }\endxy \Bigr) , \mspace{40mu} H_0 = \begin{pmatrix} 0 \\[0.5ex] -\xy (0,0)*{ \tikzdiagc[yscale=0.25]{ \draw[ultra thick,blue] (-.5,-1) -- (-.5,0)node[pos=1, tikzdot]{}; \draw[ultra thick,myred] (0,-1) -- (0,0); \draw[ultra thick,myred] (0,0) -- (-.25,1); \draw[ultra thick,myred] (0,0) -- (.25, 1); }}\endxy \end{pmatrix} . \] This finishes the proof. \end{proof} \begin{lem}\label{lem:RotXmixed} The following diagrammatic equalities hold in ${\EuScript{K}^b}(\EuScript{S}_d)$: \begingroup\allowdisplaybreaks \begin{gather} \label{eq:rotXmixed} \xy (0,-1.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.25,-.5)node[below] {\tiny $i$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,myred] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[below] {\tiny $i-1$}; \draw[thick,double,violet,to-] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy = \xy (0,-1.5)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,myred] (.25,-.5)node[below] {\tiny $i-1$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,blue] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[below] {\tiny $i$}; \draw[thick,double,violet,to-] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy \mspace{80mu} \xy (0,-1.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,myred] (.25,-.5)node[below] {\tiny $i-1$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,blue] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[below] {\tiny $i$}; \draw[thick,double,violet,-to] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy = \xy (0,-1.5)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,blue] (.25,-.5)node[below] {\tiny $i$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,myred] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[below] {\tiny $i-1$}; \draw[thick,double,violet,-to] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy \\[1ex] \xy (0,-1.5)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,blue] (.25,-.5)node[above] {\tiny $i$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,myred] (0,0) .. controls (-.75,.65) and (-1,-.25) .. 
(-1,-.5)node[above] {\tiny $i-1$}; \draw[thick,double,violet,-to] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy = \xy (0,-1.5)*{ \tikzdiagc[xscale=-1,yscale=-1]{ \draw[ultra thick,myred] (.25,-.5)node[above] {\tiny $i-1$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,blue] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[above] {\tiny $i$}; \draw[thick,double,violet,-to] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy \mspace{80mu} \xy (0,-1.5)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,myred] (.25,-.5)node[above] {\tiny $i-1$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,blue] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[above] {\tiny $i$}; \draw[thick,double,violet,to-] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy = \xy (0,-1.5)*{ \tikzdiagc[xscale=-1,yscale=-1]{ \draw[ultra thick,blue] (.25,-.5)node[above] {\tiny $i$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,myred] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[above] {\tiny $i-1$}; \draw[thick,double,violet,to-] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy \end{gather} \endgroup \end{lem} \begin{proof} We prove the first relation in~\eqref{eq:rotXmixed}, as the other can be proved in a similar way. The proof is a consequence of the fact that the composites \[ \textcolor{myred}{\rB_{i-1}} \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{blue}{\rB_i} \xra{ \xy (0,0)*{ \tikzdiagc[scale=.75]{ \draw[ultra thick,myred] (-1,-.5) -- (-1,.5); \draw[ultra thick,blue] (.5,-.5) -- (0,0); \draw[ultra thick,myred] (-.5,.5) -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy} \textcolor{myred}{\rB_{i-1}} \textcolor{myred}{\rB_{i-1}} \textcolor{violet}{\rT_{\rho}^{-1}} \xra{ \xy (0,0)*{ \tikzdiagc[yscale=0.75,xscale=-.4]{ \draw[ultra thick,myred] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. (2.1,-.5); \draw[thick,double,violet,to-] (.3,-.5) -- (.3,.5); }}\endxy } \textcolor{violet}{\rT_{\rho}^{-1}} \] and \[ \textcolor{myred}{\rB_{i-1}} \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{blue}{\rB_i} \xra{ \xy (0,0)*{ \tikzdiagc[scale=.75,xscale=-1]{ \draw[ultra thick,blue] (-1,-.5) -- (-1,.5); \draw[ultra thick,myred] (.5,-.5) -- (0,0); \draw[ultra thick,blue] (-.5,.5) -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy} \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{blue}{\rB_i} \textcolor{blue}{\rB_i} \xra{ \xy (0,0)*{ \tikzdiagc[yscale=0.75,xscale=.4]{ \draw[ultra thick,blue] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. (2.1,-.5); \draw[thick,double,violet,to-] (.3,-.5) -- (.3,.5); }}\endxy } \textcolor{violet}{\rT_{\rho}^{-1}} , \] are both given by \[ \id_{\mathrm{T}_{d-1}^{-1}\dotsm\mathrm{T}_{i+1}^{-1}} \left( 0 , \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,0) -- (1.75, 1); \draw[ultra thick,blue] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2.25,-1.5) -- (2.25, 0)node[pos=1, tikzdot]{};} }\endxy & -\xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=-.35]{ \draw[ultra thick,myred] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. (2.1,-.5); \draw[ultra thick,blue] ( .5,-.5) -- (.5,.5); }}\endxy \\[1.5ex] -\xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,blue] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. 
(2.1,-.5); \draw[ultra thick,myred] ( .5,-.5) -- (.5,.5); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2.25,-1.5) -- (2.25, 0)node[pos=1, tikzdot]{};} }\endxy \end{pmatrix} , \xy (0,0)*{ \tikzdiagc[scale=0.35,yscale=1]{ \draw[ultra thick,myred] (0,0) -- (.75,1); \draw[ultra thick,myred] (-1.25,-1) .. controls (-1,-.8) and (-.5,-.1) .. (0,0); \draw[ultra thick,myred] (.5,-1) -- (0,0); \draw[ultra thick,blue] (-.5,-1) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-.75,1); \draw[ultra thick,blue] (1.25,-1) .. controls (1,-.8) and (.5,-.1) .. (0,0); }}\endxy \right) \id_{\mathrm{T}_{i-2}^{-1}\dotsm\mathrm{T}_{1}^{-1}} . \] This computation is straightforward and uses~\eqref{eq:Fir} and~\eqref{eq:Gir}. \end{proof} \begin{rem} By~\fullref{lem:RotXmixed}, we can define \begin{equation*} \xy (0,-1.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.6,-.5)node[below]{\tiny $i$} .. controls (.6,-.2) and (.3,.25) .. (0,.25); \draw[ultra thick,myred] (-.6,-.5)node[below]{\tiny $i-1$} .. controls (-.6,-.2) and (-.3,.25) .. (0,.25); \draw[thick,double,violet,to-] (0,-.5) -- (0,1); }}\endxy := \xy (0,-1.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.25,-.5)node[below]{\tiny $i$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,myred] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[below]{\tiny $i-1$}; \draw[thick,double,violet,to-] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy = \xy (0,-1.5)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,myred] (.25,-.5)node[below]{\tiny $i-1$} .. controls (.2,-.2) and (.1,-.1) .. (0,0); \draw[ultra thick,blue] (0,0) .. controls (-.75,.65) and (-1,-.25) .. (-1,-.5)node[below]{\tiny $i$}; \draw[thick,double,violet,to-] (-.5,-.5) .. controls (.5,.25) and (0,.5) .. (-.5,1); }}\endxy \end{equation*} and similarly \begin{equation*} \xy (0,-1.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,myred] (.6,-.5)node[below]{\tiny $i-1$} .. controls (.6,-.2) and (.3,.25) .. (0,.25); \draw[ultra thick,blue] (-.6,-.5)node[below]{\tiny $i$} .. controls (-.6,-.2) and (-.3,.25) .. (0,.25); \draw[thick,double,violet,-to] (0,-.5) -- (0,1); }}\endxy \ , \mspace{40mu} \xy (0,2.75)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,blue] (.6,-.5)node[above]{\tiny $i$} .. controls (.6,-.2) and (.3,.25) .. (0,.25); \draw[ultra thick,myred] (-.6,-.5)node[above]{\tiny $i-1$} .. controls (-.6,-.2) and (-.3,.25) .. (0,.25); \draw[thick,double,violet,-to] (0,-.5) -- (0,1); }}\endxy \mspace{30mu}\text{and}\mspace{30mu} \xy (0,2.75)*{ \tikzdiagc[yscale=-1,xscale=1]{ \draw[ultra thick,myred] (.6,-.5)node[above]{\tiny $i-1$} .. controls (.6,-.2) and (.3,.25) .. (0,.25); \draw[ultra thick,blue] (-.6,-.5)node[above]{\tiny $i$} .. controls (-.6,-.2) and (-.3,.25) .. (0,.25); \draw[thick,double,violet,to-] (0,-.5) -- (0,1); }}\endxy \end{equation*} \end{rem} \begin{lem}\label{lem:pitchforks} The following \emph{pitchfork} relations hold in ${\EuScript{K}^b}(\EuScript{BS}_d)$: \begin{equation} \begin{split}\label{eq:mpitchfork} \xy (0,.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5)node[below]{\tiny $i$} -- (.25,-.25); \draw[ultra thick,myred] (.15,.15) .. controls (.05,.25) and (-.1,.4) .. (-.25,.5)node[above,xshift=4pt]{\tiny $i-1$};\draw[ultra thick,blue] (.15,.15) .. controls (.20,.05) and (.25,0) .. (.25,-.25); \draw[ultra thick,myred] (-.1,-.1) .. controls (-.2,-.1) and (-.55,.4) .. 
(-.6,.5)node[above,xshift=-4pt]{\tiny $i-1$};\draw[ultra thick,blue] (-.1,-.1) .. controls (0,-.2) and (.2,-.25) .. (.25,-.25); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy = \xy (0,.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5)node[below]{\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.2,.2) -- (0,0); \draw[ultra thick,myred] (-.2,.2) .. controls (-.15,.3) and (-.25,.5) .. (-.25,.5)node[above,xshift=4pt]{\tiny $i-1$}; \draw[ultra thick,myred] (-.2,.2) .. controls (-.4,.25) and (-.5,.3) .. (-.6,.5)node[above,xshift=-4pt]{\tiny $i-1$}; \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy\ , &\mspace{80mu} \xy (0,.5)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[below]{\tiny $i-1$} -- (.25,-.25); \draw[ultra thick,blue] (.15,.15) .. controls (.05,.25) and (-.1,.4) .. (-.25,.5)node[above]{\tiny $i$};\draw[ultra thick,myred] (.15,.15) .. controls (.20,.05) and (.25,0) .. (.25,-.25); \draw[ultra thick,blue] (-.1,-.1) .. controls (-.2,-.1) and (-.55,.4) .. (-.6,.5)node[above]{\tiny $i$};\draw[ultra thick,myred] (-.1,-.1) .. controls (0,-.2) and (.2,-.25) .. (.25,-.25); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy = \xy (0,.5)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[below]{\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.2,.2) -- (0,0); \draw[ultra thick,blue] (-.2,.2) .. controls (-.15,.3) and (-.25,.5) .. (-.25,.5)node[above]{\tiny $i$}; \draw[ultra thick,blue] (-.2,.2) .. controls (-.4,.25) and (-.5,.3) .. (-.6,.5)node[above]{\tiny $i$}; \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy \ , \\ \xy (0,.5)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[above]{\tiny $i-1$} -- (.25,-.25); \draw[ultra thick,blue] (.15,.15) .. controls (.05,.25) and (-.1,.4) .. (-.25,.5)node[below]{\tiny $i$};\draw[ultra thick,myred] (.15,.15) .. controls (.20,.05) and (.25,0) .. (.25,-.25); \draw[ultra thick,blue] (-.1,-.1) .. controls (-.2,-.1) and (-.55,.4) .. (-.6,.5)node[below]{\tiny $i$};\draw[ultra thick,myred] (-.1,-.1) .. controls (0,-.2) and (.2,-.25) .. (.25,-.25); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy = \xy (0,.5)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[above]{\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.2,.2) -- (0,0); \draw[ultra thick,blue] (-.2,.2) .. controls (-.15,.3) and (-.25,.5) .. (-.25,.5)node[below]{\tiny $i$}; \draw[ultra thick,blue] (-.2,.2) .. controls (-.4,.25) and (-.5,.3) .. (-.6,.5)node[below]{\tiny $i$}; \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy \ , &\mspace{80mu} \xy (0,.5)*{ \tikzdiagc[yscale=-1,xscale=-1]{ \draw[ultra thick,blue] (.5,-.5)node[above]{\tiny $i$} -- (.25,-.25); \draw[ultra thick,myred] (.15,.15) .. controls (.05,.25) and (-.1,.4) .. (-.25,.5)node[below,xshift=-4pt]{\tiny $i-1$};\draw[ultra thick,blue] (.15,.15) .. controls (.20,.05) and (.25,0) .. (.25,-.25); \draw[ultra thick,myred] (-.1,-.1) .. controls (-.2,-.1) and (-.55,.4) .. (-.6,.5)node[below,xshift=4pt]{\tiny $i-1$};\draw[ultra thick,blue] (-.1,-.1) .. controls (0,-.2) and (.2,-.25) .. (.25,-.25); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy = \xy (0,.5)*{ \tikzdiagc[yscale=-1,xscale=-1]{ \draw[ultra thick,blue] (.5,-.5)node[above]{\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.2,.2) -- (0,0); \draw[ultra thick,myred] (-.2,.2) .. controls (-.15,.3) and (-.25,.5) .. (-.25,.5)node[below,xshift=-4pt]{\tiny $i-1$}; \draw[ultra thick,myred] (-.2,.2) .. controls (-.4,.25) and (-.5,.3) .. 
(-.6,.5)node[below,xshift=4pt]{\tiny $i-1$}; \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy \ . \end{split} \end{equation} \end{lem} \begin{proof} We only prove the first relation in~\eqref{eq:mpitchfork}, as the others can be proved in a similar way. Relations~\eqref{eq:mXdef} and~\eqref{eq:Fir} imply that \begin{equation*} \xy (0,.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.3,-.5) -- (-.1,-.1); \draw[ultra thick,blue] (.7,-.5) -- (.1,.1); \draw[ultra thick,myred] (-.7,.5) -- (-.1,-.1); \draw[ultra thick,myred] (-.3,.5) -- (.1,.1); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy =\id_{\mathrm{T}_{d-1}^{-1}\dotsm\mathrm{T}_{i+1}^{-1}} F \id_{\mathrm{T}_{i-2}^{-1}\dotsm\mathrm{T}_{1}^{-1}} \colon \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{blue}{\rB_i} \textcolor{blue}{\rB_i}\to \textcolor{myred}{\rB_{i-1}} \textcolor{myred}{\rB_{i-1}}\textcolor{violet}{\rT_{\rho}^{-1}} , \end{equation*} where $F$ in homological degrees $-2$, $-1$ and $0$, respectively, is given by \begin{equation*} F = \left( 0, \begin{pmatrix} \xy (0,1)*{ \tikzdiagc[yscale=0.23,xscale=-1]{ \draw[ultra thick,blue] (1.75,0) -- (1.75, 1); \draw[ultra thick,blue] (2,-.9) -- (1.75, 0); \draw[ultra thick,blue] (2.2, -1.75) -- (2,-.9) ; \draw[ultra thick,blue] (1.8, -1.75) -- (2,-.9) ; \draw[ultra thick,blue] (1.5, -1.75) ..controls (1.5,-.9) and (1.5,-.9) .. (1.75,0) ; \draw[ultra thick,myred] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (2.4,0) -- (2.4, 1)node[pos=0, tikzdot]{}; }}\endxy + \xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,blue] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. (2.1,-.5); \draw[ultra thick,myred] (.9,.5) .. controls (1.2,0) and (1.8,0) .. (2.1, .5); \draw[ultra thick,blue] ( 2.7,-.5) -- (2.7,.5); }}\endxy & & -\,\xy (0,0)*{ \tikzdiagc[yscale=0.25,xscale=-1]{ \draw[ultra thick,blue] (1.75,0) -- (1.75, 1); \draw[ultra thick,blue] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (2.4,-1.5) -- (2.4, 1);} }\endxy -\,\xy (0,0)*{ \tikzdiagc[yscale=-0.25,xscale=1]{ \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (2.4,-1.5) -- (2.4, 1); }}\endxy\, \\[1.5ex] -\, \xy (0,0)*{ \tikzdiagc[yscale=.6,xscale=.35]{ \draw[ultra thick,blue] ( .9,-.5) .. controls (1.2,.1) and (1.8,.1) .. (2.1,-.5); \draw[ultra thick,myred] (.9,.5) .. controls (1.2,0) and (1.8,0) .. (2.1, .5); \draw[ultra thick,blue] ( 1.5,-.5) -- (1.5,-.1); \draw[ultra thick,myred] (.1,0) -- (.1, .5)node[pos=0, tikzdot]{}; }}\endxy -\xy (0,0)*{ \tikzdiagc[yscale=-.6,xscale=-.35]{ \draw[ultra thick,myred] ( .9,-.5) .. controls (1.2,.1) and (1.8,.1) .. (2.1,-.5); \draw[ultra thick,blue] (.9,.5) .. controls (1.2,0) and (1.8,0) .. (2.1, .5); \draw[ultra thick,myred] ( 1.5,-.5) -- (1.5,-.1); \draw[ultra thick,blue] (.1,0) -- (.1, .5)node[pos=0, tikzdot]{}; }}\endxy & & \xy (0,1)*{ \tikzdiagc[yscale=-.23,xscale=1]{ \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (2,-.9) -- (1.75, 0); \draw[ultra thick,myred] (2.2, -1.75) -- (2,-.9) ; \draw[ultra thick,myred] (1.8, -1.75) -- (2,-.9) ; \draw[ultra thick,myred] (1.5, -1.75) ..controls (1.5,-.9) and (1.5,-.9) .. 
(1.75,0) ; \draw[ultra thick,blue] (2.1,0) -- (2.1, 1)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (2.4,0) -- (2.4, 1)node[pos=0, tikzdot]{}; }}\endxy + \xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=-.35]{ \draw[ultra thick,blue] ( .9,-.5) .. controls (1.2,0) and (1.8,0) .. (2.1,-.5); \draw[ultra thick,myred] (.9,.5) .. controls (1.2,0) and (1.8,0) .. (2.1, .5); \draw[ultra thick,myred] ( 2.7,-.5) -- (2.7,.5); }}\endxy \end{pmatrix}, \xy (0,0)*{ \tikzdiagc[scale=0.3,yscale=1]{ \draw[ultra thick,myred] (0,-1) -- (0,0); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,myred] (1,3) -- (0,4);\draw[ultra thick,myred] (1,3) -- (2,4); \draw[ultra thick,blue] (1,3) -- (1,4); \draw[ultra thick,myred] (0,0) ..controls (-1,1) and (-1,2) .. (-1,4); \draw[ultra thick,myred] (0,0) ..controls (1,1) and (1,2) .. (1,3); \draw[ultra thick,blue] (0,0) .. controls (0,1) and (0,2) .. (1,3); \draw[ultra thick,blue] (2,-1) .. controls (2,1) and (2,2) .. (1,3); }}\endxy \ \right) . \end{equation*} Pre-composing with \[ \xy (0,0)*{ \tikzdiagc[yscale=-0.25,xscale=1]{ \draw[thick,double,violet,-to] (1.15,-1.5) -- (1.15,1); \draw[ultra thick,blue] (1.75,0) -- (1.75, 1); \draw[ultra thick,blue] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,blue] (2,-1.5) -- (1.75, 0); }}\endxy \] results in \[ \xy (0,.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5) -- (.25,-.25); \draw[ultra thick,myred] (.15,.15) .. controls (.05,.25) and (-.1,.4) .. (-.25,.5);\draw[ultra thick,blue] (.15,.15) .. controls (.20,.05) and (.25,0) .. (.25,-.25); \draw[ultra thick,myred] (-.1,-.1) .. controls (-.2,-.1) and (-.55,.4) .. (-.6,.5);\draw[ultra thick,blue] (-.1,-.1) .. controls (0,-.2) and (.2,-.25) .. (.25,-.25); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy = \id_{\mathrm{T}_{d-1}^{-1}\dotsm\mathrm{T}_{i+1}^{-1}} F_{\text{pitchfork}} \id_{\mathrm{T}_{i-2}^{-1}\dotsm\mathrm{T}_{1}^{-1}} , \] where \[ F_{\text{pitchfork}} = \left( 0, \begin{pmatrix} \xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.35]{ \draw[ultra thick,myred] (.9,.5) .. controls (1.2,0) and (1.8,0) .. (2.1, .5); \draw[ultra thick,blue] (2.7,0) -- (2.7,.5); \draw[ultra thick,blue] (2.1,-.5) -- (2.7, 0); \draw[ultra thick,blue] (3.3,-.5) -- (2.7, 0); }}\endxy & -\, \xy (0,0)*{ \tikzdiagc[yscale=-0.6,xscale=.35]{ \draw[ultra thick,myred] (2.7,0) -- (2.7,.5); \draw[ultra thick,myred] (2.1,-.5) -- (2.7, 0); \draw[ultra thick,myred] (3.3,-.5) -- (2.7, 0); \draw[ultra thick,blue] (3.9,-.5) -- (3.9, .5); }}\endxy \, \\[1.5ex] -\xy (0,0)*{ \tikzdiagc[yscale=-.6,xscale=-.35]{ \draw[ultra thick,myred] ( .9,-.5) .. controls (1.2,.1) and (1.8,.1) .. (2.1,-.5); \draw[ultra thick,blue] (.9,.5) .. controls (1.2,0) and (1.8,0) .. (2.1, .5); \draw[ultra thick,myred] ( 1.5,-.5) -- (1.5,-.1); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=-.23,xscale=1]{ \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (2,-.9) -- (1.75, 0); \draw[ultra thick,myred] (2.2, -1.75) -- (2,-.9) ; \draw[ultra thick,myred] (1.8, -1.75) -- (2,-.9) ; \draw[ultra thick,myred] (1.5, -1.75) ..controls (1.5,-.9) and (1.5,-.9) .. (1.75,0) ; \draw[ultra thick,blue] (2.25,0) -- (2.25, 1)node[pos=0, tikzdot]{}; }}\endxy \end{pmatrix}, \xy (0,0)*{ \tikzdiagc[scale=0.3,yscale=1]{ \draw[ultra thick,myred] (0,-1) -- (0,0); \draw[ultra thick,myred] (1,3) -- (1,3); \draw[ultra thick,myred] (0,0) ..controls (1,1) and (1,2) .. (1,3); \draw[ultra thick,myred] (0,0) ..controls (-.9,.5) and (-1.25,1) .. 
(-1.25,1.75); \draw[ultra thick,myred] (-1.25,1.75) ..controls (-1.5,2) and (-1.5,2) .. (-1.75,3); \draw[ultra thick,myred] (-1.25,1.75) ..controls (-1,2) and (-1,2) .. (-.75,3); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,0) -- (0,3); }}\endxy \ \right) . \] In homological degree zero we have used~\eqref{eq:stroman} with a blue dot on the leftmost blue endpoint. The proof is now completed by the observation that \[ \xy (0,.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5) -- (0,0); \draw[ultra thick,myred] (-.2,.2) -- (0,0); \draw[ultra thick,myred] (-.2,.2) .. controls (-.15,.3) and (-.25,.5) .. (-.25,.5); \draw[ultra thick,myred] (-.2,.2) .. controls (-.4,.25) and (-.5,.3) .. (-.6,.5); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy \] is given by exactly the same map, which can be seen immediately by post-composing the mixed crossing in~\eqref{eq:mXdef} with \[ \xy (0,0)*{ \tikzdiagc[yscale=-0.25,xscale=-1]{ \draw[thick,double,violet,-to] (1.15,-1.5) -- (1.15,1); \draw[ultra thick,myred] (1.75,0) -- (1.75, 1); \draw[ultra thick,myred] (1.5,-1.5) -- (1.75, 0); \draw[ultra thick,myred] (2,-1.5) -- (1.75, 0); }}\endxy \] and using~\eqref{eq:Fir}. \end{proof} \begin{lem}\label{lem:ReidIII-6v-violet} The following diagrammatic equalities hold in ${\EuScript{K}^b}(\EuScript{S}_d)$, for any adjacent triple $i-1,i,i+1\in \{1,\dotsc,d-1\}$: \begin{equation}\label{eq:orthru6vertex} \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,teal] (0,0) -- (0,.6); \draw[ultra thick,teal] (-1,-1)node[below]{\tiny $i+1$} -- (0,0); \draw[ultra thick,teal] (1,-1)node[below]{\tiny $i+1$} -- (0,0); \draw[ultra thick,blue] (0,.6) -- (0,1)node[above]{\tiny $i$}; \draw[ultra thick,blue] (0,-1)node[below]{\tiny $i$} -- (0,0); \draw[ultra thick,blue] (0,0) -- (-.45, .45); \draw[ultra thick,blue] (0,0) -- (.45,.45); \draw[ultra thick,myred] (-.45,.45) -- (-1,1)node[above]{\tiny $i-1$}; \draw[ultra thick,myred] (.45,.45) -- (1, 1)node[above]{\tiny $i-1$}; \draw[thick,double,violet,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \ =\ \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,teal] (-1,-1)node[below]{\tiny $i+1$} -- (-.45,-.45); \draw[ultra thick,teal] (1,-1)node[below]{\tiny $i+1$} -- (.45,-.45); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $i$} -- (0,-.6); \draw[ultra thick,myred] (0,-.6) -- (0,0); \draw[ultra thick,myred] (0,0) -- (-1,1)node[above]{\tiny $i-1$}; \draw[ultra thick,myred] (0,0) -- (1, 1)node[above]{\tiny $i-1$}; \draw[ultra thick,blue] (0,0) -- (0,1)node[above]{\tiny $i$}; \draw[ultra thick,blue] (-.45,-.45) -- (0,0); \draw[ultra thick,blue] (.45,-.45) -- (0,0); \draw[thick,double,violet,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \mspace{80mu} \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,blue] (0,0)-- (0,.6); \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $i$} -- (0,0); \draw[ultra thick,blue] (1,-1)node[below]{\tiny $i$} -- (0,0); \draw[ultra thick,myred] (0,.6) -- (0,1)node[above]{\tiny $i-1$}; \draw[ultra thick,teal] (0,-1)node[below]{\tiny $i+1$} -- (0,0); \draw[ultra thick,teal] (0,0) -- (-.45, .45); \draw[ultra thick,blue] (-.45,.45) -- (-1,1)node[above]{\tiny $i$}; \draw[ultra thick,teal] (0,0) -- (.45,.45); \draw[ultra thick,blue] (.45,.45) -- (1, 1)node[above]{\tiny $i$}; \draw[thick,double,violet,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. 
(1,0); }}\endxy \ =\ \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $i$} -- (-.45,-.45); \draw[ultra thick,blue] (1,-1)node[below]{\tiny $i$} -- (.45,-.45); \draw[ultra thick,teal] (0,-1)node[below]{\tiny $i+1$} -- (0,-.6); \draw[ultra thick,blue] (0,-.6) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1)node[above]{\tiny $i$}; \draw[ultra thick,blue] (0,0) -- (1, 1)node[above]{\tiny $i$}; \draw[ultra thick,myred] (0,0) -- (0,1)node[above]{\tiny $i-1$}; \draw[ultra thick,myred] (-.45,-.45) -- (0,0); \draw[ultra thick,myred] (.45,-.45) -- (0,0); \draw[thick,double,violet,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \end{equation} \end{lem} \begin{proof} Both diagrams in the first equality represent morphisms between $\mathrm{T}_{\rho}^{-1}\mathrm{B}_{i+1}\mathrm{B}_i\mathrm{B}_{i+1}\mathrm{T}_\rho$ and $\mathrm{B}_{i-1}\mathrm{B}_i\mathrm{B}_{i-1}$. By~\eqref{eq;orReidII}, there is an isomorphism $\mathrm{T}_{\rho}^{-1}\mathrm{B}_{i+1}\mathrm{B}_i\mathrm{B}_{i+1}\mathrm{T}_\rho\cong\mathrm{B}_i\mathrm{B}_{i-1}\mathrm{B}_i$, so both diagrams correspond to morphisms in \[ \EuScript{S}_d\left(\textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}\textcolor{blue}{\rB_i}, \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}\right). \] Recall that $\mathrm{B}_i\mathrm{B}_{i-1}\mathrm{B}_i\cong \mathrm{B}_{i(i-1)i}\oplus \mathrm{B}_i$ and $\mathrm{B}_{i-1}\mathrm{B}_i\mathrm{B}_{i-1}\cong \mathrm{B}_{i(i-1)i}\oplus \mathrm{B}_{i-1}$, which implies that \[ \EuScript{S}_d\left(\textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}\textcolor{blue}{\rB_i}, \textcolor{myred}{\rB_{i-1}} \textcolor{blue}{\rB_i} \textcolor{myred}{\rB_{i-1}}\right)\cong \EuScript{S}_d\left(\mathrm{B}_{i(i-1)i},\mathrm{B}_{i(i-1)i} \right) \cong \mathbb{C} \] by Soergel's Hom-formula in~\eqref{eq:Soergelhom}. In particular, this implies that the two diagrams in the first equality are multiples of each other. To check that they are actually equal, one can attach a dot at an appropriate place. For example, one can easily check that \[ \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,teal] (0,0)-- (0,.6); \draw[ultra thick,teal] (-1,-1) -- (0,0); \draw[ultra thick,teal] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,.6) -- (0,.95)node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-.45, .45); \draw[ultra thick,blue] (0,0) -- (.45,.45); \draw[ultra thick,myred] (-.45,.45) -- (-1,1); \draw[ultra thick,myred] (.45,.45) -- (1, 1); \draw[thick,double,violet,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \ = \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,teal] (-1,-1) -- (-.45,-.45); \draw[ultra thick,teal] (1,-1) -- (.45,-.45); \draw[ultra thick,blue] (0,-1) -- (0,-.6); \draw[ultra thick,myred] (0,-.6) -- (0,0); \draw[ultra thick,myred] (0,0) -- (-1,1); \draw[ultra thick,myred] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0) -- (0,.95)node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (-.45,-.45) -- (0,0); \draw[ultra thick,blue] (.45,-.45) -- (0,0); \draw[thick,double,violet,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \] in ${\EuScript{K}^b}(\EuScript{BS}_d)$ by using relations~\eqref{eq:6vertexdot} and~\eqref{eq:Dslide-Xmixed}, followed by~\eqref{eq:mixedRtwo} and~\eqref{eq:mpitchfork}. The second equality of the statement is proved in the same way. 
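To summarize the pattern of this proof, which we will use repeatedly below (a schematic restatement of the argument just given, not an extra step): any two diagrams representing morphisms
\[
f,g\in \EuScript{S}_d\left(\mathrm{B}_{i(i-1)i},\mathrm{B}_{i(i-1)i}\right)\cong \mathbb{C}
\]
automatically satisfy $f=\lambda g$ for some $\lambda\in\mathbb{C}$, and attaching the same dot to both diagrams and comparing the two sides pins down $\lambda=1$.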
\end{proof} The \emph{mixed 6-valent vertices} represent the following isomorphisms in ${\EuScript{K}^b}(\EuScript{S}_d)$, obtained by recursive application of~\fullref{lem:RBR}: \begingroup\allowdisplaybreaks \begin{equation} \begin{split}\label{eq:mixed-sixv} \xy (0,1)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet] (-1,-1) -- (0,0); \draw[thick,double,violet,to-] ( 1,-1) -- (0,0); \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy &\colon \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\to\textcolor{violet}{\rT_{\rho}} \textcolor{mygreen}{\mathrm{B}_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}} \\[1ex] \xy (0,1)*{ \tikzdiagc[scale=0.6,yscale=-1,xscale=-1]{ \draw[thick,double,violet,to-] (-1,-1) -- (0,0); \draw[thick,double,violet] ( 1,-1) -- (0,0); \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[above]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; }}\endxy &\colon \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{mygreen}{\mathrm{B}_{d-1}} \textcolor{violet}{\rT_{\rho}}\to\textcolor{violet}{\rT_{\rho}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}^{-1}} \end{split} \end{equation} \begin{rem}\label{rem:sixvertex} To understand why we have introduced the mixed 6-valent vertices above, recall that the evaluation functors are (yet to be defined) functors from $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$ to ${\EuScript{K}^b}(\EuScript{S}_d)$, and that in $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$ there are mutually inverse isomorphisms \begin{equation*} \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[ultra thick,black] (-1,-1) -- (0,0); \draw[ultra thick,black,to-] ( 1,-1) -- (0,0); \draw[ultra thick,black] (0,0) to (-1,1); \draw[ultra thick,black,-to] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy := \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} to (0,-.35); \draw[ultra thick,mygreen] (0,.35) to (0,1)node[above]{\tiny $d-1$}; \draw[ultra thick,violet] (0,-.35) to (0,.35);\node at (.25,0) {\tiny $0$}; \draw[ultra thick,black,-to] (-1,-1) .. controls (-.5,-.2) and (.5,-.2) .. (1,-1); \draw[ultra thick,black,-to] (-1,1) .. controls (-.5, .2) and (.5, .2) .. (1,1); }}\endxy \mspace{40mu}\text{and}\mspace{40mu} \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=1,yscale=-1]{ \draw[ultra thick,black] (-1,-1) -- (0,0); \draw[ultra thick,black,to-] ( 1,-1) -- (0,0); \draw[ultra thick,black] (0,0) to (-1,1); \draw[ultra thick,black,-to] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[above]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; }}\endxy := \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=1,yscale=-1]{ \draw[ultra thick,blue] (0,-1)node[above]{\tiny $1$} to (0,-.35); \draw[ultra thick,mygreen] (0,.35) to (0,1)node[below]{\tiny $d-1$}; \draw[ultra thick,violet] (0,-.35) to (0,.35);\node at (-.25,0) {\tiny $0$}; \draw[ultra thick,black,-to] (-1,-1) .. controls (-.5,-.2) and (.5,-.2) .. (1,-1); \draw[ultra thick,black,-to] (-1,1) .. controls (-.5, .2) and (.5, .2) .. 
(1,1); }}\endxy \end{equation*} \end{rem} \begin{lem} The mixed 6-valent vertices satisfy \begin{align} \label{eq:msixviso} \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=-1.3]{ \draw[thick,double,violet,-to] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); \draw[thick,double,violet,to-] (1,0) ..controls (1,.35) and (0,.25) .. (0,.5) ..controls (0,.75) and (1,.65) .. (1,1); \draw[ultra thick,blue] (.5,0)node[below]{\tiny $1$} -- (.5,.3); \draw[ultra thick,blue] (.5,1)node[above]{\tiny $1$} -- (.5,.71) ; \draw[ultra thick,mygreen] (.5,.3) -- (.5,.71) ; }}\endxy &= \xy (0,-2.25)*{ \tikzdiagc[yscale=2.1,xscale=-1.3]{ \draw[thick,double,violet,-to] (0,0) -- (0,1); \draw[thick,double,violet,to-] (1,0) -- (1,1); \draw[ultra thick,blue] (.5,0)node[below]{\tiny $1$} -- (.5,1); }}\endxy & \xy (0,1)*{ \tikzdiagc[yscale=2.1,xscale=1.3]{ \draw[thick,double,violet,-to] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); \draw[thick,double,violet,to-] (1,0) ..controls (1,.35) and (0,.25) .. (0,.5) ..controls (0,.75) and (1,.65) .. (1,1); \draw[ultra thick,mygreen] (.5,0)node[below]{\tiny $d-1$} -- (.5,.3); \draw[ultra thick,mygreen] (.5,1)node[above]{\tiny $d-1$} -- (.5,.71) ; \draw[ultra thick,blue] (.5,.3) -- (.5,.71) ; }}\endxy &= \xy (0,-2.25)*{ \tikzdiagc[yscale=2.1,xscale=1.3]{ \draw[thick,double,violet,-to] (0,0) -- (0,1); \draw[thick,double,violet,to-] (1,0) -- (1,1); \draw[ultra thick,mygreen] (.5,0)node[below]{\tiny $d-1$} -- (.5,1); }}\endxy \end{align} in ${\EuScript{K}^b}(\EuScript{S}_d)$. \end{lem} \begin{lem}\label{lem:msixvdot} The mixed 6-valent vertices also satisfy the following \emph{dot relations} in ${\EuScript{K}^b}(\EuScript{BS}_d)$: \begin{align} \label{eq:msixvdot} \xy (0,2.5)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet] (-1,-1) -- (0,0); \draw[thick,double,violet,to-] ( 1,-1) -- (0,0); \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); \node[blue,tikzdot] at (0,-.6) {}; \draw[ultra thick,blue] (0,-.6) to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy &= \xy (0,2.5)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet,-to] (-1,-1) .. controls (-.5, .1) and (.5, .1).. (1,-1); \draw[thick,double,violet,-to] (-1, 1) .. controls (-.5,-.1) and (.5,-.1).. (1, 1); \draw[ultra thick,mygreen] (0,.5) to (0,1)node[above]{\tiny $d-1$}; \node[mygreen,tikzdot] at (0,.5) {}; }}\endxy & \xy (0,3.5)*{ \tikzdiagc[scale=0.6,xscale=1]{ \draw[thick,double,violet] (-1,-1) -- (0,0); \draw[thick,double,violet,to-] ( 1,-1) -- (0,0); \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); \node[mygreen,tikzdot] at (0,-.6) {}; \draw[ultra thick,mygreen] (0,-.6) to (0,0); \draw[ultra thick,blue] (0,0) to (0,1)node[above]{\tiny $1$}; }}\endxy &= \xy (0,2.5)*{ \tikzdiagc[scale=0.6,xscale=1]{ \draw[thick,double,violet,-to] (-1,-1) .. controls (-.5, .1) and (.5, .1).. (1,-1); \draw[thick,double,violet,-to] (-1, 1) .. controls (-.5,-.1) and (.5,-.1).. 
(1, 1); \draw[ultra thick,blue] (0,.5) to (0,1)node[above]{\tiny $1$}; \node[blue,tikzdot] at (0,.5) {}; }}\endxy \\[1ex] \label{eq:msixvdot-last} \xy (0,-2.5)*{ \tikzdiagc[scale=0.6,yscale=-1,xscale=-1]{ \draw[thick,double,violet,to-] (-1,-1) -- (0,0); \draw[thick,double,violet] ( 1,-1) -- (0,0); \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); \node[blue,tikzdot] at (0,-.6) {}; \draw[ultra thick,blue] (0,-.6) to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; }}\endxy &= \xy (0,-2.5)*{ \tikzdiagc[scale=0.6,yscale=-1,xscale=-1]{ \draw[thick,double,violet,to-] (-1,-1) .. controls (-.5, .1) and (.5, .1).. (1,-1); \draw[thick,double,violet,to-] (-1, 1) .. controls (-.5,-.1) and (.5,-.1).. (1, 1); \draw[ultra thick,mygreen] (0,.5) to (0,1)node[below]{\tiny $d-1$}; \node[mygreen,tikzdot] at (0,.5) {}; }}\endxy & \xy (0,-1.5)*{ \tikzdiagc[scale=0.6,xscale=1,yscale=-1]{ \draw[thick,double,violet,to-] (-1,-1) -- (0,0); \draw[thick,double,violet] ( 1,-1) -- (0,0); \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); \node[mygreen,tikzdot] at (0,-.6) {}; \draw[ultra thick,mygreen] (0,-.6) to (0,0); \draw[ultra thick,blue] (0,0) to (0,1)node[below]{\tiny $1$}; }}\endxy &= \xy (0,-2.5)*{ \tikzdiagc[scale=0.6,xscale=1,yscale=-1]{ \draw[thick,double,violet,to-] (-1,-1) .. controls (-.5, .1) and (.5, .1).. (1,-1); \draw[thick,double,violet,to-] (-1, 1) .. controls (-.5,-.1) and (.5,-.1).. (1, 1); \draw[ultra thick,blue] (0,.5) to (0,1)node[below]{\tiny $1$}; \node[blue,tikzdot] at (0,.5) {}; }}\endxy \end{align} \end{lem} \begin{proof} Apply~\fullref{lem:dotRBR} recursively. \end{proof} \begin{lem}\label{lem:evdumbzero} The following {\em mixed dumbbell-slide} relation holds in ${\EuScript{K}^b}(\EuScript{BS}_d)$: \begin{eqnarray}\label{eq:mixeddumbbellslide1} \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet,to-] (-0.75,1) -- (-0.75,-1); \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy & = & \xy (0,1.2)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet,to-] (0,1) -- (0,-1); }}\endxy \xy (0,-2)*{ \tikzdiagc[scale=0.6]{ \draw[ultra thick,myred] (0,-.35)node[below]{\tiny $i-1$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \quad i=2,\dots,d-1 \\ \label{eq:mixeddumbbellslide2} \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet,to-] (-0.75,1) -- (-0.75,-1); \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $1$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy & = & - \ \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet,to-] (0,1) -- (0,-1); }}\endxy \; \sum\limits_{i=1}^{d-1} \xy (0,-2)*{ \tikzdiagc[scale=0.6]{ \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \\ \label{eq:mixeddumbbellslide3} - \ \sum\limits_{i=1}^{d-1} \xy (0,-2)*{ \tikzdiagc[scale=0.6]{ \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \; \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet,to-] (0,1) -- (0,-1); }}\endxy & = & \xy (0,1)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double,violet,to-] (0.75,1) -- (0.75,-1); \draw[ultra thick,blue] (-.2,-.35)node[below]{\tiny $d-1$} -- (-.2,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \end{eqnarray} \end{lem} \begin{proof} The equality 
in~\eqref{eq:mixeddumbbellslide1} is an immediate consequence of~\eqref{eq:Dslide-Xmixed}. For~\eqref{eq:mixeddumbbellslide2}, apply the non-oriented dumbbell-slides from Lemma~\ref{lem:dumbbell-slide} \begin{equation*} \xy (0,-2.5)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[very thick,densely dotted, blue,to-] (-0.75,1)-- (-0.75,-1)node[below]{\tiny $1$} ; \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $1$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \ = \ - \ \xy (0,-2.5)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[very thick,densely dotted,blue,to-] (0.75,1) -- (0.75,-1)node[below]{\tiny $1$}; \draw[ultra thick,blue] (0,-.35)node[below]{\tiny $1$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \mspace{40mu}\text{and}\mspace{40mu} \xy (0,-2.5)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[very thick,densely dotted,myred,to-] (-0.75,1) -- (-0.75,-1)node[below]{\tiny $i+1$}; \draw[ultra thick,green] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \ = \xy (0,-2.5)*{ \tikzdiagc[scale=0.6]{ \draw[very thick,densely dotted,myred,to-] (-0.75,1) -- (-0.75,-1)node[below]{\tiny $i+1$}; \draw[ultra thick,green] (0,-.35)node[below]{\tiny $i$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \ + \xy (0,-2.5)*{ \tikzdiagc[scale=0.6]{ \draw[very thick,densely dotted,myred,to-] (-0.75,1) -- (-0.75,-1)node[below]{\tiny $i+1$}; \draw[ultra thick,myred] (0,-.35)node[below]{\tiny $i+1$} -- (0,.35)node[pos=0, tikzdot]{}node[pos=1, tikzdot]{}; }}\endxy \end{equation*} recursively. Finally, for~\eqref{eq:mixeddumbbellslide3}, use the same non-oriented dumbbell-slides as above but with the colors $i$ and $i+1$ swapped. \end{proof} To prove Lemmas~\ref{lem:msixv-cyc} to~\ref{lem:hopefullythelastViolet} below, we use the same strategy as in the proof of~\fullref{lem:ReidIII-6v-violet}: we first check that a certain hom-space is one-dimensional and then conclude that two morphisms in that hom-space are equal by attaching dots to the corresponding diagrams. \begin{lem}\label{lem:msixv-cyc} The mixed 6-valent vertices satisfy the following \emph{cyclicity} relations in ${\EuScript{K}^b}(\EuScript{S}_d)$: \begin{align} \label{eq:msixvbiad} \xy (0,-2)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); \draw[thick,double,violet] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet,-to] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); \draw[ultra thick,blue] (0,0) ..controls (0,-2) and (3,-2) .. (3,1)node[above]{\tiny $1$}; \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy &= \xy (0,-2)*{ \tikzdiagc[scale=0.5,xscale=-1,yscale=1]{ \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); \draw[thick,double,violet] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet,-to] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); \draw[ultra thick,mygreen] (0,0) ..controls (0,-2) and (3,-2) .. (3,1)node[above]{\tiny $d-1$}; \draw[ultra thick,blue] (0,0) to (0,1)node[above]{\tiny $1$}; }}\endxy & \xy (0,-2)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); \draw[thick,double,violet,-to] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); \draw[ultra thick,mygreen] (0,0) ..controls (0,-2) and (3,-2) ..
(3,1)node[above]{\tiny $d-1$}; \draw[ultra thick,blue] (0,0) to (0,1)node[above]{\tiny $1$}; }}\endxy &= \xy (0,-3)*{ \tikzdiagc[scale=0.5,xscale=-1,yscale=1]{ \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); \draw[thick,double,violet,-to] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); \draw[ultra thick,blue] (0,0) ..controls (0,-2) and (3,-2) .. (3,1)node[above]{\tiny $1$}; \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy \\[-4ex] \label{eq:msixvbiad-last} \xy (0,2)*{ \tikzdiagc[scale=0.5,yscale=-1]{ \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); \draw[thick,double,violet,-to] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); \draw[ultra thick,blue] (0,0) ..controls (0,-2) and (3,-2) .. (3,1)node[below]{\tiny $1$}; \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; }}\endxy &= \xy (0,2)*{ \tikzdiagc[scale=0.5,xscale=-1,yscale=-1]{ \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); \draw[thick,double,violet,-to] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); \draw[ultra thick,mygreen] (0,0) ..controls (0,-2) and (3,-2) .. (3,1)node[below]{\tiny $d-1$}; \draw[ultra thick,blue] (0,0) to (0,1)node[below]{\tiny $1$}; }}\endxy & \xy (0,2)*{ \tikzdiagc[scale=0.5,yscale=-1]{ \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); \draw[thick,double,violet] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet,-to] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); \draw[ultra thick,mygreen] (0,0) ..controls (0,-2) and (3,-2) .. (3,1)node[below]{\tiny $d-1$}; \draw[ultra thick,blue] (0,0) to (0,1)node[below]{\tiny $1$}; }}\endxy &= \xy (0,3)*{ \tikzdiagc[scale=0.5,xscale=-1,yscale=-1]{ \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); \draw[thick,double,violet] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet,-to] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); \draw[ultra thick,blue] (0,0) ..controls (0,-2) and (3,-2) .. (3,1)node[below]{\tiny $1$}; \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; }}\endxy \end{align} \end{lem} \begin{proof} We only prove the first relation in~\eqref{eq:msixvbiad}, as the remaining ones are proved in the same way. We claim that the two morphisms in~\eqref{eq:msixvbiad} are multiples of one another. 
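Indeed, both sides of the first relation in~\eqref{eq:msixvbiad} represent morphisms in
\[
{\EuScript{K}^b}(\EuScript{S}_d)\left(R,\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\Rrhoi \textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\right),
\]
so the claim follows once we show that this hom-space is one-dimensional.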
To see this, note that $\mathrm{T}_\rho\mathrm{B}_{d-1}\mathrm{T}_{\rho}^{-1}\cong\mathrm{T}_{\rho}^{-1}\mathrm{B}_1\mathrm{T}_\rho$ and $\mathrm{B}_1\mathrm{B}_1\cong \mathrm{B}_1\langle -1\rangle \oplus \mathrm{B}_1\langle 1\rangle $, whence \begin{align*} {\EuScript{K}^b}(\EuScript{S}_d)\left(R,\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\Rrhoi \textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\right) &\cong {\EuScript{K}^b}(\EuScript{S}_d)\left(R,\textcolor{violet}{\rT_{\rho}} \textcolor{blue}{\mathrm{B}_1}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\right) \\ &\cong \EuScript{S}_d\left(R, \textcolor{blue}{\mathrm{B}_1}\textcolor{blue}{\mathrm{B}_1}\right) \\ &\cong \EuScript{S}_d \left( R, \textcolor{blue}{\mathrm{B}_1}\langle -1\rangle \oplus \textcolor{blue}{\mathrm{B}_1}\langle 1\rangle \right), \end{align*} where we have used the biadjointness of $\textcolor{violet}{\rT_{\rho}}$ and its inverse, and the fullness of the natural embedding of $\EuScript{S}_d$ in ${\EuScript{K}^b}(\EuScript{S}_d)$, for the second isomorphism. By Soergel's Hom-formula in~\eqref{eq:Soergelhom}, we know that \[ \dim_{\mathbb{C}} \left(\EuScript{S}_d\left(R, \textcolor{blue}{\mathrm{B}_1}\langle -1\rangle \right)\right) =0 \quad\text{and}\quad \dim_{\mathbb{C}} \left(\EuScript{S}_d\left(R, \textcolor{blue}{\mathrm{B}_1}\langle 1\rangle \right)\right)=1 \] and hence \[ \dim_{\mathbb{C}} \left({\EuScript{K}^b}(\EuScript{S}_d)\left( R,\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\Rrhoi\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\right)\right) = 1. \] Attaching a dot to one of the colored strands (say, a strand colored $1$) on both sides of~\eqref{eq:msixvbiad} and using the relations in~\fullref{lem:msixvdot} and certain isotopies shows that both morphisms are equal in ${\EuScript{K}^b}(\EuScript{BS}_d)$. \end{proof} \begin{lem} For each $j\in \{1,\ldots,d-1\}$ distant from $1$ and $d-1$, the following equalities hold in ${\EuScript{K}^b}(\EuScript{S}_d)$: \begin{equation}\label{eq:sixv-ReidIII} \xy (0,0)*{ \tikzdiagc[scale=.6]{ \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; \draw[ultra thick,orange] (-1.5,-1)node[below]{\tiny $j$} ..controls (-1.5,-.75) and (-1,-.75) .. (-.75,-.75); \draw[ultra thick,black] (-.75,-.75) .. controls (-.7,-.75) and (.3,-.8) .. (.55,-.6); \draw[ultra thick,orange] (.55,-.6) .. controls (.75,-.6) and (1.5,0).. (1.5,1); \draw[thick,double,violet,to-] (-1,-1) -- (0,0); \draw[thick,double,violet] ( 1,-1) -- (0,0); \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); }}\endxy = \xy (0,0)*{ \tikzdiagc[yscale=-.6,xscale=.6]{ \draw[ultra thick,mygreen] (0,-1)node[above]{\tiny $1$} to (0,0); \draw[ultra thick,blue] (0,0) to (0,1)node[below]{\tiny $d-1$}; \draw[ultra thick,orange] (1.5,-1) ..controls (1.5,-.75) and (1,-.75) .. (.75,-.75); \draw[ultra thick,black] (.75,-.75) .. controls (.7,-.75) and (-.3,-.8) .. (-.55,-.6); \draw[ultra thick,orange] (-.55,-.6) .. controls (-.75,-.6) and (-1.5,0)..
(-1.5,1)node[below]{\tiny $j$}; \draw[thick,double,violet,to-] (-1,-1) -- (0,0); \draw[thick,double,violet] ( 1,-1) -- (0,0); \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); }}\endxy \mspace{80mu} \xy (0,0)*{ \tikzdiagc[scale=.6]{ \draw[ultra thick,mygreen] (0,-1)node[below]{\tiny $1$} to (0,0); \draw[ultra thick,blue] (0,0) to (0,1)node[above]{\tiny $d-1$}; \draw[ultra thick,orange] (-1.5,-1)node[below]{\tiny $j$} ..controls (-1.5,-.75) and (-1,-.75) .. (-.75,-.75); \draw[ultra thick,black] (-.75,-.75) .. controls (-.7,-.75) and (.3,-.8) .. (.55,-.6); \draw[ultra thick,orange] (.55,-.6) .. controls (.75,-.6) and (1.5,0).. (1.5,1); \draw[thick,double,violet] (-1,-1) -- (0,0); \draw[thick,double,violet,to-] ( 1,-1) -- (0,0); \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); }}\endxy = \xy (0,0)*{ \tikzdiagc[yscale=-.6,xscale=.6]{ \draw[ultra thick,blue] (0,-1)node[above]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; \draw[ultra thick,orange] (1.5,-1) ..controls (1.5,-.75) and (1,-.75) .. (.75,-.75); \draw[ultra thick,black] (.75,-.75) .. controls (.7,-.75) and (-.3,-.8) .. (-.55,-.6); \draw[ultra thick,orange] (-.55,-.6) .. controls (-.75,-.6) and (-1.5,0).. (-1.5,1)node[below]{\tiny $j$}; \draw[thick,double,violet] (-1,-1) -- (0,0); \draw[thick,double,violet,to-] ( 1,-1) -- (0,0); \draw[thick,double,violet] (0,0) to (-1,1); \draw[thick,double,violet,-to] (0,0) to ( 1,1); }}\endxy \end{equation} \end{lem} \begin{proof} We only prove the first equality, as the other can be proved in the same way. By adjointness, proving the first equality in~\eqref{eq:sixv-ReidIII} is equivalent to proving the equality \begin{equation}\label{eq:sixv-ReidIIIb} \xy (0,-2)*{ \tikzdiagc[scale=.5,xscale=-1,yscale=1]{ \draw[ultra thick,mygreen] (0,0) ..controls (0,-2) and (3,-2) .. (3,1)node[above]{\tiny $d-1$}; \draw[ultra thick,blue] (0,0) to (0,1)node[above]{\tiny $1$}; \draw[ultra thick,orange] (-1.5,-2.5)node[below]{\tiny $j$} .. controls (-1.5,-1.5) and (-.75,-.9) .. (-.35,-.8); \draw[ultra thick,black] (-.35,-.8) -- (.65,-.45); \draw[ultra thick,orange] (.65,-.45) .. controls (.8,-.25) and (1.5,0) .. (1.5,1); \draw[thick,double,violet,-to] (0,0) to (-1,1); \draw[thick,double,violet] (0,0) to ( 1,1); \draw[thick,double,violet] (0,0) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet,-to] (0,0) .. controls (-2,-3) and (4,-3.5) .. (4,1); }}\endxy \ = \ \xy (0,-2)*{ \tikzdiagc[scale=0.5,xscale=-1,yscale=1]{ \draw[ultra thick,mygreen] (0,-.25) ..controls (0,-2) and (3,-2) .. (3,1)node[above]{\tiny $d-1$}; \draw[ultra thick,blue] (0,-.25) to (0,1)node[above]{\tiny $1$}; \draw[ultra thick,orange] (-1.5,-2.5)node[below]{\tiny $j$} .. controls (-1.5,-.5) and (-.7,-.1) .. (-.35,.15); \draw[ultra thick,black] (-.35,.15) -- (.5,.35); \draw[ultra thick,orange] (.5,.35) .. controls (.8,.4) and (1.5,.4) .. (1.5,1); \draw[thick,double,violet,-to] (0,-.25) to (-1,1); \draw[thick,double,violet] (0,-.25) to ( 1,1); \draw[thick,double,violet] (0,-.25) .. controls (1,-1) and (2,-1) .. (2,1) ; \draw[thick,double,violet,-to] (0,-.25) .. controls (-2,-3) and (4,-3.5) .. (4,1); }}\endxy \end{equation} in ${\EuScript{K}^b}(\EuScript{S}_d)$.
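The adjointness used here is the biadjointness of $\textcolor{violet}{\rT_{\rho}}$ and $\textcolor{violet}{\rT_{\rho}^{-1}}$ from above, which on hom-spaces takes the form
\[
{\EuScript{K}^b}(\EuScript{S}_d)\left(X, Y\textcolor{violet}{\rT_{\rho}}\right)\cong {\EuScript{K}^b}(\EuScript{S}_d)\left(X\textcolor{violet}{\rT_{\rho}^{-1}}, Y\right),
\]
realized diagrammatically by bending strands around with the oriented cups and caps.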
For any $j\in \{1,\ldots,d-1\}$ distant from $1$ and $d-1$, the same arguments as before (and the fact that $\mathrm{B}_1$ and $\mathrm{B}_{j+1}$ commute) prove the following isomorphisms of hom-spaces: \begin{eqnarray*} {\EuScript{K}^b}(\EuScript{S}_d)\left(\textcolor{orange}{\mathrm{B}_j},\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{orange}{\mathrm{B}_j}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1} \textcolor{violet}{\rT_{\rho}} \right) &\cong & {\EuScript{K}^b}(\EuScript{S}_d)\left(\textcolor{orange}{\mathrm{B}_j}, \textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1} \textcolor{violet}{\rT_{\rho}} \textcolor{orange}{\mathrm{B}_j} \textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1} \textcolor{violet}{\rT_{\rho}} \right) \\ &\cong& {\EuScript{K}^b}(\EuScript{S}_d)\left(\textcolor{violet}{\rT_{\rho}}\textcolor{orange}{\mathrm{B}_j}\textcolor{violet}{\rT_{\rho}^{-1}}, \textcolor{blue}{\mathrm{B}_1} \textcolor{violet}{\rT_{\rho}} \textcolor{orange}{\mathrm{B}_j} \textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1} \right) \\ &\cong & \EuScript{S}_d\left(\textcolor{myred}{\mathrm{B}_{j+1}}, \textcolor{blue}{\mathrm{B}_1}\textcolor{myred}{\mathrm{B}_{j+1}} \textcolor{blue}{\mathrm{B}_1} \right) \\ &\cong & \EuScript{S}_d\left(\textcolor{myred}{\mathrm{B}_{j+1}}, \textcolor{myred}{\mathrm{B}_{j+1}} \textcolor{blue}{\mathrm{B}_1}\textcolor{blue}{\mathrm{B}_1} \right) \\ &\cong & \EuScript{S}_d\left(\textcolor{myred}{\mathrm{B}_{j+1}}, \textcolor{myred}{\mathrm{B}_{j+1}} \textcolor{blue}{\mathrm{B}_1} \langle -1 \rangle \oplus \textcolor{myred}{\mathrm{B}_{j+1}}\textcolor{blue}{\mathrm{B}_1}\langle 1\rangle \right) \end{eqnarray*} By Soergel's Hom-formula in~\eqref{eq:Soergelhom}, we know that \[ \dim_{\mathbb{C}}\left(\EuScript{S}_d\left(\textcolor{myred}{\mathrm{B}_{j+1}}, \textcolor{myred}{\mathrm{B}_{j+1}} \textcolor{blue}{\mathrm{B}_1} \langle -1 \rangle \right)\right)=0 \quad \text{and}\quad \dim_{\mathbb{C}}\left(\EuScript{S}_d\left(\textcolor{myred}{\mathrm{B}_{j+1}}, \textcolor{myred}{\mathrm{B}_{j+1}} \textcolor{blue}{\mathrm{B}_1} \langle 1 \rangle \right)\right)=1 \] whence \[ \dim_{\mathbb{C}}\left({\EuScript{K}^b}(\EuScript{S}_d)\left(\textcolor{orange}{\mathrm{B}_j},\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{orange}{\mathrm{B}_j}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}} \right)\right)=1 \] and the equality in~\eqref{eq:sixv-ReidIIIb} can be proved by attaching dots to these diagrams at appropriate places. \end{proof} \begin{lem} \label{lem:ReidIII-Rrho} The following equalities are true in ${\EuScript{K}^b}(\EuScript{S}_d)$: \begin{equation}\label{eq:ReidIII-Rrho} \xy (0,-3.1)*{ \tikzdiagc[yscale=.6,xscale=.7]{ \draw[ultra thick,mygreen] (0,-1.4) -- (0, 2.5); \draw[ultra thick,mygreen] (0,0) .. controls (-.5,.4) and (-1.1,1.4) .. (-1.15, 1.6); \draw[ultra thick,mygreen] (0,0) .. controls ( .5,.4) and ( 1.1,1.4) .. ( 1.15, 1.6); \draw[ultra thick,orange] (-.6,-.5) -- (0,0); \draw[ultra thick,orange] ( .6,-.5) -- (0,0); \draw[ultra thick,orange] (0,0) -- (0,1); \draw[ultra thick,mygreen] (0,1) -- (0,1.7); \draw[ultra thick,mygreen] (-1.5,-2)node[below]{\tiny $d-1$} .. controls (-1.4,-1.4) and (-1,-.85) .. (-.6,-.5); \draw[ultra thick,mygreen] (1.5,-2)node[below]{\tiny $d-1$} .. controls ( 1.4,-1.4) and ( 1,-.85) .. 
( .6,-.5); \draw[ultra thick,blue] (0,-2)node[below]{\tiny $1$} -- (0,-1.4); \draw[ultra thick,blue] (-1.15,1.6) ..controls (-1.2,1.75) and (-1.4,2.2) .. (-1.5,2.5); \draw[ultra thick,blue] ( 1.15,1.6) ..controls ( 1.2,1.75) and ( 1.4,2.2) .. ( 1.5,2.5); \draw[thick,double,violet,-to] (-2,2.5) .. controls (-.1,.5) and (.1,.5) .. (2,2.5); \draw[thick,double,violet,-to] (.5,-2) .. controls (-1,-.1) and (-1.5,.4) .. (-1,2.5); \draw[thick,double,violet,to-] (-.5,-2) .. controls (1,-.1) and (1.5,.4) .. (1,2.5); }}\endxy = \xy (0,-2)*{ \tikzdiagc[yscale=-.6,xscale=.7]{ \draw[ultra thick,blue] (0,1.7) -- (0,2.5)node[below]{\tiny $1$}; \draw[ultra thick,blue] (0,-1.4) -- (0,0); \draw[ultra thick,blue] (0,0) .. controls (-.5,.4) and (-.8,.7) .. (-.95, .85); \draw[ultra thick,blue] (0,0) .. controls ( .5,.4) and ( .8,.7) .. ( .95, .85); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,1.7); \draw[ultra thick,blue] (-1.5,-2) .. controls (-1.4,-1.4) and (-1,-.85) .. (-.6,-.5); \draw[ultra thick,blue] (1.5,-2) .. controls ( 1.4,-1.4) and ( 1,-.85) .. ( .6,-.5); \draw[ultra thick,mygreen] (0,-2) -- (0,-1.4); \draw[ultra thick,mygreen] (-.95,.85) ..controls (-1.3,1.1) and (-1.8,2) .. (-1.8,2.5)node[below]{\tiny $d-1$}; \draw[ultra thick,mygreen] ( .95,.85) ..controls ( 1.3,1.1) and ( 1.8,2) .. ( 1.8,2.5)node[below]{\tiny $d-1$}; \draw[thick,double,violet,-to] (-2,-2) .. controls (-.5,3) and (.5,3) .. (2,-2); \draw[thick,double,violet,-to] (.5,-2) .. controls (-1,-.1) and (-1,.4) .. (-1,2.5); \draw[thick,double,violet,to-] (-.5,-2) .. controls (1,-.1) and (1,.4) .. (1,2.5); }}\endxy \end{equation} \end{lem} \begin{proof} We first note that \begin{align*} \textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}} &\cong \textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}} \\ &\cong \textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{orange}{\mathrm{B}_{d-2}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}} , \intertext{and} \textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}} &\cong \textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{mygreen}{\rB_{d-1}} \\ & \cong \textcolor{violet}{\rT_{\rho}}\textcolor{orange}{\mathrm{B}_{d-2}} \textcolor{mygreen}{\rB_{d-1}}\textcolor{orange}{\mathrm{B}_{d-2}}\textcolor{violet}{\rT_{\rho}^{-1}}, \end{align*} and therefore \begingroup\allowdisplaybreaks \begin{align*} &{\EuScript{K}^b}(\EuScript{S}_d)\left( \textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}, \textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}} \right) \\ &\cong \EuScript{S}_d \left( \textcolor{mygreen}{\rB_{d-1}}\textcolor{orange}{\mathrm{B}_{d-2}}\textcolor{mygreen}{\rB_{d-1}}, 
\textcolor{orange}{\mathrm{B}_{d-2}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{orange}{\mathrm{B}_{d-2}}\right) \end{align*} \endgroup By the decompositions \[ \textcolor{mygreen}{\rB_{d-1}}\textcolor{orange}{\mathrm{B}_{d-2}}\textcolor{mygreen}{\rB_{d-1}} \cong \mathrm{B}_{(d-1)(d-2)(d-1)}\oplus \textcolor{mygreen}{\rB_{d-1}} \quad \text{and}\quad \textcolor{orange}{\mathrm{B}_{d-2}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{orange}{\mathrm{B}_{d-2}} \cong \mathrm{B}_{(d-1)(d-2)(d-1)}\oplus \textcolor{orange}{\mathrm{B}_{d-2}} \] and Soergel's Hom-formula in~\eqref{eq:Soergelhom}, we conclude that \[ \dim_{\mathbb{C}}\left({\EuScript{K}^b}(\EuScript{S}_d)\left(\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}, \textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}}\textcolor{mygreen}{\rB_{d-1}}\textcolor{violet}{\rT_{\rho}^{-1}}\textcolor{blue}{\mathrm{B}_1}\textcolor{violet}{\rT_{\rho}} \right)\right)=1. \] Thus the two diagrams in~\eqref{eq:ReidIII-Rrho} are scalar multiples of each other and the equality now follows by attaching dots to these diagrams at appropriate places. \end{proof} The proof of the following lemma uses exactly the same arguments as above and is left as an exercise to the reader. \begin{lem}\label{lem:hopefullythelastViolet} The following equalities hold in ${\EuScript{K}^b}(\EuScript{S}_d)$: \begingroup\allowdisplaybreaks \begin{align}\label{RIIviolet} \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=1.3]{ \draw[ultra thick,blue] (.5,0)node[below]{\tiny $1$} -- (.5,.3); \draw[ultra thick,blue] (.5,1)node[above]{\tiny $1$} -- (.5,.71) ; \draw[ultra thick,mygreen] (.5,.3) -- (.5,.71) ; \draw[ultra thick,blue] (0,.5) -- (1,.5); \draw[ultra thick,myred] (1,.5) -- (1.5,.5); \draw[ultra thick,myred] (-.5,.5)node[below]{\tiny $2$} -- (0,.5); \draw[thick,double,violet,to-] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); \draw[thick,double,violet,-to] (1,0) ..controls (1,.35) and (0,.25) .. (0,.5) ..controls (0,.75) and (1,.65) .. (1,1); }}\endxy\ &= \xy (0,-2.25)*{ \tikzdiagc[yscale=2.1,xscale=1.3]{ \draw[ultra thick,blue] (.5,0)node[below]{\tiny $1$} -- (.5,1); \draw[ultra thick,orange] (.15,.5) -- (.85,.5); \draw[ultra thick,myred] (.85,.5) -- (1.5,.5); \draw[ultra thick,myred] (-.5,.5)node[below]{\tiny $2$} -- (.15,.5); \draw[thick,double,violet,to-] (0,0) ..controls (.2,.35) and (.2,.65) .. (0,1); \draw[thick,double,violet,to-] (1,0) ..controls (.8,.35) and (.8,.65) .. 
(1,1); }}\endxy \\[1ex] \label{eq:RIIIviolet} \xy (0,-2.5)*{ \tikzdiagc[yscale=.5,xscale=.5]{ \draw[ultra thick,mygreen] (0,0) to[out=-90,in=150] (2,-2)node[below]{\tiny $d-1$}; \draw[ultra thick,blue] (0,-2)node[below]{\tiny $1$} to[out=30,in=-125] (1.5,-.78); \draw[ultra thick,myred] (1.5,-.78) to[out=65,in=-90] (1.75,.9); \draw[ultra thick,orange] (1.75,.9) to (1.7,2)node[above]{\tiny $3$}; \draw[ultra thick,blue] (0,0) to[out=100,in=-90] (.5,2); \draw[thick,double,violet,to-] (3,-1) to[out=180,in=-70] (-1,1.5); \draw[thick,double,violet,to-] (3, 1) to[out=180,in= 70] (-1,-1.5); }}\endxy &= \xy (0,-2.5)*{ \tikzdiagc[yscale=-.5,xscale=-.5]{ \draw[ultra thick,blue] (0,0) to[out=-90,in=150] (2,-2); \draw[ultra thick,orange] (0,-2)node[above]{\tiny $3$} to[out=30,in=-125] (1.5,-.78); \draw[ultra thick,myred] (1.5,-.78) to[out=65,in=-90] (1.75,.9); \draw[ultra thick,blue] (1.75,.9) to (1.7,2)node[below]{\tiny $1$}; \draw[ultra thick,mygreen] (0,0) to[out=100,in=-90] (.5,2)node[below]{\tiny $d-1$}; \draw[thick,double,violet,-to] (3,-1) to[out=180,in=-70] (-1,1.5); \draw[thick,double,violet,-to] (3, 1) to[out=180,in= 70] (-1,-1.5); }}\endxy \\[1ex] \label{eq:itwasnotthelast} \xy (0,0)*{ \tikzdiagc[scale=.4]{ \draw[ultra thick,blue] (0,0) -- (2,0)node[below]{\tiny $1$}; \draw[ultra thick,mygreen] (0,0) -- (-1.25,0); \draw[ultra thick,mygreen] (-3, 1.5)node[left]{\tiny $d-1$} to[out=-30,in=135] (-1.25,0); \draw[ultra thick,mygreen] (-3,-1.5)node[left]{\tiny $d-1$} to[out=30,in=-135] (-1.25,0); \draw[thick,double,violet,to-] (-1,-3) to[out=90,in=-90] ( 1,3); \draw[thick,double,violet,to-] ( 1,-3) to[out=90,in=-90] (-1,3); }}\endxy &= \xy (0,0)*{ \tikzdiagc[scale=.4]{ \draw[ultra thick,mygreen] (-3, 1.5)node[left]{\tiny $d-1$} -- (0, 1.5); \draw[ultra thick,mygreen] (-3,-1.5)node[left]{\tiny $d-1$} -- (0,-1.5); \draw[ultra thick,blue] (0, 1.5) to[out=0,in=130] (2.5,0); \draw[ultra thick,blue] (0,-1.5) to[out=0,in=-130] (2.5,0); \draw[ultra thick,blue] (2.5,0) -- (3.5,0); \draw[thick,double,violet,to-] (1,-3) to[out=90,in=-90] (-1,0) to[out=90,in=-90] (1,3); \draw[thick,double,violet,to-] (-1,-3) to[out=90,in=-90] (1,0) to[out=90,in=-90] (-1,3); }}\endxy \end{align} \endgroup \end{lem} \section{Evaluation functors}\label{sec:evaluationfunctors} In this section, we finally define the evaluation functors $\mathcal{E}v_{r,s}\colon\widehat{\EuScript{S}}^{\mathrm{ext}}_d\to {\EuScript{K}^b}(\EuScript{S}_d)$, for $r,s\in \mathbb{Z}$, which categorify the evaluation maps $\ev_a$ from Definition~\ref{defn:evaluationmap}, for $a=(-1)^s q^r$ with $r,s\in \mathbb{Z}$. The other evaluation maps in that definition, denoted $\ev'_a$, can be categorified likewise, but we do not work out the details here. \begin{rem}\label{rem:shifts-etc3} To be precise, we define a degree-preserving functor from $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$ to ${\EuScript{K}^b}((\EuScript{BS}_d^{\mathrm{sh}})^{\mathrm{gr}})$ which uniquely determines $\mathcal{E}v_{r,s}$; see Remarks~\ref{rem:shifts-etc1} and~\ref{rem:shifts-etc2}. Note that $(\EuScript{BS}_d^{\mathrm{sh}})^{\mathrm{gr}}$ is a graded category with shift, and that $X\langle t\rangle \cong X$ for every $X\in (\EuScript{BS}_d^{\mathrm{sh}})^{\mathrm{gr}}$ and $t\in \mathbb{Z}$. The natural, degree-preserving embedding of $\EuScript{BS}_d$ into $(\EuScript{BS}_d^{\mathrm{sh}})^{\mathrm{gr}}$ is therefore fully faithful and essentially surjective, although it is not an equivalence of graded categories because its inverse is not degree-preserving.
However, for our purposes all that matters is that the monoidal subcategory of degree-zero morphisms $((\EuScript{BS}_d^{\mathrm{sh}})^{\mathrm{gr}})^0$ is isomorphic to $\EuScript{BS}_d^{\mathrm{sh}}$, which implies that the idempotent completion of both is $\EuScript{S}_d$. This might sound complicated, but we cannot simply define a functor from $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$ to ${\EuScript{K}^b}(\EuScript{BS}_d)$ because the image of $\mathrm{B}_{\rho}$ requires non-trivial internal shifts when $r\ne 0$. \end{rem} \subsection{Definition}\label{sec:defevalfunctor} Let $r,s\in \mathbb{Z}$ and $d\in \mathbb{N}_{\geq 3}$ be arbitrary but fixed for the remainder of this section. The {\em evaluation functor} is the monoidal, $\mathbb{C}$-linear functor \[ \mathcal{E}v_{r,s}\colon\widehat{\EuScript{S}}^{\mathrm{ext}}_d\to {\EuScript{K}^b}(\EuScript{S}_d) \] commuting with shifts, which is uniquely determined (see Remark~\ref{rem:shifts-etc3}) by the monoidal, degree-preserving, $\mathbb{C}$-linear functor \[ \mathcal{E}v_{r,s}\colon \widehat{\EuScript{BS}}^{\mathrm{ext}}_d \to {\EuScript{K}^b}((\EuScript{BS}_d^{\mathrm{sh}})^{\mathrm{gr}}) \] defined below. Note that we use the same notation for both functors. \begin{itemize} \item On the (non-full) subcategory $\EuScript{BS}_d$ of $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$, the evaluation functor $\mathcal{E}v_{r,s}$ is the identity. More specifically, this means that $\mathcal{E}v_{r,s}(\mathrm{B}_i) := \mathrm{B}_i$ for every $i\in \{1,\ldots, d-1\}$ and that $\mathcal{E}v_{r,s}$ sends any diagram containing neither unoriented $0$-colored strands nor oriented strands to itself. \end{itemize} \smallskip On other objects of $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$, it is defined as \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}(\textcolor{violet}{\mathrm{B}_0}) &:= \textcolor{violet}{\rT_{\rho}^{-1}} \textcolor{blue}{\mathrm{B}_1} \textcolor{violet}{\rT_{\rho}} , \\ \mathcal{E}v_{r,s}(\mathrm{B}_{\rho}^{\pm 1}) &:= \textcolor{violet}{\mathrm{T}_{\rho}^{\pm 1}}\langle \pm r\rangle [\pm s]. \end{align*} \endgroup \smallskip On other morphisms it is defined as follows. \begin{itemize} \item On {\em oriented and $0$-colored} generators: \begingroup \allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s} \biggl(\ \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,black,-to] (1.5,-.5) -- (1.5, .5); } }\endxy\ \biggr) &=\ \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[thick,double,violet,-to] (.9,-.5) -- (.9,.5); }}\endxy & \mathcal{E}v_{r,s} \biggl(\ \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,black,to-] (1.5,-.5) -- (1.5, .5); } }\endxy\ \biggr) &=\ \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[thick,double,violet,to-] (.9,-.5) -- (.9,.5); }}\endxy \\[1ex] \mathcal{E}v_{r,s} \Bigl(\ \xy (0,0)*{ \tikzdiagc[yscale=0.9]{ \draw[ultra thick,black,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy\ \Bigr) & =\ \xy (0,0)*{ \tikzdiagc[yscale=0.9]{ \draw[thick,double,violet,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy & \mathcal{E}v_{r,s} \Bigl(\ \xy (0,0)*{ \tikzdiagc[yscale=0.9]{ \draw[ultra thick,black,-to] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy\ \Bigr) & =\ \xy (0,0)*{ \tikzdiagc[yscale=0.9]{ \draw[thick,double,violet,-to] (1,.4) ..
controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy \\[1ex] \mathcal{E}v_{r,s} \Bigl(\ \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,black,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy\ \Bigr) & =\ \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[thick,double,violet,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy & \mathcal{E}v_{r,s} \Bigl(\ \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,black,-to] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy\ \Bigr) & =\ \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[thick,double,violet,-to] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy \end{align*} \endgroup \begin{equation*} \mathcal{E}v_{r,s}\Bigl( \xy (0,2.5)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,violet] (1.5,-.5) -- (1.5, .5); \node[violet] at (1.5,.75) {\tiny $0$}; } }\endxy \Bigr) = \xy (0,2.5)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[thick,double,violet,to-] (.9,-.5) -- (.9,.5); \draw[ultra thick,blue] (1.5,-.5) -- (1.5,.5)node[above] {\tiny $1$}; \draw[thick,double,violet,-to] (2.1,-.5) -- (2.1,.5); }}\endxy \end{equation*} \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\Bigl( \xy (0,2.5)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,violet] (1.5,0) -- (1.5, 0.5)node[pos=0, tikzdot]{}; \node[violet] at (1.5,.75) {\tiny $0$}; } }\endxy \Bigr) &= \xy (0,1.3)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.4)node[above] {\tiny $1$}node[pos=0, tikzdot]{}; \draw[thick,double,violet,-to] (.9,.4) .. controls (1.2,-.5) and (1.8,-.5) .. (2.1,.4); }}\endxy &\ \mathcal{E}v_{r,s}\Bigl( \xy (0,-2.5)*{ \tikzdiagc[yscale=-0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,violet] (1.5,0) -- (1.5, 0.5)node[pos=0, tikzdot]{}; \node[violet] at (1.5,.75) {\tiny $0$}; } }\endxy \Bigr) &= \xy (0,-1.3)*{ \tikzdiagc[yscale=-0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,blue] (1.5,0) -- (1.5, 0.4) {}node[pos=0, tikzdot]{}; \node[blue] at (1.5,.71) {\tiny $1$}; \draw[thick,double,violet,to-] (.9,.4) .. controls (1.2,-.5) and (1.8,-.5) .. (2.1,.4); }}\endxy \\ \mathcal{E}v_{r,s}\biggl( \xy (0,-2.5)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,violet] (1.5,0) -- (1.5, .5); \draw[ultra thick,violet] (1.5,.5) -- (1.1, 1); \draw[ultra thick,violet] (1.5,.5) -- (1.9, 1); \node[violet] at (1.5,-.25) {\tiny $0$}; } }\endxy \biggr) &= \xy (0,-2.5)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,blue] (1.5,-.25) -- (1.5, .5);\node[blue] at (1.5,-.5) {\tiny $1$}; \draw[ultra thick,blue] (1.5,.5) ..controls (1.2,.75) and (1.1,.85) .. ( .9, 1.25); \draw[ultra thick,blue] (1.5,.5) .. controls (1.8,.75) and (1.9,.85) .. (2.1, 1.25); \draw[thick,double,violet,to-] (1,-.25) .. controls (.85,.5) and (0.75,.65) .. (.5,1.25); \draw[thick,double,violet,-to] (2,-.25) .. controls (2.15,.5) and (2.25,.65) .. (2.5,1.25); \draw[thick,double,violet,-to] (1.2,1.25) .. controls (1.4,.75) and (1.6,.75) .. (1.8,1.25); }}\endxy & \mathcal{E}v_{r,s}\biggl(\!\! 
\xy (0,-2.1)*{ \tikzdiagc[yscale=-0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,violet] (1.5,0) -- (1.5, .5); \draw[ultra thick,violet] (1.5,.5) -- (1.1, 1); \draw[ultra thick,violet] (1.5,.5) -- (1.9, 1); \node[violet] at (1.1,1.25) {\tiny $0$}; } }\endxy \biggr) &= \xy (0,-2.1)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,blue] (1.5,-.25) -- (1.5, .5);\node[blue] at (2.1,1.5) {\tiny $1$}; \node[blue] at (.9,1.5) {\tiny $1$}; \draw[ultra thick,blue] (1.5,.5) ..controls (1.2,.75) and (1.1,.85) .. ( .9, 1.25); \draw[ultra thick,blue] (1.5,.5) .. controls (1.8,.75) and (1.9,.85) .. (2.1, 1.25); \draw[thick,double,violet,-to] (1,-.25) .. controls (.85,.5) and (0.75,.65) .. (.5,1.25); \draw[thick,double,violet,to-] (2,-.25) .. controls (2.15,.5) and (2.25,.65) .. (2.5,1.25); \draw[thick,double,violet,to-] (1.2,1.25) .. controls (1.4,.75) and (1.6,.75) .. (1.8,1.25); }}\endxy \end{align*} \endgroup \item On generators including strands with \emph{distant colors}: \begingroup \allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\biggl(\!\! \xy (0,-2.1)*{ \tikzdiagc[scale=0.8]{ \draw[ultra thick,violet] (0,0)node[below]{\tiny $0$} -- (1, 1); \draw[ultra thick,mygreen] (1,0)node[below]{\tiny $i$} -- (0, 1); } }\endxy\! \biggr) &= \xy (0,-2.75)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,mygreen] (1,0) -- (1.75,1)node[below]{\tiny $i$}; \draw[ultra thick,orange] (1,0) -- (.1,-.2); \draw[ultra thick,mygreen] (.1,-.2) -- (-.75,-1); \draw[thick,double,violet,-to] (.75,-1)-- (-.75,1); \draw[ultra thick,blue] (1.25,-1) -- (-.25, 1)node[below]{\tiny $1$}; \draw[thick,double,violet,to-] (1.75,-1)-- (.25,1); }}\endxy \rlap{\hspace{8ex} for $i\neq 1,d-1$} \\ \mathcal{E}v_{r,s}\biggl(\!\! \xy (0,-2.1)*{ \tikzdiagc[scale=0.8,xscale=-1]{ \draw[ultra thick,violet] (0,0)node[below]{\tiny $0$} -- (1, 1); \draw[ultra thick,mygreen] (1,0)node[below]{\tiny $i$} -- (0, 1); } }\endxy\! \biggr) &= \xy (0,-2.75)*{ \tikzdiagc[yscale=-0.9,xscale=-1]{ \draw[ultra thick,mygreen] (1,0) -- (1.75,1)node[below]{\tiny $i$}; \draw[ultra thick,orange] (1,0) -- (.1,-.2); \draw[ultra thick,mygreen] (.1,-.2) -- (-.75,-1); \draw[thick,double,violet,to-] (.75,-1)-- (-.75,1); \draw[ultra thick,blue] (1.25,-1) -- (-.25, 1)node[below]{\tiny $1$}; \draw[thick,double,violet,-to] (1.75,-1)-- (.25,1); }}\endxy \rlap{\hspace{8ex} for $i\neq 1,d-1$} \end{align*} \endgroup \item On generators including strands with \emph{adjacent colors}: \begingroup\allowdisplaybreaks \begin{align} \mathcal{E}v_{r,s} \biggl(\mspace{-10mu} \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5)node[below] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5)node[above] {\tiny $i-1$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\! \biggr) & =\!\! \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5)node[below] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5)node[above] {\tiny $i-1$} -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy & \mathcal{E}v_{r,s} \biggl(\mspace{-10mu} \xy (0,0)*{ \tikzdiagc[scale=1,xscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[below] {\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.5,.5)node[above] {\tiny $i$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy \biggr) & =\!\!\!\! 
\xy (0,0)*{ \tikzdiagc[scale=1,xscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[below] {\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.5,.5)node[above] {\tiny $i$} -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy \\[1ex] \mathcal{E}v_{r,s} \biggl(\! \xy (0,0)*{ \tikzdiagc[scale=1,yscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[above] {\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.5,.5)node[below] {\tiny $i$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\mspace{-10mu} \biggr) & = \xy (0,0)*{ \tikzdiagc[scale=1,yscale=-1]{ \draw[ultra thick,myred] (.5,-.5)node[above] {\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.5,.5)node[below] {\tiny $i$} -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy & \mathcal{E}v_{r,s} \biggl( \xy (0,0)*{ \tikzdiagc[xscale=-1,yscale=-1]{ \draw[ultra thick,blue] (.5,-.5)node[above] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5)node[below] {\tiny $i-1$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\!\!\!\!\! \biggr) & = \xy (0,0)*{ \tikzdiagc[xscale=-1,yscale=-1]{ \draw[ultra thick,blue] (.5,-.5)node[above] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5)node[below] {\tiny $i-1$} -- (0,0); \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy \intertext{if $i\neq 0,1$, while} \label{eq:EmfourvI} \mathcal{E}v_{r,s} \biggl(\!\! \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5)node[below] {\tiny $1$} -- (0,0); \draw[ultra thick,violet] (-.5,.5)node[above] {\tiny $0$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\! \biggr) & =\ \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[thick,double, violet,to-] (1,1) .. controls (1.1,.2) and (1.9,.2) .. (2,1); \draw[ultra thick,blue] (2,-1) .. controls (1.5,0) and (.5,0) .. (.5,1); \draw[thick,double, violet,to-] (.5,-1) .. controls (.5,0) and (0,0) .. (0,1); }}\endxy & \mathcal{E}v_{r,s} \biggl(\!\! \xy (0,0)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,violet] (.5,-.5)node[below] {\tiny $0$} -- (0,0); \draw[ultra thick,blue] (-.5,.5)node[above] {\tiny $1$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\! \biggr) & =\ \xy (0,0)*{ \tikzdiagc[yscale=-.7,xscale=.7]{ \draw[thick,double, violet,-to] (1,1) .. controls (1.1,.2) and (1.9,.2) .. (2,1); \draw[ultra thick,blue] (2,-1) .. controls (1.5,0) and (.5,0) .. (.5,1); \draw[thick,double, violet,-to] (.5,-1) .. controls (.5,0) and (0,0) .. (0,1); }}\endxy \\[1ex] \label{eq:EmfourvII} \mathcal{E}v_{r,s} \biggl(\!\! \xy (0,0)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,violet] (.5,-.5)node[above] {\tiny $0$} -- (0,0); \draw[ultra thick,blue] (-.5,.5)node[below] {\tiny $1$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\! \biggr) & =\ \xy (0,0)*{ \tikzdiagc[yscale=.7,xscale=-.7]{ \draw[thick,double, violet,-to] (1,1) .. controls (1.1,.2) and (1.9,.2) .. (2,1); \draw[ultra thick,blue] (2,-1) .. controls (1.5,0) and (.5,0) .. (.5,1); \draw[thick,double, violet,-to] (.5,-1) .. controls (.5,0) and (0,0) .. (0,1); }}\endxy & \mathcal{E}v_{r,s} \biggl(\!\! \xy (0,0)*{ \tikzdiagc[xscale=-1,yscale=-1]{ \draw[ultra thick,blue] (.5,-.5)node[above] {\tiny $1$} -- (0,0); \draw[ultra thick,violet] (-.5,.5)node[below] {\tiny $0$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\! \biggr) & =\ \xy (0,0)*{ \tikzdiagc[yscale=-.7,xscale=-.7]{ \draw[thick,double, violet,to-] (1,1) .. controls (1.1,.2) and (1.9,.2) .. (2,1); \draw[ultra thick,blue] (2,-1) .. 
controls (1.5,0) and (.5,0) .. (.5,1); \draw[thick,double, violet,to-] (.5,-1) .. controls (.5,0) and (0,0) .. (0,1); }}\endxy \\[1ex] \label{eq:EmfourvIII} \mathcal{E}v_{r,s} \biggl(\!\!\!\!\! \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,violet] (.5,-.5)node[below] {\tiny $0$} -- (0,0); \draw[ultra thick,mygreen] (-.5,.5)node[above] {\tiny $d-1$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\! \biggr) & =\ \xy (0,0)*{ \tikzdiagc[scale=0.6]{ \draw[thick,double, violet,to-] (-1,-1) -- (0,0); \draw[thick,double, violet] ( 1,-1) -- (0,0); \draw[thick,double, violet,-to] (0,0) .. controls (-1.5,1.5) and (-1.75,-.25) .. (-2,-1); \draw[thick,double, violet] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy & \mathcal{E}v_{r,s} \biggl(\!\!\!\!\! \xy (0,0)*{ \tikzdiagc[xscale=-1]{ \draw[ultra thick,mygreen] (.5,-.5)node[below] {\tiny $d-1$} -- (0,0); \draw[ultra thick,violet] (-.5,.5)node[above] {\tiny $0$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\! \biggr) & =\ \xy (0,0)*{ \tikzdiagc[scale=0.6,yscale=-1]{ \draw[thick,double, violet] (-1,-1) -- (0,0); \draw[thick,double, violet,to-] ( 1,-1) -- (0,0); \draw[thick,double, violet] (0,0) .. controls (-1.5,1.5) and (-1.75,-.25) .. (-2,-1); \draw[thick,double, violet,-to] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[above]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; }}\endxy \\[1ex] \label{eq:EmfourvIV} \mathcal{E}v_{r,s} \biggl(\!\! \xy (0,0)*{ \tikzdiagc[yscale=-1]{ \draw[ultra thick,mygreen] (.5,-.5)node[above] {\tiny $d-1$} -- (0,0); \draw[ultra thick,violet] (-.5,.5)node[below] {\tiny $0$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\!\!\!\! \biggr) & =\ \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=-1]{ \draw[thick,double, violet] (-1,-1) -- (0,0); \draw[thick,double, violet,to-] ( 1,-1) -- (0,0); \draw[thick,double, violet] (0,0) .. controls (-1.5,1.5) and (-1.75,-.25) .. (-2,-1); \draw[thick,double, violet,-to] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy & \mathcal{E}v_{r,s} \biggl(\! \xy (0,0)*{ \tikzdiagc[xscale=-1,yscale=-1]{ \draw[ultra thick,violet] (.5,-.5)node[above] {\tiny $0$} -- (0,0); \draw[ultra thick,mygreen] (-.5,.5)node[below] {\tiny $d-1$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy\!\!\!\!\! \biggr) & =\ \xy (0,0)*{ \tikzdiagc[scale=0.6,xscale=-1,yscale=-1]{ \draw[thick,double, violet,to-] (-1,-1) -- (0,0); \draw[thick,double, violet] ( 1,-1) -- (0,0); \draw[thick,double, violet,-to] (0,0) .. controls (-1.5,1.5) and (-1.75,-.25) .. (-2,-1); \draw[thick,double, violet] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[above]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; }}\endxy \end{align} \endgroup and \begingroup\allowdisplaybreaks \begin{align} \label{eq:eval-six-val-first} \mathcal{E}v_{r,s}\biggl(\!\! \xy (0,-2.1)*{ \tikzdiagc[yscale=0.45,xscale=.45]{ \draw[ultra thick,violet] (0,-1)node[below]{\tiny $0$} -- (0,0); \draw[ultra thick,violet] (0,0) -- (-1, 1);\draw[ultra thick,violet] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $1$} -- (0,0); \draw[ultra thick,blue] (1,-1)node[below]{\tiny $1$} -- (0,0); } }\endxy\!\! 
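% Image of the 6-valent vertex involving the colors 0 and 1 (eq:eval-six-val-first): a genuine
% 6-valent vertex with the colors 1 and 2 appears in the middle, surrounded by three T_rho strands.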
\biggr) &= \xy (0,-2.5)*{ \tikzdiagc[yscale=0.9]{ \draw[ultra thick,blue] (0,-1.2)node[below] {\tiny $1$} -- (0, 0); \draw[ultra thick,blue] (0,0) -- (-1.4, 1.7); \draw[ultra thick,blue] (0,0) -- ( 1.4, 1.7); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,.75); \draw[ultra thick,blue] (0,.75) -- (0,1.7); \draw[ultra thick,blue] (-1.5,-1.2) -- (-.6,-.5); \draw[ultra thick,blue] ( 1.5,-1.2) -- ( .6,-.5); \draw[thick,double,violet,to-] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,-to] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,to-] (-.8,1.7) .. controls (-.25,.45) and (.25,.45) .. (.8,1.7); }}\endxy \\ \label{eq:eval-six-val-second} \mathcal{E}v_{r,s}\biggl(\!\! \xy (0,-2.1)*{ \tikzdiagc[yscale=-0.45,xscale=.45]{ \draw[ultra thick,violet] (0,-1)-- (0,0); \draw[ultra thick,violet] (0,0) -- (-1, 1)node[below]{\tiny $0$}; \draw[ultra thick,violet] (0,0) -- (1, 1)node[below]{\tiny $0$}; \draw[ultra thick,blue] (0,0)-- (0, 1)node[below]{\tiny $1$}; \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); } }\endxy\!\! \biggr) &= \xy (0,-2.5)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,blue] (0,-1.2) -- (0, 0); \draw[ultra thick,blue] (0,0) -- (-1.4, 1.7); \draw[ultra thick,blue] (0,0) -- ( 1.4, 1.7); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,.75); \draw[ultra thick,blue] (0,.75) -- (0,1.7)node[below] {\tiny $1$}; \draw[ultra thick,blue] (-1.5,-1.2) -- (-.6,-.5); \draw[ultra thick,blue] ( 1.5,-1.2) -- ( .6,-.5); \draw[thick,double,violet,-to] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,to-] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,-to] (-.8,1.7) .. controls (-.25,.45) and (.25,.45) .. (.8,1.7); }}\endxy \\ \label{eq:EsixvI} \mathcal{E}v_{r,s}\biggl(\mspace{-18mu} \xy (0,-2.1)*{ \tikzdiagc[yscale=0.45,xscale=.45]{ \draw[ultra thick,violet] (0,-1)node[below]{\tiny $0$} -- (0,0); \draw[ultra thick,violet] (0,0) -- (-1, 1);\draw[ultra thick,violet] (0,0) -- (1, 1); \draw[ultra thick,mygreen] (0,0)-- (0, 1); \draw[ultra thick,mygreen] (-1,-1) -- (0,0);\node[mygreen] at (-1.25,-1.6) {\tiny $d-1$}; \draw[ultra thick,mygreen] (1,-1) -- (0,0);\node[mygreen] at (1.25,-1.6) {\tiny $d-1$}; } }\endxy \mspace{-18mu} \biggr) &= \xy (0,-2)*{ \tikzdiagc[yscale=0.9]{ \draw[ultra thick,mygreen] (0,-1.4) -- (0,0); \draw[ultra thick,mygreen] (0,1) -- (0, 2.5)node[above]{\tiny $d-1$}; \draw[ultra thick,mygreen] (0,0) .. controls (-.5,.4) and (-1.1,1.4) .. (-1.15, 1.6); \draw[ultra thick,mygreen] (0,0) .. controls ( .5,.4) and ( 1.1,1.4) .. ( 1.15, 1.6); \draw[ultra thick,orange] (-.6,-.5) -- (0,0); \draw[ultra thick,orange] ( .6,-.5) -- (0,0); \draw[ultra thick,orange] (0,0) -- (0,1); \draw[ultra thick,mygreen] (0,1) -- (0,1.7); \draw[ultra thick,mygreen] (-1.5,-2)node[below]{\tiny $d-1$} .. controls (-1.4,-1.4) and (-1,-.85) .. (-.6,-.5); \draw[ultra thick,mygreen] (1.5,-2)node[below]{\tiny $d-1$} .. controls ( 1.4,-1.4) and ( 1,-.85) .. ( .6,-.5); \draw[ultra thick,blue] (0,-2)node[below]{\tiny $1$} -- (0,-1.4); \draw[ultra thick,blue] (-1.15,1.6) ..controls (-1.2,1.75) and (-1.4,2.2) .. (-1.5,2.5)node[above]{\tiny $1$}; \draw[ultra thick,blue] ( 1.15,1.6) ..controls ( 1.2,1.75) and ( 1.4,2.2) .. 
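% eq:EsixvI: the analogous image for the 6-valent vertex involving the colors 0 and d-1,
% again with three T_rho strands routed around the central vertex.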
( 1.5,2.5)node[above]{\tiny $1$}; \draw[thick,double,violet,-to] (-2,2.5) .. controls (-.1,.5) and (.1,.5) .. (2,2.5); \draw[thick,double,violet,-to] (.5,-2) .. controls (-1,-.1) and (-1.5,.4) .. (-1,2.5); \draw[thick,double,violet,to-] (-.5,-2) .. controls (1,-.1) and (1.5,.4) .. (1,2.5); }}\endxy \\ \label{eq:EsixvII} \mathcal{E}v_{r,s}\biggl(\mspace{-5mu} \xy (0,-2.1)*{ \tikzdiagc[yscale=-0.45,xscale=.45]{ \draw[ultra thick,violet] (0,-1)-- (0,0); \draw[ultra thick,violet] (0,0) -- (-1, 1); \node[violet] at (-1.25,1.6) {\tiny $0$}; \draw[ultra thick,violet] (0,0) -- (1, 1);\node[violet] at (1.25,1.6) {\tiny $0$}; \draw[ultra thick,mygreen] (0,0)-- (0, 1)node[below]{\tiny $d-1$}; \draw[ultra thick,mygreen] (-1,-1) -- (0,0); \draw[ultra thick,mygreen] (1,-1) -- (0,0); } }\endxy \mspace{-5mu} \biggr) &= \xy (0,-2)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,mygreen] (0,-1.4) -- (0,0); \draw[ultra thick,mygreen] (0,1) -- (0,2.5)node[below]{\tiny $d-1$}; \draw[ultra thick,mygreen] (0,0) .. controls (-.5,.4) and (-1.1,1.4) .. (-1.15, 1.6); \draw[ultra thick,mygreen] (0,0) .. controls ( .5,.4) and ( 1.1,1.4) .. ( 1.15, 1.6); \draw[ultra thick,orange] (-.6,-.5) -- (0,0); \draw[ultra thick,orange] ( .6,-.5) -- (0,0); \draw[ultra thick,orange] (0,0) -- (0,1); \draw[ultra thick,mygreen] (-1.5,-2)node[above]{\tiny $d-1$} .. controls (-1.4,-1.4) and (-1,-.85) .. (-.6,-.5); \draw[ultra thick,mygreen] (1.5,-2)node[above]{\tiny $d-1$} .. controls ( 1.4,-1.4) and ( 1,-.85) .. ( .6,-.5); \draw[ultra thick,blue] (0,-2)node[above]{\tiny $1$} -- (0,-1.4); \draw[ultra thick,blue] (-1.15,1.6) ..controls (-1.2,1.75) and (-1.4,2.2) .. (-1.5,2.5)node[below]{\tiny $1$}; \draw[ultra thick,blue] ( 1.15,1.6) ..controls ( 1.2,1.75) and ( 1.4,2.2) .. ( 1.5,2.5)node[below]{\tiny $1$}; \draw[thick,double,violet,to-] (-2,2.5) .. controls (-.1,.5) and (.1,.5) .. (2,2.5); \draw[thick,double,violet,to-] (.5,-2) .. controls (-1,-.1) and (-1.5,.4) .. (-1,2.5); \draw[thick,double,violet,-to] (-.5,-2) .. controls (1,-.1) and (1.5,.4) .. (1,2.5); }}\endxy \end{align} \endgroup \end{itemize} This ends the definition of $\mathcal{E}v_{r,s}$. \begin{rem}\label{rem:RoRdR} Since $\mathrm{T}_{\rho}^{-1}\mathrm{B}_1\mathrm{T}_{\rho} \cong \mathrm{T}_\rho\mathrm{B}_{d-1}\mathrm{T}_{\rho}^{-1}$ in ${\EuScript{K}^b}((\EuScript{BS}_d^{\mathrm{sh}})^{\mathrm{gr}})$, we could have defined $\mathcal{E}v_{r,s}(\textcolor{violet}{\mathrm{B}_0})$ as $\textcolor{violet}{\rT_{\rho}} \textcolor{mygreen}{\mathrm{B}_{d-1}} \textcolor{violet}{\rT_{\rho}^{-1}}$. These two choices result in naturally isomorphic evaluation functors, the isomorphism being induced by the 6-valent vertices~\eqref{eq:mixed-sixv}, as can be checked by straightforward diagrammatic calculations. \end{rem} \begin{rem} The apparent lack of symmetry between the image via $\mathcal{E}v_{r,s}$ of the mixed 4-vertices involving strands colored $0$ and $1$, and the corresponding image of the mixed 4-vertices involving strands colored $0$ and $d-1$ (\eqref{eq:EmfourvI} to~\eqref{eq:EmfourvIV}) is explained by \fullref{rem:RoRdR}. Note also that \begin{equation*} \mathcal{E}v_{r,s} \Biggl( \xy (0,0)*{ \tikzdiagc[scale=0.6]{ \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} to (0,-.35); \draw[ultra thick,mygreen] (0,.35) to (0,1)node[above]{\tiny $d-1$}; \draw[ultra thick,violet] (0,-.35) to (0,.35);\node at (.25,0) {\tiny $0$}; \draw[ultra thick,black,to-] (-1,-1) .. controls (-.5,-.2) and (.5,-.2) .. (1,-1); \draw[ultra thick,black,to-] (-1,1) .. controls (-.5, .2) and (.5, .2) ..
(1,1); }}\endxy \Biggr) = \xy (0,0)*{ \tikzdiagc[scale=0.6]{ \draw[thick,double, violet,to-] (-1,-1) -- (0,0); \draw[thick,double, violet] ( 1,-1) -- (0,0); \draw[thick,double, violet,-to] (0,0) to (-1,1); \draw[thick,double, violet] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy \mspace{40mu}\text{and}\mspace{40mu} \mathcal{E}v_{r,s} \Biggl( \xy (0,0)*{ \tikzdiagc[xscale=-0.6,yscale=-.6]{ \draw[ultra thick,blue] (0,-1)node[above]{\tiny $1$} to (0,-.35); \draw[ultra thick,mygreen] (0,.35) to (0,1)node[below]{\tiny $d-1$}; \draw[ultra thick,violet] (0,-.35) to (0,.35);\node at (-.25,0) {\tiny $0$}; \draw[ultra thick,black,to-] (-1,-1) .. controls (-.5,-.2) and (.5,-.2) .. (1,-1); \draw[ultra thick,black,to-] (-1,1) .. controls (-.5, .2) and (.5, .2) .. (1,1); }}\endxy \Biggr) = \xy (0,0)*{ \tikzdiagc[xscale=-0.6,yscale=-.6]{ \draw[thick,double, violet,to-] (-1,-1) -- (0,0); \draw[thick,double, violet] ( 1,-1) -- (0,0); \draw[thick,double, violet,-to] (0,0) to (-1,1); \draw[thick,double, violet] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-1)node[above]{\tiny $1$} to (0,0); \draw[ultra thick,mygreen] (0,0) to (0,1)node[below]{\tiny $d-1$}; }}\endxy \end{equation*} \end{rem} \subsection{Proof of well-definedness} \begin{thm}\label{thm:ueva-functor} The monoidal functor $\mathcal{E}v_{r,s}$ is well-defined. \end{thm} \begin{proof} The fact that $\mathcal{E}v_{r,s}$ preserves \emph{isotopy invariance} follows from~\fullref{lem:istpy-Rrho},~\fullref{lem:RotXmixed} and~\fullref{lem:msixv-cyc}, together with isotopy invariance of the usual (non-oriented) Soergel calculus. \begin{itemize} \item Relations involving only \emph{one color}. We only need to check for color 0. Relations~\eqref{eq:relhatSfirst} and~\eqref{eq:relhatSsecond} are clear. For the remaining one-color relations we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\biggl( \xy (0,-2.75)*{ \tikzdiagc[scale=0.9]{ \draw[ultra thick,violet] (0,0) circle (.4); \draw[ultra thick,violet] (0,-.8)node[below]{\tiny $0$} -- (0,-.4); }}\endxy \biggr) &=\!\!\!\! \xy (0,-2.2)*{ \tikzdiagc[scale=0.9,xscale=-1]{ \draw[ultra thick,blue] (0,0) circle (.4); \draw[ultra thick,blue] (0,-.8)node[below]{\tiny $1$} -- (0,-.4); \draw[thick,double, violet] (0,0) circle (.25); \draw[thick,violet,-to] (.25,0)-- (.25,.05); \draw[thick,double, violet,-to] (-.35,-.8) .. controls (-.35,-.5) and (-1,.52) .. (0,.55) .. controls (1,.52) and (.35,-.5) .. (.35,-.8); }}\endxy\!\!\!\! \overset{\eqref{eq:loopRrho}}{=}\!\!\!\! \xy (0,0)*{ \tikzdiagc[scale=0.9,xscale=-1]{ \draw[ultra thick,blue] (0,0) circle (.4); \draw[ultra thick,blue] (0,-.8) -- (0,-.4); \draw[thick,double, violet,-to] (-.35,-.8) .. controls (-.35,-.5) and (-1,.52) .. (0,.55) .. controls (1,.52) and (.35,-.5) .. (.35,-.8); }}\endxy\!\!\!\! 
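% The inner T_rho circle has been removed by eq:loopRrho; what remains is a lollipop
% colored 1, which vanishes by eq:lollipop, so the image of the 0-colored lollipop is zero.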
\overset{\eqref{eq:lollipop}}{=} 0 , \intertext{and} \mathcal{E}v_{r,s}\biggl( \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,violet] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,violet] (.6,-1)node[below]{\tiny $0$}-- (.6, 1); }}\endxy + \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,violet] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,violet] (.6,-1)node[below]{\tiny $0$}-- (.6, 1); }}\endxy \biggr) &= \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double,violet] (0,0) ellipse (.6 cm and .9 cm); \draw[thick,double, violet,-to] (-.6,-.1)-- (-.6,-.15); \draw[thick,double, violet,to-] (1.2,-1)-- (1.2, 1); \draw[ultra thick,blue] (1.8,-1)node[below]{\tiny $1$} -- (1.8, 1); \draw[thick,double, violet,-to] (2.4,-1)-- (2.4, 1); }}\endxy \ +\ \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double,violet] (0,0) ellipse (.6 cm and .9 cm); \draw[thick,double, violet,-to] (.6,-.1)-- (.6,-.15); \draw[thick,double, violet,-to] (1.2,-1)-- (1.2, 1); \draw[ultra thick,blue] (1.8,-1)node[below]{\tiny $1$}-- (1.8, 1); \draw[thick,double, violet,to-] (2.4,-1)-- (2.4, 1); }}\endxy\ \overset{\eqref{eq:Rrhoinvert}}{=} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (.4,-.35) -- (.4,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double, violet,to-] (1.2,-1) .. controls (1.2,.5) and (0,-2) .. (0,0) .. controls (0,2) and (1.2,-.5) .. (1.2, 1); \draw[ultra thick,blue] (1.8,-1)-- (1.8, 1); \draw[thick,double, violet,-to] (2.4,-1)-- (2.4, 1); }}\endxy \ +\ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (.4,-.35) -- (.4,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double, violet,-to] (1.2,-1) .. controls (1.2,.5) and (0,-2) .. (0,0) .. controls (0,2) and (1.2,-.5) .. (1.2, 1); \draw[ultra thick,blue] (1.8,-1)-- (1.8, 1); \draw[thick,double, violet,to-] (2.4,-1)-- (2.4, 1); }}\endxy\ \overset{\eqref{eq:relHatSlast}}{=} 2\,\ \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[thick,double, violet,-to] (-.6,-1)-- (-.6, 1); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} -- (0,-.4)node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (0,.4) -- (0,1)node[pos=0, tikzdot]{}; \draw[thick,double, violet,to-] (.6,-1)-- (.6, 1); }}\endxy = \mathcal{E}v_{r,s}\biggl(\!\! 2\, \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,violet] (0,-1)node[below]{\tiny $0$} -- (0,-.4)node[pos=1, tikzdot]{}; \draw[ultra thick,violet] (0,.4) -- (0,1)node[pos=0, tikzdot]{}; }}\endxy \biggr) . \end{align*} \endgroup \item Relations involving \emph{two distant colors}. Here $j\neq 1,d-1$. \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s} \Biggl( \xy (0,-2.2)*{ \tikzdiagc[yscale=1.2,xscale=1]{ \draw[ultra thick,violet] (0,0)node[below]{\tiny $0$} ..controls (0,.25) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.75) .. (0,1); \begin{scope}[shift={(.65,0)}] \draw[ultra thick,mygreen] (0,0)node[below]{\tiny $j$} ..controls (0,.25) and (-.65,.25) .. (-.65,.5) ..controls (-.65,.75) and (0,.75) .. (0,1); \end{scope} }}\endxy \Biggr) &= \xy (0,-2.6)*{ \tikzdiagc[yscale=1.4,xscale=1]{ \draw[ultra thick,blue] (0,0)node[below]{\tiny $1$} ..controls (0,.3) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.7) ..
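% Distant R2 move: in the image below, all crossings with the T_rho strands and with the
% strand colored 1 are pulled apart using eq:Scat-Rtwo and eq:mixedRtwo.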
(0,1); \begin{scope}[shift={(1,0)}] \draw[ultra thick,mygreen] (0,0)node[below]{\tiny $j$} .. controls (0,.1) and (-.3,.2) .. (-.475,.22); \draw[ultra thick,mygreen] (0,1) .. controls (0,.9) and (-.3,.8) .. (-.475,.78); \draw[ultra thick,myred] (-.475,.22) -- (-.9,.32); \draw[ultra thick,myred] (-.475,.78) -- (-.9,.68); \draw[ultra thick,mygreen] (-.9,.32) .. controls (-1.25,.4) and (-1.25,.6) .. (-.9,.68); \end{scope} \begin{scope}[shift={(-.35,0)}] \draw[thick,double,violet,to-] (0,0) ..controls (0,.3) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.7) .. (0,1); \end{scope} \begin{scope}[shift={(.35,0)}] \draw[thick,double,violet,-to] (0,0) ..controls (0,.3) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.7) .. (0,1); \end{scope} }}\endxy \overset{\eqref{eq:Scat-Rtwo},\eqref{eq:mixedRtwo}}{=} \xy (0,-2.6)*{ \tikzdiagc[yscale=1.4,xscale=1]{ \draw[ultra thick,blue] (0,0)node[below]{\tiny $1$} -- (0,1); \begin{scope}[shift={(1,0)}] \draw[ultra thick,mygreen] (0,0)node[below]{\tiny $j$} -- (0,1); \end{scope} \begin{scope}[shift={(-.35,0)}] \draw[thick,double,violet,to-] (0,0) -- (0,1); \end{scope} \begin{scope}[shift={(.35,0)}] \draw[thick,double,violet,-to] (0,0) -- (0,1); \end{scope} }}\endxy = \mathcal{E}v_{r,s} \left( \xy (0,0)*{ \tikzdiagc[yscale=1.2,xscale=-1]{ \draw[ultra thick,violet] (.65,0) -- (.65,1); \draw[ultra thick,mygreen] (0,0) -- (0,1); }}\endxy \right) , \\[1ex] \mathcal{E}v_{r,s}\biggl( \xy (0,.5)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,violet] (.3,-.3) -- (-.5,.5)node[above]{\tiny $0$}node[pos=0, tikzdot]{}; \draw[ultra thick,mygreen] (-.5,-.5)node[below]{\tiny $j$} -- (.5,.5); }}\endxy \biggr) &= \xy (0,0)*{ \tikzdiagc[scale=.8]{ \draw[ultra thick,blue] (.2,-.2) -- (-.7,.7)node[above]{\tiny $1$}node[pos=0, tikzdot]{}; \draw[ultra thick,mygreen] (-.7,-.7)node[below]{\tiny $j$} -- (-.275,-.275); \draw[ultra thick,myred] (-.275,-.275) -- (.275,.275); \draw[ultra thick,mygreen] (.275,.275) -- (.7,.7); \draw[thick,double,violet,-to] (-1.2,.7) .. controls (.3,-1.4) and (1.3,-.4).. (-.2,.7); }}\endxy \overset{\eqref{eq:Scat-dotslide},\eqref{eq:mixedRtwo}}{=} \xy (0,0)*{ \tikzdiagc[scale=.8]{ \draw[ultra thick,blue] (-.4,.4) -- (-.7,.7)node[above]{\tiny $1$}node[pos=0, tikzdot]{}; \draw[thick,double,violet,-to] (-1.2,.7) .. controls (-.5,-.2) and (.2,.1).. (-.2,.7); \draw[ultra thick,mygreen] (-.7,-.7)node[below]{\tiny $j$} -- (.7,.7); }}\endxy = \mathcal{E}v_{r,s} \biggl( \xy (0,-.5)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,violet] (-.5,.5) -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[ultra thick,mygreen] (-.5,-.5) -- (.5,.5); }}\endxy \biggr) , \\[1ex] \mathcal{E}v_{r,s} \biggl( \xy (0,-1)*{ \tikzdiagc[scale=.45]{ \draw[ultra thick,violet] (0,0)-- (0, 1); \draw[ultra thick,violet] (-1,-1)node[below]{\tiny $0$} -- (0,0); \draw[ultra thick,violet] (1,-1) -- (0,0); \draw[ultra thick,mygreen] (-1,0)node[above]{\tiny $j$} ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \biggr) &= \xy (0,-2.1)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,blue] (1.5,-.25) -- (1.5,.3);\node[blue] at (2.1,1.5) {\tiny $1$}; \draw[ultra thick,blue] (1.5,.3) ..controls (1.2,.5) and (1.1,.75) .. ( .9, 1.25); \draw[ultra thick,blue] (1.5,.3) .. controls (1.8,.5) and (1.9,.75) .. (2.1, 1.25); \draw[ultra thick,myred] (.9,.2) .. controls (1.2,0) and (1.8,0) .. (2.1,.2); \draw[ultra thick,mygreen] (.4,.5) .. controls (.5,.45) and (.7,.3) .. (.9,.2); \draw[ultra thick,mygreen] (2.6,.5) .. controls (2.5,.45) and (2.3,.3) .. (2.1,.2); \draw[thick,double,violet,-to] (1,-.25) .. 
controls (.85,.5) and (0.75,.65) .. (.5,1.25); \draw[thick,double,violet,to-] (2,-.25) .. controls (2.15,.5) and (2.25,.65) .. (2.5,1.25); \draw[thick,double,violet,-to] (1.2,1.25) .. controls (1.4,.5) and (1.6,.5) .. (1.8,1.25); }}\endxy \overset{\eqref{eq:Scat-trivslide},\eqref{eq:mixedRtwo}}{=} \xy (0,-2.1)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,blue] (1.5,-.25) -- (1.5,.3);\node[blue] at (2.1,1.5) {\tiny $1$}; \draw[ultra thick,blue] (1.5,.3) ..controls (1.2,.5) and (1.1,.75) .. ( .9, 1.25); \draw[ultra thick,blue] (1.5,.3) .. controls (1.8,.5) and (1.9,.75) .. (2.1, 1.25); \draw[ultra thick,mygreen] (.4,.5) -- (.72,.72); \draw[ultra thick,mygreen] (2.6,.5) -- (2.28,.72); \draw[ultra thick,mygreen] (1.3,.94) .. controls (1.4,.96) and (1.6,.96) .. (1.7,.94); \draw[ultra thick,myred] (.72,.71) ..controls (.82,.82) and (1.2,.92) .. (1.3,.94); \draw[ultra thick,myred] (2.28,.72) ..controls (2.18,.82) and (1.8,.92) .. (1.7,.94); \draw[thick,double,violet,-to] (1,-.25) .. controls (.85,.5) and (0.75,.65) .. (.5,1.25); \draw[thick,double,violet,to-] (2,-.25) .. controls (2.15,.5) and (2.25,.65) .. (2.5,1.25); \draw[thick,double,violet,-to] (1.2,1.25) .. controls (1.4,.5) and (1.6,.5) .. (1.8,1.25); }}\endxy = \mathcal{E}v_{r,s} \biggl( \xy (0,-.5)*{ \tikzdiagc[scale=.45]{ \draw[ultra thick,violet] (0,0)-- (0, 1); \draw[ultra thick,violet] (-1,-1) -- (0,0); \draw[ultra thick,violet] (1,-1) -- (0,0); \draw[ultra thick,mygreen] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \biggr) . \end{align*} \endgroup The corresponding relations with the colors $0$ and $j$ switched are proved in the same way. \item Relations involving \emph{two adjacent colors}. We have to check the cases involving either the pair $(0,1)$ or the pair $(0,d-1)$. For the pair $(0,1)$ we compute: \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\biggl( \xy (0,2.5)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.75) -- (0,0)node[pos=0, tikzdot]{};\draw[ultra thick,blue] (0,0) -- (-1, 1); \draw[ultra thick,blue] (0,0) -- (1, 1); \draw[ultra thick,violet] (0,0)-- (0, 1)node[above] {\tiny $0$}; \draw[ultra thick,violet] (-1,-1) -- (0,0); \draw[ultra thick,violet] (1,-1) -- (0,0); }}\endxy \biggr) &= \xy (0,-2.5)*{ \tikzdiagc[yscale=-.55,xscale=.55]{ \draw[ultra thick,blue] (0,-1.2) -- (0, 0); \draw[ultra thick,blue] (0,0) -- (-1.4, 1.7)node[below] {\tiny $1$}; \draw[ultra thick,blue] (0,0) -- ( 1.4, 1.7); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,.75); \draw[ultra thick,blue] (0,.75) -- (0,1.3)node[pos=1,tikzdot]{}; \draw[ultra thick,blue] (-1.5,-1.2) -- (-.6,-.5); \draw[ultra thick,blue] ( 1.5,-1.2) -- ( .6,-.5); \draw[thick,double,violet,-to] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,to-] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,-to] (-.8,1.7) .. controls (-.25,.45) and (.25,.45) .. 
(.8,1.7); }}\endxy \overset{\eqref{eq:Dslide-Xmixed}}{=} \xy (0,-2.5)*{ \tikzdiagc[yscale=-.55,xscale=.55]{ \draw[ultra thick,blue] (0,-1.2) -- (0, 0); \draw[ultra thick,blue] (0,0) -- (-1.4, 1.7)node[below] {\tiny $1$}; \draw[ultra thick,blue] (0,0) -- ( 1.4, 1.7); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,.65)node[pos=1,tikzdot]{}; \draw[ultra thick,blue] (-1.5,-1.2) -- (-.6,-.5); \draw[ultra thick,blue] ( 1.5,-1.2) -- ( .6,-.5); \draw[thick,double,violet,-to] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,to-] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,-to] (-.8,1.7) .. controls (-.25,.8) and (.25,.8) .. (.8,1.7); }}\endxy \overset{\eqref{eq:6vertexdot}}{=} \xy (0,-.25)*{ \tikzdiagc[yscale=-.55,xscale=.55]{ \draw[ultra thick,blue] (0,-1.2) -- (0,.5); \draw[ultra thick,blue] (.05,.53) ..controls (0,.4) and (-1.2,1.2) .. (-1.4, 1.7); \draw[ultra thick,blue] (-.05,.53) ..controls (0,.4) and ( 1.2,1.2) .. ( 1.4, 1.7); \draw[ultra thick,myred] (-.6,-.3) -- (-.4,.2)node[pos=1,tikzdot]{}; \draw[ultra thick,myred] ( .6,-.3) -- ( .4,.2)node[pos=1,tikzdot]{}; \draw[ultra thick,blue] (-1.5,-1.2) .. controls (-1,-.7) and (-.6,-.45) .. (-.65,-.3); \draw[ultra thick,blue] ( 1.5,-1.2) .. controls ( 1,-.7) and ( .6,-.45) .. ( .65,-.3); \draw[thick,double,violet,-to] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,to-] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,-to] (-.8,1.7) .. controls (-.25,.8) and (.25,.8) .. (.8,1.7); }}\endxy + \xy (0,-.25)*{ \tikzdiagc[yscale=-.55,xscale=.55]{ \draw[ultra thick,blue] (0,-1.2) -- (0,-.6)node[pos=1,tikzdot]{}; \draw[ultra thick,blue] (-1.4,1.7) ..controls (-.4,.1) and (.4,.1) .. ( 1.4, 1.7); \draw[ultra thick,myred] (-.6,-.35) ..controls (-.2,0) and (.2,0) .. (.6,-.35); \draw[ultra thick,blue] (-1.5,-1.2) .. controls (-1,-.7) and (-.6,-.45) .. (-.65,-.3); \draw[ultra thick,blue] ( 1.5,-1.2) .. controls ( 1,-.7) and ( .6,-.45) .. ( .65,-.3); \draw[thick,double,violet,-to] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,to-] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,-to] (-.8,1.7) .. controls (-.25,.8) and (.25,.8) .. (.8,1.7); }}\endxy \\ & \overset{\eqref{eq:Rrhoinvert},\eqref{eq:mixedRtwo},\eqref{eq:Dslide-Xmixed}}{=} \xy (0,-.25)*{ \tikzdiagc[yscale=-.55,xscale=.55]{ \draw[ultra thick,blue] (0,-1.2) -- (0,.5); \draw[ultra thick,blue] (.05,.53) ..controls (0,.4) and (-1.2,1.2) .. (-1.4, 1.7); \draw[ultra thick,blue] (-.05,.53) ..controls (0,.4) and ( 1.2,1.2) .. ( 1.4, 1.7); \draw[ultra thick,blue] (-1.5,-1.2) -- (-1,-.5)node[pos=1,tikzdot]{}; \draw[ultra thick,blue] ( 1.5,-1.2) -- ( 1,-.5)node[pos=1,tikzdot]{}; \draw[thick,double,violet,-to] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,to-] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,-to] (-.8,1.7) .. controls (-.25,.8) and (.25,.8) .. (.8,1.7); }}\endxy + \xy (0,-.25)*{ \tikzdiagc[yscale=-.55,xscale=.55]{ \draw[ultra thick,blue] (0,-1.2) -- (0,-.7)node[pos=1,tikzdot]{}; \draw[ultra thick,blue] (-1.4,1.7) ..controls (-.4,.6) and (.4,.6) .. ( 1.4, 1.7); \draw[ultra thick,blue] (-1.5,-1.2) .. controls (-.7,.4) and (.7,.4) .. (1.5,-1.2); \draw[thick,double,violet,-to] (-.5,-1.2) .. controls (-.45,0) and (.45,0) .. 
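% The dot was forced through the mixed 6-valent vertex by eq:Dslide-Xmixed, and eq:6vertexdot
% split the picture into the two summands being simplified here.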
(.5,-1.2); \draw[thick,double,violet,-to] (-.8,1.7) .. controls (-.25,1.1) and (.25,1.1) .. (.8,1.7); \draw[thick,double,violet,to-] (-2.1,1.7) .. controls (-.65,0) and (.65,0) .. (2.1,1.7); }}\endxy = \mathcal{E}v_{r,s} \biggl( \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (1,0) -- (1, 1)node[pos=0, tikzdot]{}; \draw[ultra thick,violet] (0,0)-- (0, 1); \draw[ultra thick,violet] (-1,-1) -- (0,0); \draw[ultra thick,violet] (1,-1) -- (0,0); }}\endxy \ \ + \ \xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.8]{ \draw[ultra thick,violet] (1.5,0.1) -- (1.5, 0.4)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (.9,.4) .. controls (1.2,-.45) and (1.8,-.45) .. (2.1,.4); \draw[ultra thick,violet] ( .9,-1.2) .. controls (1.2,-.35) and (1.8,-.35) .. (2.1,-1.2); }}\endxy \biggr) , \\[1ex] \mathcal{E}v_{r,s} \vastl{-.05} \xy (0,-2.3)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,violet] (0,-1) -- (0,1); \draw[ultra thick,violet] (0,1) -- (-1,2); \draw[ultra thick,violet] (0,1) -- (1,2); \draw[ultra thick,violet] (0,-1) -- (-1,-2)node[below]{\tiny $0$}; \draw[ultra thick,violet] (0,-1) -- (1,-2); \draw[ultra thick,blue] (0,1) -- (0,2); \draw[ultra thick,blue] (0,-1) -- (0,-2); \draw[ultra thick,blue] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[ultra thick,blue] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); }}\endxy \vastr{-.05} &= \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=.6]{ \draw[ultra thick,blue] (0,-1.2) -- (0, 1.2); \draw[ultra thick,blue] (0,1.2) -- (-1.4, 2.9); \draw[ultra thick,blue] (0,1.2) -- ( 1.4, 2.9); \draw[ultra thick,blue] (0,-1.2) -- (-1.4, -2.9); \draw[ultra thick,blue] (0,-1.2) -- ( 1.4, -2.9); \draw[ultra thick,myred] (-.6,.7) -- (0,1.2); \draw[ultra thick,myred] ( .6,.7) -- (0,1.2); \draw[ultra thick,myred] (0,1.2) -- (0,1.95); \draw[ultra thick,myred] (0,-1.2) -- (0,-1.95); \draw[ultra thick,myred] (-.6,-.7) -- (0,-1.2); \draw[ultra thick,myred] ( .6,-.7) -- (0,-1.2); \draw[ultra thick,blue] (0,1.95) -- (0,2.9); \draw[ultra thick,blue] (0,-1.95) -- (0,-2.9); \draw[ultra thick,blue] (-.6,.7) .. controls (-1.25,0) and (-1.5,0) .. (-.6,-.7); \draw[ultra thick,blue] (.6,.7) .. controls (1.25,0) and (1.5,0) .. (.6,-.7); \draw[thick,double,violet,to-] (-2,-2.9)..controls (-.75,-1.45) and (-.5,-.8) .. (-.5,0) .. controls (-.5,.8) and (-.75,1.45) .. (-2,2.9); \draw[thick,double,violet,-to] (2,-2.9)..controls (.75,-1.45) and (.5,-.8) .. (.5,0) .. controls (.5,.8) and (.75,1.45) .. (2,2.9); \draw[thick,double,violet,to-] (-.8,2.9) .. controls (-.25,1.65) and (.25,1.65) .. (.8,2.9); \draw[thick,double,violet,-to] (-.8,-2.9) .. controls (-.25,-1.65) and (.25,-1.65) .. (.8,-2.9); }}\endxy \overset{\eqref{eq:mixedRtwo}}{=} \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=.6]{ \draw[ultra thick,blue] (0,-1.2) -- (0, 1.2); \draw[ultra thick,blue] (0,1.2) -- (-1.4, 2.9); \draw[ultra thick,blue] (0,1.2) -- ( 1.4, 2.9); \draw[ultra thick,blue] (0,-1.2) -- (-1.4, -2.9); \draw[ultra thick,blue] (0,-1.2) -- ( 1.4, -2.9); \draw[ultra thick,myred] (0,1.2) -- (0,1.95); \draw[ultra thick,myred] (0,-1.2) -- (0,-1.95); \draw[ultra thick,blue] (0,1.95) -- (0,2.9); \draw[ultra thick,blue] (0,-1.95) -- (0,-2.9); \draw[ultra thick,myred] (0,1.2) .. controls (-1.25,.4) and (-1.25,-.4) .. (0,-1.2); \draw[ultra thick,myred] (0,1.2) .. controls ( 1.25,.4) and ( 1.25,-.4) .. (0,-1.2); \draw[thick,double,violet,to-] (-2,-2.9)..controls (-1.25,-.8) and (-1.25,.8) .. 
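% After eq:mixedRtwo the T_rho strands no longer cross the Soergel strands, so the one-color
% relation eq:braidmoveB can be applied to the colors 1 and 2.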
(-2,2.9); \draw[thick,double,violet,-to] (2,-2.9)..controls (1.25,-.8) and (1.25,.8) .. (2,2.9); \draw[thick,double,violet,to-] (-.8,2.9) .. controls (-.25,1.65) and (.25,1.65) .. (.8,2.9); \draw[thick,double,violet,-to] (-.8,-2.9) .. controls (-.25,-1.65) and (.25,-1.65) .. (.8,-2.9); }}\endxy \overset{\eqref{eq:braidmoveB}}{=} \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=.6]{ \draw[ultra thick,blue] (1.4, -2.9)..controls (.7,-.8) and (.7,.8) .. (1.4, 2.9); \draw[ultra thick,blue] (-1.4, -2.9)..controls (-.7,-.8) and (-.7,.8) .. (-1.4, 2.9); \draw[ultra thick,myred] (0,-1.95) -- (0,1.95); \draw[ultra thick,blue] (0,1.95) -- (0,2.9); \draw[ultra thick,blue] (0,-1.95) -- (0,-2.9); \draw[thick,double,violet,to-] (-2,-2.9)..controls (-1.25,-.8) and (-1.25,.8) .. (-2,2.9); \draw[thick,double,violet,-to] (2,-2.9)..controls (1.25,-.8) and (1.25,.8) .. (2,2.9); \draw[thick,double,violet,to-] (-.8,2.9) .. controls (-.25,1.65) and (.25,1.65) .. (.8,2.9); \draw[thick,double,violet,-to] (-.8,-2.9) .. controls (-.25,-1.65) and (.25,-1.65) .. (.8,-2.9); }}\endxy + \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=.6]{ \draw[ultra thick,blue] (0,-.75) -- (0,.75); \draw[ultra thick,blue] (0,.75) -- (-1.4, 2.9); \draw[ultra thick,blue] (0,.75) -- ( 1.4, 2.9); \draw[ultra thick,blue] (0,-.75) -- (-1.4, -2.9); \draw[ultra thick,blue] (0,-.75) -- ( 1.4, -2.9); \draw[ultra thick,myred] (0,1.4) -- (0,1.95)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (0,-1.4) -- (0,-1.95)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (0,1.95) -- (0,2.9); \draw[ultra thick,blue] (0,-1.95) -- (0,-2.9); \draw[thick,double,violet,to-] (-2,-2.9)..controls (-1.25,-.8) and (-1.25,.8) .. (-2,2.9); \draw[thick,double,violet,-to] (2,-2.9)..controls (1.25,-.8) and (1.25,.8) .. (2,2.9); \draw[thick,double,violet,to-] (-.8,2.9) .. controls (-.25,1.65) and (.25,1.65) .. (.8,2.9); \draw[thick,double,violet,-to] (-.8,-2.9) .. controls (-.25,-1.65) and (.25,-1.65) .. (.8,-2.9); }}\endxy \\ & \overset{\eqref{eq:Rrhoinvert},\eqref{eq:mixedRtwo},\eqref{eq:Dslide-Xmixed}}{=} \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=.6]{ \draw[ultra thick,blue] (1.4, -2.9)..controls (.7,-.8) and (.7,.8) .. (1.4, 2.9); \draw[ultra thick,blue] (-1.4, -2.9)..controls (-.7,-.8) and (-.7,.8) .. (-1.4, 2.9); \draw[ultra thick,blue] (0,-2.9) -- (0,2.9); \draw[thick,double,violet,-to] (-.8,-2.9)..controls (-.2,-.8) and (-.2,.8) .. (-.8,2.9); \draw[thick,double,violet,to-] (-2,-2.9)..controls (-1.35,-.8) and (-1.35,.8) .. (-2,2.9); \draw[thick,double,violet,-to] (2,-2.9)..controls (1.35,-.8) and (1.35,.8) .. (2,2.9); \draw[thick,double,violet,to-] (.8,-2.9)..controls (.2,-.8) and (.2,.8) .. (.8,2.9); }}\endxy + \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=.6]{ \draw[ultra thick,blue] (0,-.75) -- (0,.75); \draw[ultra thick,blue] (0,.75) -- (-1.4, 2.9); \draw[ultra thick,blue] (0,.75) -- ( 1.4, 2.9); \draw[ultra thick,blue] (0,-.75) -- (-1.4, -2.9); \draw[ultra thick,blue] (0,-.75) -- ( 1.4, -2.9); \draw[ultra thick,blue] (0,2.2) -- (0,2.9)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (0,-2.2) -- (0,-2.9)node[pos=0, tikzdot]{}; \draw[thick,double,violet,to-] (-2,-2.9)..controls (-1.25,-.8) and (-1.25,.8) .. (-2,2.9); \draw[thick,double,violet,-to] (2,-2.9)..controls (1.25,-.8) and (1.25,.8) .. (2,2.9); \draw[thick,double,violet,to-] (-.8,2.9) .. controls (-.25,1.25) and (.25,1.25) .. (.8,2.9); \draw[thick,double,violet,-to] (-.8,-2.9) .. controls (-.25,-1.25) and (.25,-1.25) .. 
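% The two terms produced by eq:braidmoveB are simplified until they match the images of the
% two terms on the right-hand side of the original relation.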
(.8,-2.9); }}\endxy = \mathcal{E}v_{r,s} \vastl{0} \xy (0,0)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,violet] (-1,-2) -- (-1,2); \draw[ultra thick,violet] ( 1,-2) -- ( 1,2); \draw[ultra thick,blue] (0,-2) -- (0,2); }}\endxy\, + \xy (0,.05)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,violet] (0,-.6) -- (0,.6); \draw[ultra thick,violet] (0,.6) .. controls (-.25,.6) and (-1,1).. (-1,2); \draw[ultra thick,violet] (0,.6) .. controls (.25,.6) and (1,1) .. (1,2); \draw[ultra thick,violet] (0,-.6) .. controls (-.25,-.6) and (-1,-1).. (-1,-2); \draw[ultra thick,violet] (0,-.6) .. controls (.25,-.6) and (1,-1) .. (1,-2); \draw[ultra thick,blue] (0,1.25) -- (0,2)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (0,-1.25) -- (0,-2)node[pos=0, tikzdot]{}; }}\endxy \vastr{-.05} , \\[1ex] \mathcal{E}v_{r,s} \vastl{-.2} \xy (0,-2.2)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,blue] (0,-1) -- (0,1); \draw[ultra thick,blue] (0,1) -- (-1,2); \draw[ultra thick,blue] (0,1) -- (1,2); \draw[ultra thick,blue] (0,-1) -- (-1,-2); \draw[ultra thick,blue] (0,-1) -- (1,-2); \draw[ultra thick,violet] (0,1) -- (0,2); \draw[ultra thick,violet] (0,-1) -- (0,-2)node[below]{\tiny $0$}; \draw[ultra thick,violet] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[ultra thick,violet] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); \draw[ultra thick,violet] (-2,0) -- (-.75,0); \draw[ultra thick,violet] (2,0) -- (.75,0); }}\endxy \vastr{-.2} &= \xy (0,-2.5)*{ \tikzdiagc[yscale=0.5,xscale=.6]{ \draw[ultra thick,blue] (0,-2.9)node[below] {\tiny $1$} -- (0, -1.7); \draw[ultra thick,blue] (0,-1.7) -- (-1.4, 0); \draw[ultra thick,blue] (0,-1.7) -- ( 1.4, 0); \draw[ultra thick,myred] (-.6,-2.2) -- (0,-1.7); \draw[ultra thick,myred] ( .6,-2.2) -- (0,-1.7); \draw[ultra thick,myred] (0,-1.7) -- (0,-1.05); \draw[ultra thick,blue] (0,-1.05) -- (0,0); \draw[ultra thick,blue] (-1.5,-2.9) -- (-.6,-2.2); \draw[ultra thick,blue] ( 1.5,-2.9) -- ( .6,-2.2); \draw[ultra thick,blue] (0,2.9) -- (0, 1.7); \draw[ultra thick,blue] (0,1.7) -- (-1.4, 0); \draw[ultra thick,blue] (0,1.7) -- ( 1.4, 0); \draw[ultra thick,myred] (-.6,2.2) -- (0,1.7); \draw[ultra thick,myred] ( .6,2.2) -- (0,1.7); \draw[ultra thick,myred] (0,1.7) -- (0,1.05); \draw[ultra thick,blue] (0,1.05) -- (0,0); \draw[ultra thick,blue] (-1.5,2.9) -- (-.6,2.2); \draw[ultra thick,blue] ( 1.5,2.9) -- ( .6,2.2); \draw[ultra thick,blue] (-1.4, 0) -- (-2.9,0); \draw[ultra thick,blue] ( 1.4, 0) -- ( 2.9,0); \draw[thick,double,violet,to-] (-.5,-2.9) .. controls (-.5,-1.8) and (-1.75,-.5) .. (-2.9,-.5); \draw[thick,double,violet,-to] ( .5,-2.9) .. controls ( .5,-1.8) and ( 1.75,-.5) .. ( 2.9,-.5); \draw[thick,double,violet,-to] (-.5, 2.9) .. controls (-.5, 1.8) and (-1.75, .5) .. (-2.9, .5); \draw[thick,double,violet,to-] ( .5, 2.9) .. controls ( .5, 1.8) and ( 1.75, .5) .. 
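% Image of the left-hand side: inside the evaluation the colors become 1 and 2, and a central
% T_rho circle appears, which is removed by eq:loopRrho in the next step.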
( 2.9, .5); \draw[thick,double,violet] (0,0) ellipse (.6 cm and 1.0 cm); \draw[thick,double,violet,-to] (-.6,.1) -- (-.6,.15); }}\endxy \overset{\eqref{eq:loopRrho},\eqref{eq:mixedRtwo}}{=} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.6]{ \draw[ultra thick,blue] (0,-2.9) -- (0, -1.7); \draw[ultra thick,blue] (0,-1.7) -- (-1.4, 0); \draw[ultra thick,blue] (0,-1.7) -- ( 1.4, 0); \draw[ultra thick,blue] (-1.5,-2.9) -- (-.6,-2.2); \draw[ultra thick,blue] ( 1.5,-2.9) -- ( .6,-2.2); \draw[ultra thick,blue] (0,2.9) -- (0, 1.7); \draw[ultra thick,blue] (0,1.7) -- (-1.4, 0); \draw[ultra thick,blue] (0,1.7) -- ( 1.4, 0); \draw[ultra thick,myred] (-.6,2.2) -- (0,1.7); \draw[ultra thick,myred] ( .6,2.2) -- (0,1.7); \draw[ultra thick,myred] (-.6,-2.2) -- (0,-1.7); \draw[ultra thick,myred] ( .6,-2.2) -- (0,-1.7); \draw[ultra thick,myred] (0,-1.7) -- (0,1.7); \draw[ultra thick,blue] (-1.5,2.9) -- (-.6,2.2); \draw[ultra thick,blue] ( 1.5,2.9) -- ( .6,2.2); \draw[ultra thick,blue] (-1.4, 0) -- (-2.9,0); \draw[ultra thick,blue] ( 1.4, 0) -- ( 2.9,0); \draw[thick,double,violet,to-] (-.5,-2.9) .. controls (-.5,-1.8) and (-1.75,-.5) .. (-2.9,-.5); \draw[thick,double,violet,-to] ( .5,-2.9) .. controls ( .5,-1.8) and ( 1.75,-.5) .. ( 2.9,-.5); \draw[thick,double,violet,-to] (-.5, 2.9) .. controls (-.5, 1.8) and (-1.75, .5) .. (-2.9, .5); \draw[thick,double,violet,to-] ( .5, 2.9) .. controls ( .5, 1.8) and ( 1.75, .5) .. ( 2.9, .5); }}\endxy \overset{\eqref{eq:stroman}}{=} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.6]{ \draw[ultra thick,blue] (0,-2.9) -- (0, -1.7); \draw[ultra thick,blue] (0,-1.7) -- (-1.4, 0); \draw[ultra thick,blue] (0,-1.7) -- ( 1.4, 0); \draw[ultra thick,blue] (-2, .8) -- (-2.6, 1.6); \draw[ultra thick,blue] (-2,-.8) -- (-2.6,-1.6); \draw[ultra thick,blue] ( 2,-.8) -- ( 2.6,-1.6); \draw[ultra thick,blue] ( 2, .8) -- ( 2.6, 1.6); \draw[ultra thick,myred] (-1.4,0) -- (1.4,0); \draw[ultra thick,myred] (-1.4,0) -- (-2, .8); \draw[ultra thick,myred] (-1.4,0) -- (-2,-.8); \draw[ultra thick,myred] ( 1.4,0) -- ( 2,-.8); \draw[ultra thick,myred] ( 1.4,0) -- ( 2, .8); \draw[ultra thick,blue] (0,2.9) -- (0, 1.7); \draw[ultra thick,blue] (0,1.7) -- (-1.4, 0); \draw[ultra thick,blue] (0,1.7) -- ( 1.4, 0); \draw[ultra thick,blue] (-1.4, 0) -- (-2.9,0); \draw[ultra thick,blue] ( 1.4, 0) -- ( 2.9,0); \draw[thick,double,violet,to-] (-.5,-2.9) .. controls (-.5,-1.8) and (-1.75,-.5) .. (-2.9,-.5); \draw[thick,double,violet,-to] ( .5,-2.9) .. controls ( .5,-1.8) and ( 1.75,-.5) .. ( 2.9,-.5); \draw[thick,double,violet,-to] (-.5, 2.9) .. controls (-.5, 1.8) and (-1.75, .5) .. (-2.9, .5); \draw[thick,double,violet,to-] ( .5, 2.9) .. controls ( .5, 1.8) and ( 1.75, .5) ..
( 2.9, .5); }}\endxy \\ & \overset{\eqref{eq:loopRrho},\eqref{eq:mixedRtwo}}{=} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.6]{ \draw[ultra thick,blue] (0,-2.9) -- (0, -1.7); \draw[ultra thick,blue] (0,-1.7) -- (-1.4, 0); \draw[ultra thick,blue] (0,-1.7) -- ( 1.4, 0); \draw[ultra thick,blue] (-2, .8) -- (-2.6, 1.6); \draw[ultra thick,blue] (-2,-.8) -- (-2.6,-1.6); \draw[ultra thick,blue] ( 2,-.8) -- ( 2.6,-1.6); \draw[ultra thick,blue] ( 2, .8) -- ( 2.6, 1.6); \draw[ultra thick,blue] (-.8,0) -- ( .8,0); \draw[ultra thick,myred] (-1.4,0) -- ( -.8,0); \draw[ultra thick,myred] ( .8,0) -- ( 1.4,0); \draw[ultra thick,myred] (-1.4,0) -- (-2, .8); \draw[ultra thick,myred] (-1.4,0) -- (-2,-.8); \draw[ultra thick,myred] ( 1.4,0) -- ( 2,-.8); \draw[ultra thick,myred] ( 1.4,0) -- ( 2, .8); \draw[ultra thick,blue] (0,2.9) -- (0, 1.7); \draw[ultra thick,blue] (0,1.7) -- (-1.4, 0); \draw[ultra thick,blue] (0,1.7) -- ( 1.4, 0); \draw[ultra thick,blue] (-1.4, 0) -- (-2.9,0); \draw[ultra thick,blue] ( 1.4, 0) -- ( 2.9,0); \draw[thick,double,violet,to-] (-.5,-2.9) .. controls (-.5,-1.8) and (-1.75,-.5) .. (-2.9,-.5); \draw[thick,double,violet,-to] ( .5,-2.9) .. controls ( .5,-1.8) and ( 1.75,-.5) .. ( 2.9,-.5); \draw[thick,double,violet,-to] (-.5, 2.9) .. controls (-.5, 1.8) and (-1.75, .5) .. (-2.9, .5); \draw[thick,double,violet,to-] ( .5, 2.9) .. controls ( .5, 1.8) and ( 1.75, .5) .. ( 2.9, .5); \draw[thick,double,violet] (0,0) ellipse (.8 cm and .6 cm); \draw[thick,double,violet,to-] (-.2,-.6) -- (-.15,-.6); }}\endxy = \mathcal{E}v_{r,s} \vastl{-.2} \xy (0,0)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,blue] (-1,0) -- (1,0); \draw[ultra thick,blue] (-2,-1) -- (-1,0); \draw[ultra thick,blue] (-2,1) -- (-1,0); \draw[ultra thick,blue] ( 2,-1) -- ( 1,0); \draw[ultra thick,blue] ( 2,1) -- ( 1,0); \draw[ultra thick,violet] (-1,0) ..controls (-.5,-.8) and (.5,-.8) .. (1,0); \draw[ultra thick,violet] (-1,0) ..controls (-.5, .8) and (.5, .8) .. 
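% The right-hand side of the relation, whose image agrees with the diagram just computed
% after reinserting a T_rho circle via eq:loopRrho.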
(1,0); \draw[ultra thick,violet] (0,.65) -- (0,1.8); \draw[ultra thick,violet] (0,-.65) -- (0,-1.8); \draw[ultra thick,violet] (-2,0) -- (-1,0); \draw[ultra thick,violet] (2,0) -- (1,0); }}\endxy \vastr{-.2} , \\[1ex] \mathcal{E}v_{r,s}\biggl( \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,violet] (.6,-1)node[below]{\tiny $0$} -- (.6, 1); }}\endxy - \xy (0,-2)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,violet] (.6,-1)node[below]{\tiny $0$}-- (.6, 1); }}\endxy \biggr) &= \xy (0,-2)*{ \tikzdiagc[yscale=.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double, violet,to-] (.75,-1) -- (.75, 1); \draw[ultra thick,blue] (1.5,-1)node[below]{\tiny $1$} -- (1.5, 1); \draw[thick,double, violet,-to] (2.25,-1) -- (2.25, 1); }}\endxy\;\; -\;\; \xy (0,-2)*{ \tikzdiagc[yscale=.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double, violet,-to] (.75,-1) -- (.75, 1); \draw[ultra thick,blue] (1.5,-1)node[below]{\tiny $1$} -- (1.5, 1); \draw[thick,double, violet,to-] (2.25,-1) -- (2.25, 1); }}\endxy\;\; \overset{\eqref{eq:mixeddumbbellslide1}}{=} \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=.5]{ \draw[ultra thick,myred] (.75,-.35)node[below]{\tiny $2$} -- (.75,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double, violet,to-] (0,-1) -- (0, 1); \draw[ultra thick,blue] (1.5,-1) -- (1.5, 1); \draw[thick,double, violet,-to] (2.25,-1) -- (2.25, 1); }}\endxy\;\; -\;\; \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=-.5]{ \draw[ultra thick,myred] (.75,-.35)node[below]{\tiny $2$} -- (.75,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double, violet,-to] (0,-1) -- (0, 1); \draw[ultra thick,blue] (1.5,-1) -- (1.5, 1); \draw[thick,double, violet,to-] (2.25,-1) -- (2.25, 1); }}\endxy \overset{\eqref{eq:forcingdumbel-i-iminus}}{=} \frac{1}{2}\,\biggl(\; \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=-.5]{ \draw[ultra thick,blue] (.75,-.35) -- (.75,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double, violet,-to] (0,-1) -- (0, 1); \draw[ultra thick,blue] (1.5,-1) -- (1.5, 1); \draw[thick,double, violet,to-] (2.25,-1) -- (2.25, 1); }}\endxy \;\; - \;\; \xy (0,0)*{ \tikzdiagc[yscale=.5,xscale=.5]{ \draw[ultra thick,blue] (.75,-.35) -- (.75,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double, violet,to-] (0,-1) -- (0, 1); \draw[ultra thick,blue] (1.5,-1) -- (1.5, 1); \draw[thick,double, violet,-to] (2.25,-1) -- (2.25, 1); }}\endxy\ \biggr) \\ &\overset{\eqref{eq:Rrhoinvert},\eqref{eq:snake}}{=} \frac{1}{2}\,\biggl(\; \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double,violet] (0,0) ellipse (.6 cm and .9 cm); \draw[thick,double, violet,-to] (-.6,.15)-- (-.6,.2); \draw[thick,double, violet,-to] (1.2,-1)-- (1.2, 1); \draw[ultra thick,blue] (1.8,-1)-- (1.8, 1); \draw[thick,double, violet,to-] (2.4,-1)-- (2.4, 1); }}\endxy \ - \ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[thick,double,violet] (0,0) ellipse (.6 cm and .9 cm); \draw[thick,double, violet,-to] (-.6,-.1)-- (-.6,-.15); \draw[thick,double, violet,to-] (1.2,-1)-- (1.2, 1); \draw[ultra
thick,blue] (1.8,-1) -- (1.8, 1); \draw[thick,double, violet,-to] (2.4,-1)-- (2.4, 1); }}\endxy\ \biggr) = \frac{1}{2}\, \mathcal{E}v_{r,s}\biggl(\, \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,violet] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,violet] (.6,-1)-- (.6, 1); }}\endxy - \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,violet] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,violet] (.6,-1)-- (.6, 1); }}\endxy\ \biggr) . \end{align*} \endgroup The relations with the colors $0$ and $1$ switched are proved in the same way. The relations for the pair $(0,d-1)$ can be proved similarly, using the image of the corresponding mixed $6$-valent vertex, of course. \item The relation involving \emph{three distant colors} is straightforward and follows from the observation that the case involving the colors $0$, $i$ and $j$, with $1<i,j<d-1$ distant, reduces to checking a relation involving the colors $1$, $i+1$ and $j+1$, which are still distant. \item The relation involving a \emph{distant dumbbell} colored $i\in \{2,\ldots,d-2\}$ and a straight line colored $0$ is straightforward, because~\eqref{eq:mixeddumbbellslide1} implies that it reduces to the same relation involving a distant dumbbell with color $i+1$ and a straight line colored $1$. Similarly, the relation involving a \emph{distant dumbbell} colored $0$ and a straight line colored $i\in \{2,\ldots, d-2\}$ reduces to the relation involving a distant dumbbell colored $1$ and a straight line colored $i+1$, thanks to~\eqref{eq:mixedRtwo}. \item Relation involving \emph{two adjacent colors and one distant from the other two}. If the distant color in~\eqref{eq:sixv-dist} is $0$, the proof is straightforward. Otherwise, we compute \begin{align*} \mathcal{E}v_{r,s}\biggl(\!\! \xy (0,-2.1)*{ \tikzdiagc[yscale=.5,xscale=.5]{ \draw[ultra thick,violet] (0,-1)node[below]{\tiny $0$} -- (0,0); \draw[ultra thick,violet] (0,0) -- (-1, 1); \draw[ultra thick,violet] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $1$} -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,orange] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy\, \biggr) &= \xy (0,-2.5)*{ \tikzdiagc[yscale=.9]{ \draw[ultra thick,blue] (0,-1.2)node[below] {\tiny $1$} -- (0, 0); \draw[ultra thick,blue] (0,0) -- (-1.4, 1.7); \draw[ultra thick,blue] (0,0) -- ( 1.4, 1.7); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,.75); \draw[ultra thick,blue] (0,.75) -- (0,1.7); \draw[ultra thick,blue] (-1.5,-1.2) -- (-.6,-.5); \draw[ultra thick,blue] ( 1.5,-1.2) -- ( .6,-.5); \draw[ultra thick,orange] (-2,.3) -- (-1.3,.85); \draw[ultra thick,black] (-1.3,.85) -- (-.55,1.3); \draw[ultra thick,orange] (-.55,1.3) ..controls (-.2,1.5) and (.2,1.5) .. ( .55,1.3); \draw[ultra thick,orange] ( 2,.3) -- ( 1.3,.85); \draw[ultra thick,black] ( 1.3,.85) -- ( .55,1.3); \draw[thick,double,violet,to-] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,-to] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,to-] (-.8,1.7) .. controls (-.25,.45) and (.25,.45) ..
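% In the image above, the distant strand (orange) is slid past the evaluated vertex using
% eq:sixv-dist, eq:mixedRtwo and eq:R3violet in the next step.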
(.8,1.7); }}\endxy \overset{\eqref{eq:sixv-dist},\eqref{eq:mixedRtwo},\eqref{eq:R3violet}}{=} \xy (0,-2.5)*{ \tikzdiagc[yscale=.9]{ \draw[ultra thick,blue] (0,-1.2)node[below] {\tiny $1$} -- (0, 0); \draw[ultra thick,blue] (0,0) -- (-1.4, 1.7); \draw[ultra thick,blue] (0,0) -- ( 1.4, 1.7); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,.75); \draw[ultra thick,blue] (0,.75) -- (0,1.7); \draw[ultra thick,blue] (-1.5,-1.2) -- (-.6,-.5); \draw[ultra thick,blue] ( 1.5,-1.2) -- ( .6,-.5); \draw[ultra thick,orange] (-2,.3) -- (-.5,-.85); \draw[ultra thick,orange] ( 2,.3) -- ( .5,-.85); \draw[ultra thick,black] (-.5,-.85) .. controls (-.15,-1) and (.15,-1) .. (.5,-.85); \draw[thick,double,violet,to-] (-.5,-1.2) .. controls (-.5,-.4) and (-.75,.25) .. (-2,1.7); \draw[thick,double,violet,-to] (.5,-1.2) .. controls (.5,-.4) and (.75,.25) .. (2,1.7); \draw[thick,double,violet,to-] (-.8,1.7) .. controls (-.25,.45) and (.25,.45) .. (.8,1.7); }}\endxy \\ & = \mathcal{E}v_{r,s}\biggl(\, \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,violet] (0,-1) -- (0,0); \draw[ultra thick,violet] (0,0) -- (-1, 1); \draw[ultra thick,violet] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,orange] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \, \biggr) , \\ \intertext{and} \mathcal{E}v_{r,s}\biggl(\!\!\!\! \xy (0,-2.2)*{ \tikzdiagc[yscale=.5,xscale=.5]{ \draw[ultra thick,violet] (0,-1)node[below]{\tiny $0$} -- (0,0); \draw[ultra thick,violet] (0,0) -- (-1, 1); \draw[ultra thick,violet] (0,0) -- (1, 1); \draw[ultra thick,mygreen] (0,0)-- (0, 1); \draw[ultra thick,mygreen] (-1,-1)node[below]{\tiny $d-1$} -- (0,0); \draw[ultra thick,mygreen] (1,-1) -- (0,0); \draw[ultra thick,orange] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy\, \biggr) &= \xy (0,1.25)*{ \tikzdiagc[yscale=.6,xscale=.7]{ \draw[ultra thick,mygreen] (0,-1.4) -- (0, 2.5)node[above]{\tiny $d-1$}; \draw[ultra thick,mygreen] (0,0) .. controls (-.5,.4) and (-1.1,1.4) .. (-1.15, 1.6); \draw[ultra thick,mygreen] (0,0) .. controls ( .5,.4) and ( 1.1,1.4) .. ( 1.15, 1.6); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,1); \draw[ultra thick,mygreen] (0,1) -- (0,1.7); \draw[ultra thick,mygreen] (-1.5,-2)node[below]{\tiny $d-1$} .. controls (-1.4,-1.4) and (-1,-.85) .. (-.6,-.5); \draw[ultra thick,mygreen] (1.5,-2)node[below]{\tiny $d-1$} .. controls ( 1.4,-1.4) and ( 1,-.85) .. ( .6,-.5); \draw[ultra thick,blue] (0,-2)node[below]{\tiny $1$} -- (0,-1.4); \draw[ultra thick,blue] (-1.15,1.6) ..controls (-1.2,1.75) and (-1.4,2.2) .. (-1.5,2.5)node[above]{\tiny $1$}; \draw[ultra thick,blue] ( 1.15,1.6) ..controls ( 1.2,1.75) and ( 1.4,2.2) .. ( 1.5,2.5)node[above]{\tiny $1$}; \draw[ultra thick,orange] (-1.08,2.1) ..controls (-.5,2.35) and (.5,2.35) .. (1.08,2.1); \draw[ultra thick,black] (-1.5,1.9) -- (-1.08,2.1); \draw[ultra thick,black] ( 1.5,1.9) -- ( 1.08,2.1); \draw[ultra thick,orange] (-2.5,0) ..controls (-2.2,1) and (-2,1.6) .. (-1.5,1.9); \draw[ultra thick,orange] ( 2.5,0) ..controls ( 2.2,1) and ( 2,1.6) .. ( 1.5,1.9); \draw[thick,double,violet,-to] (-2,2.5) .. controls (-.1,.5) and (.1,.5) .. (2,2.5); \draw[thick,double,violet,-to] (.5,-2) .. controls (-1,-.1) and (-1.5,.4) .. (-1,2.5); \draw[thick,double,violet,to-] (-.5,-2) .. 
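% The same slide for the pair (0,d-1); here eq:sixv-ReidIII is needed in addition.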
controls (1,-.1) and (1.5,.4) .. (1,2.5); }}\endxy \overset{\eqref{eq:sixv-dist},\eqref{eq:R3violet},\eqref{eq:sixv-ReidIII}}{=} \xy (0,1.25)*{ \tikzdiagc[yscale=.6,xscale=.7]{ \draw[ultra thick,mygreen] (0,-1.4) -- (0, 2.5)node[above]{\tiny $d-1$}; \draw[ultra thick,mygreen] (0,0) .. controls (-.5,.4) and (-1.1,1.4) .. (-1.15, 1.6); \draw[ultra thick,mygreen] (0,0) .. controls ( .5,.4) and ( 1.1,1.4) .. ( 1.15, 1.6); \draw[ultra thick,myred] (-.6,-.5) -- (0,0); \draw[ultra thick,myred] ( .6,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (0,1); \draw[ultra thick,mygreen] (0,1) -- (0,1.7); \draw[ultra thick,mygreen] (-1.5,-2)node[below]{\tiny $d-1$} .. controls (-1.4,-1.4) and (-1,-.85) .. (-.6,-.5); \draw[ultra thick,mygreen] (1.5,-2)node[below]{\tiny $d-1$} .. controls ( 1.4,-1.4) and ( 1,-.85) .. ( .6,-.5); \draw[ultra thick,blue] (0,-2)node[below]{\tiny $1$} -- (0,-1.4); \draw[ultra thick,blue] (-1.15,1.6) ..controls (-1.2,1.75) and (-1.4,2.2) .. (-1.5,2.5)node[above]{\tiny $1$}; \draw[ultra thick,blue] ( 1.15,1.6) ..controls ( 1.2,1.75) and ( 1.4,2.2) .. ( 1.5,2.5)node[above]{\tiny $1$}; \draw[ultra thick,black] (-.3,-1.7) -- (.3,-1.7); \draw[ultra thick,orange] (-2.5,0) ..controls (-2.2,-1) and (-2,-1.6) .. (-.3,-1.7); \draw[ultra thick,orange] ( 2.5,0) ..controls ( 2.2,-1) and ( 2,-1.6) .. ( .3,-1.7); \draw[thick,double,violet,-to] (-2,2.5) .. controls (-.1,.5) and (.1,.5) .. (2,2.5); \draw[thick,double,violet,-to] (.5,-2) .. controls (-1,-.1) and (-1.5,.4) .. (-1,2.5); \draw[thick,double,violet,to-] (-.5,-2) .. controls (1,-.1) and (1.5,.4) .. (1,2.5); }}\endxy = \mathcal{E}v_{r,s}\biggl(\, \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,violet] (0,-1) -- (0,0); \draw[ultra thick,violet] (0,0) -- (-1, 1); \draw[ultra thick,violet] (0,0) -- (1, 1); \draw[ultra thick,mygreen] (0,0)-- (0, 1); \draw[ultra thick,mygreen] (-1,-1) -- (0,0); \draw[ultra thick,mygreen] (1,-1) -- (0,0); \draw[ultra thick,orange] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy\, \biggr) . \end{align*} The relations with the adjacent colors exchanged are proved in the same way. \item Relation involving \emph{three adjacent colors}. We need to check the cases of three adjacent colors belonging to $\{d-2,d-1,0,1,2\}$. Starting with the case of $(0,1,d-1)$, we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s} \left(\rule{-.2 cm}{1.2cm}\right.
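% First diagram of the relation for the colors (0,1,d-1), together with its image under Ev_{r,s}.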
\xy (0,-2.5)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[ultra thick,blue] (0,-1) -- (0,1); \draw[ultra thick,blue] (0,1) -- (-1,2); \draw[ultra thick,blue] (0,1) -- (2,2); \draw[ultra thick,blue] (0,-1) -- (-2,-2)node[below] {\tiny $1$}; \draw[ultra thick,blue] (0,-1) -- (1,-2); \draw[ultra thick,violet] (0,1) -- (0,2); \draw[ultra thick,violet] (0,-1) -- (0,-2)node[below] {\tiny $0$}; \draw[ultra thick,violet] (0,1) -- (1,0); \draw[ultra thick,violet] (0,1) -- (-1,0); \draw[ultra thick,violet] (0,-1) -- (1,0); \draw[ultra thick,violet] (0,-1) -- (-1,0); \draw[ultra thick,violet] (-2,0) -- (-1,0); \draw[ultra thick,violet] (2,0) -- (1,0); % \draw[ultra thick,mygreen] (-1,0) -- (1,0); \draw[ultra thick,mygreen] (-1,0) -- (-1,-2); \draw[ultra thick,mygreen] (-1,0) -- (-2,2); \draw[ultra thick,mygreen] (1,0) -- (2,-2)node[below] {\tiny $d-1$}; \draw[ultra thick,mygreen] (1,0) -- (1,2); }}\endxy \left.\rule{-.5 cm}{1.2cm}\right) = \xy (0,0)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[ultra thick,mygreen] (-1,0) -- (1,0); \draw[ultra thick,blue] (0,-1.15) -- (0,1.15); \draw[ultra thick,orange] (-2,0) -- (-1,0); \draw[ultra thick,orange] (-2,0) -- (-2.4, .65); \draw[ultra thick,orange] (-2,0) -- (-2.4,-.65); \draw[ultra thick,orange] ( 2,0) -- ( 1,0); \draw[ultra thick,orange] ( 2,0) -- ( 2.4, .65); \draw[ultra thick,orange] ( 2,0) -- ( 2.4,-.65); \draw[ultra thick,mygreen] (-2,0) -- (-1, 1); \draw[ultra thick,mygreen] (-2,0) -- (-1,-1); \draw[ultra thick,mygreen] (-2,0) -- (-3.25,0); \draw[ultra thick,mygreen] ( 2,0) -- ( 1, 1); \draw[ultra thick,mygreen] ( 2,0) -- ( 1,-1); \draw[ultra thick,mygreen] ( 2,0) -- ( 3.25,0); \draw[ultra thick,blue] (-3.25,0) -- (-4.25,0); \draw[ultra thick,blue] ( 3.25,0) -- ( 4.25,0); \draw[ultra thick,myred] (0,1.15) -- (0,2.25); \draw[ultra thick,myred] (0,2.25) -- (-1,3); \draw[ultra thick,myred] (0,2.25) -- ( 1,3); \draw[ultra thick,myred] (0,-1.15) -- (0,-2.25); \draw[ultra thick,myred] (0,-2.25) -- (-1,-3); \draw[ultra thick,myred] (0,-2.25) -- ( 1,-3); \draw[ultra thick,blue] (0,2.25) -- (0,4); \draw[ultra thick,blue] (0,2.25) -- (-1,1); \draw[ultra thick,blue] (0,2.25) -- ( 1,1); \draw[ultra thick,blue] (0,-2.25) -- (0,-4); \draw[ultra thick,blue] (0,-2.25) -- (-1,-1); \draw[ultra thick,blue] (0,-2.25) -- ( 1,-1); \draw[ultra thick,blue] (1,3) to[out=30,in=200] (4,4); \draw[ultra thick,blue] (-1,-3) to[out=210,in=20] (-4,-4); \draw[ultra thick,mygreen] (2.4,.65) to[out=70,in=-90] (2.4,4); \draw[ultra thick,mygreen] (-2.4,-.65) to[out=250,in=90] (-2.4,-4); \draw[ultra thick,blue] (-1,3) to (-2.5,4); \draw[ultra thick,blue] ( 1,-3) to ( 2.5,-4); \draw[ultra thick,mygreen] (-2.4,.65) to[out=120,in=-70] (-4,4); \draw[ultra thick,mygreen] (2.4,-.65) to[out=-60,in=110] (4,-4); \draw[thick,double,violet,to-] (-1,-4) -- (-1,4); \draw[thick,double,violet,-to] ( 1,-4) -- ( 1,4); \draw[thick,double,violet,to-] (-4,1) to[out=-65,in=-115] (4,1); \draw[thick,double,violet,-to] (-4,-1) to[out=65,in=115] (4,-1); }}\endxy \intertext{and} \mathcal{E}v_{r,s}\left(\rule{0 cm}{1.2cm}\right. 
\xy (0,.05)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[ultra thick,mygreen] (0,-1) -- (0,1); \draw[ultra thick,mygreen] (0,1) -- (1,2); \draw[ultra thick,mygreen] (0,1) -- (-2,2); \draw[ultra thick,mygreen] (0,-1) -- (2,-2); \draw[ultra thick,mygreen] (0,-1) -- (-1,-2); \draw[ultra thick,violet] (0,1) -- (0,2); \draw[ultra thick,violet] (0,-1) -- (0,-2); \draw[ultra thick,violet] (0,1) -- (1,0); \draw[ultra thick,violet] (0,1) -- (-1,0); \draw[ultra thick,violet] (0,-1) -- (1,0); \draw[ultra thick,violet] (0,-1) -- (-1,0); \draw[ultra thick,violet] (-2,0) -- (-1,0); \draw[ultra thick,violet] (2,0) -- (1,0); % \draw[ultra thick,blue] (1,0) -- (-1,0); \draw[ultra thick,blue] (1,0) -- (1,-2); \draw[ultra thick,blue] (1,0) -- (2,2); \draw[ultra thick,blue] (-1,0) -- (-2,-2); \draw[ultra thick,blue] (-1,0) -- (-1,2); }}\endxy \left.\rule{0 cm}{1.2cm}\right) = \xy (0,0)*{ \tikzdiagc[scale=0.5,yscale=1,rotate=90]{ \draw[ultra thick,mygreen] (-1,0) -- (1,0); \draw[ultra thick,blue] (0,-1.15) -- (0,1.15); \draw[ultra thick,orange] (-2,0) -- (-1,0); \draw[ultra thick,orange] (-2,0) -- (-2.4, .65); \draw[ultra thick,orange] (-2,0) -- (-2.4,-.65); \draw[ultra thick,orange] ( 2,0) -- ( 1,0); \draw[ultra thick,orange] ( 2,0) -- ( 2.4, .65); \draw[ultra thick,orange] ( 2,0) -- ( 2.4,-.65); \draw[ultra thick,mygreen] (-2,0) -- (-1, 1); \draw[ultra thick,mygreen] (-2,0) -- (-1,-1); \draw[ultra thick,mygreen] (-2,0) -- (-3.25,0); \draw[ultra thick,mygreen] ( 2,0) -- ( 1, 1); \draw[ultra thick,mygreen] ( 2,0) -- ( 1,-1); \draw[ultra thick,mygreen] ( 2,0) -- ( 3.25,0); \draw[ultra thick,blue] (-3.25,0) -- (-4.25,0); \draw[ultra thick,blue] ( 3.25,0) -- ( 4.25,0); \draw[ultra thick,myred] (0,1.15) -- (0,2.25); \draw[ultra thick,myred] (0,2.25) -- (-1,3); \draw[ultra thick,myred] (0,2.25) -- ( 1,3); \draw[ultra thick,myred] (0,-1.15) -- (0,-2.25); \draw[ultra thick,myred] (0,-2.25) -- (-1,-3); \draw[ultra thick,myred] (0,-2.25) -- ( 1,-3); \draw[ultra thick,blue] (0,2.25) -- (0,4); \draw[ultra thick,blue] (0,2.25) -- (-1,1); \draw[ultra thick,blue] (0,2.25) -- ( 1,1); \draw[ultra thick,blue] (0,-2.25) -- (0,-4); \draw[ultra thick,blue] (0,-2.25) -- (-1,-1); \draw[ultra thick,blue] (0,-2.25) -- ( 1,-1); \draw[ultra thick,blue] (1,3) to[out=30,in=200] (4,4); \draw[ultra thick,blue] (-1,-3) to[out=210,in=20] (-4,-4); \draw[ultra thick,mygreen] (2.4,.65) to[out=70,in=-90] (2.4,4); \draw[ultra thick,mygreen] (-2.4,-.65) to[out=250,in=90] (-2.4,-4); \draw[ultra thick,blue] (-1,3) to (-2.5,4); \draw[ultra thick,blue] ( 1,-3) to ( 2.5,-4); \draw[ultra thick,mygreen] (-2.4,.65) to[out=120,in=-70] (-4,4); \draw[ultra thick,mygreen] (2.4,-.65) to[out=-60,in=110] (4,-4); \draw[thick,double,violet,to-] (-1,-4) -- (-1,4); \draw[thick,double,violet,-to] ( 1,-4) -- ( 1,4); \draw[thick,double,violet,to-] (-4,1) to[out=-65,in=-115] (4,1); \draw[thick,double,violet,-to] (-4,-1) to[out=65,in=115] (4,-1); }}\endxy \end{align*} \endgroup To prove that these are equal, first use the relations in~\fullref{lem:ReidIII-Rrho} and~\fullref{lem:hopefullythelastViolet} to write them in the form \[ \xy (0,0)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[densely dashed] (0,0) circle (2.7); \draw[thick,double,violet,to-] (-1,-4) to[out=155,in=-155] (-1,4); \draw[thick,double,violet,-to] ( 1,-4) to[out=25,in=-25] ( 1,4); \draw[thick,double,violet,-to] (-4,-1) to[out=-65,in=-115] (4,-1); \draw[thick,double,violet,to-] (-4,1) to[out=65,in=115] (4,1); \draw[ultra thick,mygreen] (-2.5,2.5) to (-3.5,3.5); \draw[ultra thick,mygreen] ( 2.5,2.5) to ( 3.5,3.5); 
\draw[ultra thick,mygreen] (-2.5,-2.5) to (-3.5,-3.5); \draw[ultra thick,mygreen] (2.5,-2.5) to (3.5,-3.5); \draw[ultra thick,blue] (-2,3.2) to (-2.75,4); \draw[ultra thick,blue] (3.2,2) to ( 4,2.75); \draw[ultra thick,blue] (2,-3.2) to (2.75,-4); \draw[ultra thick,blue] (-3.2,-2) to (-4,-2.75); \draw[ultra thick,blue] (-2.5, 2.5) to (-1.9, 1.9); \draw[ultra thick,blue] ( 2.5, 2.5) to ( 1.9, 1.9); \draw[ultra thick,blue] ( 2.5,-2.5) to ( 1.9,-1.9); \draw[ultra thick,blue] (-2.5,-2.5) to (-1.9,-1.9); \node[blue] at (-1.8,1.6) {\tiny $1$}; \draw[ultra thick,myred] (-2,3.2) to (-1.65,2.85); \draw[ultra thick,myred] (3.2,2) to (2.85,1.65); \draw[ultra thick,myred] (2,-3.2) to (1.65,-2.85); \draw[ultra thick,myred] (-3.2,-2) to (-2.85,-1.65); \draw[ultra thick,olive] (-1.65,2.85) to (-1.2,2.4); \draw[ultra thick,olive] (2.85,1.65) to (2.4,1.2); \draw[ultra thick,olive] (1.65,-2.85) to (1.2,-2.4); \draw[ultra thick,olive] (-2.85,-1.65) to (-2.4,-1.2); \node[olive] at (-1.1,2.1) {\tiny $3$}; \draw[ultra thick,blue] (-4.25,0) to (-3.15,0); \draw[ultra thick,myred] (-3.15,0) to (-2.7,0); \draw[ultra thick,blue] (4.25,0) to (3.15,0); \draw[ultra thick,myred] (3.15,0) to (2.7,0); \node[myred] at (-2.4,0) {\tiny $2$}; }}\endxy. \] Then observe that the parts of the diagrams inside the dashed circle are exactly as the two sides of~\eqref{eq:relhatSlast} with colors $(1, 2, 3)$, which completes the proof of this case. The remaining cases can be proved in similar ways, but they are actually a bit easier. For example, for the colors $(0,1,2)$ we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s} \left(\rule{-.2 cm}{1.2cm}\right. \xy (0,-2.5)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[ultra thick,myred] (0,-1) -- (0,1); \draw[ultra thick,myred] (0,1) -- (-1,2); \draw[ultra thick,myred] (0,1) -- (2,2); \draw[ultra thick,myred] (0,-1) -- (-2,-2)node[below] {\tiny $2$}; \draw[ultra thick,myred] (0,-1) -- (1,-2); \draw[ultra thick,blue] (0,1) -- (0,2); \draw[ultra thick,blue] (0,-1) -- (0,-2)node[below] {\tiny $1$}; \draw[ultra thick,blue] (0,1) -- (1,0); \draw[ultra thick,blue] (0,1) -- (-1,0); \draw[ultra thick,blue] (0,-1) -- (1,0); \draw[ultra thick,blue] (0,-1) -- (-1,0); \draw[ultra thick,blue] (-2,0) -- (-1,0); \draw[ultra thick,blue] (2,0) -- (1,0); % \draw[ultra thick,violet] (-1,0) -- (1,0); \draw[ultra thick,violet] (-1,0) -- (-1,-2); \draw[ultra thick,violet] (-1,0) -- (-2,2); \draw[ultra thick,violet] (1,0) -- (2,-2)node[below] {\tiny $0$}; \draw[ultra thick,violet] (1,0) -- (1,2); }}\endxy \left.\rule{-.2 cm}{1.2cm}\right) &= \xy (0,0)*{ \tikzdiagc[scale=.5,xscale=1.25,yscale=.9]{ \draw[ultra thick,olive] (0,-1) -- (0,1); \draw[ultra thick,olive] (1.85,2.8) to (3.4,3.2); \draw[ultra thick,olive] (-1.85,-2.8) to (-3.4,-3.2); \draw[ultra thick,myred] (0,1) -- (0,2.25); \draw[ultra thick,myred] (0,-1) -- (0,-2.25); \draw[ultra thick,myred] (0,2.25) to[out=135,in=-60] (-2,4); \draw[ultra thick,myred] (0,-2.25) to[out=-45,in=120] (2,-4); \draw[ultra thick,myred] (0,2.25) to (1.85,2.8); \draw[ultra thick,myred] (0,-2.25) to (-1.85,-2.8); \draw[ultra thick,myred] (3.4,3.2) to (4,3.4); \draw[ultra thick,myred] (-3.4,-3.2) to (-4,-3.4); \draw[ultra thick,myred] (-1.5,0) -- (-2.8,0); \draw[ultra thick,myred] (1.5,0) -- (2.8,0); \draw[ultra thick,myred] (-1.5,0) -- (-.7,1.2); \draw[ultra thick,myred] (-1.5,0) -- (-.7,-1.2); \draw[ultra thick,myred] ( 1.5,0) -- ( .7,1.2); \draw[ultra thick,myred] ( 1.5,0) -- ( .7,-1.2); \draw[ultra thick,blue] (-4,0) -- (-2.8,0); \draw[ultra thick,blue] ( 
4,0) -- ( 2.8,0); \draw[ultra thick,blue] (-1.5,0) -- (1.5,0); \draw[ultra thick,blue] (-1.5,0) to[out=130,in=-40] (-4,4); \draw[ultra thick,blue] ( 1.5,0) to[out=-60,in=135] (4,-4); \draw[ultra thick,blue] (-1.5,0) to[out=-120,in=80] (-2.75,-4); \draw[ultra thick,blue] ( 1.5,0) to[out=60,in=-100] ( 2.75, 4); \draw[ultra thick,blue] ( 0,-2.25) -- (-.7,-1.2); \draw[ultra thick,blue] ( 0,-2.25) -- ( .7,-1.2); \draw[ultra thick,blue] ( 0,2.25) -- (-.7,1.2); \draw[ultra thick,blue] ( 0,2.25) -- ( .7,1.2); \draw[ultra thick,blue] ( 0,-2.25) -- (0,-4); \draw[ultra thick,blue] ( 0,2.25) -- (0,4); \draw[thick,double,violet,to-] (-3.5,-4) to[out=80,in=-45] (-4,3); \draw[thick,double,violet,-to] ( 4,-3) to[out=135,in=-100] (3.5,4); \draw[thick,double,violet,to-] (-3,4) to[out=-65,in=180] (0,1) to[out=0,in=-90] (2,4); \draw[thick,double,violet,-to] (-2,-4) to[out=90,in=180] (0,-1) to[out=0,in=115] (3,-4); }}\endxy \intertext{and} \mathcal{E}v_{r,s}\left(\rule{0 cm}{1.2cm}\right. \xy (0,.05)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[ultra thick,violet] (0,-1) -- (0,1); \draw[ultra thick,violet] (0,1) -- (1,2); \draw[ultra thick,violet] (0,1) -- (-2,2); \draw[ultra thick,violet] (0,-1) -- (2,-2); \draw[ultra thick,violet] (0,-1) -- (-1,-2); \draw[ultra thick,blue] (0,1) -- (0,2); \draw[ultra thick,blue] (0,-1) -- (0,-2); \draw[ultra thick,blue] (0,1) -- (1,0); \draw[ultra thick,blue] (0,1) -- (-1,0); \draw[ultra thick,blue] (0,-1) -- (1,0); \draw[ultra thick,blue] (0,-1) -- (-1,0); \draw[ultra thick,blue] (-2,0) -- (-1,0); \draw[ultra thick,blue] (2,0) -- (1,0); % \draw[ultra thick,myred] (1,0) -- (-1,0); \draw[ultra thick,myred] (1,0) -- (1,-2); \draw[ultra thick,myred] (1,0) -- (2,2); \draw[ultra thick,myred] (-1,0) -- (-2,-2); \draw[ultra thick,myred] (-1,0) -- (-1,2); }}\endxy \left.\rule{0 cm}{1.2cm}\right) &=\ \xy (0,0)*{ \tikzdiagc[scale=.5,xscale=1.2,yscale=1.1,rotate=90]{ \draw[ultra thick,olive] (0,-1) -- (0,1); \draw[ultra thick,olive] (1.85,2.8) to (3.4,3.2); \draw[ultra thick,olive] (-1.85,-2.8) to (-3.4,-3.2); \draw[ultra thick,myred] (0,1) -- (0,2.25); \draw[ultra thick,myred] (0,-1) -- (0,-2.25); \draw[ultra thick,myred] (0,2.25) to[out=135,in=-60] (-2,4); \draw[ultra thick,myred] (0,-2.25) to[out=-45,in=120] (2,-4); \draw[ultra thick,myred] (0,2.25) to (1.85,2.8); \draw[ultra thick,myred] (0,-2.25) to (-1.85,-2.8); \draw[ultra thick,myred] (3.4,3.2) to (4,3.4); \draw[ultra thick,myred] (-3.4,-3.2) to (-4,-3.4); \draw[ultra thick,myred] (-1.5,0) -- (-2.8,0); \draw[ultra thick,myred] (1.5,0) -- (2.8,0); \draw[ultra thick,myred] (-1.5,0) -- (-.7,1.2); \draw[ultra thick,myred] (-1.5,0) -- (-.7,-1.2); \draw[ultra thick,myred] ( 1.5,0) -- ( .7,1.2); \draw[ultra thick,myred] ( 1.5,0) -- ( .7,-1.2); \draw[ultra thick,blue] (-4,0) -- (-2.8,0); \draw[ultra thick,blue] ( 4,0) -- ( 2.8,0); \draw[ultra thick,blue] (-1.5,0) -- (1.5,0); \draw[ultra thick,blue] (-1.5,0) to[out=130,in=-40] (-4,4); \draw[ultra thick,blue] ( 1.5,0) to[out=-60,in=135] (4,-4); \draw[ultra thick,blue] (-1.5,0) to[out=-120,in=80] (-2.75,-4); \draw[ultra thick,blue] ( 1.5,0) to[out=60,in=-100] ( 2.75, 4); \draw[ultra thick,blue] ( 0,-2.25) -- (-.7,-1.2); \draw[ultra thick,blue] ( 0,-2.25) -- ( .7,-1.2); \draw[ultra thick,blue] ( 0,2.25) -- (-.7,1.2); \draw[ultra thick,blue] ( 0,2.25) -- ( .7,1.2); \draw[ultra thick,blue] ( 0,-2.25) -- (0,-4); \draw[ultra thick,blue] ( 0,2.25) -- (0,4); \draw[thick,double,violet,to-] (-3.5,-4) to[out=80,in=-45] (-4,3); \draw[thick,double,violet,-to] ( 4,-3) to[out=135,in=-100] (3.5,4); 
\draw[thick,double,violet,to-] (-3,4) to[out=-65,in=180] (0,1) to[out=0,in=-90] (2,4); \draw[thick,double,violet,-to] (-2,-4) to[out=90,in=180] (0,-1) to[out=0,in=115] (3,-4); }}\endxy \end{align*} \endgroup Proceeding as in the previous case, but using the relations in~\fullref{lem:ReidIII-6v-violet} and~\fullref{lem:mixedRtwo}, results in two diagrams which differ only by parts that are equal to the two sides of~\eqref{eq:relhatSlast} with colors $(1, 2, 3)$ again. \item Relations involving \emph{oriented strands}. Relations~\eqref{eq:orloop} and~\eqref{eq:orinv} translate under $\mathcal{E}v_{r,s}$ into relations~\eqref{eq:loopRrho} and~\eqref{eq:Rrhoinvert}, respectively. The remaining relations~\eqref{eq:orthru4vertex} to~\eqref{eq:orslide6vertex} translate into relations~\eqref{eq:R3violet},~\eqref{eq:mixedRtwo},~\eqref{eq:Dslide-Xmixed},~\eqref{eq:mpitchfork} and~\eqref{eq:orthru6vertex} (together with some obvious relations in the usual (non-oriented) Soergel calculus), respectively, if they don't involve the color $0$. However, if one of the strands is colored $0$, then there is something to check. For each relation, we prove one case involving the colors $0$ and $1$ and one case involving the colors $0$ and $d-1$, the other cases being similar. \begin{itemize} \item For relation~\eqref{eq:orthru4vertex}, we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\Biggl( \xy (0,0)*{ \tikzdiagc[scale=.55]{ \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $1$} -- (.45,.45); \draw[ultra thick,violet] (.45,.45) -- (1,1)node[above]{\tiny $0$}; \draw[ultra thick,mygreen] (1,-1) -- (-.45,.45);\draw[ultra thick,orange] (-.45,.45) -- (-1,1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \Biggl) = \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,blue] (-1,-1) -- (1,1); \draw[ultra thick,mygreen] (1,-1) -- (-.45,.45);\draw[ultra thick,orange] (-.45,.45) -- (-1,1); \draw[thick,double,violet,to-] (-1,0) to[out=40,in=180] (0,.6) to[out=0,in=-135] (.5,1); \draw[thick,double,violet,to-] (1.25,.75) to[out=-135,in=135] (1.25,0); }}\endxy \overset{\eqref{eq:mixedRtwo}}{=} \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,blue] (-1,-1) -- (1,1); \draw[ultra thick,orange] (-.42,.42) -- (-1,1); \draw[ultra thick,mygreen] (1,-1) -- (.5,-.5); \draw[ultra thick,mygreen] (-.42,.42) -- (.2,-.2); \draw[ultra thick,orange] (.5,-.5) -- (.2,-.2); \draw[thick,double,violet,to-] (-1,0) to[out=20,in=180] (0,.65) to[out=0,in=-135] (.5,1); \draw[thick,double,violet,to-] (1.25,.75) to[out=-135,in=45] (0,-.4) to[out=-120,in=-120] (1.25,0); }}\endxy = \mathcal{E}v_{r,s}\Biggl( \xy (0,0)*{ \tikzdiagc[scale=.55]{ \draw[ultra thick,blue] (-1,-1) -- (-.45,-.45); \draw[ultra thick,violet] (-.45,-.45) -- (1,1); \draw[ultra thick,mygreen] (1,-1) -- (.45,-.45); \draw[ultra thick,orange] (.45,-.45) -- (-1,1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \Biggr) \intertext{and} \mathcal{E}v_{r,s}\Biggl( \xy (0,0)*{ \tikzdiagc[scale=.55]{ \draw[ultra thick,violet] (-1,-1)node[below]{\tiny $0$} -- (.45,.45); \draw[ultra thick,mygreen] (.45,.45) -- (1,1)node[above]{\tiny $d-1$}; \draw[ultra thick,teal] (1,-1) -- (-.45,.45);\draw[ultra thick,orange] (-.45,.45) -- (-1,1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. 
(1,0); }}\endxy \Biggl) = \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $1$} -- (.45,.45); \draw[ultra thick,mygreen] (.45,.45) -- (1,1)node[above]{\tiny $d-1$}; \draw[ultra thick,teal] (1,-1) -- (-.49,.49); \draw[ultra thick,orange] (-.49,.49) -- (-1,1); \draw[thick,double,violet,to-] (-1.2,0) to[out=45,in=135] (.45,.45) to[out=-45,in=45] (-.5,-1); \draw[thick,double,violet,to-] (-1,-.5) to[out=45,in=180] (.45,.45) to[out=0,in=160] (1.2,0); }}\endxy \overset{\eqref{eq:mixedRtwo},\eqref{eq:sixv-ReidIII}}{=} \xy (0,0)*{ \tikzdiagc[scale=.7]{ \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $1$} -- (-.45,-.45); \draw[ultra thick,mygreen] (-.45,-.45) -- (1,1)node[above]{\tiny $d-1$}; \draw[ultra thick,teal] (1,-1) -- (.45,-.45); \draw[ultra thick,orange] (.45,-.45) -- (-1,1); \draw[thick,double,violet,to-] (-1.2,0) to[out=0,in=135] (-.45,-.45) to[out=-45,in=95] (-.4,-1); \draw[thick,double,violet,to-] (-1.1,-.5) to[out=0,in=180] (-.45,-.45) to[out=0,in=-135] (1.2,0); }}\endxy = \mathcal{E}v_{r,s}\Biggl( \xy (0,0)*{ \tikzdiagc[scale=.55]{ \draw[ultra thick,violet] (-1,-1) -- (-.45,-.45); \draw[ultra thick,mygreen] (-.45,-.45) -- (1,1); \draw[ultra thick,teal] (1,-1) -- (.45,-.45); \draw[ultra thick,orange] (.45,-.45) -- (-1,1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \Biggr) \end{align*} \endgroup \item For relation~\eqref{eq:orReidII}, we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s} \left(\rule{-.05 cm}{1.2cm}\right. \xy (0,-2)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick, blue] (1,0)node[below]{\tiny $1$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,violet] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick, blue] (1,1) .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[ultra thick,black,to-] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); }}\endxy \left.\rule{-.15 cm}{1.2cm}\right) &= \xy (0,-2)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick, blue] (1,0)node[below]{\tiny $1$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,blue] (.5,.29) .. controls (.1,.4) and (.1,.6) .. (.5,.71); \draw[ultra thick,blue] (1,1) .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[thick,double,violet,to-] (0,0) to[out=60,in=-90] (0,.5) to[out=90,in=-60] (0,1); \draw[thick,double,violet] (.75,.5) ellipse (.28 cm and .15 cm); \draw[thick,double,violet,-to] (.47,.52) to (.47,.55); }}\endxy \overset{\eqref{eq:loopRrho}}{=} \mathcal{E}v_{r,s} \left(\rule{0 cm}{1.2cm}\right. \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick, blue] (1,0) -- (1,1); \draw[ultra thick,black,to-] (0,0) -- (0,1); }}\endxy \left.\rule{0 cm}{1.2cm}\right) \intertext{and} \mathcal{E}v_{r,s} \left(\rule{-.05 cm}{1.2cm}\right. \xy (0,-2)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick,violet] (1,0)node[below]{\tiny $0$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,mygreen] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick,violet] (1,1) .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[ultra thick,black,to-] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); }}\endxy \left.\rule{-.15 cm}{1.2cm}\right) &= \xy (0,-2)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick,blue] (1,0)node[below]{\tiny $1$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,mygreen] (.5,.29) ..
controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick,blue] (1,1) .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[thick,double,violet,to-] (.4,0) to[out=90,in=-90] (.6,.5) to[out=90,in=-90] (.4,1); \draw[thick,double,violet,to-] (-.5,0) to[out=60,in=180] (.5,.29) to[out=0,in=150] (1.5,0); \draw[thick,double,violet,-to] (-.5,1) to[out=-60,in=180] (.5,.71) to[out=0,in=-150] (1.5,1); }}\endxy \overset{\eqref{eq:Rrhoinvert}}{=}\ \ \xy (0,.2)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[thick,double,violet] (1,0) .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[thick,double,violet] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[thick,double,violet,to-] (1,1) .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \draw[thick,double,violet,to-] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); \draw[thick,double,violet,to-] (-.7,0) -- (-.7,1); \draw[ultra thick,blue] (.5,.71) -- (.5,1); \draw[ultra thick,blue] (.5,0) -- (.5,.29); \draw[ultra thick,mygreen] (.5,.29) -- (.5,.71); }}\endxy \overset{\eqref{eq:msixviso}}{=} \mathcal{E}v_{r,s} \left(\rule{0 cm}{1.2cm}\right. \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick,violet] (1,0) -- (1,1); \draw[ultra thick,black,to-] (0,0) -- (0,1); }}\endxy \left.\rule{0 cm}{1.2cm}\right) \end{align*} \endgroup \item For relation~\eqref{eq:dotrhuor}, we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\biggl( \xy (0,-2)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.3,-.3)node[below]{\tiny $1$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,violet] (-.5,.5) -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy \biggr) & = \xy (0,0)*{ \tikzdiagc[scale=1.2]{ \draw[ultra thick,blue] (.3,-.3) -- (-.5,.5)node[pos=0, tikzdot]{}; \draw[thick,double,violet,to-] (-.5,-.5) to[out=45,in=-45] (-.7,.3); \draw[thick,double,violet,to-] (-.3,.7) to[out=-45,in=-135] (.5,.5); }}\endxy = \xy (0,0)*{ \tikzdiagc[scale=1.2]{ \draw[ultra thick,blue] (-.3,.3) -- (-.5,.5)node[pos=0, tikzdot]{}; \draw[thick,double,violet,to-] (-.5,-.5) to[out=45,in=-45] (-.7,.3); \draw[thick,double,violet,to-] (-.3,.7) to[out=-45,in=-135] (.5,.5); }}\endxy \overset{\eqref{eq:Rrhoinvert}}{=} \xy (0,0)*{ \tikzdiagc[scale=1.2]{ \draw[ultra thick,blue] (-.3,.3) -- (-.5,.5)node[pos=0, tikzdot]{}; \draw[thick,double,violet,to-] (-.3,.7) to[out=-45,in=45] (-.15,.15) to[out=-135,in=-45] (-.7,.3); \draw[thick,double,violet,to-] (-.5,-.5) to[out=40,in=-130] (.5,.5); }}\endxy = \mathcal{E}v_{r,s}\biggl( \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,violet] (-.5,.5) -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy \biggr) \intertext{and} \mathcal{E}v_{r,s}\biggl( \xy (0,-2)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,violet] (.3,-.3)node[below]{\tiny $0$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,mygreen] (-.5,.5) -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy \biggr) &= \xy (0,1)*{ \tikzdiagc[scale=0.6]{ \draw[thick,double, violet] (0,0) to[out=225,in=90] (-1,-1) to[out=270,in=180] (0,-1.5) to[out=0,in=270] (1,-1) to[out=90,in=135] (0,0); \draw[thick,double, violet,-to] (0,0) .. controls (-1.5,1.5) and (-1.75,-.25) ..
(-2,-1); \draw[thick,double, violet] (0,0) to ( 1,1); \draw[ultra thick,blue] (0,-.5)node[below]{\tiny $1$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,mygreen] (0,0) to (0,1)node[above]{\tiny $d-1$}; }}\endxy \overset{\eqref{eq:loopRrho},\eqref{eq:msixvdot}}{=} \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,mygreen] (-.5,.5) -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[thick,double,violet,to-] (-.5,-.5) -- (.5,.5); }}\endxy = \mathcal{E}v_{r,s}\biggl( \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,mygreen] (-.5,.5) -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy \biggr) \end{align*} \endgroup \item For relation~\eqref{eq:orpitchfork}, we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\biggl( \xy (0,-2.5)*{ \tikzdiagc[yscale=-.5,xscale=.5]{ \draw[ultra thick,violet] (0,0)-- (0,.5); \draw[ultra thick,violet] (-1,-1) -- (0,0); \draw[ultra thick,violet] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,.5) -- (0,1)node[below]{\tiny $0$}; \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.7) and (.25,.7) .. (1,0); }}\endxy \biggr) &= \xy (0,-2.5)*{ \tikzdiagc[yscale=-.65,xscale=.65]{ \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,0) -- (0,1)node[below]{\tiny $1$}; \draw[thick,double,violet,to-] (-1.5,.25) to[out=20,in=90] (-.6,.25) to[out=-100,in=30] (-1.35,-.5); \draw[thick,double,violet,-to] (1.5,.25) to[out=-20,in=90] (.6,.25) to[out=-100,in=150] (1.35,-.5); \draw[thick,double,violet,to-] (-.5,-1.2) to[out=60,in=180] ( 0,-.6) to[out=0,in=120] (.5,-1.25); }}\endxy = \mathcal{E}v_{r,s}\biggl( \xy (0,0)*{ \tikzdiagc[yscale=-.5,xscale=.5]{ \draw[ultra thick,violet] (-1,-1) -- (-.45,-.45); \draw[ultra thick,violet] (1,-1) -- (.45,-.45); \draw[ultra thick,blue] (-.45,-.45)-- (0,0); \draw[ultra thick,blue] (.45,-.45)-- (0,0); \draw[ultra thick,blue] (0,0)-- (0,1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \biggr) \intertext{and} \mathcal{E}v_{r,s}\biggl( \xy (0,-2.5)*{ \tikzdiagc[yscale=-0.5,xscale=.5]{ \draw[ultra thick,mygreen] (0,0)-- (0,.5); \draw[ultra thick,mygreen] (-1,-1) -- (0,0); \draw[ultra thick,mygreen] (1,-1) -- (0,0); \draw[ultra thick,violet] (0,.5) -- (0,1)node[below]{\tiny $0$}; \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.7) and (.25,.7) .. 
(1,0); }}\endxy \biggr) &= \xy (0,-2.5)*{ \tikzdiagc[yscale=-.65,xscale=.65]{ \draw[ultra thick,mygreen] (0,0)-- (0,.5); \draw[ultra thick,mygreen] (-1,-1) -- (0,0); \draw[ultra thick,mygreen] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,.5) -- (0,1.2)node[below]{\tiny $1$}; \draw[thick,double,violet,to-] (-1.5,0) to[out=0,in=200] (0,.5) to[out=20,in=-90] (.7,1.2); \draw[thick,double,violet,to-] (-.7,1.2) to[out=-90,in=160] (0,.5) to[out=-20,in=185] (1.5,0); }}\endxy \overset{\eqref{eq:itwasnotthelast}}{=} \xy (0,-2.5)*{ \tikzdiagc[yscale=-.65,xscale=.65]{ \draw[ultra thick,mygreen] (-1.05,-1.2) -- (-.45,-.5); \draw[ultra thick,mygreen] (1.05,-1.2) -- (.45,-.5); \draw[ultra thick,blue] (0,0) -- (0,1)node[below]{\tiny $1$}; \draw[ultra thick,blue] (0,0) -- (-.45,-.5); \draw[ultra thick,blue] (0,0) -- ( .45,-.5); \draw[thick,double,violet,to-] (-1.5,0) to[out=-60,in=180] (-.6,-.5) to[in=180,out=0] (0,-.45) to[out=0,in=180] (.6,-.5) to[out=0,in=-120] (1.5,0); \draw[thick,double,violet,to-] (-.5,1) to[out=-90,in=180] ( 0,-.8) to[out=0,in=-90] (.5,1); }}\endxy = \mathcal{E}v_{r,s}\biggl( \xy (0,0)*{ \tikzdiagc[yscale=-0.5,xscale=.5]{ \draw[ultra thick,mygreen] (-1,-1) -- (-.45,-.45); \draw[ultra thick,mygreen] (1,-1) -- (.45,-.45); \draw[ultra thick,violet] (-.45,-.45)-- (0,0); \draw[ultra thick,violet] (.45,-.45)-- (0,0); \draw[ultra thick,violet] (0,0)-- (0,1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \biggr) \end{align*} \endgroup \item Relation~\eqref{eq:orslide6vertex} actually consists of two (similar) relations. For the first of them, we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\biggl( \!\! \xy (0,0)*{ \tikzdiagc[scale=.5]{ \draw[ultra thick,myred] (0,0)-- (0,.6); \draw[ultra thick,myred] (-1,-1)node[below]{\tiny $2$} -- (0,0); \draw[ultra thick,myred] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,.6) -- (0,1); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} -- (0,0); \draw[ultra thick,blue] (0,0) -- (-.45, .45); \draw[ultra thick,violet] (-.45,.45) -- (-1,1); \draw[ultra thick,blue] (0,0) -- (.45,.45); \draw[ultra thick,violet] (.45,.45) -- (1, 1)node[above]{\tiny $0$}; \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \!\! 
\biggr) &= \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,myred] (0,0)-- (0,.6); \draw[ultra thick,myred] (-1,-1) -- (0,0); \draw[ultra thick,myred] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,.6) -- (0,1.2); \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- (1,1); \draw[thick,double,violet,to-] (-1.35,-.2) to[out=10,in=270] (-.6,.1) to[out=90,in=-45] (-1.2,.7); \draw[thick,double,violet,-to] ( 1.35,-.2) to[out=170,in=270] ( .6,.1) to[out=90,in=-135] ( 1.2,.7); \draw[thick,double,violet,to-] (-.7,1.2) to[out=-45,in=180] (0,.55) to[out=0,in=-135] (.7,1.2); }}\endxy \overset{\eqref{eq:mixedRtwo}}{=} \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,myred] (0,0)-- (0,.6); \draw[ultra thick,myred] (-1,-1) -- (-.67,-.67); \draw[ultra thick,blue] (-.67,-.67) -- (-.33,-.33); \draw[ultra thick,myred] (-.33,-.33) -- (0,0); \draw[ultra thick,myred] (1,-1) -- (.67,-.67); \draw[ultra thick,blue] (.67,-.67) -- (.33,-.33); \draw[ultra thick,myred] (.33,-.33) -- (0,0); \draw[ultra thick,blue] (0,.6) -- (0,1.2); \draw[ultra thick,blue] (0,-1) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- (1,1); \draw[thick,double,violet,to-] (-1.35,-.2) to[out=-40,in=270] (-.3,-.6) to[out=90,in=-45] (-1.2,.7); \draw[thick,double,violet,-to] ( 1.35,-.2) to[out=-140,in=270] (.3,-.6) to[out=90,in=-135] ( 1.2,.7); \draw[thick,double,violet,to-] (-.7,1.2) to[out=-45,in=180] (0,.55) to[out=0,in=-135] (.7,1.2); }}\endxy = \mathcal{E}v_{r,s}\biggl( \xy (0,0)*{ \tikzdiagc[scale=.5]{ \draw[ultra thick,myred] (-1,-1) -- (-.45,-.45); \draw[ultra thick,myred] (1,-1) -- (.45,-.45); \draw[ultra thick,blue] (0,-1) -- (0,-.6); \draw[ultra thick,violet] (0,-.6) -- (0,0); \draw[ultra thick,violet] (0,0) -- (-1,1); \draw[ultra thick,violet] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0) -- (0,1); \draw[ultra thick,blue] (-.45,-.45) -- (0,0); \draw[ultra thick,blue] (.45,-.45) -- (0,0); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \biggr) \intertext{and} \mathcal{E}v_{r,s}\biggl( \!\! \xy (0,0)*{ \tikzdiagc[scale=.5]{ \draw[ultra thick,blue] (0,0)-- (0,.6); \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $1$} -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,violet] (0,.6) -- (0,1); \draw[ultra thick,violet] (0,-1)node[below]{\tiny $0$} -- (0,0); \draw[ultra thick,violet] (0,0) -- (-.45, .45); \draw[ultra thick,mygreen] (-.45,.45) -- (-1,1); \draw[ultra thick,violet] (0,0) -- (.45,.45); \draw[ultra thick,mygreen] (.45,.45) -- (1, 1)node[above]{\tiny $d-1$}; \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \!\!\!\!\! 
\biggr) &= \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,blue] (0,.6) -- (0,1.2); \draw[ultra thick,blue] (0,-1.2) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-.45, .45); \draw[ultra thick,mygreen] (-.45,.45) -- (-1,1); \draw[ultra thick,blue] (0,0) -- (.45,.45); \draw[ultra thick,mygreen] (.45,.45) -- (1, 1); \draw[ultra thick,blue] (-1,-1) -- (-.45,-.45); \draw[ultra thick,blue] (1,-1) -- (.45,-.45); \draw[ultra thick,myred] (-.45,-.45) -- (0,0); \draw[ultra thick,myred] (0,0)-- (0,.6); \draw[ultra thick,myred] (.45,-.45) -- (0,0); \draw[thick,double,violet,to-] (-.4,-1.2) to[out=90,in=-90] (-.5,0) to[out=90,in=-90] (-.4,1.2); \draw[thick,double,violet,-to] (.4,-1.2) to[out=90,in=-90] (.5,0) to[out=90,in=-90] (.4,1.2); \draw[thick,double,violet,to-] (-1.2,0) to[out=10,in=180] (0,.62) to[out=0,in=170] (1.2,0); }}\endxy \overset{\eqref{eq:msixviso}}{=} \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,blue] (0,-1.2) -- (0,-.8); \draw[ultra thick,mygreen] (0,-.8) -- (0,.2); \draw[ultra thick,blue] (0,.2) -- (0,.6); \draw[ultra thick,blue] (0,.9) -- (0,1.2); \draw[ultra thick,blue] (0,.6) -- (-.55,.65); \draw[ultra thick,blue] (0,.6) -- (.55,.65); \draw[ultra thick,myred] (0,.6) -- (0,.9); \draw[ultra thick,myred] (0,.6) -- (-.32,.39); \draw[ultra thick,myred] (0,.6) -- ( .32,.39); \draw[ultra thick,mygreen] (-.55,.65) to[out=180,in=-40] (-1.2,1); \draw[ultra thick,mygreen] (.55,.65) to[out=0,in=220] (1.2,1); \draw[ultra thick,blue] (-.32,.39) to[out=-135,in=70] (-1.2,-1); \draw[ultra thick,blue] ( .32,.39) to[out=-45,in=110] ( 1.2,-1); \draw[thick,double,violet,to-] (-.4,-1.2) to[out=60,in=-90] (.35,-.3) to[out=90,in=-90] (-.6,.85) to[out=90,in=-90] (-.6,1.2); \draw[thick,double,violet,-to] (.4,-1.2) to[out=120,in=-90] (-.35,-.3) to[out=90,in=-90] (.6,.85) to[out=90,in=-90] (.6,1.2); \draw[thick,double,violet,to-] (-1.2,0) to[out=40,in=180] (0,.9) to[out=0,in=140] (1.2,0); }}\endxy \overset{\eqref{eq:ReidIII-Rrho}}{=} \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,blue] (0,-1.2) -- (0,-.8); \draw[ultra thick,mygreen] (0,-.8) -- (0,-.29); \draw[ultra thick,blue] (-.48,-.23) to[out=-135,in=50] (-1.2,-1); \draw[ultra thick,blue] ( .48,-.23) to[out=-45,in=130] ( 1.2,-1); \draw[ultra thick,blue] (0,.6) -- (0,1.2); \draw[ultra thick,mygreen] (-.35,.25) to[out=150,in=-45] (-1.2,1); \draw[ultra thick,mygreen] (.35,.25) to[out=30,in=-135] (1.2,1); \draw[ultra thick,mygreen] (0,.1) -- (0,.6); \draw[ultra thick,mygreen] (-.5,-.25) -- (0,.1); \draw[ultra thick,mygreen] ( .5,-.25) -- (0,.1); \draw[ultra thick,orange] (0,-.32) -- (0,.1); \draw[ultra thick,orange] (0,.1) -- (-.35,.25); \draw[ultra thick,orange] (0,.1) -- (.35,.25); \draw[thick,double,violet,to-] (-.4,-1.2) to[out=60,in=-90] (.5,-.2) to[out=90,in=-70] (-.4,1.2); \draw[thick,double,violet,-to] (.4,-1.2) to[out=120,in=-90] (-.5,-.2) to[out=90,in=250] (.4,1.2); \draw[thick,double,violet,to-] (-1.2,0) to[out=-20,in=180] (0,-.3) to[out=0,in=200] (1.2,0); }}\endxy = \mathcal{E}v_{r,s}\biggl( \xy (0,0)*{ \tikzdiagc[scale=.5]{ \draw[ultra thick,blue] (-1,-1) -- (-.45,-.45); \draw[ultra thick,blue] (1,-1) -- (.45,-.45); \draw[ultra thick,violet] (0,-1) -- (0,-.6); \draw[ultra thick,mygreen] (0,-.6) -- (0,0); \draw[ultra thick,mygreen] (0,0) -- (-1,1); \draw[ultra thick,mygreen] (0,0) -- (1, 1); \draw[ultra thick,violet] (0,0) -- (0,1); \draw[ultra thick,violet] (-.45,-.45) -- (0,0); \draw[ultra thick,violet] (.45,-.45) -- (0,0); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. 
(1,0); }}\endxy \biggr) \end{align*} \endgroup To check this relation with colors $(d-2,d-1,0)$, use~\eqref{eq:mixedRtwo} and~\eqref{eq:msixviso}. For the second relation in~\eqref{eq:orslide6vertex}, we have \begingroup\allowdisplaybreaks \begin{align*} \mathcal{E}v_{r,s}\biggl( \!\! \xy (0,0)*{ \tikzdiagc[scale=.5]{ \draw[ultra thick,blue] (0,0)-- (0,.6); \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $1$} -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,violet] (0,.6) -- (0,1)node[above]{\tiny $0$}; \draw[ultra thick,myred] (0,-1)node[below]{\tiny $2$} -- (0,0); \draw[ultra thick,myred] (0,0) -- (-.45, .45); \draw[ultra thick,blue] (-.45,.45) -- (-1,1); \draw[ultra thick,myred] (0,0) -- (.45,.45); \draw[ultra thick,blue] (.45,.45) -- (1, 1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \biggr) &= \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,blue] (0,0)-- (0,.6); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,.6) -- (0,1.2); \draw[ultra thick,myred] (0,-1.2) -- (0,0); \draw[ultra thick,myred] (0,0) -- (-.45, .45); \draw[ultra thick,blue] (-.45,.45) -- (-1,1); \draw[ultra thick,myred] (0,0) -- (.45,.45); \draw[ultra thick,blue] (.45,.45) -- (1, 1); \draw[thick,double,violet,to-] (-1.2,0) to[out=20,in=210] (-.6,.25) to[out=30,in=-90] (-.4,1.2); \draw[thick,double,violet,-to] (1.2,0) to[out=160,in=-30] (.6,.25) to[out=150,in=-90] (.4,1.2); }}\endxy \overset{\eqref{eq:loopRrho},\eqref{eq:mixedRtwo}}{=} \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,blue] (0,0)-- (0,.6); \draw[ultra thick,blue] (-1.2,-1) -- (0,0); \draw[ultra thick,blue] (1.2,-1) -- (0,0); \draw[ultra thick,blue] (0,.6) -- (0,1.2); \draw[ultra thick,myred] (0,0) -- (-.45, .45); \draw[ultra thick,blue] (-.45,.45) -- (-1,1); \draw[ultra thick,myred] (0,0) -- (.45,.45); \draw[ultra thick,blue] (.45,.45) -- (1, 1); \draw[ultra thick,myred] (0,-1.2) -- (0,-1); \draw[ultra thick,myred] (0,-.4) -- (0,0); \draw[ultra thick,blue] (0,-.4) -- (0,-1); \draw[thick,double,violet,to-] (-1.2,0) to[out=-20,in=180] (-.7,-.15) to[out=0,in=-100] (-.45,.45) to[out=80,in=-90] (-.4,1.2); \draw[thick,double,violet,-to] (1.2,0) to[out=200,in=0] (.7,-.15) to[out=180,in=-80] (.45,.45) to[out=100,in=-90] (.4,1.2); \draw[thick,double,violet,to-] (0,-.7) ellipse (.3 cm and .32 cm); \draw[thick,double,violet,to-] (-.3,-.58) to (-.3,-.6); }}\endxy = \mathcal{E}v_{r,s}\biggl( \xy (0,0)*{ \tikzdiagc[scale=.5]{ \draw[ultra thick,blue] (-1,-1) -- (-.45,-.45); \draw[ultra thick,blue] (1,-1) -- (.45,-.45); \draw[ultra thick,myred] (0,-1) -- (0,-.6); \draw[ultra thick,blue] (0,-.6) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- (1, 1); \draw[ultra thick,violet] (0,0) -- (0,1); \draw[ultra thick,violet] (-.45,-.45) -- (0,0); \draw[ultra thick,violet] (.45,-.45) -- (0,0); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \biggr) \intertext{and} \mathcal{E}v_{r,s}\biggl( \!\!
\xy (0,0)*{ \tikzdiagc[scale=.5]{ \draw[ultra thick,violet] (0,0)-- (0,.6); \draw[ultra thick,violet] (-1,-1)node[below]{\tiny $0$} -- (0,0); \draw[ultra thick,violet] (1,-1) -- (0,0); \draw[ultra thick,mygreen] (0,.6) -- (0,1)node[above]{\tiny $d-1$}; \draw[ultra thick,blue] (0,-1)node[below]{\tiny $1$} -- (0,0); \draw[ultra thick,blue] (0,0) -- (-.45, .45); \draw[ultra thick,violet] (-.45,.45) -- (-1,1); \draw[ultra thick,blue] (0,0) -- (.45,.45); \draw[ultra thick,violet] (.45,.45) -- (1, 1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \biggr) &= \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,mygreen] (0,.6) -- (0,1.2); \draw[ultra thick,blue] (-.45,.45) -- (-1,1); \draw[ultra thick,blue] (.45,.45) -- (1, 1); \draw[ultra thick,blue] (0,-1.2) -- (0,-.5); \draw[ultra thick,blue] (-1.2,-1) -- (0,0); \draw[ultra thick,blue] (1.2,-1) -- (0,0); \draw[ultra thick,blue] (0,0)-- (0,.6); \draw[ultra thick,myred] (0,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (-.45, .45); \draw[ultra thick,myred] (0,0) -- (.45,.45); \draw[thick,double,violet,-to] (-.8,-1.2) to[out=45,in=180] (0,-.5) to[out=0,in=135] (.8,-1.2); \draw[thick,double,violet,to-] (-1.2,-.6) to[out=45,in=-135] (-.45,.45) to[out=30,in=180] (0,.6) to[out=0,in=-135] (.8,1.2); \draw[thick,double,violet,-to] (1.2,-.6) to[out=135,in=-45] (.45,.45) to[out=150,in=0] (0,.6) to[out=180,in=-45] (-.8,1.2); \draw[thick,double,violet,to-] (-1.2,.1) to[out=30,in=-90] (-.9,.45) to[out=90,in=-30] (-1.2,.7); \draw[thick,double,violet,-to] (1.2,.1) to[out=150,in=-90] (.9,.45) to[out=90,in=210] (1.2,.7); }}\endxy \overset{\eqref{eq:Rrhoinvert},\eqref{eq:msixviso}}{=} \xy (0,0)*{ \tikzdiagc[scale=.85]{ \draw[ultra thick,mygreen] (0,.6) -- (0,1.2); \draw[ultra thick,blue] (-.45,.45) -- (-1,1); \draw[ultra thick,blue] (.45,.45) -- (1, 1); \draw[ultra thick,blue] (0,-1.2) -- (0,-.5); \draw[ultra thick,blue] (0,0)-- (0,.6); \draw[ultra thick,blue] (-.4,-.32) -- (0,0); \draw[ultra thick,mygreen] (-.9,-.75) -- (-.4,-.32); \draw[ultra thick,blue] (-1.2,-1) -- (-.9,-.75); \draw[ultra thick,blue] (.4,-.32) -- (0,0); \draw[ultra thick,mygreen] (.9,-.75) -- (.4,-.32); \draw[ultra thick,blue] (1.2,-1) -- (.9,-.75); \draw[ultra thick,myred] (0,-.5) -- (0,0); \draw[ultra thick,myred] (0,0) -- (-.45, .45); \draw[ultra thick,myred] (0,0) -- (.45,.45); \draw[thick,double,violet,-to] (-.7,-1.2) to[out=120,in=-60] (-.9,-.75) to[out=120,in=-40] (-1.4,0); \draw[thick,double,violet,-to] (.7,-1.2) to[out=60,in=240] (.9,-.75) to[out=60,in=220] (1.4,0); \draw[thick,double,violet,to-] (-1.4,-.6) to[out=-30,in=160] (-.9,-.75) to[out=-20,in=-135] (-.45,-.7) to[out=45,in=-90] (-.4,-.32) to[out=90,in=-135] (-.45,.45) to[out=45,in=180] (0,.6) to[out=0,in=-120] (0.5,1.2); \draw[thick,double,violet,to-] (1.4,-.6) to[out=210,in=20] (.9,-.75) to[out=200,in=-45] (.45,-.7) to[out=135,in=-90] (.4,-.32) to[out=90,in=-45] (.45,.45) to[out=135,in=0] (0,.6) to[out=180,in=-60] (-0.5,1.2); \draw[thick,double,violet,-to] (-1.2,.8) to[out=-45,in=150] (-.4,-.32) to[out=-30,in=180] (0,-.5) to[out=0,in=-135] (.4,-.32) to (1.2,.8); }}\endxy \overset{\eqref{eq:ReidIII-Rrho}}{=} \mathcal{E}v_{r,s}\biggl( \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,violet] (-1,-1) -- (-.45,-.45); \draw[ultra thick,violet] (1,-1) -- (.45,-.45); \draw[ultra thick,blue] (0,-1) -- (0,-.6); \draw[ultra thick,violet] (0,-.6) -- (0,0); \draw[ultra thick,violet] (0,0) -- (-1,1); \draw[ultra thick,violet] (0,0) -- (1, 1); \draw[ultra thick,mygreen] (0,0) -- 
(0,1); \draw[ultra thick,mygreen] (-.45,-.45) -- (0,0); \draw[ultra thick,mygreen] (.45,-.45) -- (0,0); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \biggr) \end{align*} \endgroup Checking the relation with colors $(d-2,d-1,0)$ uses~\eqref{eq:msixviso},~\eqref{eq:mixedRtwo} and~\eqref{eq:Rrhoinvert}. \end{itemize} \end{itemize} This ends the proof of~\fullref{thm:ueva-functor}. \end{proof} \begin{rem} \begin{enumerate} \item The functor $\mathcal{E}v_{r,s}$ is not full: For example, the special Rouquier complex $\mathcal{E}v_{r,s}(\mathrm{B}_{\rho}^{-1})= \mathrm{T}_{\rho}^{-1}\langle -r\rangle [-s]$ has the form \[ \dotsm \to \mathrm{B}_{d-1}\dotsm \mathrm{B}_1\langle -r\rangle [-s] \to 0. \] Therefore, there is an obvious (non-null-homotopic) map from $\mathrm{B}_{d-1}\dotsm \mathrm{B}_1\langle -r\rangle [-s]$ to $\mathrm{T}_{\rho}^{-1}\langle -r\rangle [-s]$, which is the identity on $\mathrm{B}_{d-1}\dotsm \mathrm{B}_1\langle -r\rangle [-s]$ and zero elsewhere, but this map is not in the image of $\mathcal{E}v_{r,s}$. Note also that $\mathcal{E}v_{r,s}$ induces a functor ${\EuScript{K}^b}( \widehat{\EuScript{S}}^{\mathrm{ext}}_d )\to{\EuScript{K}^b}(\EuScript{S}_d)$, denoted by the same symbol, and that \[ {\EuScript{K}^b}(\widehat{\EuScript{S}}^{\mathrm{ext}}_d)\left(\textcolor{violet}{\rT_{\rho}^{-1}}, \mathrm{B}_{\rho}^{-1}\right)=0, \] whereas \[ {\EuScript{K}^b}(\EuScript{S}_d)\left(\mathcal{E}v_{r,s}(\textcolor{violet}{\rT_{\rho}^{-1}}),\mathcal{E}v_{r,s}(\mathrm{B}_{\rho}^{-1})\right)= {\EuScript{K}^b}(\EuScript{S}_d)\left(\textcolor{violet}{\rT_{\rho}^{-1}},\textcolor{violet}{\rT_{\rho}^{-1}}\right)\cong R^\prime, \] where $R^\prime$ is the polynomial ring in the $i$-colored dumbbells with $i\in \{1, \ldots, d-1\}$, i.e. the endomorphism ring of the identity object for finite type $A_{d-1}$. \item By~\eqref{eq:orientedcoloredgens1} and~\eqref{eq:mixeddumbbellslide2}, the evaluation functor $\mathcal{E}v_{r,s}$ maps the central morphism \[ \sum_{i=0}^{d-1}\ \xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=-.5,xscale=.5] \draw[ultra thick,blue] (-1,-.6) -- (-1, .6)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{} node at (-0.5,0) {$i$} ; \end{scope} }}\endxy \] to zero. We could have defined $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$ over the polynomial ring $\mathbb{C}[y,x_1,\ldots,x_{d-1}]$ as in \cite{mackaay-thiel} and extended $\mathcal{E}v_{r,s}$ to that ``base ring''. In that case, the central morphism $\fbox{y}$ (which is equal to the above dumbbell sum, as already remarked) would be sent to zero by the evaluation functor, which makes perfect sense as the extended base ring of $\EuScript{BS}_d$ would be $\mathbb{C}[x_1,\ldots,x_d]$. \end{enumerate} \end{rem} \section{Reminders on Soergel categories}\label{sec:soergel} In this section we briefly recall the definition of the diagrammatic Soergel category of non-extended and extended affine type $A$ and finite type $A$, but before we do that we start with a brief subsection on graded categories and categories with shift. \subsection{Graded categories and categories with shift}\label{sec:graded-and-shifted-stuff} All categories in this paper are assumed to be essentially small, meaning that they are equivalent to small categories, so set-theoretic questions play no role.
We call a $\mathbb{C}$-linear category $\mathcal{A}$ {\em graded} if it is enriched over the category of $\mathbb{Z}$-graded vector spaces, and we call a $\mathbb{C}$-linear functor between such graded categories {\em degree-preserving} if it preserves the degrees of homogeneous morphisms. We say that a $\mathbb{C}$-linear category $\mathcal{A}$ has a {\em shift} (or, alternatively, that it is a {\em category with shift}) if there is a $\mathbb{C}$-linear automorphism $\langle 1\rangle$ of $\mathcal{A}$. If such a shift exists, we define $\langle r\rangle$ as the composite of $r$ copies of $\langle 1\rangle$ for any $r\in \mathbb{Z}_{\geq 0}$, and $-r$ copies of the inverse of $\langle 1\rangle$ for any $r\in \mathbb{Z}_{\leq 0}$. By definition, therefore, we have $\langle r+s\rangle =\langle r\rangle\langle s \rangle=\langle s\rangle\langle r \rangle$, for all $r,s\in \mathbb{Z}$, and $\langle 0\rangle=\mathrm{Id}_{\mathcal{A}}$. Given a graded category $\mathcal{A}$, let $\mathcal{A}^{\mathrm{sh}}$ be the associated $\mathbb{C}$-linear category with shift, whose objects are formal integer shifts of objects in $\mathcal{A}$ and whose hom-spaces are defined by \[ \mathcal{A}^{\mathrm{sh}}\left(X\langle r\rangle, Y\langle s\rangle\right):=\mathcal{A}\left(X,Y\right)_{s-r} \] for every $X,Y\in \mathcal{A}$ and $r,s\in \mathbb{Z}$. Note that $\mathcal{A}^{\mathrm{sh}}$ is no longer a graded category. If the hom-spaces of $\mathcal{A}$ are finite-dimensional in every degree, then the hom-spaces of $\mathcal{A}^{\mathrm{sh}}$ are finite-dimensional. Given two graded categories $\mathcal{A}$ and $\mathcal{B}$, any degree-preserving, $\mathbb{C}$-linear functor $F\colon \mathcal{A}\to \mathcal{B}$ induces a unique $\mathbb{C}$-linear functor $F\colon \mathcal{A}^{\mathrm{sh}}\to \mathcal{B}^{\mathrm{sh}}$, denoted by the same symbol, which commutes with the shifts. Conversely, given any $\mathbb{C}$-linear category $\mathcal{A}$ with shift, let $\mathcal{A}^{\mathrm{gr}}$ be the associated graded category, whose objects are those of $\mathcal{A}$ and whose graded hom-spaces are defined by \[ \mathcal{A}^{\mathrm{gr}}\left(X,Y\right):=\bigoplus_{s\in \mathbb{Z}}\mathcal{A}\left(X,Y\langle s\rangle\right), \] for any $X,Y\in \mathcal{A}$. Note that $\mathcal{A}^{\mathrm{gr}}$ is graded and has a shift, and, moreover, that $X\cong X\langle t\rangle$ for all $X\in \mathcal{A}$ and $t\in \mathbb{Z}$. Given two $\mathbb{C}$-linear categories $\mathcal{A}$ and $\mathcal{B}$ with shifts, any $\mathbb{C}$-linear functor $F\colon \mathcal{A}\to \mathcal{B}$ commuting with the shifts induces a unique degree-preserving, $\mathbb{C}$-linear functor $F\colon \mathcal{A}^{\mathrm{gr}}\to \mathcal{B}^{\mathrm{gr}}$, denoted by the same symbol. Thus $(-)^{\mathrm{sh}}$ and $(-)^{\mathrm{gr}}$ define a pair of $2$-functors between the $2$-category of graded categories and the $2$-category of $\mathbb{C}$-linear categories with shift. It is not hard to show, see e.g.~\cite[Proposition 11.9]{e-m-t-w}, that $(-)^{\mathrm{sh}}$ is left adjoint to $(-)^{\mathrm{gr}}$, i.e., that there is a functorial isomorphism \[ \mathrm{Fun}\left(\mathcal{A}^{\mathrm{sh}}, \mathcal{B}\right)\cong \mathrm{Fun}\left(\mathcal{A}, \mathcal{B}^{\mathrm{gr}}\right) \] for $\mathcal{A}$ a graded category and $\mathcal{B}$ a $\mathbb{C}$-linear category with shift. Here the first functor category is between categories with shift and the second between graded categories.
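To illustrate the two constructions with a small example (which follows immediately from the definitions above), for objects $X,Y$ of a graded category $\mathcal{A}$ we have \[ \mathcal{A}^{\mathrm{sh}}\left(X\langle 1\rangle, Y\langle 3\rangle\right)=\mathcal{A}\left(X,Y\right)_{2} \qquad\text{and}\qquad \mathcal{A}^{\mathrm{sh}}\left(X\langle 3\rangle, Y\langle 1\rangle\right)=\mathcal{A}\left(X,Y\right)_{-2}, \] so every homogeneous morphism in $\mathcal{A}$ becomes degree-preserving in $\mathcal{A}^{\mathrm{sh}}$ after a suitable shift of its source or target. In the opposite direction, for a category $\mathcal{A}$ with shift, the isomorphism $X\cong X\langle t\rangle$ in $\mathcal{A}^{\mathrm{gr}}$ mentioned above is induced by $\mathrm{id}_{X}$, viewed as an element of the summand $\mathcal{A}\left(X, X\langle t\rangle\langle -t\rangle\right)=\mathcal{A}\left(X,X\right)$ of $\mathcal{A}^{\mathrm{gr}}\left(X, X\langle t\rangle\right)$.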
For more details on graded categories and categories with shift, and also on additive closures and idempotent completions (a.k.a. Karoubi closures/envelopes), see e.g.~\cite[Sections 11.2.1--11.2.4]{e-m-t-w}. \subsection{Soergel calculus in finite and non-extended affine type A}\label{sec:soergeldiagrammatics-one} The finite type $A$ diagrammatic Soergel calculus was introduced by Elias--Khovanov~\cite{elias-khovanov} and generalized to all Coxeter types by Elias--Williamson~\cite{elias-williamson-2}. The extended affine Soergel calculus was first defined in~\cite{mackaay-thiel} and studied more systematically in~\cite{elias2018}. We refer to the latter two papers for more details. For the specialists, we remark that we use the so-called {\em root span realization} of the Cartan datum of finite and affine type A below. Denote by $S=\{s_0,\ldots,s_{d-1}\}$ the set of simple reflections of $\widehat{\mathfrak{S}}_d$. The \emph{diagrammatic Bott-Samelson category} of type $\widehat{A}_{d-1}$, denoted $\widehat{\EuScript{BS}}_d$, is the $\mathbb{Z}$-graded, $\mathbb{C}$-linear, additive, monoidal category whose objects are formal finite direct sums of finite words in the alphabet $S$, and whose graded vector spaces of morphisms are defined below in terms of homogeneous generating diagrams and relations. In general, we can write the objects as vectors of words and morphisms as matrices of equivalence classes of diagrams. As usual, we will color the strands to facilitate the reading of the diagrams. These colors correspond to the elements of $\mathbb{Z}/d\mathbb{Z}$, so henceforth we will also refer to those elements as colors. When there are too many different colors in a diagram, the colors are sometimes indicated by labels next to the strands. We say that two colors $i,j\in \mathbb{Z}/d\mathbb{Z}$ are {\em adjacent} if $i=j\pm 1\bmod d$ and that they are {\em distant} otherwise. The generating diagrams are \begin{equation*} \xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=-.5,xscale=.5,shift={(5,-2)}] \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=0, tikzdot]{}; \end{scope} \begin{scope}[yscale=.5,xscale=.5,shift={(8,2)}] \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \end{scope} \begin{scope}[scale=1,shift={(6,1)}] \draw[ultra thick,blue] (.5,-.5) -- (-.5,.5); \draw[ultra thick,mygreen] (-.5,-.5) -- (.5,.5); \end{scope} \begin{scope}[scale=.5,shift={(16,2)}] \draw[ultra thick,myred] (0,-1) -- (0,0);\draw[ultra thick,myred] (0,0) -- (-1, 1);\draw[ultra thick,myred] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \end{scope} \node at (0,0) {Degree}; \node at (2,0) {$1$}; \node at (4,0) {$-1$}; \node at (6,0) {$0$}; \node at (8,0) {$0$}; }}\endxy \end{equation*} and the diagrams obtained from these by a rotation of $180$ degrees (which have the same degrees). The colors of the 4-valent vertices are assumed to be distant, whereas those of the 6-valent vertices are assumed to be adjacent. Diagrams can be stacked vertically (composition of morphisms) and juxtaposed horizontally (monoidal product of morphisms), while adding the degrees, and are subject to the relations below. We denote by $\id_{X}$ the identity morphism of $X$ and write $fg$ for the monoidal product of morphisms $f$ and $g$ (or, equivalently, horizontal composition when considering the monoidal category as a one-object bicategory).
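To fix ideas, here is a small example of these conventions: for $d=5$, the colors $0$ and $4$ are adjacent, since $4\equiv 0-1 \bmod 5$, whereas the colors $0$ and $2$ are distant. Moreover, since degrees are additive under both vertical and horizontal composition, stacking a dot (degree $1$) on the single upper strand of a trivalent vertex (degree $-1$) yields a morphism of total degree $0$.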
We also assume isotopy invariance and cyclicity, meaning that closed parts of the diagrams can be moved around freely in the plane as long as they do not cross any other strands and the boundary is fixed, and all diagrams can be bent and rotated and the bent and rotated versions of the relations also hold. \begin{itemize} \item Relations involving one color: \begingroup\allowdisplaybreaks \begin{gather}\label{eq:relhatSfirst} \xy (0,0)*{ \tikzdiagc[scale=.4,yscale=.7]{ \draw[ultra thick,blue] (0,-1.75) -- (0,1.75); \draw[ultra thick,blue] (0,0) -- (1,0)node[pos=1, tikzdot]{}; }}\endxy =\ \xy (0,0)*{ \tikzdiagc[scale=.4,yscale=.7]{ \draw[ultra thick,blue] (0,-1.75) -- (0,1.75); }}\endxy \\[1ex] \label{eq:relhatSsecond} \xy (0,.05)*{ \tikzdiagc[scale=.4,yscale=.7]{ \draw[ultra thick,blue] (0,-1) -- (0,1); \draw[ultra thick,blue] (0,1) -- (-1,2); \draw[ultra thick,blue] (0,1) -- (1,2); \draw[ultra thick,blue] (0,-1) -- (-1,-2); \draw[ultra thick,blue] (0,-1) -- (1,-2); }}\endxy = \xy (0,.05)*{ \tikzdiagc[yscale=.4,xscale=.3]{ \draw[ultra thick,blue] (-1,0) -- (1,0); \draw[ultra thick,blue] (-2,-1) -- (-1,0); \draw[ultra thick,blue] (-2,1) -- (-1,0); \draw[ultra thick,blue] ( 2,-1) -- ( 1,0); \draw[ultra thick,blue] ( 2,1) -- ( 1,0); }}\endxy \\[1ex] \label{eq:lollipop} \xy (0,0)*{ \tikzdiagc[scale=0.9]{ \draw[ultra thick,blue] (0,0) circle (.3); \draw[ultra thick,blue] (0,-.8) -- (0,-.3); }}\endxy \ = 0 \\[1ex] \label{eq:relHatSlast} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy \ +\ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy\ = 2\,\ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-1) -- (0,-.4)node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (0,.4) -- (0,1)node[pos=0, tikzdot]{}; }}\endxy \end{gather} \endgroup \item Relations involving two distant colors: \begingroup\allowdisplaybreaks \begin{gather} \label{eq:Scat-Rtwo} \xy (0,0)*{ \tikzdiagc[yscale=1.7,xscale=1.1]{ \draw[ultra thick,blue] (0,0) ..controls (0,.25) and (.65,.25) .. (.65,.5) ..controls (.65,.75) and (0,.75) .. (0,1); \begin{scope}[shift={(.65,0)}] \draw[ultra thick,mygreen] (0,0) ..controls (0,.25) and (-.65,.25) .. (-.65,.5) ..controls (-.65,.75) and (0,.75) .. (0,1); \end{scope} }}\endxy = \ \xy (0,0)*{ \tikzdiagc[yscale=1.7,xscale=-1.1]{ \draw[ultra thick, blue] (.65,0) -- (.65,1); \draw[ultra thick,mygreen] (0,0) -- (0,1); }}\endxy \\[1ex]\label{eq:Scat-dotslide} \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.3,-.3) -- (-.5,.5)node[pos=0, tikzdot]{}; \draw[ultra thick,mygreen] (-.5,-.5) -- (.5,.5); }}\endxy = \xy (0,0)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (-.5,.5) -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[ultra thick,mygreen] (-.5,-.5) -- (.5,.5); }}\endxy \\[1ex]\label{eq:Scat-trivslide} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,mygreen] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy = \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,mygreen] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. 
(1,0); }}\endxy \end{gather} \endgroup \item Relations involving two adjacent colors: \begingroup\allowdisplaybreaks \begin{gather}\label{eq:6vertexdot} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,myred] (0,-.75) -- (0,0)node[pos=0, tikzdot]{};\draw[ultra thick,myred] (0,0) -- (-1, 1);\draw[ultra thick,myred] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); }}\endxy \ =\ \ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,myred] (-1,0) -- (-1, 1)node[pos=0, tikzdot]{};\draw[ultra thick,myred] (1,0) -- (1, 1)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); }}\endxy \ \ + \ \xy (0,0)*{ \tikzdiagc[yscale=0.6,xscale=.8]{ \draw[ultra thick,blue] (1.5,0.1) -- (1.5, 0.4)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (.9,.4) .. controls (1.2,-.45) and (1.8,-.45) .. (2.1,.4); \draw[ultra thick,blue] ( .9,-1.2) .. controls (1.2,-.35) and (1.8,-.35) .. (2.1,-1.2); }}\endxy \\[1ex]\label{eq:braidmoveB} \xy (0,.05)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,myred] (-1,-2) -- (-1,2); \draw[ultra thick,myred] ( 1,-2) -- ( 1,2); \draw[ultra thick,blue] (0,-2) -- (0,2); }}\endxy = \xy (0,.05)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,myred] (0,-1) -- (0,1); \draw[ultra thick,myred] (0,1) -- (-1,2); \draw[ultra thick,myred] (0,1) -- (1,2); \draw[ultra thick,myred] (0,-1) -- (-1,-2); \draw[ultra thick,myred] (0,-1) -- (1,-2); \draw[ultra thick,blue] (0,1) -- (0,2); \draw[ultra thick,blue] (0,-1) -- (0,-2); \draw[ultra thick,blue] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[ultra thick,blue] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); }}\endxy - \xy (0,.05)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,myred] (0,-.6) -- (0,.6); \draw[ultra thick,myred] (0,.6) .. controls (-.25,.6) and (-1,1).. (-1,2); \draw[ultra thick,myred] (0,.6) .. controls (.25,.6) and (1,1) .. (1,2); \draw[ultra thick,myred] (0,-.6) .. controls (-.25,-.6) and (-1,-1).. (-1,-2); \draw[ultra thick,myred] (0,-.6) .. controls (.25,-.6) and (1,-1) .. (1,-2); \draw[ultra thick,blue] (0,1.25) -- (0,2)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (0,-1.25) -- (0,-2)node[pos=0, tikzdot]{}; }}\endxy \\[1ex] \label{eq:stroman} \xy (0,.05)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,myred] (0,-1) -- (0,1); \draw[ultra thick,myred] (0,1) -- (-1,2); \draw[ultra thick,myred] (0,1) -- (1,2); \draw[ultra thick,myred] (0,-1) -- (-1,-2); \draw[ultra thick,myred] (0,-1) -- (1,-2); \draw[ultra thick,blue] (0,1) -- (0,2); \draw[ultra thick,blue] (0,-1) -- (0,-2); \draw[ultra thick,blue] (0,1) ..controls (-.95,.25) and (-.95,-.25) .. (0,-1); \draw[ultra thick,blue] (0,1) ..controls ( .95,.25) and ( .95,-.25) .. (0,-1); \draw[ultra thick,blue] (-2,0) -- (-.75,0); \draw[ultra thick,blue] (2,0) -- (.75,0); }}\endxy = \xy (0,.05)*{ \tikzdiagc[scale=0.4,yscale=1]{ \draw[ultra thick,myred] (-1,0) -- (1,0); \draw[ultra thick,myred] (-2,-1) -- (-1,0); \draw[ultra thick,myred] (-2,1) -- (-1,0); \draw[ultra thick,myred] ( 2,-1) -- ( 1,0); \draw[ultra thick,myred] ( 2,1) -- ( 1,0); \draw[ultra thick,blue] (-1,0) ..controls (-.5,-.8) and (.5,-.8) .. (1,0); \draw[ultra thick,blue] (-1,0) ..controls (-.5, .8) and (.5, .8) .. 
(1,0); \draw[ultra thick,blue] (0,.65) -- (0,1.8); \draw[ultra thick,blue] (0,-.65) -- (0,-1.8); \draw[ultra thick,blue] (-2,0) -- (-1,0); \draw[ultra thick,blue] (2,0) -- (1,0); }}\endxy \\[1ex]\label{eq:forcingdumbel-i-iminus} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,myred] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy - \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,myred] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy \ =\frac{1}{2} \biggl(\ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy - \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy\ \biggr) \end{gather} \endgroup \item Relation involving three distant colors: \begin{equation} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (-1,-1) -- (1,1); \draw[ultra thick,mygreen] (1,-1) -- (-1,1); \draw[ultra thick,orange] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy = \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (-1,-1) -- (1,1); \draw[ultra thick,mygreen] (1,-1) -- (-1,1); \draw[ultra thick,orange] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \end{equation} \item Relation involving distant dumbbells: \begin{equation}\label{eq:distantdumbbells} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,mygreen] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy - \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=-.5]{ \draw[ultra thick,mygreen] (0,-.35) -- (0,.35)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \draw[ultra thick,blue] (.6,-1)-- (.6, 1); }}\endxy \ = 0 \end{equation} \item Relation involving two adjacent colors and one distant from the other two: \begin{equation}\label{eq:sixv-dist} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,myred] (0,-1) -- (0,0);\draw[ultra thick,myred] (0,0) -- (-1, 1);\draw[ultra thick,myred] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,mygreen] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \ =\ \ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,myred] (0,-1) -- (0,0);\draw[ultra thick,myred] (0,0) -- (-1, 1);\draw[ultra thick,myred] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0)-- (0, 1); \draw[ultra thick,blue] (-1,-1) -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,mygreen] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. 
(1,0); }}\endxy \end{equation}
\item Relation involving three adjacent colors:
\begin{equation}\label{eq:relhatSlast}
\xy (0,.05)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[ultra thick,myred] (0,-1) -- (0,1); \draw[ultra thick,myred] (0,1) -- (-1,2); \draw[ultra thick,myred] (0,1) -- (2,2); \draw[ultra thick,myred] (0,-1) -- (-2,-2); \draw[ultra thick,myred] (0,-1) -- (1,-2); \draw[ultra thick,blue] (0,1) -- (0,2); \draw[ultra thick,blue] (0,-1) -- (0,-2); \draw[ultra thick,blue] (0,1) -- (1,0); \draw[ultra thick,blue] (0,1) -- (-1,0); \draw[ultra thick,blue] (0,-1) -- (1,0); \draw[ultra thick,blue] (0,-1) -- (-1,0); \draw[ultra thick,blue] (-2,0) -- (-1,0);\draw[ultra thick,blue] (2,0) -- (1,0); %
\draw[ultra thick,mygreen] (-1,0) -- (1,0); \draw[ultra thick,mygreen] (-1,0) -- (-1,-2);\draw[ultra thick,mygreen] (-1,0) -- (-2,2); \draw[ultra thick,mygreen] (1,0) -- (2,-2);\draw[ultra thick,mygreen] (1,0) -- (1,2); }}\endxy = \xy (0,.05)*{ \tikzdiagc[scale=0.5,yscale=1]{ \draw[ultra thick,mygreen] (0,-1) -- (0,1); \draw[ultra thick,mygreen] (0,1) -- (1,2); \draw[ultra thick,mygreen] (0,1) -- (-2,2); \draw[ultra thick,mygreen] (0,-1) -- (2,-2); \draw[ultra thick,mygreen] (0,-1) -- (-1,-2); \draw[ultra thick,blue] (0,1) -- (0,2); \draw[ultra thick,blue] (0,-1) -- (0,-2); \draw[ultra thick,blue] (0,1) -- (1,0); \draw[ultra thick,blue] (0,1) -- (-1,0); \draw[ultra thick,blue] (0,-1) -- (1,0); \draw[ultra thick,blue] (0,-1) -- (-1,0); \draw[ultra thick,blue] (-2,0) -- (-1,0); \draw[ultra thick,blue] (2,0) -- (1,0); %
\draw[ultra thick,myred] (1,0) -- (-1,0); \draw[ultra thick,myred] (1,0) -- (1,-2); \draw[ultra thick,myred] (1,0) -- (2,2); \draw[ultra thick,myred] (-1,0) -- (-2,-2); \draw[ultra thick,myred] (-1,0) -- (-1,2); }}\endxy
\end{equation}
\end{itemize}
Note that the empty word is the identity object in $\widehat{\EuScript{BS}}_d$ and its endomorphisms are the closed diagrams, which by the relations above are equal to polynomials in the colored dumbbells
\begin{equation*} \xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=-.5,xscale=.5] \draw[ultra thick,blue] (-1,-.4) -- (-1, .4)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{}; \end{scope} }}\endxy \end{equation*}
As each dumbbell has degree $2$, the degree of any polynomial in these dumbbells, as a morphism in $\widehat{\EuScript{BS}}_d$, is twice its polynomial degree. From now on, we denote this polynomial algebra by $R$. Note further that, by relations \eqref{eq:relHatSlast}, \eqref{eq:forcingdumbel-i-iminus} and \eqref{eq:distantdumbbells}, the morphism
\begin{equation}\label{eq:dumbbellsum} \sum_{i=0}^{d-1} \,\xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=-.5,xscale=.5] \draw[ultra thick,blue] (-1,-.4) -- (-1, .4)node[pos=0, tikzdot]{} node[pos=1, tikzdot]{} node at (-0.5,0) {\small $i$} ; \end{scope} }}\endxy \end{equation}
is central, in the sense that it can be slid through all diagrams (i.e. it commutes horizontally with all morphisms). Note that this morphism is equal to $\fbox{y}$ (up to sign, depending on conventions) in~\cite{mackaay-thiel}, because it is equal to the sum of all simple roots. Let $\widehat{\EuScript{BS}}_d^{\mathrm{sh}}$ be the category with shift associated to $\widehat{\EuScript{BS}}_d$, see Section~\ref{sec:graded-and-shifted-stuff}. \begin{defn} The {\em diagrammatic Soergel category} $\widehat{\EuScript{S}}_d$ is the idempotent completion of the diagrammatic Bott-Samelson category with shift $\widehat{\EuScript{BS}}_d^{\mathrm{sh}}$.
\end{defn}
\begin{rem}\label{rem:shifts-etc1} In the following sections, we sometimes state and prove diagrammatic equations in $\widehat{\EuScript{BS}}_d$, where the source and target objects carry no shifts, instead of in $\widehat{\EuScript{S}}_d$, where the source and target objects are carefully shifted. This simplifies the notation and makes no essential difference in our case: as long as the equations in $\widehat{\EuScript{BS}}_d$ are between homogeneous diagrams of the same degree, they give rise to equalities between morphisms in $\widehat{\EuScript{S}}_d$, which is the key point. \end{rem}
The diagrammatic Bott-Samelson category $\widehat{\EuScript{BS}}_d$ is equivalent to the algebraic category of Bott-Samelson bimodules and bimodule maps and the diagrammatic Soergel category $\widehat{\EuScript{S}}_d$ is equivalent to the algebraic category of Soergel bimodules and degree-preserving bimodule maps, see~\cite[Theorem 6.28]{elias-williamson-2}. For convenience, we will therefore denote the objects of $\widehat{\EuScript{BS}}_d$ by $\mathrm{B}_{\underline{w}}=\mathrm{B}_{s_{i_1}}\cdots \mathrm{B}_{s_{i_{\ell}}}$, where $\underline{w}=s_{i_1}\cdots s_{i_{\ell}}$ is a finite word in the alphabet $S$. In particular, the monoidal product is given by $\mathrm{B}_{\underline{u}}\mathrm{B}_{\underline{v}} = \mathrm{B}_{\underline{uv}}$, where $\underline{uv}$ is the concatenation of the words $\underline{u}$ and $\underline{v}$. Let us also recall the so-called {\em Categorification Theorem}, due to Soergel in finite type $A$, to H\"arterich~\cite{harterich} in affine type $A$ and to Elias--Williamson~\cite{elias-williamson-1, elias-williamson-2} in general Coxeter type. \begin{thm}\label{thm:categorification} For any $w\in \widehat{\EuScript{S}}_d$ and rex $\underline{w}=s_{i_1}\cdots s_{i_\ell}$ of $w$, there is an indecomposable object $\mathrm{B}_w\in \widehat{\EuScript{S}}_d$, independent of the choice of rex, such that \[ \mathrm{B}_{\underline{w}}\cong \mathrm{B}_w \oplus \bigoplus_{u \prec w} \mathrm{B}_u^{\oplus h_{w,u}}, \] where $\prec$ is the Bruhat order in $\widehat{\EuScript{S}}_d$ and $h_{w,u}\in \mathbb{N}[q,q^{-1}]$ is the graded multiplicity of $\mathrm{B}_u$ in the decomposition of $\mathrm{B}_{\underline{w}}$. Moreover, the $\mathbb{Z}[q,q^{-1}]$-linear map \begin{gather*} \widehat{H}_d^{\mathbb{Z}[q,q^{-1}]} \to [\widehat{\EuScript{S}}_d]_{\oplus}\\ b_w\mapsto \mathrm{B}_w, \quad w \in \widehat{\EuScript{S}}_d \end{gather*} is an isomorphism of algebras, where $\widehat{H}_d^{\mathbb{Z}[q,q^{-1}]}$ is the integral form of $\widehat{H}_d$. \end{thm} Let $\widehat{\EuScript{S}}_d^{\mathrm{gr}}$ be the graded monoidal category associated to $\widehat{\EuScript{S}}_d$, see Section~\ref{sec:graded-and-shifted-stuff}. For every $u,v\in \widehat{\EuScript{S}}_d$, the graded Hom-space \[ \widehat{\EuScript{S}}_d^{\mathrm{gr}}\left(\mathrm{B}_u,\mathrm{B}_v\right)=\bigoplus_{t\in \mathbb{Z}}\widehat{\EuScript{S}}_d\left(\mathrm{B}_u,\mathrm{B}_v \langle t\rangle \right) \] is a free left (or right) graded $R$-module of finite graded rank, given by {\em Soergel's Hom-formula}: \begin{equation}\label{eq:Soergelhom} \mathrm{grk}_{R}\left(\widehat{\EuScript{S}}_d^{\mathrm{gr}}\left(\mathrm{B}_u,\mathrm{B}_v\right)\right)=(b_u,b_v), \end{equation} where $(-,-)$ is the well-known sesquilinear form on $\widehat{H}_d$, see e.g.~\cite[Section 2.4 and Theorem 3.15]{elias-williamson-2}.
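For orientation, let us illustrate Theorem~\ref{thm:categorification} in the two smallest cases; these are standard facts, recorded here with the precise grading shifts depending on the chosen conventions. For a simple reflection $s$ and an adjacent simple reflection $t$ (so $d\geq 3$), one has
\begin{equation*}
\mathrm{B}_s\mathrm{B}_s\cong \mathrm{B}_s\langle 1\rangle\oplus\mathrm{B}_s\langle -1\rangle
\qquad\text{and}\qquad
\mathrm{B}_s\mathrm{B}_t\mathrm{B}_s\cong \mathrm{B}_{sts}\oplus\mathrm{B}_s,
\end{equation*}
categorifying the equalities $b_s^2=(q+q^{-1})b_s$ and $b_sb_tb_s=b_{sts}+b_s$ in $\widehat{H}_d$.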
\begin{defn} The diagrammatic Bott-Samelson category and the diagrammatic Soergel category of finite type $A_{d-1}$, denoted $\EuScript{BS}_d$ and $\EuScript{S}_d$ respectively, are defined as $\widehat{\EuScript{BS}}_d$ and $\widehat{\EuScript{S}}_d$ but only using the colors $1,\ldots,d-1$. \end{defn}
Note that $\EuScript{BS}_d$ and $\EuScript{S}_d$ are monoidal subcategories of $\widehat{\EuScript{BS}}_d$ and $\widehat{\EuScript{S}}_d$, respectively, but that the natural embeddings are not full because e.g. the $0$-colored dumbbell is not a morphism in $\EuScript{BS}_d$ and $\EuScript{S}_d$.
\subsection{Soergel calculus in extended affine type $A$}\label{sec:soergeldiagrammatics-two}
In this subsection we briefly sketch how to enhance $\widehat{\EuScript{BS}}_d$ and $\widehat{\EuScript{S}}_d$ to get the extended diagrammatic Bott-Samelson and Soergel categories of type $\widehat{A}_{d-1}$, denoted $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$ and $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$ respectively, which were introduced in~\cite{mackaay-thiel} and further studied in~\cite{elias2018}. We refer to those two papers for more details. The objects of $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$ are formal direct sums of words in the alphabet $S\cup\{\rho,\rho^{-1}\}$. Because of the link with algebraic bimodules, we write $\mathrm{B}_{\rho}^n$ for $\rho^n$, for any $n\in \mathbb{Z}$. There are also new generating diagrams, all of degree zero, involving oriented strands. The generators involving only oriented strands are \begin{equation}\label{eq:orientedcoloredgens1} \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,black,-to] (1.5,-.5) -- (1.5, .5); } }\endxy \mspace{60mu} \xy (0,0)*{ \tikzdiagc[yscale=0.9,baseline={([yshift=-.8ex]current bounding box.center)}]{ \draw[ultra thick,black,to-] (1.5,-.5) -- (1.5, .5); } }\endxy \mspace{60mu} \xy (0,.55)*{ \tikzdiagc[yscale=0.9]{ \draw[ultra thick,black,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy \mspace{60mu} \xy (0,.55)*{ \tikzdiagc[yscale=0.9]{ \draw[ultra thick,black,-to] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy \mspace{60mu} \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,black,to-] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) .. (2,.4); }}\endxy \mspace{60mu} \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,black,-to] (1,.4) .. controls (1.2,-.4) and (1.8,-.4) ..
(2,.4); }}\endxy \end{equation} and the generating diagrams involving oriented strands and adjacent colored strands are \begin{equation}\label{eq:orientedcoloredgens2} \xy (0,-2.25)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,myred] (.5,-.5)node[below] {\tiny $i-1$} -- (0,0); \draw[ultra thick,blue] (-.5,.5) node[above] {\tiny $i$} -- (0,0); \draw[ultra thick,black,-to] (-.5,-.5) -- (.5,.5); }}\endxy \mspace{60mu} \xy (0,-2.25)*{ \tikzdiagc[scale=1,xscale=-1]{ \draw[ultra thick,blue] (.5,-.5)node[below] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5) node[above] {\tiny $i-1$}-- (0,0); \draw[ultra thick,black,-to] (-.5,-.5) -- (.5,.5); }}\endxy \mspace{60mu} \xy (0,-2.25)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.5,-.5)node[below] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5) node[above] {\tiny $i-1$} -- (0,0); \draw[ultra thick,black,-to] (.5,.5) -- (-.5,-.5); }}\endxy \mspace{60mu} \xy (0,-2.25)*{ \tikzdiagc[scale=1,yscale=-1]{ \draw[ultra thick,blue] (.5,-.5) node[above] {\tiny $i$} -- (0,0); \draw[ultra thick,myred] (-.5,.5)node[below] {\tiny $i-1$} -- (0,0); \draw[ultra thick,black,-to] (-.5,-.5) -- (.5,.5); }}\endxy \end{equation} The new morphisms satisfy the following relations, where we again assume isotopy invariance and cyclicity. \begin{itemize} \item Relations involving only oriented strands: \begingroup\allowdisplaybreaks \begin{gather}\label{eq:orloop} \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,black] (0,0) circle (.65);\draw [ultra thick,black,-to] (.65,0) --(.65,0); }}\endxy \ = 1 =\ \xy (0,0)*{ \tikzdiagc[yscale=-0.9]{ \draw[ultra thick,black] (0,0) circle (.65);\draw [ultra thick,black,to-] (-.65,0) --(-.65,0); }}\endxy \\ \label{eq:orinv} \xy (0,0)*{ \tikzdiagc[yscale=0.8]{ \draw[ultra thick,black,-to] (.5,-.75) -- (.5,.75); \draw[ultra thick,black,to-] (-.5,-.75) -- (-.5,.75); }}\endxy\ = \xy (0,0)*{ \tikzdiagc[yscale=0.8]{ \draw[ultra thick,black,-to] (1,.75) .. controls (1.2,-.05) and (1.8,-.05) .. (2,.75); \draw[ultra thick,black,to-] (1,-.75) .. controls (1.2,.05) and (1.8,.05) .. (2,.-.75); }}\endxy \mspace{80mu} \xy (0,0)*{ \tikzdiagc[yscale=-0.8]{ \draw[ultra thick,black,-to] (.5,-.75) -- (.5,.75); \draw[ultra thick,black,to-] (-.5,-.75) -- (-.5,.75); }}\endxy\ = \xy (0,0)*{ \tikzdiagc[yscale=-0.8]{ \draw[ultra thick,black,-to] (1,.75) .. controls (1.2,-.05) and (1.8,-.05) .. (2,.75); \draw[ultra thick,black,to-] (1,-.75) .. controls (1.2,.05) and (1.8,.05) .. (2,.-.75); }}\endxy \end{gather} \endgroup \item Relation involving oriented strands and distant colored strands: \begin{equation}\label{eq:orthru4vertex} \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $i$} -- (.45,.45); \draw[ultra thick,myred] (.45,.45) -- (1,1)node[above]{\tiny $i-1$}; \draw[ultra thick,mygreen] (1,-1)node[below]{\tiny $j$} -- (-.45,.45); \draw[ultra thick,orange] (-.45,.45) -- (-1,1)node[above]{\tiny $j-1$}; \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy = \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $i$} -- (-.45,-.45); \draw[ultra thick,myred] (-.45,-.45) -- (1,1)node[above]{\tiny $i-1$}; \draw[ultra thick,mygreen] (1,-1)node[below]{\tiny $j$} -- (.45,-.45); \draw[ultra thick,orange] (.45,-.45) -- (-1,1)node[above]{\tiny $j-1$}; \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. 
(1,0); }}\endxy \end{equation} \item Relations involving oriented strands and two adjacent colored strands: \begingroup\allowdisplaybreaks \begin{gather}\label{eq;orReidII} \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick, blue] (1,0)node[below]{\tiny $i$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,myred] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick, blue] (1,1)node[above]{\tiny $i$} .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \node[myred] at (-.35,.5) {\tiny $i-1$}; \draw[ultra thick,black,to-] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); }}\endxy = \ \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=1.1]{ \draw[ultra thick, blue] (1,0)node[below]{\tiny $i$} -- (1,1)node[above]{\phantom{\tiny $i$}}; \draw[ultra thick,black,to-] (0,0) -- (0,1); }}\endxy \mspace{80mu} \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=-1.1]{ \draw[ultra thick,myred] (1,0)node[below]{\tiny $i-1$} .. controls (1,.15) and (.7,.24) .. (.5,.29); \draw[ultra thick,blue] (.5,.29) .. controls (-.1,.4) and (-.1,.6) .. (.5,.71); \draw[ultra thick,myred] (1,1)node[above]{\tiny $i-1$} .. controls (1,.85) and (.7,.74) .. (.5,.71) ; \node[blue] at (-.15,.5) {\tiny $i$}; \draw[ultra thick,black,to-] (0,0) ..controls (0,.35) and (1,.25) .. (1,.5) ..controls (1,.75) and (0,.65) .. (0,1); }}\endxy = \ \xy (0,0)*{ \tikzdiagc[yscale=2.1,xscale=-1.1]{ \draw[ultra thick,myred] (1,0)node[below]{\tiny $i-1$} -- (1,1)node[above]{\phantom{\tiny $i-1$}}; \draw[ultra thick,black,to-] (0,0) -- (0,1); }}\endxy \\[1ex] \label{eq:dotrhuor} \xy (0,1.2)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,blue] (.3,-.3)node[below]{\tiny $i$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,myred] (-.5,.5)node[above]{\tiny $i-1$} -- (0,0); \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy = \xy (0,2.5)*{ \tikzdiagc[scale=1]{ \draw[ultra thick,myred] (-.5,.5)node[above]{\tiny $i-1$} -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[ultra thick,black,to-] (-.5,-.5) -- (.5,.5); }}\endxy \mspace{80mu} \xy (0,-.85)*{ \tikzdiagc[xscale=-1,yscale=-1]{ \draw[ultra thick,myred] (.3,-.3)node[above]{\tiny $i-1$} -- (0,0)node[pos=0, tikzdot]{}; \draw[ultra thick,blue] (-.5,.5)node[below]{\tiny $i$} -- (0,0); \draw[ultra thick,black,-to] (-.5,-.5) -- (.5,.5); }}\endxy = \xy (0,-2.3)*{ \tikzdiagc[xscale=-1,yscale=-1]{ \draw[ultra thick,blue] (-.5,.5)node[below]{\tiny $i$} -- (-.2,.2)node[pos=1, tikzdot]{}; \draw[ultra thick,black,-to] (-.5,-.5) -- (.5,.5); }}\endxy \\[1ex] \label{eq:orpitchfork} \xy (0,0)*{ \tikzdiagc[yscale=-0.5,xscale=.5]{ \draw[ultra thick,myred] (0,0)-- (0,.5); \draw[ultra thick,myred] (-1,-1)node[above]{\tiny $i-1$} -- (0,0); \draw[ultra thick,myred] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,.5) -- (0,1)node[below]{\tiny $i$}; \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.7) and (.25,.7) .. (1,0); }}\endxy = \xy (0,0)*{ \tikzdiagc[yscale=-0.5,xscale=.5]{ \draw[ultra thick,myred] (-1,-1)node[above]{\tiny $i-1$} -- (-.45,-.45); \draw[ultra thick,myred] (1,-1) -- (.45,-.45); \draw[ultra thick,blue] (-.45,-.45)-- (0,0); \draw[ultra thick,blue] (.45,-.45)-- (0,0); \draw[ultra thick,blue] (0,0)-- (0,1)node[below]{\tiny $i$}; \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. 
(1,0); }}\endxy \end{gather} \endgroup \item Relations involving oriented strands and three adjacent colored strands: \begin{equation} \label{eq:orslide6vertex} \begin{split} \xy (0,1)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,mygreen] (0,0)-- (0,.6); \draw[ultra thick,mygreen] (-1,-1)node[below]{\tiny $i+1$} -- (0,0); \draw[ultra thick,mygreen] (1,-1) -- (0,0); \draw[ultra thick,blue] (0,.6) -- (0,1); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $i$} -- (0,0); \draw[ultra thick,blue] (0,0) -- (-.45, .45); \draw[ultra thick,myred] (-.45,.45) -- (-1,1)node[above]{\tiny $i-1$}; \draw[ultra thick,blue] (0,0) -- (.45,.45); \draw[ultra thick,myred] (.45,.45) -- (1, 1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \ &=\ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,mygreen] (-1,-1)node[below]{\tiny $i+1$} -- (-.45,-.45); \draw[ultra thick,mygreen] (1,-1) -- (.45,-.45); \draw[ultra thick,blue] (0,-1)node[below]{\tiny $i$} -- (0,-.6); \draw[ultra thick,myred] (0,-.6) -- (0,0); \draw[ultra thick,myred] (0,0) -- (-1,1)node[above]{\tiny $i-1$}; \draw[ultra thick,myred] (0,0) -- (1, 1); \draw[ultra thick,blue] (0,0) -- (0,1); \draw[ultra thick,blue] (-.45,-.45) -- (0,0); \draw[ultra thick,blue] (.45,-.45) -- (0,0); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \\[1ex] \xy (0,1)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (0,0)-- (0,.6); \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $i$} -- (0,0); \draw[ultra thick,blue] (1,-1) -- (0,0); \draw[ultra thick,myred] (0,.6) -- (0,1)node[above]{\tiny $i-1$}; \draw[ultra thick,mygreen] (0,-1)node[below]{\tiny $i+1$} -- (0,0); \draw[ultra thick,mygreen] (0,0) -- (-.45, .45); \draw[ultra thick,blue] (-.45,.45) -- (-1,1); \draw[ultra thick,mygreen] (0,0) -- (.45,.45); \draw[ultra thick,blue] (.45,.45) -- (1, 1); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,.75) and (.25,.75) .. (1,0); }}\endxy \ &=\ \xy (0,0)*{ \tikzdiagc[yscale=0.5,xscale=.5]{ \draw[ultra thick,blue] (-1,-1)node[below]{\tiny $i$} -- (-.45,-.45); \draw[ultra thick,blue] (1,-1) -- (.45,-.45); \draw[ultra thick,mygreen] (0,-1)node[below]{\tiny $i+1$} -- (0,-.6); \draw[ultra thick,blue] (0,-.6) -- (0,0); \draw[ultra thick,blue] (0,0) -- (-1,1); \draw[ultra thick,blue] (0,0) -- (1, 1); \draw[ultra thick,myred] (0,0) -- (0,1)node[above]{\tiny $i-1$}; \draw[ultra thick,myred] (-.45,-.45) -- (0,0); \draw[ultra thick,myred] (.45,-.45) -- (0,0); \draw[ultra thick,black,to-] (-1,0) ..controls (-.25,-.75) and (.25,-.75) .. (1,0); }}\endxy \end{split} \end{equation} \end{itemize} By relations~\eqref{eq:dotrhuor}, the sum of all colored dumbbells in \eqref{eq:dumbbellsum} also commutes with oriented strands, so the corresponding morphism is also central in $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$. In general, any object in $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$ is isomorphic to a direct sum of objects of the form $\mathrm{B}_{\rho}^n\mathrm{B}_{\underline{w}}$, for some $n\in \mathbb{Z}$ and word $\underline{w}$ in $S$. By the relations in~\eqref{eq:orloop}, there is an isomorphism of vector spaces (and of algebras) \[ \left(\widehat{\EuScript{BS}}^{\mathrm{ext}}_d\right)^0 \left(\mathrm{B}_{\rho}^m, \mathrm{B}_{\rho}^n\right) \cong \begin{cases} \mathbb{C}\mathrm{id}_{\mathrm{B}_{\rho}^m},&\text{if}\; m=n;\\ \{0\},&\text{else}. 
\end{cases} \] Recall that $R=\widehat{\EuScript{BS}}_d(\varnothing,\varnothing)$ is the polynomial algebra in the colored dumbbells. Then the isomorphism above generalizes to an isomorphism of graded $R$-$R$-bimodules \[ \widehat{\EuScript{BS}}^{\mathrm{ext}}_d \left(\mathrm{B}_{\rho}^m, \mathrm{B}_{\rho}^n\right) \cong \begin{cases} R^{\tau^m},&\text{if}\; m=n;\\ \{0\},&\text{else}, \end{cases} \] where $\tau$ is the automorphism of $R$ which sends the $i$-colored dumbbell to the $(i+1)$-colored dumbbell, for any $i\in \mathbb{Z}/d\mathbb{Z}$, and $R^{\tau^m}$ is the free rank-one $R$-$R$-bimodule with the usual left $R$-action and the right $R$-action twisted by $\tau^m$. Moreover, the black oriented part and the non-oriented colored part of any diagram can be separated by the above relations, resulting in an isomorphism of graded $R$-$R$-bimodules \[ \widehat{\EuScript{BS}}^{\mathrm{ext}}_d\left(\mathrm{B}_{\rho}^m\mathrm{B}_{\underline{u}}, \mathrm{B}_{\rho}^n\mathrm{B}_{\underline{v}}\right) \cong \begin{cases} R^{\tau^m}\otimes_R \widehat{\EuScript{BS}}_d\left(\mathrm{B}_{\underline{u}}, \mathrm{B}_{\underline{v}}\right), & \text{if}\; m=n;\\ \{0\},&\text{else}. \end{cases} \] In particular, this implies that the natural embedding $\widehat{\EuScript{BS}}_d\hookrightarrow \widehat{\EuScript{BS}}^{\mathrm{ext}}_d$ is full. For the proofs of these results, see~\cite[Section 3.3]{elias2018}. \begin{defn} The {\em extended diagrammatic Soergel category} $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$ is the idempotent completion of $\left(\widehat{\EuScript{BS}}^{\mathrm{ext}}_d\right)^{\mathrm{sh}}$. \end{defn} The above results on the Hom-spaces in $\widehat{\EuScript{BS}}^{\mathrm{ext}}_d$ and Theorem~\ref{thm:categorification} imply the following generalization to the extended case, see~\cite[Theorem 2.5]{mackaay-thiel}. \begin{thm}\label{thm:extcategorification} For any $n\in \mathbb{Z}$ and $w\in \widehat{\EuScript{S}}_d$, the object $\mathrm{B}_{\rho}^n\mathrm{B}_w\in \widehat{\EuScript{S}}^{\mathrm{ext}}_d$ is indecomposable. Moreover, the $\mathbb{Z}[q,q^{-1}]$-linear map \begin{gather*} \left(\widehat{H}^{ext}_d\right)^{\mathbb{Z}[q,q^{-1}]} \to [\widehat{\EuScript{S}}^{\mathrm{ext}}_d]_{\oplus}\\ \rho^n b_w\mapsto \mathrm{B}_{\rho}^n \mathrm{B}_w, \quad n\in \mathbb{Z}, w \in \widehat{\EuScript{S}}_d \end{gather*} is an isomorphism of algebras. \end{thm}
\section{Evaluation birepresentations and finitary covers}\label{sec:bireps}
\subsection{Recollections on birepresentation theory}
In the following, we will work with graded (finitary or triangulated) birepresentations of graded, additive bicategories. The particular bicategory we are interested in is, of course, $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$, which we view as a bicategory with one object in the usual way. We call a graded, $\mathbb{C}$-linear, additive category $\mathcal{A}$ {\em graded-finitary} if $\mathcal{A}^{\mathrm{sh}}$ is idempotent complete, morphism spaces between indecomposables are finite-dimensional and there are only finitely many indecomposables up to isomorphism and grading shift. Note that $\mathcal{A}$ need not be finitary, because the Hom-spaces might be infinite-dimensional, although they are finite-dimensional in each degree. This is why we write {\em graded-finitary} and not {\em graded, finitary}. We denote the $2$-category of graded, resp.
graded-finitary, $\mathbb{C}$-linear, additive categories, degree-preserving $\mathbb{C}$-linear functors and natural transformations by $\mathfrak{A}_{\mathbb{C}}^{g}$, resp. $\mathfrak{A}_{\mathbb{C}}^{gf}$. A {\em (locally) graded, additive bicategory} $\mathcal{C}$ is one whose morphism categories are enriched over $\mathfrak{A}_{\mathbb{C}}^{g}$ and a {\em (locally) graded-finitary bicategory} $\mathcal{C}$ is one whose morphism categories are enriched over $\mathfrak{A}_{\mathbb{C}}^{gf}$ and whose identity $1$-morphisms are indecomposable. Note that, to shorten the string of adjectives, we drop the adjective $\mathbb{C}$-linear, even though it is implicit in the enrichment. A {\em graded, additive} (resp. {\em graded-finitary}) {\em birepresentation} is a degree-preserving pseudofunctor from $\mathcal{C}$ to $\mathfrak{A}_{\mathbb{C}}^{g}$ (resp. $\mathfrak{A}_{\mathbb{C}}^{gf}$). Since we are mainly interested in $\widehat{\EuScript{S}}_d$, we will also abuse notation and call additive (bi)categories of the form $\mathcal{A}^{\mathrm{sh}}$ graded-finitary provided $\mathcal{A}$ is. Similarly, given a graded-finitary birepresentation $\mathbf{M}$ of a graded, additive bicategory $\mathcal{C}$, we will also call the birepresentation $\mathbf{M}^{\mathrm{sh}}$ of $\mathcal{C}^{\mathrm{sh}}$ (which acts on categories $\mathbf{M}(\mathtt{i})^{\mathrm{sh}}$, for objects $\mathtt{i}$, via functors which commute with shifts) graded-finitary. For more detail on these constructions, we refer to \cite[Section 2.6]{mmmtz2019}. We will also be considering triangulated birepresentations of graded, additive bicategories. Denote by $\mathfrak{T}_\mathbb{C}$ the bicategory of triangulated, $\mathbb{C}$-linear categories, ($\mathbb{C}$-linear) triangle functors and natural transformations. A {\em triangulated birepresentation} of a $\mathbb{C}$-linear, additive bicategory $\mathcal{C}$ is a ($\mathbb{C}$-linear) pseudofunctor from $\mathcal{C}$ to $\mathfrak{T}_\mathbb{C}$. In order to consider graded versions, we restrict ourselves to the $2$-full subbicategory $\mathfrak{T}_\mathbb{C}^{g}$ of $\mathfrak{T}_\mathbb{C}$ whose objects are triangulated categories of the form ${\EuScript{K}^b}(\mathcal{A}^{\mathrm{sh}})$ for a graded, $\mathbb{C}$-linear, additive category $\mathcal{A}$, and whose functors are degree-preserving triangle functors. A {\em graded-triangulated birepresentation} of an additive, graded bicategory $\mathcal{C}$ is then a degree-preserving ($\mathbb{C}$-linear) pseudofunctor from $\mathcal{C}$ to $\mathfrak{T}_\mathbb{C}^{g}$. Similarly to the finitary case above, we call a birepresentation a {\em graded, triangulated birepresentation} if, via it, a bicategory of the form $\mathcal{C}^{\mathrm{sh}}$ acts on triangulated categories of the form $\mathcal{T}^{\mathrm{sh}}$ by triangle functors commuting with shifts (i.e. if it is obtained by taking a graded birepresentation of $\mathcal{C}$ acting on $\mathcal{T}$, closing under shifts, and then restricting to morphisms of degree zero). In some cases, graded-finitary birepresentations will have an additional shift functor (coming from the homological shift in a triangulated birepresentation), with respect to which morphisms in the underlying categories will have degree zero. We call such birepresentations bigraded-finitary.
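To make the difference between finitary and graded-finitary concrete: by the above, the endomorphism algebra of the monoidal unit of $\widehat{\EuScript{BS}}_d$ is the polynomial algebra $R$ in the $d$ colored dumbbells, which is infinite-dimensional but finite-dimensional in each degree. Explicitly, since the dumbbells have degree $2$,
\begin{equation*}
\dim_{\mathbb{C}} R_{2t}=\binom{t+d-1}{d-1},\qquad t\in \mathbb{N},
\end{equation*}
while $R_u=\{0\}$ for $u$ odd or negative.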
Given a (locally) additive, graded bicategory, the set of isomorphism classes of indecomposable $1$-morphisms up to grading shift can be given three natural partial preorders: the {\em left} preorder ($[\mathrm{F} ]\leq_L [\mathrm{G}]$ if and only if $[\mathrm{G}]$ appears as a direct summand of $[\mathrm{HF}]$ for some $1$-morphism $\mathrm{H}$), the {\em right} preorder ($[\mathrm{F} ]\leq_R [\mathrm{G}]$ if and only if $[\mathrm{G}]$ appears as a direct summand of $[\mathrm{FH}]$ for some $1$-morphism $\mathrm{H}$) and the {\em two-sided} preorder ($[\mathrm{F} ]\leq_J [\mathrm{G}]$ if and only if $[\mathrm{G}]$ appears as a direct summand of $[\mathrm{H_1FH_2}]$ for some $1$-morphisms $\mathrm{H_1}, \mathrm{H_2}$), and the corresponding equivalence classes are called {\em left, right} and {\em two-sided cells}, respectively. If $\mathcal{C}$ is graded-finitary, we can associate to any left cell a so-called {\em graded cell $2$-representation}, which is the quotient of the left $2$-ideal in $\mathcal{C}$ generated by the identities on the $1$-morphisms in the cell, by the unique maximal ideal of the resulting birepresentation (i.e. the unique maximal ideal of the underlying categories which is stable under the action of $\mathcal{C}$). For more details (in the ungraded case, but the graded one is analogous), see e.g. \cite[Section 3.3]{MM5}. \subsection{Finitary covers of evaluation cell birepresentations} Let $\mathbf{M}$ be a graded-finitary birepresentation of $\EuScript{S}_d$, for any $d\in \mathbb{N}_{\geq 2}$. Then ${\EuScript{K}^b}(\mathbf{M})$, as a graded, triangulated birepresentation of ${\EuScript{K}^b}(\EuScript{S}_d)$, induces a graded, triangulated birepresentation of $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$, the {\em evaluation birepresentation} $\mathbf{M}^{\mathcal{E}v_{r,s}}$, resp. $\mathbf{M}^{\mathcal{E}v'_{r,s}}$, by pull-back through the evaluation functors $\mathcal{E}v_{r,s}$, resp. $\mathcal{E}v'_{r,s}$, for any $r,s\in \mathbb{Z}$. In this subsection we show that, if $\mathbf{M}$ is a cell birepresentation of $\EuScript{S}_d$, then $\mathbf{M}^{\mathcal{E}v_{r,s}}$ has a bigraded-finitary cover in the following sense. \begin{defn}\label{defn:fincov} A {\em bigraded-finitary cover} of a graded, triangulated birepresentation $\mathbf{N}$ of a graded, additive bicategory $\mathcal{C}$ is a bigraded-finitary birepresentation $\mathbf{L}$ of $\mathcal{C}$ together with an epimorphic, faithful and essentially surjective morphism $\Phi\colon {\EuScript{K}^b}(\mathbf{L}) \to {\EuScript{K}^b}(\mathbf{N})$ of graded, triangulated birepresentations. Here \emph{epimorphic} means that every morphism in ${\EuScript{K}^b}(\mathbf{N})$ is a composite of morphisms in the image of $\Phi$, possibly with additional isomorphisms. \end{defn} \begin{prop}\label{prop:fincovgen} Let $\mathbf{M}$ be the graded cell birepresentation associated to some left cell $\mathcal{L}$ of $\EuScript{S}_d$. Then $\mathbf{M}^{\mathcal{E}v_{r,s}}$ has a bigraded-finitary cover. \end{prop} \begin{proof} By \cite[Proposition 4.31]{elias-hogancamp}, $\mathrm{T}_{\rho}^d$ acts as $\mathrm{Id}\langle x\rangle [y]$ on $\mathbf{M}^{\mathcal{E}v_{r,s}}$, for some $x,y\in \mathbb{Z}$. Let $\mathbf{L}$ be the closure under isomorphisms, direct sums, direct summands, grading and homological shifts of the $\mathcal{E}v_{r,s}(\mathrm{T}_{\rho}^i) \mathrm{B}_w$, for $0\leq i\leq d-1$ and $w\in \mathcal{L}$. 
Relation~\eqref{eq;orReidII} implies that $\mathbf{L}$ is a bigraded-finitary birepresentation of $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$. The inclusion functor $\mathbf{L}\hookrightarrow \mathbf{M}^{\mathcal{E}v_{r,s}}$ extends to a morphism of graded triangulated birepresentations $\Phi\colon {\EuScript{K}^b}(\mathbf{L})\to \mathbf{M}^{\mathcal{E}v_{r,s}}$, which is epimorphic, faithful and essentially surjective by construction. \end{proof}
We refer to Corollary~\ref{cor:fincover} for an example demonstrating that $\Phi$ is not full in general.
\begin{rem} It is easy to see that $\mathbf{L}$ is transitive, and it seems likely that, by explicit calculations using the descriptions of the representing bimodules for the $\mathrm{B}_w$ given in \cite[Section 4.3]{mmmtz2019}, one can verify that it is indeed simple transitive. \end{rem}
\subsection{The zigzag algebras}
Let us first recall the {\em affine zigzag algebra} $\widehat{Z}_d$ over $\mathbb{C}$ associated to the $\widehat{A}_{d-1}$ Dynkin diagram. As is well-known, there are two isomorphism classes of affine zigzag algebras with invertible integer coefficients, and we use a specific representative of either one or the other depending on the parity of $d$. Let $e_0,\ldots, e_{d-1}$ denote the orthogonal idempotents associated to the vertices of the zigzag quiver and $i_1\vert i_2\vert \ldots \vert i_k$ the path in the quiver from $i_k$ to $i_1$ via $i_{k-1},\ldots, i_2$. As usual, all indices are to be taken modulo $d$. The relations in $\widehat{Z}_d$ are
\begin{gather*} i\vert i+1 \vert i+2=0=i\vert i-1\vert i-2, \quad i=0,\ldots,d-1; \\ i\vert i+1\vert i = i\vert i-1\vert i, \quad i=1,\ldots, d-1; \\ 0 \vert 1 \vert 0 = (-1)^d (0 \vert d-1 \vert 0). \end{gather*}
For convenience, we also use the notation \[ \ell_i:= i\vert i+1\vert i, \] for any $i=0,\ldots, d-1$. This algebra has dimension $4d$, it is positively graded by putting the degree of every path equal to its length, and it is a graded Frobenius algebra with non-degenerate trace defined by \[ \mathrm{tr}(\ell_i)=1\;\text{for every}\;i=0,\ldots,d-1;\quad \mathrm{tr}(a)=0\;\text{when}\; \deg(a) \ne 2. \] This means that $\widehat{Z}_d^\star \cong \widehat{Z}_d\langle 2\rangle$ as graded left, resp. right, $\widehat{Z}_d$-modules. Define the non-degenerate bilinear pairing $\langle .,.\rangle\colon \widehat{Z}_d\otimes \widehat{Z}_d\to \mathbb{C}$ as usual by \[ \langle a,b\rangle :=\mathrm{tr}(ab),\; a,b\in \widehat{Z}_d, \] and recall that two bases of $\widehat{Z}_d$, say $\{a_i\mid i=1,\ldots, 4d\}$ and $\{a_i^{\star}\mid i=1,\ldots,4d\}$, are called {\em dual} to each other if they satisfy \[ \langle a_i,a_j^{\star}\rangle =\delta_{i,j},\; i,j=1,\ldots,4d, \] where $\delta_{i,j}$ is the Kronecker delta. With respect to the bilinear form on $\widehat{Z}_d$, there is a natural pair of dual bases $\{e_i,\ell_i ,i\vert i\pm 1\mid i=0,\ldots,d-1\}$ and $\{e_i^{\star},\ell_i^{\star},(i\vert i\pm 1)^{\star} \mid i=0,\ldots,d-1\}$, such that \[ e_i^\star= \ell_i,\quad \ell_i^\star = e_i,\quad i=0,\ldots, d-1;\qquad (0\vert (d-1))^{\star}=(-1)^d ((d-1)\vert 0);\qquad (i\vert j)^{\star} = j\vert i\ \text{for all other arrows}\ i\vert j. \] Note that $\widehat{Z}_d$ is symmetric when $d$ is even and only weakly symmetric when $d$ is odd. Let $\widehat{Z}_d{-}\mathrm{fgproj}$, resp. $\mathrm{fgproj}{-}\widehat{Z}_d$, be the category of finite-dimensional, graded, projective left, resp. right, $\widehat{Z}_d$-modules and degree-preserving module maps.
The indecomposable objects in these categories are isomorphic to $\widehat{Z}_d e_i\langle t\rangle$, resp. $e_i\widehat{Z}_d\langle t\rangle$, for some $i=0,\ldots, d-1$ and $t\in \mathbb{Z}$. Finally, let $\widehat{Z}_d{-}\mathrm{fgbiproj}{-}\widehat{Z}_d$ be the monoidal category of all finite-dimensional, graded, biprojective $\widehat{Z}_d{-}\widehat{Z}_d$-bimodules and degree-preserving bimodule maps. A bimodule is called biprojective if it is projective as a graded left module and as a graded right module, but not necessarily as a graded bimodule. Every indecomposable projective object in this category is isomorphic to \[ \widehat{Z}_d e_i\otimes e_j\widehat{Z}_d\langle t \rangle, \] for some $i,j=0,\ldots,d-1$ and $t\in \mathbb{Z}$. The monoidal structure of $\widehat{Z}_d{-}\mathrm{fgbiproj}{-}\widehat{Z}_d$ is given by tensoring over $\widehat{Z}_d$ and the unit object is $\widehat{Z}_d$, which is biprojective but not projective as a bimodule over itself. Recall that any exact, graded endofunctor of $\widehat{Z}_d{-}\mathrm{fgproj}$ is naturally isomorphic to $B \otimes_{\widehat{Z}_d} -$, for some $B\in \widehat{Z}_d{-}\mathrm{fgbiproj}{-}\widehat{Z}_d$. Natural transformations between exact, graded endofunctors correspond to $\widehat{Z}_d{-}\widehat{Z}_d$-bimodule maps and the composition of endofunctors corresponds to the tensor product of the corresponding bimodules over $\widehat{Z}_d$. Let $\tau$ be the degree-preserving algebra automorphism of $\widehat{Z}_d$ induced by the counterclockwise rotation of the Dynkin diagram defined by \[ e_i \mapsto e_{i+1},\quad 0\vert (d-1) \mapsto (-1)^d (1\vert 0),\quad i\vert j \mapsto (i+1) \vert (j+1), \] for $i,j=0,\ldots, d-1$, such that $j=i\pm 1$ but $(i,j)\ne (0,d-1)$. Note that $\tau^{d}=\mathrm{id}$ when $d$ is even, and $\tau^{2d}=\mathrm{id}$ when $d$ is odd. By definition, the {\em twisted bimodule} \[ \widehat{Z}^{\tau}_d\in \widehat{Z}_d{-}\mathrm{fgbiproj}{-}\widehat{Z}_d \] has underlying vector space $\widehat{Z}_d$, while the left and right $\widehat{Z}_d$-actions are defined by \[ a\cdot_L b \cdot_R c:=ab\tau(c), \] for $a,b,c\in \widehat{Z}_d$. It is clear that $\widehat{Z}^{\tau}_d\cong \widehat{Z}_d$ as left and as right $\widehat{Z}_d$-modules, but not as $\widehat{Z}_d$-$\widehat{Z}_d$-bimodules. In other words, $\widehat{Z}^{\tau}_d$ is biprojective, but not projective as a $\widehat{Z}_d$-$\widehat{Z}_d$-bimodule. We record the existence of an isomorphism \begin{gather}\label{eq:lefttau1} \widehat{Z}^{\tau^k}_d\otimes_{\widehat{Z}_d} \widehat{Z}^{\tau^m}_d \cong \widehat{Z}^{\tau^{k+m}}_d \end{gather} in $\widehat{Z}_d{-}\mathrm{fgbiproj}{-}\widehat{Z}_d$, for every pair $k,m\in \mathbb{Z}$. Note further that there exist isomorphisms of left, resp. right, $\widehat{Z}_d$-modules \begin{equation}\label{eq:lefttau2} \widehat{Z}_d^{\tau}\otimes_{\widehat{Z}_d} \widehat{Z}_d e_i \cong \widehat{Z}_d e_{i+1} \quad\text{and}\quad e_i \widehat{Z}_d \otimes_{\widehat{Z}_d} \widehat{Z}_d^{\tau}\cong e_{i-1} \widehat{Z}_d \end{equation} and, therefore, an isomorphism of $\widehat{Z}_d$-$\widehat{Z}_d$-bimodules \begin{equation}\label{eq:lefttau3} \widehat{Z}_d^{\tau}\otimes_{\widehat{Z}_d} \widehat{Z}_d e_i \otimes e_i\widehat{Z}_d \cong \widehat{Z}_d e_{i+1} \otimes e_{i+1}\widehat{Z}_d \otimes_{\widehat{Z}_d}\widehat{Z}_d^{\tau} \end{equation} for every $i=0,\ldots,d-1$.
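As a minimal sketch of why the first isomorphism in \eqref{eq:lefttau2} holds (the second one, as well as \eqref{eq:lefttau1} and \eqref{eq:lefttau3}, are verified analogously): the assignment
\begin{equation*}
\widehat{Z}_d^{\tau}\otimes_{\widehat{Z}_d}\widehat{Z}_d e_i\longrightarrow \widehat{Z}_d e_{i+1},\qquad a\otimes m\mapsto a\,\tau(m),
\end{equation*}
is well defined, since $a\tau(c)\otimes m$ and $a\otimes cm$ have the same image $a\tau(c)\tau(m)=a\tau(cm)$; it lands in $\widehat{Z}_d e_{i+1}$ because $\tau(m)\in \widehat{Z}_d\,\tau(e_i)=\widehat{Z}_d e_{i+1}$; and it is invertible with inverse $n\mapsto 1\otimes \tau^{-1}(n)$.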
The {\em zigzag algebra} $Z_d$ of finite type $A_{d-1}$ is by definition the idempotent subalgebra \[ (e_1+\cdots+e_{d-1})\widehat{Z}_d(e_1+\cdots+e_{d-1}).\] \subsection{The birepresentations} Let $Z=Z_d$ denote the zigzag algebra of finite type $A_{d-1}$ for some fixed $d\geq 3$. Recall the finitary birepresentation $\mathbf{M}_d$ of $\EuScript{S}_d$ acting on $Z$-$\mathrm{gproj}$, the finitary category of finite-dimensional, graded projective $Z$-modules, by graded, biprojective $Z$-$Z$-bimodules. Under this birepresentation, $\mathbbm{1}=R$ acts by tensoring (over $Z$) with $Z$ and each $\mathrm{B}_i$ acts by tensoring (over $Z$) with $Ze_i\otimes e_iZ\langle 1\rangle$, for $i=1,\ldots,d-1$. The image of the generating Soergel diagrams is given by \[ \begin{array}{lcrcl} \mathbf{M}_d\left( \xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=-.5,xscale=.5,shift={(5,-2)}] \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=0, tikzdot]{} node[below]{\tiny $i$}; \end{scope} }} \endxy \right) & \colon & Z e_i \otimes e_i Z\langle 1 \rangle & \to &Z \\ && ae_i \otimes e_ib & \mapsto & ae_ib, \\[2ex] \mathbf{M}_d\left( \xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=-.5,xscale=.5,shift={(5,-2)}] \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=1, tikzdot]{} node[above, yshift=14pt]{\tiny $i$}; \end{scope} }} \endxy \right) & \colon & Z & \to & Z e_i \otimes e_i Z\langle 1 \rangle \\ && e_j & \mapsto & \begin{cases} (-1)^i \left(\ell_i \otimes e_i + e_i \otimes \ell_i\right), & j=i;\\ (-1)^i\left(j\vert i \otimes i \vert j\right), & j\pm 1=i, \end{cases} \\[2ex] \mathbf{M}_d\left(\xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=.5,xscale=.5,shift={(8,2)}] \draw[ultra thick,blue] (0,0)-- (0, 1) node[above]{\tiny $i$}; \draw[ultra thick,blue] (-1,-1) -- (0,0) node[below, shift={(-0.5,-0.5)}]{\tiny $i$}; \draw[ultra thick,blue] (1,-1) -- (0,0) node[below, shift={(0.5,-0.5)}]{\tiny $i$}; \end{scope} }} \endxy \right) &\colon & Z e_i \otimes e_i Z e_i \otimes e_i Z \langle 2 \rangle & \to & Z e_i \otimes e_i Z \langle 1 \rangle \\ && e_i \otimes e_i ae_i \otimes e_i &\mapsto & (-1)^i \mathrm{tr}(e_i a e_i) e_i \otimes e_i \\[2ex] \mathbf{M}_d\left(\xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=.5,xscale=.5,shift={(8,2)}] \draw[ultra thick,blue] (0,0)-- (0, 1) node[below, yshift=-14pt]{\tiny $i$}; \draw[ultra thick,blue] (0,1) -- (-1,2) node[above]{\tiny $i$}; \draw[ultra thick,blue] (0,1) -- (1,2) node[above]{\tiny $i$}; \end{scope} }} \endxy \right) &\colon & Z e_i \otimes e_i Z \langle 1 \rangle &\to & Z e_i \otimes e_i Z e_i \otimes e_i Z \langle 2 \rangle \\ && e_i \otimes e_i &\mapsto & e_i \otimes e_i \otimes e_i, \end{array} \] while all other generating Soergel diagrams are sent to zero. The proof that this is well-defined is a straightforward computation and similar to the proof of \cite[Theorem I]{mackaay-tubbenhauer}. It is easy to see that this birepresentation decategorifies to the representation $M_d$ of $H_d$, given in \eqref{eq:M}. Now, consider the triangulated birepresentations $\mathbf{M}^{\mathcal{E}v_{r,s}}$ and $\mathbf{M}^{\mathcal{E}v'_{-r,-s}}$ of $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$, for $r,s\in \mathbb{Z}$, obtained by pulling ${\EuScript{K}^b}(\mathbf{M})$ back through the evaluation functors $\mathcal{E}v_{r,s}$ and $\mathcal{E}v'_{-r,-s}$. These decategorify to $M^{\ev_{a}}$ and $M^{\ev_{a^{-1}}}$ defined in \eqref{eq:action-rho} and \eqref{eq:action-prime-rho}, respectively, where $a=(-1)^s q^r$. 
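Before specializing the parameters $r$ and $s$, let us record an elementary sanity check on $\mathbf{M}_d$, which can be read off directly from the spaces $e_iZe_j$ (we suppress the precise grading shifts in the last isomorphism, which are forced by the graded dimensions involved): since $e_iZe_j=\{0\}$ for $\vert i-j\vert >1$, while $e_iZe_i=\mathbb{C}e_i\oplus\mathbb{C}\ell_i$ and $e_iZe_{i\pm 1}$ is one-dimensional, the representing bimodules satisfy
\begin{gather*}
\mathbf{M}_d(\mathrm{B}_i\mathrm{B}_j)\cong Ze_i\otimes (e_iZe_j)\otimes e_jZ\langle 2\rangle=\{0\},\qquad \vert i-j\vert>1,\\
\mathbf{M}_d(\mathrm{B}_i\mathrm{B}_i)\cong \mathbf{M}_d(\mathrm{B}_i)\langle 1\rangle\oplus\mathbf{M}_d(\mathrm{B}_i)\langle -1\rangle,
\qquad
\mathbf{M}_d(\mathrm{B}_i\mathrm{B}_{i\pm 1}\mathrm{B}_i)\cong \mathbf{M}_d(\mathrm{B}_i),
\end{gather*}
matching the relations $b_i^2=(q+q^{-1})b_i$, $b_ib_{i\pm 1}b_i=b_i$ and $b_ib_j=0$ (for $\vert i-j\vert>1$) satisfied by the operators through which $H_d$ acts on $M_d$.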
The case $(r,s)=(d-2, 2-d)$ is somewhat special, as it corresponds to the so-called {\em Tate twist}, but the general case can easily be derived from this one by shifting the bigrading in all arguments below. To keep the notation simple, we therefore consider $\mathbf{M}^{\mathcal{E}v_{r,s}}$ for the fixed choice $(r,s)=(d-2,2-d)$ first. Define the complex $$X_0:= \underline{Ze_{d-1}\langle 1\rangle} \to Ze_{d-2}\langle 2\rangle \to\cdots \to Ze_1\langle d-1\rangle$$ where the term $Ze_{d-1}\langle 1\rangle$ is in homological degree $0$ and the differential in position $i$ is given by right multiplication by $d-i-1\vert d-i-2$. We further set $X_i:=Ze_i$, for $i=1,\ldots,d-1$. In Proposition~\ref{prop:fincovgen}, the rank of the bigraded-finitary cover $\mathbf{L}$ of an evaluation cell birepresentation is not necessarily minimal. In the following proposition, we give a minimal bigraded-finitary cover for $\mathbf{M}^{\mathcal{E}v_{r,s}}$.
\begin{prop}\label{prop:invariant} The bigraded-finitary subcategory \[ \widehat{\mathbf{M}}_{d-2,2-d}:= \mathrm{add}\left\{(X_0\oplus X_1\oplus\cdots \oplus X_{d-1}) \langle i\rangle[j]\mid i,j\in \mathbb{Z} \right\} \] is stable under the action of $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$, and hence carries the structure of a bigraded-finitary birepresentation of $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$, which we denote by the same symbol. \end{prop}
\begin{proof} We need to check stability under $\mathrm{B}_1,\ldots, \mathrm{B}_{d-1}$ and $\mathrm{T}_{\rho}$. The action of $\mathrm{B}_1,\ldots, \mathrm{B}_{d-1}$ stabilises $\mathrm{add}\left\{X_1\oplus\cdots \oplus X_{d-1}\langle i\rangle[j]\mid i,j\in \mathbb{Z} \right\}$ since this is just the finitary birepresentation of $\EuScript{S}_d$ described above. We therefore first compute $\mathrm{B}_i(X_0)$ for $i\in \{1,\ldots, d-1\}$ and then verify stability of $\widehat{\mathbf{M}}_{d-2,2-d}$ under $\mathrm{T}_{\rho}$. Notice that, for $i\in \{2,\ldots, d-2\}$, $\mathrm{B}_i(X_0)$ is given by the complex $$Ze_i\otimes \big(e_iZe_{i+1}\langle d-i \rangle \to e_iZe_{i}\langle d-i +1 \rangle \to e_iZe_{i-1}\langle d-i+2 \rangle \big) $$ and the complex of vector spaces in the right tensor factor is null-homotopic, since the first map embeds a one-dimensional space into a two-dimensional space, and the second map is a surjection onto another one-dimensional space. Hence the whole complex is null-homotopic. Further, $\mathrm{B}_1(X_0)$ is given by $$Ze_1\otimes \big( e_1Ze_2\langle d-1\rangle \to e_1Ze_1\langle d\rangle \big)$$ with map $1|2\mapsto \ell_1$, which is injective, hence the summand surviving Gaussian elimination is $Ze_1\otimes e_1\langle d\rangle $ in homological degree $d-2$. Thus the result is homotopy equivalent to $Ze_1\langle d\rangle [2-d]$. At the other extreme, $\mathrm{B}_{d-1}(X_0)$ is given by $$Ze_{d-1}\otimes \big( \underline{e_{d-1}Ze_{d-1}\langle 2\rangle }\to e_{d-1}Ze_{d-2}\langle 3\rangle \big)$$ where the map is right multiplication by $d-1|d-2$, which is surjective. The kernel is thus $Ze_{d-1}\otimes \ell_{d-1}\langle 2\rangle$ and the result is homotopy equivalent to $Ze_{d-1}$ without any shifts. Thus $\widehat{\mathbf{M}}_{d-2,2-d}$ is stable under the action of $\mathrm{B}_1,\ldots,\mathrm{B}_{d-1}$. It remains to show that $\widehat{\mathbf{M}}$ is stable under the action of $\mathrm{T}_{\rho}$.
Recall from Section~\ref{sec:defevalfunctor} that $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})= \mathrm{T}^{-1}_{1}\cdots \mathrm{T}^{-1}_{d-1}\langle d-2\rangle [2-d]$ and \begin{equation*} \mathrm{T}_i^{-1} = R\langle -1\rangle \xra{ \mspace{15mu} \xy (0,0)*{ \tikzdiagc[yscale=.3,xscale=.25]{ \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=0, tikzdot]{}; }}\endxy \mspace{15mu} }\underline{\textcolor{blue}{\rB_i}}. \end{equation*} Using the definition of $\mathbf{M}_d$ above, it is easy to see that the complex representing $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})$ is
\begin{equation}\label{eq:imageTrho} \resizebox{\textwidth}{!}{ \xymatrix@R=3pt { &\underline{ Ze_1\otimes e_1Z \langle 1 \rangle} \ar[dr]&&&&\\ &&Ze_1\otimes e_2Z\langle 2 \rangle \ar[dr]&&&\\ &\underline{ Ze_2\otimes e_2Z\langle 1 \rangle} \ar[ur]\ar[dr] &&\ddots\ar[dr]&&\\ &&\ddots&&Ze_1\otimes e_{d-2}Z\langle d-2 \rangle \ar[dr]&\\ Z\langle -1\rangle \ar[uuuur]\ar[uur]\ar[ddddr]\ar[ddr]&&&\ar[dr]\ar[ur]&& Ze_1\otimes e_{d-1}Z\langle d-1 \rangle \\ &&\iddots&&Ze_2\otimes e_{d-1}Z\langle d-2 \rangle \ar[ur]&\\ &\underline{Ze_{d-2}\otimes e_{d-2}Z\langle 1 \rangle }\ar[dr]\ar[ur]&&\ar[ur]\iddots&&\\ &&Ze_{d-2}\otimes e_{d-1}Z\langle 2 \rangle \ar[ur]&&&\\ &\underline{Ze_{d-1}\otimes e_{d-1}Z\langle 1 \rangle }\ar[ur]&&&& } } \end{equation}
Here the leftmost differential is $d^{-1} = (-d^{-1}_1, d^{-1}_2,\ldots, (-1)^{d-1}d^{-1}_{d-1})$, where each $d^{-1}_i \colon Z\langle -1\rangle \to Ze_i\otimes e_iZ\langle 1\rangle$ is given by \[ e_j \mapsto \begin{cases} \ell_i \otimes e_i + e_i \otimes \ell_i, &\mathrm{if}\, i=j;\\ j\vert i \otimes i \vert j, &\mathrm{if}\, i\ne j. \end{cases} \] The other differentials are all vectors of $Z$-$Z$-bimodule maps which are equal to the tensor product of $\pm \mathrm{id}$ on one tensor factor and $i\vert i+1$, for some $i=1,\ldots, d-2$, on the other tensor factor. For our arguments below, the signs of these maps are not important. We are first going to prove that $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})(X_i)\simeq X_{i+1}$, for any $i=1,\ldots, d-2$. Since $e_jZe_i=\{0\}$ when $\vert i-j\vert >1$, the non-zero part of the complex corresponding to $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})(X_i)$ is
\begin{equation*} \resizebox{\textwidth}{!}{ \xymatrix { &\underline{ Ze_{i-1}\otimes e_{i-1}Ze_i\langle 1 \rangle }\ar[r] \ar[dr]& Ze_{i-2}\otimes e_{i-1}Ze_i\langle 2 \rangle \ar[r]\ar[dr]&\cdots& Ze_1\otimes e_{i-1}Ze_i \langle i-1 \rangle \ar[dr]&&\\ Ze_i\langle -1 \rangle \ar[ur]\ar[r]\ar[dr]&\underline{ Ze_{i}\otimes e_{i}Ze_i\langle 1 \rangle }\ar[r]\ar[dr]& Ze_{i-1}\otimes e_{i}Ze_i\langle 2 \rangle \ar[r]\ar[dr]&\cdots& Ze_2\otimes e_{i}Ze_i\langle i-1 \rangle \ar[r]\ar[dr]&Ze_1 \otimes e_{i}Ze_i\langle i \rangle \ar[dr]&\\ & {\color{purple}\underline{Ze_{i+1}\otimes e_{i+1}Ze_i\langle 1 \rangle }}\ar[r]& Ze_{i}\otimes e_{i+1}Ze_i\langle 2 \rangle \ar[r]&\cdots&Ze_3\otimes e_{i+1}Ze_i \langle i-1 \rangle \ar[r]&Ze_2\otimes e_{i+1}Ze_i \langle i \rangle \ar[r]& Ze_1\otimes e_{i+1}Ze_i\langle i+1 \rangle \\ } } \end{equation*}
By Gaussian elimination, one can then see that this is homotopy equivalent to the purple $Ze_{i+1}\otimes e_{i+1}Ze_i\langle 1 \rangle$ in homological degree zero, which is isomorphic to $X_{i+1}$. To explain this, we identify each vertex of the diagram above by its pair of coordinates (row number, column number), where we number the rows of the complex by $1,2,3$ from top to bottom and the columns by their homological degree.
As in the diagram above, we omit the signs of all maps below, since they are not important for our argument. Using these conventions, first note that the part of the complex $(2,-1)\to (2,0)\to (3,1)$ is given by $$Ze_i \otimes \big(\mathbb{C}\langle -1\rangle \to \underline{e_iZe_i\langle 1\rangle} \to e_{i+1}Ze_i\langle 2\rangle\big)$$ where the complex of vector spaces is split by the same arguments as above and hence null-homotopic. Thus these three terms cancel in the Gaussian elimination procedure. Similarly, every part of the complex of the form $(1,j)\to (2,j+1)\to (3,j+2)$, for $j=0,\ldots, i-2$, is given by $$Ze_{i-j-1}\otimes \big(e_{i-1}Ze_i\langle j+1\rangle \to e_iZe_i \langle j+2\rangle\to e_{i+1}Ze_i\langle j+3\rangle \big), $$ which is split and hence null-homotopic. Hence all these triples of terms cancel in the Gaussian elimination procedure, which in the end only leaves the purple one, proving the desired homotopy equivalence. The next homotopy equivalence we are going to prove is $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})(X_{d-1})\simeq X_0$. The non-zero part of the complex $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})(X_{d-1})$ is
\begin{equation*} \resizebox{\textwidth}{!}{ \xymatrix { & \underline{Ze_{d-2} \otimes e_{d-2}Ze_{d-1}\langle 1\rangle}\ar[r]\ar[ddr] & Ze_{d-3} \otimes e_{d-2}Ze_{d-1}\langle 2\rangle \ar[r]\ar[ddr]&\cdots \ar[r]\ar[ddr] & Ze_1\otimes e_{d-2}Ze_{d-1}\langle d-1 \rangle \ar[dr] & \\ Ze_{d-1}\langle -1\rangle \ar[ur] \ar[dr]&&&&& {\color{purple}Ze_1 \otimes e_{d-1}Ze_{d-1}\langle d \rangle }\\ & {\color{purple}\underline{Ze_{d-1} \otimes e_{d-1}Z e_{d-1} \langle 1\rangle}} \ar[r] & {\color{purple}Ze_{d-2} \otimes e_{d-1}Ze_{d-1}\langle 2 \rangle} \ar[r] & \cdots \ar[r]& {\color{purple}Ze_2 \otimes e_{d-1}Z e_{d-1}\langle d-1\rangle} \ar[ur] & \\ } } \end{equation*}
The differentials are as above and by Gaussian elimination this complex is homotopy equivalent to the direct summand of the purple subcomplex for which the right tensor factor is restricted to multiples of $e_{d-1}$. This direct summand is indeed isomorphic to $X_0$. Note that again all descending maps in the complex are given by the tensor product of the identity of some $Ze_{d-j}$ with an injective map of vector spaces, and are hence split. This implies that all black terms in the complex are killed and only one direct summand of each purple term (the one given by $Ze_{d-1-j} \otimes e_{d-1}\langle j+1\rangle$) survives in the Gaussian elimination procedure. Thus the complex is homotopy equivalent to $X_0$, as claimed. The remaining case of the action of $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})$ on $X_0$ can be replaced by considering the action of $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})^{-1}$ on $X_1$, which is analogous to the action of $\mathcal{E}v_{d-2,2-d}(\mathrm{T}_{\rho})$ on $X_{d-2}$. \end{proof}
Similarly, we can define an additive birepresentation $\widehat{\mathbf{M}}_{r,s}$ of $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$, for any $r,s\in \mathbb{Z}$.
\begin{cor}\label{cor:fincover} For any $r,s\in \mathbb{Z}$, there is a morphism of additive $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$-birepresentations $\Phi\colon \widehat{\mathbf{M}}_{r,s}\to \mathbf{M}^{\mathcal{E}v_{r,s}}$, induced by the embedding from Proposition~\ref{prop:invariant} with a suitable bigrading shift.
Moreover, $\Phi$ extends to a morphism of triangulated $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$-birepresentations $\Phi\colon {\EuScript{K}^b}(\widehat{\mathbf{M}}_{r,s}) \to \mathbf{M}^{\mathcal{E}v_{r,s}}$, which is essentially surjective and faithful, but not full. \end{cor}
\begin{proof} All assertions follow immediately from Proposition~\ref{prop:invariant}, except the lack of fullness. Without loss of generality, assume that $(r,s)=(d-2,2-d)$ again. Note that in $\widehat{\mathbf{M}}$ the only non-radical morphisms between $X_0$ and $X_i$, for $i=1,\ldots, d-1$, are from $X_0$ to $X_{d-1}$ and from $X_1[2-d]$ to $X_0$. This implies that $X_0\not\cong (\underline{X_{d-1}\langle 1 \rangle}\to X_{d-2}\langle 2\rangle \to \ldots \to X_1\langle d-1\rangle)$ in ${\EuScript{K}^b}(\widehat{\mathbf{M}})$. \end{proof}
\begin{rem} Note that $\widehat{\mathbf{M}}_{r,s}$ decategorifies to the Graham--Lehrer cell module $\widehat{M}_{d,\lambda}$ with $\lambda= (-1)^{s-(2-d)}q^{r-(d-2)}$, as can be easily seen by comparing the action of the generators on the $X_i$ with the decategorified action in \eqref{eq:GL} and \eqref{eq:GLext}. Moreover, $\Phi$ decategorifies to the projection of $\widehat{M}_{d,\lambda}$ onto $L^{+}_{d,\lambda}$. \end{rem}
\begin{prop}\label{prop:isoalgs} For any $r,s\in \mathbb{Z}$, there is an isomorphism of ungraded algebras \[ \mathrm{End}_{ \mathbf{M}^{\mathcal{E}v_{r,s}}}(X_0\oplus\cdots\oplus X_{d-1})\cong \widehat{Z}_d. \] \end{prop}
\begin{proof} Without loss of generality, we assume that $(r,s)=(d-2,2-d)$, as before. Denote by $p_{d-1}\colon X_0 \to X_{d-1}$ the projection onto the component in homological degree $0$ and by $j_{d-1}\colon X_{d-1} \to X_0$ the map induced by multiplication with $\ell_{d-1}$. Similarly, denote by $j_1\colon X_1[2-d] \to X_0$ the inclusion of the component in homological degree $d-2$ and by $p_1\colon X_0 \to X_1[2-d]$ the map induced by multiplication with $\ell_1$. We remark that $p_{d-1},j_{d-1}, j_1,p_1$ have degrees $1, 1, 1-d, d+1$, respectively. Moreover, we denote the maps $Ze_i \to Ze_{i\pm 1}$ given by right multiplication by $i\vert i\pm 1$ by $r_{i\vert i\pm 1}$. Then it is a straightforward calculation to verify that $\mathrm{End}_{ \mathbf{M}^{\mathcal{E}v_{r,s}}}(X_0\oplus\cdots\oplus X_{d-1})$ is given by the path algebra of the quiver $$\xymatrix{ &&&&\overset{0}{\bullet}\ar@/_2pc/_{p_1}[ddllll]\ar@/^2pc/^{p_{d-1}}[ddrrrr]&&&&\\\\ \overset{1 }{\bullet}\ar@/^0.5pc/^{r_{1\vert 2}}[rr]\ar@/^1pc/_{j_1}[uurrrr]&& \overset{2}{\bullet}\ar@/^0.5pc/^{r_{2\vert 1}}[ll]&& \cdots && \overset{d-2}{\bullet} \ar@/^0.5pc/^{r_{d-2\vert d-1}}[rr]&& \overset{d-1}{\bullet}\ar@/^0.5pc/^{r_{d-1\vert d-2}}[ll]\ar@/_1pc/^{j_{d-1}}[uullll] } $$ modulo the relations defining $\widehat{Z}_d$ under the isomorphism sending $r_{i\vert i\pm 1}$ to $i\pm 1\vert i$, $p_i$ to $i\vert 0$ and $j_i$ to $0\vert i$ for $i\in \{1, d-1\}$. To verify the sign in the relation involving $0$ we observe that the endomorphism of $X_0$ given by $j_1p_1 + (-1)^{d-1}j_{d-1}p_{d-1}$ is (omitting shifts for readability) given by the solid arrows in the diagram $$\xymatrix{Ze_{d-1}\ar[r]\ar_{\ell_{d-1}}[d]& Ze_{d-2}\ar^{0}[d]\ar^{r_{d-2\vert d-1}}@{-->}[dl] \ar[r]& \cdots\ar[r] &Ze_2\ar[r]\ar^{0}[d]\ar^{r_{2\vert 3}}@{-->}[dl] &Ze_1\ar^{\ell_1}[d]\ar^{r_{1\vert 2}}@{-->}[dl]\\ Ze_{d-1}\ar[r]& Ze_{d-2} \ar[r]& \cdots\ar[r] &Ze_2\ar[r] &Ze_1\\ }$$ and is null-homotopic via the homotopy indicated by the dashed arrows.
\end{proof} \begin{rem} The natural bigrading of $\mathrm{End}_{ \mathbf{M}^{\mathcal{E}v_{r,s}}}(X_0\oplus\cdots\oplus X_{d-1})$ induces a bigrading on $\widehat{Z}$ via the isomorphism in Proposition~\ref{prop:isoalgs}. The first entry of this bigrading is compatible with the above grading of $\widehat{Z}$ except for the degrees of the arrows between $0$ and $1$. \end{rem} The explicit $2$-action of $\widehat{\EuScript{S}}^{\mathrm{ext}}_d$ on $\widehat{\mathbf{M}}_{r,s}$ is given \begin{itemize} \item on $1$-morphisms by \begin{eqnarray*} F(i) & :=&\widehat{Z}_d e_i \otimes e_i\widehat{Z}_d\langle 1\rangle,\quad i=0,\ldots, n;\\ F(\pm) & := & \widehat{Z}_d^{\tau^{\pm 1}}\langle r\rangle [s], \end{eqnarray*} \item on $2$-morphisms by \begin{gather*} \begin{array}{lcrcl} F\left( \xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=-.5,xscale=.5,shift={(5,-2)}] \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=0, tikzdot]{} node[below]{\tiny $i$}; \end{scope} }} \endxy \right) & \colon & \widehat{Z}_d e_i \otimes e_i \widehat{Z}_d\langle 1 \rangle & \to &\widehat{Z}_d \\ && ae_i \otimes e_ib & \mapsto & ae_ib, \end{array}\\ \begin{array}{lcrcl} F\left( \xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=-.5,xscale=.5,shift={(5,-2)}] \draw[ultra thick,blue] (-1,0) -- (-1, 1)node[pos=1, tikzdot]{} node[above, yshift=14pt]{\tiny $i$}; \end{scope} }} \endxy \right) & \colon & \widehat{Z}_d & \to & \widehat{Z}_d e_i \otimes e_i \widehat{Z}_d\langle 1 \rangle \\ && e_j & \mapsto & \begin{cases} (-1)^i \left(\ell_i \otimes e_i + e_i \otimes \ell_i\right), & j=i;\\ (-1)^i\left(j\vert i \otimes i \vert j\right), & j\pm 1=i\ne 0; \\ 1\vert 0 \otimes 0 \vert 1, & j=1, i=0;\\ (-1)^d (d-1\vert 0 \otimes 0 \vert d-1), & j=d-1, i=0, \end{cases} \end{array}\\ \begin{array}{lcrcl} F\left(\xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=.5,xscale=.5,shift={(8,2)}] \draw[ultra thick,blue] (0,0)-- (0, 1) node[above]{\tiny $i$}; \draw[ultra thick,blue] (-1,-1) -- (0,0) node[below, shift={(-0.5,-0.5)}]{\tiny $i$}; \draw[ultra thick,blue] (1,-1) -- (0,0) node[below, shift={(0.5,-0.5)}]{\tiny $i$}; \end{scope} }} \endxy \right) &\colon & \widehat{Z}_d e_i \otimes e_i \widehat{Z}_d e_i \otimes e_i \widehat{Z}_d \langle 2 \rangle & \to & \widehat{Z}_d e_i \otimes e_i \widehat{Z}_d \langle 1 \rangle \\ && e_i \otimes e_i ae_i \otimes e_i &\mapsto & (-1)^i \mathrm{tr}(e_i a e_i) e_i \otimes e_i \\[2ex] F\left(\xy (0,0)*{ \tikzdiagc[scale=1]{ \begin{scope}[yscale=.5,xscale=.5,shift={(8,2)}] \draw[ultra thick,blue] (0,0)-- (0, 1) node[below, yshift=-14pt]{\tiny $i$}; \draw[ultra thick,blue] (0,1) -- (-1,2) node[above]{\tiny $i$}; \draw[ultra thick,blue] (0,1) -- (1,2) node[above]{\tiny $i$}; \end{scope} }} \endxy \right) &\colon & \widehat{Z}_d e_i \otimes e_i \widehat{Z}_d \langle 1 \rangle &\to & \widehat{Z}_d e_i \otimes e_i \widehat{Z}_d e_i \otimes e_i \widehat{Z}_d \langle 2 \rangle \\ && e_i \otimes e_i &\mapsto & e_i \otimes e_i \otimes e_i. \end{array} \end{gather*} The generating $2$-morphisms involving an oriented black strand in \eqref{eq:orientedcoloredgens1} and \eqref{eq:orientedcoloredgens2} are sent to the isomorphisms in \eqref{eq:lefttau1} and \eqref{eq:lefttau3}, respectively, and all other generating $2$-morphisms are sent to zero. \end{itemize} \begin{rem} We could alternatively have used the evaluation functor $\mathcal{E}v'_{-r,-s}$ to obtain another evaluation birepresentation and its finitary cover. \end{rem}
\section{Introduction} Advanced mathematics is rooted in the acquisition of elementary concepts, such as number symbols and arithmetic operators. However, despite its apparent simplicity, learning to manipulate symbolic numbers is a sophisticated process that occupies children for several years during development and formal education \cite{ontogenetic_origins, number_sense}. Indeed, even mastering a basic procedure such as multi-digit addition involves a series of non-trivial skills: operands must be correctly aligned by place value, summations must be carried out in the proper order and regrouping must be performed by keeping track of the corresponding carry. Most importantly, the addition procedure should work for any number of operands, of any length. The recent achievements of Artificial Intelligence (AI) in solving high-level reasoning tasks \cite{external_memory, rel_inductive_bias} have spurred interest in numerical cognition as a stimulating challenge for deep learning models \cite{dl_for_symb_math, math_reasoning, math_concepts}. Promising results have been obtained in a variety of domains, ranging from numerical reasoning over textual input \cite{math_language} to solving differential equations \cite{diff_equations} and automated theorem proving \cite{prove_theo_language}. However, deep learning often fails in elementary tasks that require systematic generalization: a prominent example is given by symbolic arithmetic, where neural networks do not easily extrapolate outside the numerical range encountered during training \cite{nalu}. Considering that digital calculators can solve such tasks in the blink of an eye, why is it so difficult to teach them to machine learning models? In trying to answer this question, we should keep in mind that it took centuries for humans to grasp even the most basic arithmetic principles, which were later implemented in digital calculators. Building machines that can autonomously discover algorithmic procedures might thus lay the foundations for creating more human-like artificial general intelligence. In this paper we describe an innovative deep learning architecture that learns to generalize arithmetic knowledge well beyond the numerical examples included in the training distribution. The model is trained on a set of multi-digit addition problems consisting of up to 4 operands, each composed of up to 10 digits; it is then tested over a much wider range of problems, featuring up to 10 operands and thousands of digits. The performance of the model is benchmarked against other recent models, and its internal functioning is investigated through ablation studies and analysis of the emerging internal representations. \section{Related Work} Most contemporary machine learning approaches tackle symbolic arithmetic tasks by introducing explicit biases or human-engineered features specifically built for numerical reasoning. For example, the generalization performance of recurrent neural networks on single-digit addition was improved by designing activation functions enriched with primitive arithmetic operators \cite{nalu}, and further refinements of the same idea led to even higher extrapolation performance \cite{nau}. An alternative path is given by models that exploit an external memory to learn algorithmic tasks, such as Differentiable Neural Computers \cite{external_memory}, Grid LSTMs \cite{grid_lstm}, and Neural GPUs \cite{ngpu}. 
The latter two have been tested on multi-digit addition and multiplication, though generalization outside the training range was not systematically investigated for multi-operand problems (e.g., Neural GPU training examples included up to 20 bits and generalization was tested on problems of up to 2000 bits, but only for 2-term additions). One key property of algorithmic tasks is given by their sequential nature, which motivates the use of recurrent models. A particularly relevant architecture in this respect is the Universal Transformer \cite{universal_transformer}, which combines the parallelizability of feed-forward attention mechanisms with the inductive bias of recurrent networks. Being a parallel-in-time architecture, the Universal Transformer receives the entire series of input tokens at once; however, its recurrent nature allows it to iteratively refine its internal state and thus produce output responses dynamically. Though this architecture was shown to successfully learn a variety of algorithmic tasks, its performance on integer addition was not satisfactory \cite{universal_transformer}. Another important aspect to consider while learning an algorithmic task is that recurrent models should learn to run the necessary number of computational steps to process input sequences of different complexity. This problem can be tackled by embedding halting units into the model architecture, as in adaptive computation time (ACT) \cite{act} and PonderNet \cite{pondernet}. Finally, it is well-known that certain arithmetic tasks (and multi-digit addition in particular) can be performed much more easily and quickly once numbers are aligned by place value. In agreement with this intuition, it has been shown that operand alignment indeed plays a key role in successfully learning symbolic addition with Neural GPUs \cite{ngpu_dec}. This finding motivated the design of more advanced mechanisms for input pre-processing, which allow mapping the token sequence into a grid-like format to facilitate subsequent manipulation \cite{seq2grid}. In this work we will combine several of the architectures and processing mechanisms reviewed above, with the goal of producing a comprehensive model that can more effectively tackle extrapolation in symbolic addition tasks. In Section \ref{sec:prop_appr} we will provide the formal details of our model, while in Section \ref{sec:exp_setup} we will describe the datasets, model parameters and training/testing details. Results and analyses will be presented in Section \ref{sec:results} and critically discussed in Section \ref{sec:conclusion}. \section{Proposed Approach} \label{sec:prop_appr} \subsection{Problem Definition} \label{subsec:prob_def} The learning task considered in this work requires summing an arbitrary number of operands, each composed of an arbitrary number of digits. These two degrees of freedom will be the main focus for measuring the extrapolation capabilities of the proposed architecture. An instance of the addition problem will be denoted by $\Pi([N_1,N_2],[D_1,D_2])$, where the four positive integers $[N_1,N_2]$ and $[D_1,D_2]$ denote the intervals for the number of operands and digits, respectively. For example, existing models such as the Grid LSTM \cite{grid_lstm} have been successful with $\Pi([2,2],[15,15])$, that is, sums of 2 operands of 15 digits each. 
The input for such addition problems can be represented as a symbol sequence $I\in\Sigma^S$, where $\Sigma=\{\text{PAD},0,1,\dots,8,9,+,=\}$ is the base-10 addition alphabet and $S$ is the length of the input sequence. $I$ is constrained to contain $n$ terms of $d_1,\dots,d_n$ digits, with $n\in[N_1,N_2]$ and $d_i\in[D_1,D_2]\;\forall i=1,\dots,n$. Four properties should be taken into account when designing a model that can successfully solve this kind of task: \begin{enumerate} \item Capability of manipulating discrete entities: humans break problems into easy-to-use parts that can be effectively manipulated and re-combined. Neural networks mimic this process when storing and moving data in an external memory \cite{external_memory} or when aggregating tokens in a sequence through self-attention mechanisms \cite{transformer}. \item Translation equivariance: a translation of the input should produce an equivalent translation of the output, which is useful for learning operators that do not depend on the absolute position they are applied to. In vision tasks this can be achieved by using convolutions \cite{shift_invariance} or relative positional encoding with self-attention \cite{transformer_tr_inv}. \item Permutation variance: permuting the order of the input could change the output. The majority of neural operators possess this property (e.g., permuting pixels in an image changes the activation of convolutional filters). On the contrary, self-attention without positional encoding is permutation invariant, as changing the order of tokens inside the sequence leads to the same output. \item Adaptive time: execution steps of an algebraic procedure should depend on the complexity of the input. In recurrent models the number of steps is often fixed \textit{a priori}: to adaptively stop execution when deemed fit, we must introduce dynamic halting mechanisms \cite{act, pondernet}. \end{enumerate} \subsection{Model Architecture} \label{subsec:model_arch} Our architecture is built around the properties introduced in Sec. \ref{subsec:prob_def} and is composed of several modules working together (see Fig. \ref{fig:full_architecture}). For the purposes of this paper, an $L$-layer feedforward network is defined as follows: \begin{IEEEeqnarray}{c} \ffn_{L}(x)= \begin{cases} W_L \silu(\ffn_{L-1}(x)) + b_L & \text{if } L>1 \\ W_1 x + b_1 & \text{if } L=1 \end{cases} \label{eq:ffn} \end{IEEEeqnarray} where $W$ are the weights, $b$ are the biases and $\silu(\cdot)$ is the Sigmoid Linear Unit \cite{silu}. Processing is carried out through the following stages: \subsubsection{Input and output} \label{subsubsec:input_output} The input symbol sequence $I\in\Sigma^S$ of length $S$ is first embedded into a corresponding vector sequence $X\in\mathbb{R}^{S\times d_{emb}}$ element-wise, using the learnable embedding matrix $E\in\mathbb{R}^{|\Sigma|\times d_{emb}}$, which maps each symbol in $\Sigma$ to a vector $x\in\mathbb{R}^{d_{emb}}$ in a lookup table fashion. Through learning, an embedding vector $x$ encodes in real numbers the meaning of its associated symbol, with no positional information, as the latter is added by local attention (Sec. \ref{subsubsec:ut}). $d_{emb}$ is the size of the embedding vectors, and is used throughout the whole architecture to comply with its recurrent structure. As output, through a linear projection followed by the $\softmax$ function, the model produces a sequence of probabilities $Y$ of length $T$, such that each element is a distribution on the symbols of $\Sigma$. 
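As a concrete reference, the following PyTorch-style sketch illustrates the embedding lookup and the output head described above; the alphabet ordering, sizes and variable names are our own illustrative choices, not taken from the released code.
\begin{verbatim}
import torch
import torch.nn as nn

SIGMA = ["PAD"] + [str(d) for d in range(10)] + ["+", "="]  # base-10 alphabet
d_emb = 64                               # embedding size used throughout the model

embed = nn.Embedding(len(SIGMA), d_emb)  # learnable lookup table E
out_proj = nn.Linear(d_emb, len(SIGMA))  # linear projection to symbol logits

tokens = torch.tensor([[2, 11, 3, 12]])  # "1+2=" as indices into SIGMA
X = embed(tokens)                        # (batch, S, d_emb), no positional info yet
Y = torch.softmax(out_proj(X), dim=-1)   # per-position distribution over SIGMA
pred = Y.argmax(dim=-1)                  # symbols with maximum probability
\end{verbatim}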
The output symbols can be picked as those with maximum probability. \subsubsection{Seq2Grid Preprocessing} \label{subsubsec:preproc} The vector sequence $X$ is first preprocessed by rearranging the input vectors into a grid $G\in\mathbb{R}^{H\times W\times d_{emb}}$, where $H$ and $W$ are the fixed height and width of the grid. This enables the model to exploit useful structure in the input sequence that might not be evident in its 1-dimensional form. This stage is implemented using a Seq2Grid module \cite{seq2grid}, where grid operations are mirrored horizontally to make the grid readable and already in the right order for producing the output result. Input vectors are processed one at a time, choosing among three possible actions: insert the vector on the top row of the grid, shifting left all elements in that same row (Top List Update); insert the vector on a new empty row, shifting all elements down (New List Push); ignore the vector and hold the grid (No-Op). For each vector $x_t$, the action probabilities $a_{TLU}^{(t)}$, $a_{NLP}^{(t)}$, $a_{NOP}^{(t)}$ are computed through an encoder map, which in the original paper is a recurrent network. We opted for a simpler feedforward network, since in our case the rearrangement does not require considering temporal dependencies: \begin{equation} (a_{TLU}^{(t)}, a_{NLP}^{(t)}, a_{NOP}^{(t)})=\softmax(\ffn_2^{s2g}(x_t)) \label{eq:seq2grid_controller} \end{equation} where the 2-layer $\ffn_2^{s2g}$ has a hidden layer of size $d_{s2g}$. The initial grid $G^{(0)}$ is filled with zeroes. The intermediate grids $G^{(t)}$, $1\leq t\leq S$, are computed as: \begin{IEEEeqnarray*}{rl} G^{(t)}=\;&a^{(t)}_{TLU}TLU(G^{(t-1)},x_t) +\\ &+a^{(t)}_{NLP}NLP(G^{(t-1)},x_t)+a^{(t)}_{NOP}G^{(t-1)}\\ TLU&(G,x)_{i,j}= \begin{cases} x & \text{if }i=1,j=W\\ G_{1,j+1} & \text{if }i=1,j<W\\ G_{i,j} & \text{if }i>1\\ \end{cases} \yesnumber\\ NLP&(G,x)_{i,j}= \begin{cases} x & \text{if }i=1,j=W\\ 0 & \text{if }i=1,j<W\\ G_{i-1,j} & \text{if }i>1\\ \end{cases} \label{eq:seq2grid_mirrored} \end{IEEEeqnarray*} \begin{figure}[t] \centerline{\includegraphics[width=1.\linewidth]{full_architecture_2.png}} \caption{High-level representation of the proposed architecture, which combines Seq2Grid preprocessing with a convolutional Universal Transformer with external memory and dynamic halting mechanisms.} \label{fig:full_architecture} \end{figure} \subsubsection{Universal Transformer with Local Attention} \label{subsubsec:ut} The resulting grid $G^{(S)}$ will henceforth be denoted $G_0$, as it undergoes several computational steps through a Universal Transformer \cite{universal_transformer}. The number of computational steps is decided by a 2-state stochastic process $\Lambda_n$, where state 0 means ``continue'' and state 1 means ``stop'' (the halting mechanism is described in detail below). After halting, the output of the network is directly read from the top row of the grid. A single computational step is defined as: \begin{equation} G_{n+1}=\text{UT}(G_n)= \begin{cases} \text{ConvTransfBlock}(G_n) & \text{if }\Lambda_n=0 \\ G_n & \text{if }\Lambda_n=1 \end{cases} \label{eq:main_ut} \end{equation} where the ConvTransfBlock is a convolutional transformer block implementing the core of the computation on the grid $G_n$. 
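Before detailing the ConvTransfBlock, the mirrored Seq2Grid update above can be sketched as follows; this is a minimal, unbatched version under our own naming, with the controller standing in for $\ffn_2^{s2g}$.
\begin{verbatim}
import torch
import torch.nn as nn

def tlu(G, x):
    # Top List Update: shift the top row left and insert x at its right end
    G2 = G.clone()
    G2[0, :-1] = G[0, 1:]
    G2[0, -1] = x
    return G2

def nlp(G, x):
    # New List Push: shift all rows down and start a new top row ending with x
    G2 = torch.zeros_like(G)
    G2[1:] = G[:-1]
    G2[0, -1] = x
    return G2

def seq2grid(X, controller, H, W):
    # Soft, action-weighted grid construction for one example
    S, d = X.shape
    G = X.new_zeros(H, W, d)                         # G^(0) filled with zeroes
    for t in range(S):
        a = torch.softmax(controller(X[t]), dim=-1)  # (a_TLU, a_NLP, a_NOP)
        G = a[0] * tlu(G, X[t]) + a[1] * nlp(G, X[t]) + a[2] * G
    return G

d_emb = 64
controller = nn.Sequential(nn.Linear(d_emb, 64), nn.SiLU(), nn.Linear(64, 3))
G0 = seq2grid(torch.randn(7, d_emb), controller, H=4, W=6)
\end{verbatim}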
The ConvTransfBlock is implemented as a standard transformer block \cite{transformer} with a local self-attention: \begin{equation} \label{eq:transformer_block} \begin{split} G'&=\LN(G_n+\localattn(G_n)) \\ G_{n+1}&=\LN(G'+\ffn_3^{ut}(G')) \end{split} \end{equation} where $\LN$ is the Layer Normalization and the $\localattn$ operator is an extension of the Stand-Alone Self Attention (SASA) \cite{sasa} with number of groups $g$ and number of heads $h$. The vectors in the input grid $G\in\mathbb{R}^{H\times W\times d_{emb}}$ are first split into $g$ groups $G^l\in\mathbb{R}^{H\times W\times d_{emb}/g}$, then separately and linearly projected into queries, keys, and values: \begin{equation} Q^l=G^lW_Q^l, \; K^l=G^lW_K^l, \; V^l=G^lW_V^l, \quad 1\leq l\leq g \end{equation} where $W_Q^l,W_K^l,W_V^l\in\mathbb{R}^{d_{emb}/g\times d_{emb}/g}$ are weight matrices. Queries, keys and values are concatenated and split again into $h$ parts $Q^m,K^m,V^m$, one for each head. Values are then aggregated through a convolutional operator with weights computed from the usual dot-product: \begin{IEEEeqnarray*}{c} Y_{ij}^m=\sum_{a,b\in\mathcal{N}_k(i,j)}A_{ij,ab}^m V_{ab}^m \\ A_{ij,ab}^m=\softmax_{ab}((Q_{ij}^m+s^m)^T(K_{ab}^m+r_{a-i,b-j}^m)) \yesnumber \end{IEEEeqnarray*} where $A_{ij,ab}^m$ is the attention that position $ij$ pays to position $ab$ at head $m$, $r_{a-i,b-j}\in\mathbb{R}^{k\times k\times d_{emb}}$ is a learned relative positional encoding, $s\in\mathbb{R}^{d_{emb}}$ is a learned query encoding, and $\mathcal{N}_k(i,j)$ is the neighborhood of position $ij$ with spatial extent $k$. Our definition differs from \cite{sasa} in two points: (1) we split vectors at two different points, contrary to SASA, which can be interpreted as the special case $g=h$; (2) we added the query encoding $s$ and extended $r_{a-i,b-j}$. The use of both $r_{a-i,b-j}$ and $s$ is meant to allow the network to build more expressive rules for aggregating value vectors. This can be seen by expanding the expression: \begin{IEEEeqnarray}{rCl} \IEEEeqnarraymulticol{3}{l}{ (Q_{ij}+s)^T(K_{ab}+r_{a-i,b-j})= }\nonumber\\* \quad & = & Q_{ij}^TK_{ab}+Q_{ij}^Tr_{a-i,b-j}+s^TK_{ab}+s^Tr_{a-i,b-j} \label{eq:attn_rules_terms} \end{IEEEeqnarray} The model is free to learn rules where some features are aggregated independently of queries and keys, based only on the relative position information contained in $s^Tr_{a-i,b-j}$ when this term dominates the sum. Likewise, positional information can be partially or totally ignored when the opposite happens for other content-based terms. In other words, the attention paid to tokens can depend on the content of tokens (content-based rules), on the position (position-based rules), or a mixture of the two. Empirical analyses presented later show that the model in fact learns all these kinds of rules (Sec. \ref{subsec:computation}). \subsubsection{Halting Mechanism} \label{subsubsec:halting_mech} As halting policies we consider both a fixed number of steps and a more sophisticated PonderNet policy \cite{pondernet}. In the latter, a separate network produces a single conditioned halting probability at each time step $n$: \begin{equation} \lambda_n=Pr\{\Lambda_n=1|\Lambda_{n-1}=0\}\quad 0\leq n\leq N \end{equation} with the Markov process $\Lambda_n$ starting at $\Lambda_{-1}=0$. 
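To make the four terms of \eqref{eq:attn_rules_terms} concrete before turning to the halting distribution, here is a small numerical sketch for a single head and one query position; all shapes and names are illustrative assumptions of ours.
\begin{verbatim}
import torch

d, k = 8, 3                # per-head channels and spatial extent (illustrative)
q = torch.randn(d)         # query Q_ij at grid position (i, j)
K = torch.randn(k * k, d)  # keys K_ab over the k x k neighborhood
s = torch.randn(d)         # learned query encoding
r = torch.randn(k * k, d)  # learned relative encodings r_{a-i, b-j}

# (Q + s)^T (K + r) splits into four interpretable terms:
logits = (K @ q) + (r @ q) + (K @ s) + (r @ s)
# content-content + content-position + position-content + position-position
attn = torch.softmax(logits, dim=0)  # A_{ij,ab} over the neighborhood
\end{verbatim}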
Returning to the halting mechanism, the \textit{a priori} probability distribution $p_n$ can be computed as a truncated generalized geometric distribution: \begin{equation} p_n=\lambda_n\prod_{i=0}^{n-1}(1-\lambda_i),\quad p_N=1-\sum_{i=0}^{N-1}p_i \end{equation} where $N$ is the minimum number of steps at which the cumulative distribution exceeds the threshold $1-\epsilon$, where $\epsilon$ is a small hyperparameter. During training, the expected value of all losses computed at each time step is taken with probability distribution $p_n$, unlike ACT where only one loss is computed from the expected value of all outputs. This is a significant difference, as in the former the output result does not depend on the distribution $p_n$ except for the stopping decision, whereas the latter takes weighted sums of its internal values, also at evaluation time. PonderNet simplifies halting at evaluation time, because halting events can be sampled as a Bernoulli of probability $\lambda_n$ ($\Lambda_n\sim B(\lambda_n)$). The full PonderNet loss is: \begin{equation} \label{eq:objective_pondernet} \hat{\mathcal{L}}(y,\hat{y}_n)=\sum_{n=0}^N p_n\mathcal{L}(y,\hat{y}_n)+\beta R(p_n) \end{equation} where $y$ is the ground truth and $R(p_n)$ is a regularizer for the \textit{a priori} distribution $p_n$, weighted by the hyperparameter $\beta$. The original paper uses $R(p_n)=\kldiv{p_n}{p_G(\lambda_p)}$ in order to regularize $p_n$ as a geometric distribution $p_G$ (truncated at $N$) of parameter $\lambda_p$. According to the authors, this incentivizes exploration by giving a nonzero probability to all possible steps, while the model learns to use computational time efficiently as a form of Occam's Razor. 
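For reference, a minimal sketch of the truncated distribution $p_n$ above, with a fixed number of steps rather than the $\epsilon$-threshold; this is our own illustrative code.
\begin{verbatim}
import torch

def halting_distribution(lams):
    # p_n = lambda_n * prod_{i<n} (1 - lambda_i); the last entry absorbs
    # the remaining mass, so the result always sums to one
    lams = torch.as_tensor(lams)
    survive = torch.cumprod(1.0 - lams, dim=0)
    p = lams.clone()
    p[1:] = lams[1:] * survive[:-1]
    return torch.cat([p, (1.0 - p.sum()).clamp(min=0.0).unsqueeze(0)])

p = halting_distribution([0.1, 0.3, 0.5])
# tensor([0.1000, 0.2700, 0.3150, 0.3150])
\end{verbatim}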
By extending the definition of KL-divergence with a geometric distribution, it is possible to extract its dependence on the negative entropy $-\entropy(p_n)$ and on the expected number of steps before halting $\E[N_h]$, where $N_h$ is distributed as $p_n$. Motivated by these considerations, we propose a new hyperparameter-free regularizer for $p_n$ named Explore-Reinforce (ER), which contains a reformulation of the two terms mentioned above: \begin{equation} R_{ER}(p_n, a)=\underbrace{-(1-a)\entropy(p_n)}_{\text{Explore}}+\underbrace{a\E[\log(1+N_h)]}_{\text{Reinforce}} \end{equation} where $a\in[0,1]$ is a sample-wise measure of success, such as the model per-sample accuracy. It is trivial to see that both the entropy and the expected log value are constrained in the interval $[0,\log(1+N)]$, and the hyperparameter $\lambda_p$ is dropped. The accuracy trades off the Explore and Reinforce terms: while the model learns to solve the problem, it is incentivized to give equal probability to each step by maximizing the entropy of $p_n$; as the model progresses, computation steps have to be compressed and made more efficient by minimizing the expected log-number of steps. In other words, the model learns to take fewer steps for easier problems. \subsubsection{Halting through a Context Transformer} \label{subsubsec:context_transf} As elegant as the PonderNet formulation is, it is not obvious how to fit it into the definition of the Universal Transformer, which splits the halting process down to the token level using ACT. Instead, we pair a halting transformer with the main Convolutional UT, gathering information from the grid into a compact context sequence $C_n\in\mathbb{R}^{S_C\times d_{emb}}$, where $S_C$ is the length of the sequence, fixed as a hyperparameter. Each element of the context sequence can attend to the grid $G_n\in\mathbb{R}^{H\times W\times d_{emb}}$ through an attention mechanism by flattening the 2D grid into a 1D sequence $F_n\in\mathbb{R}^{HW\times d_{emb}}$. An additional row-wise position encoding, defined as an adaptation of the simple ALiBi encoding \cite{alibi}, is added inside the $\softmax$ operator of the attention mechanism: \begin{equation} \label{eq:alibi} \text{ALiBi}(Q,K,V)=\softmax\bigg(\frac{QK^T}{\sqrt{d_k}} + m\cdot M\bigg)V \end{equation} where $m$ is a head-specific slope and $M$ assigns decreasing scores, as defined in the original ALiBi paper \cite{alibi}. $M$ is adapted to a grid setup by assigning the same score to each element in the same row, starting at $0$ for the top row, $-1$ for the one below, and so on. It has been shown that ALiBi can reduce training time, increase generalization, and avoid hyperparameters in training transformers \cite{alibi}. 
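The row-wise adaptation of $M$ can be sketched as follows; the helper below is a hypothetical illustration of ours, not the released implementation.
\begin{verbatim}
import torch

def rowwise_alibi_bias(H, W, slopes):
    # Every key in grid row r gets score -r (0 for the top row, -1 below, ...),
    # scaled by a head-specific slope; added to QK^T / sqrt(d_k) before softmax
    row_scores = -torch.arange(H, dtype=torch.float32)
    M = row_scores.repeat_interleave(W)        # one score per flattened grid cell
    return slopes.view(-1, 1) * M.view(1, -1)  # (heads, H*W)

bias = rowwise_alibi_bias(H=3, W=2, slopes=torch.tensor([1.0, 0.5]))
# bias[0] = [ 0.,  0., -1., -1., -2., -2.]
\end{verbatim}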
Having defined the attention mechanism, the halting transformer is thus built as follows: \begin{equation} \label{eq:context_transformer} \begin{split} C_n'&=\LN(C_n+\text{ALiBi}(C_n, F_n, F_n)) \\ C_{n+1}&=\LN(C_n'+FFN^{ctx}_2(C_n')) \\ \lambda_n&=\sigma(FFN^{halt}_2(\cat(C_{n+1}))) \end{split} \end{equation} where the initial context sequence $C_0\in\mathbb{R}^{S_C\times d_{emb}}$ is initialized as a learnable weight matrix, and $\lambda_n$ is the conditioned halting probability from Sec. \ref{subsubsec:halting_mech}. \section{Experimental setup} \label{sec:exp_setup} \subsection{Datasets} Several configurations of the model architecture described above are trained on the problem instance $\Pi([1,4],[1,10])$, i.e., additions of 1 to 4 terms, each made of 1 to 10 digits. For each training example, a uniform random number in $[1,4]$ is picked as \#terms, and for each term a uniform random number in $[1,10]$ is picked as its \#digits, where each digit is uniformly picked from $\{0,1,\dots,9\}$. This yields a fair distribution of examples with respect to sequence length\footnote{Sampling terms directly from $[0,10^{10}-1]$ would bias the distribution towards higher numbers, as there would be a $9/10$ probability of generating $10$-digit numbers, but only $9/100$ for $9$-digit numbers, and so on.}. The $+$ symbol links the sampled terms, and the $=$ symbol terminates the string. PAD symbols are appended to equalize lengths and allow grouping sequences in batches. As output target, the model only receives the correct result of the sum. Generalization performance is tested on problem instances that have been solved to perfection ($>$99\% sequence accuracy) by related architectures: additions of 2 numbers of 15 digits each, solved by 2-LSTM \cite{grid_lstm}; additions of 2 numbers of 100 digits, solved by the Neural GPU \cite{ngpu_dec}; additions of 2 numbers of 602 digits, roughly corresponding to the maximum length of 2000 binary digits also used to test the Neural GPU \cite{ngpu}; additions of 1 to 5 numbers of 1 to 5 digits, solved by an LSTM using ACT \cite{act}. It should be noted that the Neural GPU required a carefully tuned curriculum learning to reach this level of performance, and the LSTM+ACT model needed supervision on intermediate results to solve additions featuring many operands. We also include further test cases in order to better explore the extrapolation capability on the number of terms. \subsection{Model parameters and architectural variants} \label{subsec:model_params} The main parameters of our base model are summarized in Table \ref{table:model_params}\footnote{Note that the embedding size $d_{emb}$ is set as the token vector size throughout the whole network, from input embeddings to output sequence, including the context vectors. Dropout is used after all linear layers.}. To better investigate the role of each processing module we also test five different model variants, reported in Table \ref{table:all_models}, disabling or changing single components one at a time. Our base model has 1 local attention head for each element of the embedding vector; noGroups resembles the original self-attention definition \cite{transformer} regarding linear projections and number of heads; the SASA variant follows exactly the definition in \cite{sasa}; fixedTime uses a constant number of recurrent steps (set to $12$), without any dynamic halting mechanism; and ponderReg uses the usual PonderNet regularization, that is, the KL-divergence from a geometric distribution. 
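A minimal sketch of the sampling procedure described above (our illustrative code):
\begin{verbatim}
import random

def sample_addition(n_terms=(1, 4), n_digits=(1, 10)):
    # Uniform #terms, per-term uniform #digits, digits uniform in 0-9
    n = random.randint(*n_terms)
    terms = ["".join(random.choice("0123456789")
                     for _ in range(random.randint(*n_digits)))
             for _ in range(n)]
    problem = "+".join(terms) + "="
    target = str(sum(int(t) for t in terms))
    return problem, target

# e.g. ('523+102+9416=', '10041'); PAD symbols then equalize lengths per batch
\end{verbatim}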
\begin{table}[t] \centering \caption{Model parameters.} \begin{tabular}{|c | c|} \hline \textbf{Parameter name} & \textbf{Value}\\ \hline Embedding size $d_{emb}$ & $64$ \\ Dropout $p$ & $0.1$ \\ \hline Internal dimension of $FFN_2^{s2g}$ & $64$ \\ \hline Spatial extent $k$ (kernel size) & $3$\\ Local attention groups $g$ & $8$ \\ Local attention heads $h$ & $64$ \\ Internal dimension of $FFN_3^{ut}$ & $256$ \\ \hline Internal dimension of $FFN_2^{ctx}$ & $64$ \\ Internal dimension of $FFN_2^{halt}$ & $128$ \\ Context length $S_C$ & $3$ \\ Context attention heads & $8$ \\ Maximum number of steps & $40$ \\ Distribution's threshold $\epsilon$ & $0.05$ \\ \hline \end{tabular} \label{table:model_params} \end{table} \begin{table}[t] \centering \caption{Model variants.} \begin{tabular}{l c c c c c} \hline \textbf{Model} & $g$ & $h$ & dynamic halting & $p_n$ reg.\\ \hline base & 8 & 64 & \checkmark & ER \\ noGroups ($g=1$)& 1 & 64 & \checkmark & ER \\ SASA ($g=h$) & 8 & 8 & \checkmark & ER \\ fixedTime & 8 & 64 & & \\ ponderReg & 8 & 64 & \checkmark & KL-div \\ \hline \end{tabular} \label{table:all_models} \end{table} The grid sizes $H,W$ (Sec. \ref{subsubsec:preproc}) are always shared in a single batch of examples during training. Let $N$ and $D$ be the batch maximum number of terms and digits respectively. Then we set $H,W$ as: \begin{IEEEeqnarray}{c} H=N+F_H+R_H \\ W=D+F_W+R_W \end{IEEEeqnarray} where $F_H$, $F_W$ are two fixed scalars, and $R_H$, $R_W$ are two uniform random variables. $F_H$ and $F_W$ are ``oversizes'' that leave the model room to work and possibly produce longer outputs. $R_H$ and $R_W$ are ``regularizers'' that help avoid overfitting on a fixed grid size. To lower the computational cost of the model, we chose some small values: $F_H=0, F_W=2, R_H\in[0,3], R_W\in[0,3]$, and divided samples from the dataset into groups based on the number of terms in order to lower $H,W$ per batch. This choice does not affect training, as the gradients are computed on all losses from each group reduced together. We chose two groups with $[1,2]$ and $[3,4]$ terms, and the same $[1,10]$ range of digits. \subsection{Training details} \label{subsec:training} We adopt a cross entropy loss, where the PAD class is weighted 1/10 of other classes to balance its higher frequency of appearance, and PonderNet regularizers are weighted by $\beta=$5e-2. All models are trained for 510 epochs of 10 training steps each, using the AdamW optimizer \cite{adamw} with the standard parameters $\beta_1=0.9$, $\beta_2=0.999$ and a weight decay of $0.1$ for all weights, excluding biases and embeddings. We employ a cosine annealing schedule with a learning rate $\eta$ ranging from 1e-3 to 5e-5 over a period of 30 epochs. Gradient norm is clipped to 10 to avoid exploding gradients. A batch size of 128 is used, with 64 samples for each of the two groups defined in Sec. \ref{subsec:model_params}. For each variant, we train 10 models initialized with different random seeds. The best models in terms of extrapolation of \#digits and \#terms are then selected and further trained for another 300 epochs in an ``overtraining'' phase, lowering the PonderNet weighting to $\beta=$5e-4 as the regularizer dominated the loss during the last epochs of the standard training phase. All models are trained using an NVIDIA Tesla K80 GPU\footnote{PyTorch source code available at \url{https://github.com/CognacS/tag-cat}}. 
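The per-batch grid sizing above can be sketched as follows (a hypothetical helper of ours, purely for illustration):
\begin{verbatim}
import random

def grid_shape(N, D, F_H=0, F_W=2, R_H=(0, 3), R_W=(0, 3)):
    # H = N + F_H + R_H and W = D + F_W + R_W, with fixed oversizes F
    # and uniformly sampled regularizers R (shared across a batch)
    return N + F_H + random.randint(*R_H), D + F_W + random.randint(*R_W)

# For a batch from the [3,4]-terms group with up to 10 digits:
H, W = grid_shape(N=4, D=10)   # e.g. (5, 14)
\end{verbatim}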
\begin{table*}[t] \caption{Accuracy on different problem instances requiring extrapolation on the number of digits and number of operands.} \centering \begin{tabular}{|l | cc cc cc | cc cc cc cc |} \hline & \multicolumn{6}{ c|}{2-terms additions} & \multicolumn{8}{ c|}{N-terms additions} \\ \textbf{Model} & \multicolumn{2}{ c }{$15$ digits} & \multicolumn{2}{ c }{$100$ digits} & \multicolumn{2}{ c|}{$602$ digits} & \multicolumn{2}{ c }{$[1,4],[1,10]$} & \multicolumn{2}{ c }{$[1,5],[1,5]$} & \multicolumn{2}{ c }{$[5,6],[1,10]$} & \multicolumn{2}{ c|}{$[7,10],[1,10]$}\\ \cline{2-13} & char & seq & char & seq & char & seq & char & seq & char & seq & char & seq & char & seq\\ \hline base model & \textbf{1.0} & \textbf{1.0} & \textbf{0.99} & \textbf{0.99} & 0.98 & 0.0 & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{0.99} & \textbf{0.93} & \textbf{0.72} & \textbf{0.25}\\ noGroups variant ($g=1$) & \textbf{1.0} & \textbf{1.0} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & \textbf{0.99} & 0.99 & 0.99 & 0.99 & 0.99 & 0.98 & 0.86 & 0.54 & 0.1\\ \hline \end{tabular} \label{table:comparison_results} \end{table*} \begin{table}[t] \centering \caption{Accuracy of the noGroups variant on additions featuring 2 terms composed of many digits (left). Accuracy of the base model on additions featuring $N$ terms of just 5 digits (right).} \begin{tabular}{c c c} \hline \#digits & char & seq\\ \hline $[1\quad\;\;,1000]$ & 0.99 & 0.99 \\ $[1000,1000]$ & 0.99 & 0.99 \\ $[1\quad\;\;,2000]$ & 0.99 & 0.99 \\ $[2000,2000]$ & 0.99 & 0.81 \\ $[1\quad\;\;,4000]$ & 0.99 & 0.58 \\ $[4000,4000]$ & 0.86 & 0.0 \\ \hline \end{tabular}\quad\quad\quad \begin{tabular}{c c c} \hline \#terms & char & seq\\ \hline $5$ & 1.0 & 1.0 \\ $6$ & 0.99 & 0.98 \\ $7$ & 0.98 & 0.86 \\ $8$ & 0.84 & 0.40 \\ $9$ & 0.66 & 0.09 \\ $10$ & 0.48 & 0.03 \\ \hline \end{tabular} \label{table:long_additions} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{rec_steps_acc.png} \caption{Correlation between the number of terms, recurrent steps before halting, and accuracy at character and sequence level when computing with our base overtrained model.} \label{fig:rec_steps_acc} \end{figure} \subsection{Evaluation Metrics} \label{subsec:eval_metrics} Task accuracy is computed by dividing the number of correct matches by the number of valid matches. As valid matches, we consider three scenarios: number-number, pad-number, number-pad, and ignore all correct pad-pad matches. This ensures that the accuracy measure is not inflated by the high frequency of correct pad-pad matches. Accuracy is computed both at \textit{character level}, that is, all characters for valid matches are considered, and at \textit{sequence level}, where any error in the sequence invalidates the entire sample. \section{Results} \label{sec:results} \subsection{Generalization capabilities} Results achieved by the two best overtrained models are reported in Table \ref{table:comparison_results} (note that accuracy values in the interval $[0.99, 1.0)$ are always rounded down to $0.99$, as we only consider $1.0$ to be a perfect score). Both models match the performance of state-of-the-art approaches, at the same time exhibiting remarkable accuracy on novel problem instances featuring more challenging extrapolation ranges over the number of operands (N-terms additions). Interestingly, the noGroups variant achieves better extrapolation on the number of digits, while the base model exhibits better extrapolation on the number of terms. 
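For clarity, the accuracy computation of Sec. \ref{subsec:eval_metrics} can be sketched as follows (illustrative code of ours, with ``P'' standing in for the PAD symbol):
\begin{verbatim}
PAD = "P"

def accuracies(preds, targets):
    # Character- and sequence-level accuracy; correct PAD-PAD matches
    # are ignored so they cannot inflate the score
    char_ok = char_tot = seq_ok = 0
    for p, t in zip(preds, targets):
        errors = 0
        for cp, ct in zip(p, t):
            if cp == PAD and ct == PAD:
                continue
            char_tot += 1
            if cp == ct:
                char_ok += 1
            else:
                errors += 1
        seq_ok += (errors == 0)
    return char_ok / char_tot, seq_ok / len(preds)

# accuracies(["123P"], ["124P"]) -> (0.666..., 0.0)
\end{verbatim}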
In Table \ref{table:long_additions} we push these tests to the limit, showing that additions of 2 very long numbers can still be solved with high accuracy, while extrapolation on the number of terms appears more challenging. Fig. \ref{fig:rec_steps_acc} suggests a possible correlation between the drop in accuracy and the number of recurrent steps before halting, which seems to stabilize even though an increasing number of terms might in fact require more computing steps. We also encountered the same problem discussed in \cite{ngpu_dec}: models often fail when carries must be propagated over lengths greater than those found during training. Some representative examples are shown in Table \ref{table:examples_errors}, where changing the order of terms surprisingly returns different results. Errors occur when a large operand is followed by smaller ones (which requires moving all digits over long distances), when the carry must be iteratively propagated, or when the number of terms exceeds a certain value. \begin{table}[t] \caption{Handpicked representative examples.} \centering \begin{tabular}{r c c c} \hline Operation & Network pred. & True result & Correct\\ \hline 11134+1+1+1+1+1= & 139 & 11139 &\\ 1+1+1+1+1+11134= & 11139 & 11139 & \checkmark\\ 1+1+1+1+1+1+1+11134= & 11140 & 11141 &\\ 99999+1= & 100000 & 100000 & \checkmark \\ 999999999+1= & 999000000 & 1000000000 &\\ 999990000+9999+1= & 990000000 & 1000000000 &\\ 1+1+1+1+1+1= & 6 & 6 & \checkmark \\ 1+1+1+1+1+1+1= & 6 & 7 &\\ \hline \end{tabular} \label{table:examples_errors} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{recur_steps.png} \caption{Mean number of recurrent steps at training time learned by regularizing the halting distribution with our Explore-Reinforce regularizer and with the KL-divergence from a geometric distribution. Colored areas represent the standard deviation. Jumps in recurrent steps are caused by the cosine annealing schedule.} \label{fig:abl_rec_steps_reg} \end{figure} \begin{table*}[t] \caption{Statistics on accuracy scores of different variants of our proposed model. 
Each variant was trained with 10 different initializations and the average $A$, standard deviation $SD$ and max score $M$ are reported with the format $A\pm SD(M)$.} \centering \begin{tabular}{|l | cc cc cc |} \hline \textbf{Model} & \multicolumn{2}{ c }{$[2,2],[602,602]$} & \multicolumn{2}{ c }{$[5,6],[1,10]$} & \multicolumn{2}{ c|}{$[7,10],[1,10]$}\\ \cline{2-7} & char & seq & char & seq & char & seq\\ \hline base & \stat{0.23}{0.27}{0.98} & \stat{0.00}{0.00}{0.00} & \stat{\textbf{0.93}}{0.09}{0.98} & \stat{\textbf{0.72}}{0.24}{\textbf{0.90}} & \stat{0.51}{0.14}{0.66} & \stat{\textbf{0.09}}{0.05}{0.16} \\ noGroups & \stat{\textbf{0.99}}{0.00}{\textbf{0.99}} & \stat{\textbf{0.63}}{0.34}{0.93} & \stat{0.74}{0.26}{0.96} & \stat{0.40}{0.27}{0.75} & \stat{0.29}{0.13}{0.51} & \stat{0.02}{0.02}{0.06} \\ SASA & \stat{0.70}{0.39}{\textbf{0.99}} & \stat{0.20}{0.33}{\textbf{0.95}} & \stat{0.81}{0.29}{0.98} & \stat{0.60}{0.30}{0.85} & \stat{0.45}{0.17}{0.64} & \stat{0.05}{0.04}{0.12} \\ fixedTime & \stat{0.44}{0.40}{\textbf{0.99}} & \stat{0.12}{0.25}{0.70} & \stat{0.76}{0.16}{0.95} & \stat{0.36}{0.27}{0.74} & \stat{0.27}{0.11}{0.42} & \stat{0.01}{0.01}{0.04} \\ ponderReg & \stat{0.19}{0.14}{0.52} & \stat{0.00}{0.00}{0.00} & \stat{\textbf{0.93}}{0.06}{\textbf{0.99}} & \stat{0.69}{0.21}{\textbf{0.90}} & \stat{\textbf{0.54}}{0.16}{\textbf{0.72}} & \stat{\textbf{0.09}}{0.08}{\textbf{0.21}} \\ \hline \end{tabular} \label{table:ablation_results} \end{table*} \begin{figure*}[t] \centering \raisebox{3mm}{\includegraphics[width=0.68\textwidth]{grid_actions.png}} \includegraphics[width=0.30\textwidth]{grid_making.png} \caption{Action probabilities (left) and resulting grid $G^{(S)}=G_0$ from the padded input sequence ``523 + 102 + 9416 = $<$PAD$>$ $<$PAD$>$''.} \label{fig:vis_grid_making} \end{figure*} Table \ref{table:ablation_results} reports the average scores, standard deviation, and max scores of different variants of the base model. These results confirm that perfect extrapolation on both \#digits and \#terms never happens, and model components seem to specialize in tackling one of these two degrees of freedom. The noGroups variant is the most solid on the digits extrapolation task $\Pi([2,2],[602,602])$, solved consistently with $>0.99$ accuracy, while other methods achieve perfect accuracy only with some lucky initialization. On the tasks requiring extrapolation over the number of terms $\Pi([5,6],[1,10])$ and $\Pi([7,10],[1,10])$, the original SASA variant is outperformed by our base model, and fixing the number of recurrent steps further degrades performance. The accuracy scores of the base and ponderReg variants are comparable; however, it should be noted that training the latter took 4 hours, compared with the 2 hours required by our base approach. This phenomenon can be explained by comparing the halting steps during training, as shown in Fig. \ref{fig:abl_rec_steps_reg}: our Explore-Reinforce regularization allows the model to learn a much more efficient halting criterion. \begin{figure}[t] \centering \includegraphics[width=0.17\textwidth]{pos_based1.png} \includegraphics[width=0.17\textwidth]{pos_based2.png} \vspace{3mm} \includegraphics[width=0.17\textwidth]{con_based1.png} \includegraphics[width=0.17\textwidth]{con_based2.png} \caption{Examples of a position-based rule (top) and content-based rule (bottom).} \label{fig:attn_rules} \end{figure} \subsection{Analysis of step-by-step computation} \label{subsec:computation} In this section we explore how the model learned to solve the addition problem. 
Indeed, the use of sigmoid and $\softmax$ activations in focal points of the network, such as the Seq2Grid actions or the attention aggregation, increases its explainability, since importance is reflected in the magnitude of the activation. \subsubsection{Seq2Grid} The preprocessing module learned a reliable procedure to format the incoming sequence into a grid. As shown in Fig. \ref{fig:vis_grid_making}, all digits are appended to the top list while + signs break the row and push a new line. Equal signs and paddings are correctly ignored, as they do not contribute to the final evaluation. This formatting is actually the same one we humans use when solving additions with the column method. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{ring_nopads_note.png} \caption{PCA projection of token vectors produced by the model, highlighting the emergence of a ring-shaped structure.} \label{fig:emerging_repr} \end{figure} \subsubsection{Local Attention} The transformer learned to produce different kinds of ``rules'' used by attention heads to aggregate neighboring tokens. We found all of the rules explained in Sec. \ref{subsubsec:ut}; in particular, most position-based rules (example in the top row of Fig. \ref{fig:attn_rules}) are used to aggregate a single token in a specific neighborhood location. More complex content-based rules seem to depend on the magnitude of digits: the example in the bottom row of Fig. \ref{fig:attn_rules} shows that attention can focus on large digits in the neighboring row positions, but can also be paid to all surroundings in the case of large querying digits. 
\subsection{Emergent internal representations} We finally investigated how token vectors are manipulated by the model by visualizing its representational space using Principal Component Analysis. To do so, we sampled a large batch of different problems and plotted the first two components of the vectors extracted at each time step, colored according to the corresponding symbol produced by the model (plus, equal and PAD are ignored). As shown in Fig. \ref{fig:emerging_repr}, it is evident that the representational space self-organizes according to a ring-shaped structure, where vectors of the same class are clustered together and digits are ordered from 0 to 9, and then back to 0. Such a structure makes sense, because it allows the model to linearly change the magnitude of the produced digits by moving between adjacent clusters; moreover, when vectors corresponding to high-valued digits have to propagate a carry they simply cycle back to zero, thus allowing the incremental process to restart. \section{Discussion and Conclusion} \label{sec:conclusion} In this paper, we proposed a sophisticated yet lightweight deep learning model, assembling a variety of architectures and processing mechanisms with the aim of studying how neural networks could learn to solve multi-digit addition and generalize arithmetic knowledge to novel problems. The proposed model matches current state-of-the-art approaches on 2-operand problems that require extrapolation over the number of digits, at the same time exhibiting improved generalization on problems involving more operands. 
A distinguishing feature of our model is the use of a novel centralized halting mechanism, compatible with the definition of Universal Transformers and PonderNet, which speeds up learning by calibrating the number of computational steps required to solve problems of different complexity. It is well-known that the lack of explicit inductive biases makes it very challenging for neural networks to extrapolate well on arithmetic problems. In this respect, our simulations suggest that equipping deep learning agents with external memory systems might be a key principle to promote systematic abstraction when learning algorithmic tasks. At the same time, the capability of our model to extrapolate on problems with a large number of operands is still fairly limited, motivating further efforts to improve neural network models of mathematical symbol grounding \cite{symbol_grounding_prob_2}. For example, future research could investigate whether generalization performance might improve by grounding arithmetic procedures on perceptual representations of numbers \cite{number_sense_emergentist, number_sense_visual_magnitude} and/or more advanced external representations mimicking the calculation tools invented by human cultures \cite{self_com_drl_number_repr}.
\section{Introduction and main result} Let $(M,F)$ be a Finsler manifold. A closed curve on $(M,F)$ is a closed geodesic if it is locally the shortest path connecting any two nearby points on this curve. As usual, a closed geodesic $c:S^1={\bf R}/{\bf Z}\to M$ is {\it prime} if it is not a multiple covering (i.e., iteration) of any other closed geodesic. Here the $m$-th iteration $c^m$ of $c$ is defined by $c^m(t)=c(mt)$. The inverse curve $c^{-1}$ of $c$ is defined by $c^{-1}(t)=c(1-t)$ for $t\in {\bf R}$. Note that, unlike on a Riemannian manifold, the inverse curve $c^{-1}$ of a closed geodesic $c$ on an irreversible Finsler manifold need not be a geodesic. We call two prime closed geodesics $c$ and $d$ {\it distinct} if there is no ${\theta}\in (0,1)$ such that $c(t)=d(t+{\theta})$ for all $t\in{\bf R}$. On a reversible Finsler (or Riemannian) manifold, two closed geodesics $c$ and $d$ are called {\it geometrically distinct} if $c(S^1)\neq d(S^1)$, i.e., they have different image sets in $M$. We shall omit the word {\it distinct} when we talk about more than one prime closed geodesic. For a closed geodesic $c$ on an $(n+1)$-dimensional manifold $M$, denote by $P_c$ the linearized Poincar\'{e} map of $c$, which is a symplectic matrix, i.e., $P_c\in{\rm Sp}(2n)$. We define the {\it elliptic height} $e(P_c)$ of $P_c$ to be the total algebraic multiplicity of all eigenvalues of $P_c$ on the unit circle ${\bf U}=\{z\in{\bf C}|\; |z|=1\}$ in the complex plane ${\bf C}$. Since $P_c$ is symplectic, $e(P_c)$ is even and $0\le e(P_c)\le 2n$. A closed geodesic $c$ is called {\it elliptic} if $e(P_c)=2n$, i.e., all the eigenvalues of $P_c$ lie on ${\bf U}$; {\it irrationally elliptic} if, in the homotopy component ${\Omega}^0(P_c)$ of $P_c$ (cf. Section 2 below for the definition), $P_c$ can be connected to the ${\diamond}$-product of $n$ rotation matrices $R({\theta}_i)$ with ${\theta}_i$ an irrational multiple of $\pi$ for $1\le i\le n$; {\it hyperbolic} if $e(P_c)=0$, i.e., all the eigenvalues of $P_c$ lie away from ${\bf U}$; {\it non-degenerate} if $1$ is not an eigenvalue of $P_c$. A Finsler metric $F$ is called {\it bumpy} if all the closed geodesics on $(M,F)$ are non-degenerate. There is a famous conjecture in Riemannian geometry which claims the existence of infinitely many closed geodesics on any compact Riemannian manifold. This conjecture has been proved for many cases, but not yet for compact rank one symmetric spaces except for $S^2$. The results of Franks in \cite{Fra} and Bangert in \cite{Ban} imply that this conjecture is true for any Riemannian 2-sphere (cf. \cite{Hin2} and \cite{Hin3}). However, for a Finsler manifold the above conjecture does not hold, due to Katok's examples. It was quite surprising when Katok in \cite{Kat} found some irreversible Finsler metrics on spheres with only finitely many closed geodesics, all of which are non-degenerate and irrationally elliptic (cf. \cite{Zil}). Based on Katok's examples, Anosov in \cite{Ano} proposed the following conjecture (cf. \cite{Lon4}) \begin{eqnarray} {\cal N}(S^n,F)\ge 2\left[\frac{n+1}{2}\right] \quad \mbox{for any Finsler metric $F$ on}\ S^n, \label{1.1}\end{eqnarray} where ${\cal N}(M,F)$ denotes the number of distinct closed geodesics on $(M,F)$, and $[a]=\max\{k\in{\bf Z}\,|\,k\le a\}$. In 2005, Bangert and Long in \cite{BaL} proved this conjecture for any Finsler $2$-dimensional sphere $(S^2, F)$. Since then, the index iteration theory of closed geodesics (cf. 
\cite{Bot} and \cite{Lon3}) has been applied to study the closed geodesic problem on Finsler manifolds. When $n\ge 3$, the above conjecture in the Riemannian or Finsler case is still widely open in full generality. Concerning the multiplicity and stability problem of closed geodesics, two typical classes of conditions, namely the positively curved condition and the non-degeneracy (or bumpy) condition, have been used widely. In \cite{Rad3}, Rademacher has introduced the reversibility $\lambda=\lambda(M,F)$ of a compact Finsler manifold defined by \begin{eqnarray} \lambda=\max\{F(-X)\ |\ X\in TM,\ F(X)=1\}\ge 1.\nonumber\end{eqnarray} Then Rademacher in \cite{Rad4} obtained some results about the multiplicity and stability of closed geodesics. For example, if $F$ is a Finsler metric on $S^{n}$ with reversibility ${\lambda}$ and flag curvature $K$ satisfying $\left(\frac{{\lambda}}{1+{\lambda}}\right)^2<K\le 1$, then there exist at least $n/2-1$ closed geodesics with length $<2n\pi$. If $\frac{9{\lambda}^2}{4(1+{\lambda})^2}<K\le 1$ and ${\lambda}<2$, then there exists a closed geodesic of elliptic-parabolic type, i.e., its linearized Poincar\'{e} map splits into $2$-dimensional rotations and a part whose eigenvalues are $\pm 1$. These results generalize those in \cite{BTZ1} and \cite{BTZ2} in the Riemannian case. Recently, Wang in \cite{Wan} proved the conjecture (\ref{1.1}) for $(S^n,F)$ provided that $F$ is bumpy and its flag curvature $K$ satisfies $\left(\frac{\lambda}{1+\lambda}\right)^2<K\le 1$. Also in \cite{Wan}, Wang showed that for every bumpy Finsler metric $F$ on $S^n$ satisfying $\frac{9{\lambda}^2}{4(1+{\lambda})^2}<K\le 1$, there exist two prime elliptic closed geodesics provided the number of closed geodesics on $(S^n,F)$ is finite. As a further generalization, Duan, Long and Wang in \cite{DLW} obtained the optimal lower bound of the number of distinct closed geodesics on a compact simply-connected Finsler manifold $(M,F)$ if $F$ is bumpy and some much weaker index conditions or positive-curvature conditions are satisfied. The first author in \cite{Dua1} and \cite{Dua2} proved that for every Finsler $(S^{n},F)$ with $n\ge 3$, reversibility ${\lambda}$ and flag curvature $K$ satisfying $\left(\frac{{\lambda}}{1+{\lambda}}\right)^2<K\le 1$, either there exist infinitely many prime closed geodesics, or there exist exactly three prime closed geodesics and at least two of them are elliptic. In fact, the multiplicity and stability problem on high-dimensional manifolds without the assumption of bumpy metrics is much more difficult. In this paper, we further consider the positively curved Finsler $4$-dimensional sphere $(S^4,F)$ without the bumpy assumption, and obtain the following new progress about the multiplicity and stability of closed geodesics on $(S^4,F)$. 
\medskip

{\bf Theorem 1.1.} {\it For every Finsler metric $F$ on a $4$-dimensional sphere $S^4$ with reversibility ${\lambda}$ and flag curvature $K$ satisfying $\frac{25}{9}\left(\frac{\lambda}{1+\lambda}\right)^2<K\le 1$, either there exist at least four prime closed geodesics, or there exist exactly three prime non-hyperbolic closed geodesics and at least two of them are irrationally elliptic.}

\medskip

First, under the positively curved condition $\frac{25}{9}\left(\frac{\lambda}{1+\lambda}\right)^2<K\le 1$, Theorem 1 and Theorem 4 in \cite{Rad3} establish a lower bound for the length of closed geodesics, which in turn gives lower bounds for the index $i(c^m)$ and the mean index $\hat{i}(c)$ of any closed geodesic $c$ on such an $S^4$ (cf. Lemma 3.1 below). Second, we shall make full use of the enhanced common index jump theorem established in \cite{DLW}, which generalized the common index jump theorem in \cite{LoZ}, to obtain some crucial precise estimates of $i(c^m)$ and $\nu(c^m)$ (cf. Section 3.1 below). Note that Theorem 1.1 in \cite{Dua1} showed the existence of three prime closed geodesics on $(S^4,F)$ with reversibility ${\lambda}$ and flag curvature $K$ satisfying $\left(\frac{{\lambda}}{1+{\lambda}}\right)^2<K\le 1$. So, in order to prove our Theorem 1.1 above, we assume the existence of exactly three prime closed geodesics on such $(S^4,F)$ (cf. the assumption (TCG) below). Finally, together with the Morse theory, under (TCG) we will carefully analyze the local and global information of these three prime closed geodesics and their iterates to complete the proof of Theorem 1.1 in Section 3.2. In addition, under the assumption (TCG), in Theorem 4.2 in Section 4 we obtain more precise information about the third prime closed geodesic beyond the fact that it is non-hyperbolic. This information may be greatly helpful for completely solving the conjecture (\ref{1.1}) on positively curved Finsler $(S^4,F)$ in the future.

In this paper, let ${\bf N}$, ${\bf N}_0$, ${\bf Z}$, ${\bf Q}$, ${\bf R}$, and ${\bf C}$ denote the sets of positive integers, non-negative integers, integers, rational numbers, real numbers, and complex numbers respectively. We use only singular homology modules with ${\bf Q}$-coefficients. For an $S^1$-space $X$, we denote by $\overline{X}$ the quotient space $X/S^1$. We define the functions \begin{eqnarray} E(a)=\min\{k\in{\bf Z}\,|\,k\ge a\},\quad \varphi(a)=E(a)-[a],\quad \{a\}=a-[a]. \label{1.2} \end{eqnarray} Especially, $\varphi(a)=0$ if $a\in{\bf Z}$, and $\varphi(a)=1$ if $a\notin{\bf Z}$. For instance, $E(\frac{5}{3})=2$, $[\frac{5}{3}]=1$, $\varphi(\frac{5}{3})=1$ and $\{\frac{5}{3}\}=\frac{2}{3}$.

\setcounter{equation}{0}
\section{Morse theory and Morse indices of closed geodesics}

\subsection{Morse theory for closed geodesics}

Let $M=(M,F)$ be a compact Finsler manifold. The space $\Lambda=\Lambda M$ of $H^1$-maps $\gamma:S^1\rightarrow M$ has a natural structure of a Riemannian Hilbert manifold, on which the group $S^1={\bf R}/{\bf Z}$ acts continuously by isometries. This action is defined by $(s\cdot\gamma)(t)=\gamma(t+s)$ for all $\gamma\in{\Lambda}$ and $s, t\in S^1$. For any $\gamma\in\Lambda$, the energy functional is defined by \begin{equation} E(\gamma)=\frac{1}{2}\int_{S^1}F(\gamma(t),\dot{\gamma}(t))^2dt. \label{2.1}\end{equation} It is $C^{1,1}$ and invariant under the $S^1$-action. The critical points of $E$ of positive energy are precisely the closed geodesics $\gamma:S^1\to M$. The index form of the functional $E$ is well defined along any closed geodesic $c$ on $M$, which we denote by $E''(c)$.
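Note that a closed geodesic, being a critical point of $E$, has constant speed, i.e., $F(c(t),\dot{c}(t))\equiv{\rm const}$; hence its energy and its $F$-length $L(c)=\int_{S^1}F(c(t),\dot{c}(t))\,dt$ satisfy the standard relations \begin{eqnarray} E(c)=\frac{1}{2}L(c)^2,\qquad L(c^m)=mL(c),\quad\forall\ m\in{\bf N},\nonumber\end{eqnarray} which is how the length estimates of Section 3 (cf. the proof of Lemma 3.1 below) enter the variational picture.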
As usual, we denote by $i(c)$ and $\nu(c)$ the Morse index and nullity of $E$ at $c$. In the following, we denote by \begin{equation} {\Lambda}^\kappa=\{d\in {\Lambda}\;|\;E(d)\le\kappa\},\quad {\Lambda}^{\kappa-}=\{d\in {\Lambda}\;|\; E(d)<\kappa\}, \quad \forall \kappa\ge 0. \nonumber\end{equation} For a closed geodesic $c$ we set ${\Lambda}(c)=\{{\gamma}\in{\Lambda}\mid E({\gamma})<E(c)\}$. Recall that the mean index $\hat{i}(c)$ of $c$ and the $S^1$-critical modules of $c^m$ are defined respectively by \begin{equation} \hat{i}(c)=\lim_{m\rightarrow\infty}\frac{i(c^m)}{m}, \quad \overline{C}_*(E,c^m) = H_*\left(({\Lambda}(c^m)\cup S^1\cdot c^m)/S^1,{\Lambda}(c^m)/S^1; {\bf Q}\right).\label{2.3}\end{equation} We say that a closed geodesic satisfies the isolation condition if the following holds:

\medskip

{\bf (Iso) For all $m\in{\bf N}$ the orbit $S^1\cdot c^m$ is an isolated critical orbit of $E$. }

\medskip

Note that if the number of prime closed geodesics on a Finsler manifold is finite, then all the closed geodesics satisfy (Iso). If $c$ has multiplicity $m$, then the subgroup ${\bf Z}_m=\{\frac{n}{m}\mid 0\leq n<m\}$ of $S^1$ acts on $\overline{C}_*(E,c)$. As studied in p.59 of \cite{Rad2}, for all $m\in{\bf N}$, let $H_{\ast}(X,A)^{\pm{\bf Z}_m} = \{[\xi]\in H_{\ast}(X,A)\,|\,T_{\ast}[\xi]=\pm [\xi]\}$, where $T$ is a generator of the ${\bf Z}_m$-action. On the $S^1$-critical modules of $c^m$, the following lemma holds:

\medskip

{\bf Lemma 2.1.} (cf. Satz 6.11 of \cite{Rad2} and \cite{BaL}) {\it Suppose $c$ is a prime closed geodesic on a Finsler manifold $M$ satisfying (Iso). Then there exist $U_{c^m}^-$ and $N_{c^m}$, the so-called local negative disk and the local characteristic manifold at $c^m$ respectively, such that $\nu(c^m)=\dim N_{c^m}$ and \begin{eqnarray} \overline{C}_q( E,c^m) &\equiv& H_q\left(({\Lambda}(c^m)\cup S^1\cdot c^m)/S^1, {\Lambda}(c^m)/S^1\right)\nonumber\\ &=& \left(H_{i(c^m)}(U_{c^m}^-\cup\{c^m\},U_{c^m}^-) \otimes H_{q-i(c^m)}(N_{c^m}\cup\{c^m\},N_{c^m})\right)^{+{\bf Z}_m}. \nonumber \end{eqnarray}

(i) When $\nu(c^m)=0$, there holds $$ \overline{C}_q( E,c^m) = \left\{\matrix{ {\bf Q}, &\quad {\it if}\;\; i(c^m)-i(c)\in 2{\bf Z}\;\;{\it and}\;\; q=i(c^m),\; \cr 0, &\quad {\it otherwise}, \cr}\right. $$

(ii) When $\nu(c^m)>0$, there holds $$ \overline{C}_q( E,c^m)=H_{q-i(c^m)}(N_{c^m}\cup\{c^m\},N_{c^m})^{{\epsilon}(c^m){\bf Z}_m}, $$ where ${\epsilon}(c^m)=(-1)^{i(c^m)-i(c)}$.}

\medskip

Define \begin{equation} k_j(c^m) \equiv \dim\, H_j( N_{c^m}\cup\{c^m\},N_{c^m}), \quad k_j^{\pm 1}(c^m) \equiv \dim\, H_j(N_{c^m}\cup\{c^m\},N_{c^m})^{\pm{\bf Z}_m}. \label{2.4}\end{equation} Then we have

\medskip

{\bf Lemma 2.2.} (cf. \cite{Rad2}, \cite{LoD}, \cite{Wan}) {\it Let $c$ be a prime closed geodesic on a Finsler manifold $(M,F)$. Then

(i) For any $m\in{\bf N}$, there holds $k_j(c^m)=0$ for $j\not\in [0,\nu(c^m)]$.

(ii) For any $m\in{\bf N}$, $k_0(c^m)+k_{\nu(c^m)}(c^m)\le 1$ and if $k_0(c^m)+k_{\nu(c^m)}(c^m)=1$ then there holds $k_j(c^m)=0$ for $j\in (0,\nu(c^m))$.

(iii) For any $m\in{\bf N}$, there holds $k_0^{+1}(c^m) = k_0(c^m)$ and $k_0^{-1}(c^m) = 0$. In particular, if $c^m$ is non-degenerate, there holds $k_0^{+1}(c^m) = k_0(c^m)=1$, and $k_0^{-1}(c^m) = k_j^{\pm 1}(c^m)=0$ for all $j\neq 0$.

(iv) Suppose for some integer $m=np\ge 2$ with $n$ and $p\in{\bf N}$ the nullities satisfy $\nu(c^m)=\nu(c^n)$. Then there holds $k_j(c^m)=k_j(c^n)$ and ${k}_j^{\pm 1}(c^m)={k}^{\pm 1}_j(c^n)$ for any integer $j$.
}

\medskip

Let $(M,F)$ be a compact simply connected Finsler manifold with finitely many closed geodesics. It is well known that for every prime closed geodesic $c$ on $(M,F)$, there holds either $\hat{i}(c)>0$ and then $i(c^m)\to +\infty$ as $m\to +\infty$, or $\hat{i}(c)=0$ and then $i(c^m)=0$ for all $m\in{\bf N}$. Denote those prime closed geodesics on $(M,F)$ with positive mean indices by $\{c_j\}_{1\le j\le k}$. Rademacher in \cite{Rad} and \cite{Rad2} established a celebrated mean index identity relating all the $c_j$'s with the global homology of $M$ for compact simply connected Finsler manifolds (especially for $S^4$) as follows.

\medskip

{\bf Theorem 2.3.} (Satz 7.9 of \cite{Rad2}, cf. also \cite{DuL}, \cite{LoD} and \cite{Wan}) {\it Assume that there exist finitely many prime closed geodesics on $(S^4,F)$ and denote prime closed geodesics with positive mean indices by $\{c_j\}_{1\le j\le k}$ for some $k\in{\bf N}$. Then the following identity holds \begin{equation} \sum_{j=1}^k\frac{\hat{\chi}(c_j)}{\hat{i}(c_j)} = -\frac{2}{3}, \label{2.5}\end{equation} where \begin{equation} \hat{\chi}(c_j) = \frac{1}{n(c_j)}\sum_{1\le m\le n(c_j)} \chi(c_j^m)=\frac{1}{n(c_j)}\sum_{1\le m\le n(c_j) \atop 0\le l\le 2(n-1)}(-1)^{i(c_j^m)+l}k_l^{{\epsilon}(c_j^m)}(c_j^m)\in{\bf Q}, \label{2.6}\end{equation} and the analytical period $n(c_j)$ of $c_j$ is defined by (cf. \cite{LoD}) \begin{equation} n(c_j) = \min\{l\in{\bf N}\,|\,\nu(c_j^l)=\max_{m\ge 1}\nu(c_j^m),\;\; i(c_j^{m+l})-i(c_j^{m})\in 2{\bf Z}, \;\;\forall\,m\in{\bf N}\}. \label{2.7}\end{equation}}

\medskip

Set $\overline{{\Lambda}}^0=\overline{\Lambda}^0S^4 =\{{\rm constant\;point\;curves\;in\;}S^4\}\cong S^4$. Let $(X,Y)$ be a space pair such that the Betti numbers $b_i=b_i(X,Y)=\dim H_i(X,Y;{\bf Q})$ are finite for all $i\in {\bf Z}$. As usual the {\it Poincar\'e series} of $(X,Y)$ is defined by the formal power series $P(X, Y)=\sum_{i=0}^{\infty}b_it^i$. We need the following well-known results on the Betti numbers and the Morse inequality.

\medskip

{\bf Lemma 2.4.} (cf. Theorem 2.4 and Remark 2.5 of \cite{Rad} and \cite{Hin1}, Lemma 2.5 of \cite{DuL}) {\it Let $(S^4,F)$ be a $4$-dimensional Finsler sphere. Then, the Betti numbers are given by \begin{eqnarray} b_j &=& {\rm rank} H_j({\Lambda} S^4/S^1,{\Lambda}^0 S^4/S^1;{\bf Q}) \nonumber\\ &=& \left\{\matrix{ 2,&\quad {\it if}\quad j\in {\cal K}\equiv \{3k\,|\,3\le k\in 2{\bf N}+1\}, \cr 1,&\quad {\it if}\quad j\in \{2k+3\,|\,k\in{\bf N}_0\}\setminus{\cal K}, \cr 0,&\quad {\it otherwise}. \cr}\right. \label{2.8} \end{eqnarray}}

{\bf Theorem 2.5.} (cf. Theorem I.4.3 of \cite{Cha}) {\it Let $(M,F)$ be a Finsler manifold with finitely many prime closed geodesics, denoted by $\{c_j\}_{1\le j\le k}$. Set \begin{eqnarray} M_q =\sum_{1\le j\le k,\; m\ge 1}\dim{\overline{C}}_q(E, c^m_j),\quad q\in{\bf Z}.\nonumber\end{eqnarray} Then for every integer $q\ge 0$ there holds } \begin{eqnarray} M_q - M_{q-1} + \cdots +(-1)^{q}M_0 &\ge& {b}_q - {b}_{q-1}+ \cdots + (-1)^{q}{b}_0, \label{2.9}\\ M_q &\ge& {b}_q. \label{2.10}\end{eqnarray}

\subsection{Index iteration theory of closed geodesics}

In \cite{Lon1} of 1999, Y. Long established the basic normal form decomposition of symplectic matrices. Based on this result he further established the precise iteration formulae of indices of symplectic paths in \cite{Lon2} of 2000. Note that these index iteration formulae work for Morse indices of iterated closed geodesics (cf. \cite{Liu} and Chap. 12 of \cite{Lon3}).
Since every closed geodesic on a sphere must be orientable, by Theorem 1.1 of \cite{Liu} the initial Morse index of a closed geodesic on a Finsler $S^4$ coincides with the index of a corresponding symplectic path. As in \cite{Lon2}, denote by \begin{eqnarray} N_1({\lambda}, b) &=& \left(\matrix{{\lambda} & b\cr 0 & {\lambda}\cr}\right), \qquad {\rm for\;}{\lambda}=\pm 1, \; b\in{\bf R}, \label{2.11}\\ D({\lambda}) &=& \left(\matrix{{\lambda} & 0\cr 0 & {\lambda}^{-1}\cr}\right), \qquad {\rm for\;}{\lambda}\in{\bf R}\setminus\{0, \pm 1\}, \label{2.12}\\ R({\theta}) &=& \left(\matrix{\cos{\theta} & -\sin{\theta} \cr \sin{\theta} & \cos{\theta}\cr}\right), \qquad {\rm for\;}{\theta}\in (0,\pi)\cup (\pi,2\pi), \label{2.13}\\ N_2(e^{{\theta}\sqrt{-1}}, B) &=& \left(\matrix{ R({\theta}) & B \cr 0 & R({\theta})\cr}\right), \qquad {\rm for\;}{\theta}\in (0,\pi)\cup (\pi,2\pi)\;\; {\rm and}\; \nonumber\\ && \qquad B=\left(\matrix{b_1 & b_2\cr b_3 & b_4\cr}\right)\; {\rm with}\; b_j\in{\bf R}, \;\; {\rm and}\;\; b_2\not= b_3. \label{2.14}\end{eqnarray} Here $N_2(e^{{\theta}\sqrt{-1}}, B)$ is non-trivial if $(b_2-b_3)\sin\theta<0$, and trivial if $(b_2-b_3)\sin\theta>0$. As in \cite{Lon2}, the $\diamond$-sum (direct sum) of any two real matrices is defined by $$ \left(\matrix{A_1 & B_1\cr C_1 & D_1\cr}\right)_{2i\times 2i}\diamond \left(\matrix{A_2 & B_2\cr C_2 & D_2\cr}\right)_{2j\times 2j} =\left(\matrix{A_1 & 0 & B_1 & 0 \cr 0 & A_2 & 0& B_2\cr C_1 & 0 & D_1 & 0 \cr 0 & C_2 & 0 & D_2}\right). $$ For every $M\in{\rm Sp}(2n)$, the homotopy set $\Omega(M)$ of $M$ in ${\rm Sp}(2n)$ is defined by $$ {\Omega}(M)=\{N\in{\rm Sp}(2n)\,|\,{\sigma}(N)\cap{\bf U}={\sigma}(M)\cap{\bf U}\equiv\Gamma\;\mbox{and} \;\nu_{{\omega}}(N)=\nu_{{\omega}}(M),\ \forall{\omega}\in\Gamma\}, $$ where ${\sigma}(M)$ denotes the spectrum of $M$ and $\nu_{{\omega}}(M)\equiv\dim_{{\bf C}}\ker_{{\bf C}}(M-{\omega} I)$ for ${\omega}\in{\bf U}$. The component ${\Omega}^0(M)$ of $M$ in ${\rm Sp}(2n)$ is defined to be the path-connected component of ${\Omega}(M)$ containing $M$.
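As a simple illustration of the ${\diamond}$-sum and of the quantities just defined (this example is purely illustrative and is not used below), for ${\theta}\in(0,\pi)\cup(\pi,2\pi)$ one computes $$ P=N_1(1,1)\,{\diamond}\,R({\theta}) =\left(\matrix{1 & 0 & 1 & 0\cr 0 & \cos{\theta} & 0 & -\sin{\theta}\cr 0 & 0 & 1 & 0\cr 0 & \sin{\theta} & 0 & \cos{\theta}\cr}\right)\in{\rm Sp}(4), $$ whose spectrum is $\{1, e^{\pm{\theta}\sqrt{-1}}\}\subset{\bf U}$ with $1$ of algebraic multiplicity $2$, so that the elliptic height of Section 1 equals $e(P)=4$, while $\nu_1(P)=1$ and $\nu_{e^{\pm{\theta}\sqrt{-1}}}(P)=1$; thus ${\Omega}(P)$ consists of all $N\in{\rm Sp}(4)$ with exactly these eigenvalues on ${\bf U}$ and the same $\nu_{\omega}$-values.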
\label{2.16}\end{equation} Let ${\gamma}\in{\cal P}_{\tau}(2n-2)=\{{\gamma}\in C([0,\tau],{\rm Sp}(2n-2))\,|\,{\gamma}(0)=I\}$. We extend ${\gamma}(t)$ to $t \in [0,m\tau]$ for every $m \in {\bf N}$ by \begin{eqnarray} {\gamma}^{m}(t)={\gamma}(t-j\tau){\gamma}(\tau)^j \qquad\forall j\tau\le t\le (j+1)\tau\ \mbox{and}\ j=0,1,\cdots,m-1.\label{2.17} \end{eqnarray} Denote the basic normal form decomposition of $P\equiv {\gamma}(\tau)$ by (\ref{2.15}). Then we have \begin{eqnarray} i({\gamma}^m) &=& m(i({\gamma})+p_-+p_0-r ) + 2\sum_{j=1}^r{E}\left(\frac{m{\theta}_j}{2\pi}\right) - r \nonumber\\ && - p_- - p_0 - {{1+(-1)^m}\over 2}(q_0+q_+) + 2\sum_{j=1}^{r_{\ast}}{\varphi}\left(\frac{m{\alpha}_j}{2\pi}\right) - 2r_{\ast}, \label{2.18}\\ \nu({\gamma}^m) &=& \nu({\gamma}) + {{1+(-1)^m}\over 2}(q_-+2q_0+q_+) + 2{\varsigma}(m,{\gamma}(\tau)), \label{2.19}\end{eqnarray} where we denote by \begin{equation} {\varsigma}(m,{\gamma}(\tau)) = r - \sum_{j=1}^r{\varphi}(\frac{m{\theta}_j}{2\pi}) + r_{\ast} - \sum_{j=1}^{r_{\ast}}{\varphi}(\frac{m{\alpha}_j}{2\pi}) + r_0 - \sum_{j=1}^{r_0}{\varphi}(\frac{m{\beta}_j}{2\pi}). \label{2.20}\end{equation}}

\medskip

Let \begin{equation} {\cal M}\equiv\{N_1(1,1); \;\;N_1(-1,a_2),\,a_2=\pm1;\;\;R({\theta}), \,{\theta}\in[0,2\pi);\,H(-2)\}. \end{equation} By Theorems 8.1.4-8.1.7 and 8.2.1-8.2.4 of \cite{Lon3}, we have

\medskip

{\bf Proposition 2.7.} {\it Every path ${\gamma}\in{\cal P}_{\tau}(2)$ with end matrix homotopic to some matrix in ${\cal M}$ has odd index $i({\gamma})$. Every path $\xi\in{\cal P}_{\tau}(2)$ with end matrix homotopic to $N_1(1,-1)$ or $H(2)$, and every path $\eta\in{\cal P}_{\tau}(4)$ with end matrix homotopic to $N_2({\omega},B)$ has even indices $i(\xi)$ and $i(\eta)$.}

\medskip

The common index jump theorem (cf. Theorem 4.3 of \cite{LoZ}) for symplectic paths has become one of the main tools in studying the multiplicity and stability of periodic orbits in Hamiltonian and symplectic dynamics. Recently, the following enhanced common index jump theorem has been obtained by Duan, Long and Wang in \cite{DLW}.

\medskip

{\bf Theorem 2.8.} (cf. Theorem 3.5 of \cite{DLW}) {\it Let $\gamma_k\in\mathcal{P}_{\tau_k}(2n)$ for $k=1,\cdots,q$ be a finite collection of symplectic paths. Let $M_k={\gamma}_k(\tau_k)$. We extend ${\gamma}_k$ to $[0,+\infty)$ by (\ref{2.17}) inductively. Suppose \begin{equation} \hat{i}({\gamma}_k,1) > 0, \qquad \forall\ k=1,\cdots,q. \label{2.21}\end{equation} Then for any fixed integer $\bar{m}\in {\bf N}$, there exist infinitely many $(q+1)$-tuples $(N, m_1,\cdots,m_q) \in {\bf N}^{q+1}$ such that for all $1\le k\le q$ and $1\le m\le \bar{m}$, there holds \begin{eqnarray} \nu({\gamma}_k,2m_k-m) &=& \nu({\gamma}_k,2m_k+m) = \nu({\gamma}_k, m), \label{2.22}\\ i({\gamma}_k,2m_k+m) &=& 2N+i({\gamma}_k,m), \label{2.23}\\ i({\gamma}_k,2m_k-m) &=& 2N-i({\gamma}_k,m)-2(S^+_{M_k}(1)+Q_k(m)), \label{2.24}\\ i({\gamma}_k, 2m_k)&=& 2N -(S^+_{M_k}(1)+C(M_k)-2\Delta_k), \label{2.25}\end{eqnarray} where $S_{M_k}^\pm({\omega})$ is the splitting number of $M_k$ at ${\omega}$ (cf. Definition 9.1.4 of \cite{Lon3}) and \begin{eqnarray} &&C(M_k)=\sum\limits_{0<\theta<2\pi}S^-_{M_k}(e^{\sqrt{-1}\theta}),\ \Delta_k = \sum_{0<\{m_k{\theta}/\pi\}<\delta}S^-_{M_k}(e^{\sqrt{-1}{\theta}}),\nonumber\\ &&Q_k(m) = \sum_{\theta\in(0,2\pi), e^{\sqrt{-1}{\theta}}\in{\sigma}(M_k),\atop \{\frac{m_k{\theta}}{\pi}\}= \{\frac{m{\theta}}{2\pi}\}=0} S^-_{M_k}(e^{\sqrt{-1}{\theta}}).
\label{2.26} \end{eqnarray} More precisely, by (4.10), (4.40) and (4.41) in \cite{LoZ}, we have \begin{eqnarray} m_k=\left(\left[\frac{N}{\bar{M}\hat i(\gamma_k, 1)}\right]+\chi_k\right)\bar{M},\quad 1\le k\le q,\label{2.27}\end{eqnarray} where $\chi_k=0$ or $1$ for $1\le k\le q$ and $\frac{\bar{M}\theta}{\pi}\in{\bf Z}$ whenever $e^{\sqrt{-1}\theta}\in\sigma(M_k)$ and $\frac{\theta}{\pi}\in{\bf Q}$ for some $1\le k\le q$. Furthermore, for any fixed $M_0\in{\bf N}$, we may further require $M_0|N$, and for any $\epsilon>0$, we can choose $N$ and $\{\chi_k\}_{1\le k\le q}$ such that \begin{eqnarray} \left|\left\{\frac{N}{\bar{M}\hat i(\gamma_k, 1)}\right\}-\chi_k\right|<\epsilon,\quad 1\le k\le q.\label{2.28}\end{eqnarray}}

\medskip

We also have the following properties in the index iteration theory.

\medskip

{\bf Theorem 2.9.} (cf. Theorem 2.2 of \cite{LoZ} or Theorem 10.2.3 of \cite{Lon3}) {\it Let ${\gamma}\in {\cal P}_{\tau}(2n)$. Then, for any $m\in {\bf N}$, there holds $$ \nu({\gamma},m)-e(M)\le i({\gamma},m+1)-i({\gamma},m)-i({\gamma},1)\le\nu({\gamma},1)-\nu({\gamma},m+1)+e(M), $$ where $M={\gamma}(\tau)$ and $e(M)$ is the elliptic height defined in Section 1.}

\setcounter{figure}{0}
\setcounter{equation}{0}
\section{Some index estimates and proof of Theorem 1.1}

\subsection{Some index estimates for closed geodesics}

Firstly we make the following assumption:

{\bf (FCG)} {\it Suppose that there exist only finitely many prime closed geodesics $\{c_k\}_{k=1}^q$ on $(S^4,F)$ with reversibility $\lambda$ and flag curvature $K$ satisfying $\frac{25}{9}\left(\frac{{\lambda}}{1 + {\lambda}}\right)^2 < K \le 1$.}

\medskip

For any $1\le k\le q$, we rewrite (\ref{2.15}) as follows: \begin{eqnarray} f_k(1) &=& N_1(1,1)^{{\diamond} p_{k,-}}\,{\diamond}\,I_{2p_{k,0}}\,{\diamond}\,N_1(1,-1)^{{\diamond} p_{k,+}}\nonumber\\ &&{\diamond}\,N_1(-1,1)^{{\diamond} q_{k,-}}\,{\diamond}\,(-I_{2q_{k,0}})\,{\diamond}\,N_1(-1,-1)^{{\diamond} q_{k,+}} \nonumber\\ && {\diamond}\,R({\theta}_{k,1})\,{\diamond}\,\cdots\,{\diamond}\,R({\theta}_{k,r_{k,1}})\,{\diamond}\,R(\td{{\theta}}_{k,1})\,{\diamond}\,\cdots\,{\diamond}\,R(\td{{\theta}}_{k,r_{k,2}})\nonumber\\ && {\diamond}\,N_2(e^{\sqrt{-1}{\alpha}_{k,1}},A_{k,1})\,{\diamond}\,\cdots\,{\diamond}\,N_2(e^{\sqrt{-1}{\alpha}_{k,r_{k,3}}},A_{k,r_{k,3}})\nonumber\\ && {\diamond}\,N_2(e^{\sqrt{-1}\td{{\alpha}}_{k,1}},\td{A}_{k,1})\,{\diamond}\,\cdots\,{\diamond}\,N_2(e^{\sqrt{-1}\td{{\alpha}}_{k,r_{k,4}}},\td{A}_{k,r_{k,4}})\nonumber\\ && {\diamond}\,N_2(e^{\sqrt{-1}{\beta}_{k,1}},B_{k,1})\,{\diamond}\,\cdots\,{\diamond}\,N_2(e^{\sqrt{-1}{\beta}_{k,r_{k,5}}},B_{k,r_{k,5}})\nonumber\\ && {\diamond}\,N_2(e^{\sqrt{-1}\td{{\beta}}_{k,1}},\td{B}_{k,1})\,{\diamond}\,\cdots\,{\diamond}\,N_2(e^{\sqrt{-1}\td{{\beta}}_{k,r_{k,6}}},\td{B}_{k,r_{k,6}}){\diamond}\,H(2)^{{\diamond} h_{k,+}}{\diamond}\,H(-2)^{{\diamond} h_{k,-}},\label{3.1.0} \end{eqnarray} where $\frac{{\theta}_{k,j}}{2\pi}\in{\bf Q}\cap(0,1)\setminus \{\frac{1}{2}\} $ for $1\le j\le r_{k,1}$, $\frac{\td{{\theta}}_{k,j}}{2\pi}\in(0,1)\setminus{\bf Q}$ for $1\le j\le r_{k,2}$, $\frac{{\alpha}_{k,j}}{2\pi}\in{\bf Q}\cap(0,1)\setminus \{\frac{1}{2}\} $ for $1\le j\le r_{k,3}$, $\frac{\td{{\alpha}}_{k,j}}{2\pi}\in(0,1)\setminus{\bf Q}$ for $1\le j\le r_{k,4}$, $\frac{{\beta}_{k,j}}{2\pi}\in{\bf Q}\cap(0,1)\setminus \{\frac{1}{2}\} $ for $1\le j\le r_{k,5}$, $\frac{\td{{\beta}}_{k,j}}{2\pi}\in(0,1)\setminus{\bf Q}$ for $1\le j\le r_{k,6}$; $N_2(e^{\sqrt{-1}{\alpha}_{k,j}},A_{k,j})$'s and $N_2(e^{\sqrt{-1}\td{{\alpha}}_{k,j}},\td{A}_{k,j})$'s are nontrivial
and $N_2(e^{\sqrt{-1}{\beta}_{k,j}},B_{k,j})$'s and $N_2(e^{\sqrt{-1}\td{{\beta}}_{k,j}},\td{B}_{k,j})$'s are trivial, and non-negative integers $p_{k,-}$, $p_{k,0}$, $p_{k,+}$, $q_{k,-}$, $q_{k,0}$, $q_{k,+}$, $r_{k,1}$, $r_{k,2}$, $r_{k,3}$, $r_{k,4}$, $r_{k,5}$, $r_{k,6}$, $h_k=h_{k,+}+h_{k,-}$ satisfy the equality \begin{eqnarray} p_{k,-}+p_{k,0}+p_{k,+}+q_{k,-}+q_{k,0}+q_{k,+}+r_{k,1}+r_{k,2} + 2\sum_{j=3}^6 r_{k,j}+h_k= 3. \label{3.2.0} \end{eqnarray} \medskip {\bf Lemma 3.1.} {\it Under the assumption (FCG), for any prime closed geodesic $c_k$, $1\le k\le q$, there holds \begin{equation} i(c_k^m)\ge 3\left[\frac{5m}{3}\right],\qquad \forall\ m\in{\bf N}\label{3.3.0} \end{equation} and \begin{equation} \hat{i}(c_k)>5. \label{3.4.0} \end{equation}} {\bf Proof.} By the assumption (FCG), since the flag curvature $K$ satisfies $\frac{25}{9}\left(\frac{{\lambda}}{1 + {\lambda}}\right)^2 < K \le 1$, we can choose $\frac{25}{9}\left(\frac{\lambda}{\lambda+1}\right)^2<\delta\le K\le 1$. Then by Lemma 2 in \cite{Rad4}, it yields $$ \hat{i}(c_k)\ge 3\sqrt{\delta}\frac{1+\lambda}{\lambda}>5. $$ Note that it follows from Theorem 3 of \cite{Rad3} that $L(c_k^m)=mL(c_k)\ge m\pi\frac{1+\lambda}{\lambda}>\frac{5m}{3}\pi/\sqrt{\delta}$ for $m\ge 1$ and $1\le k\le q$. Then it follows from Lemma 3 of \cite{Rad3} that $i(c_k^m)\ge 3[\frac{5m}{3}]$. \hfill\vrule height0.18cm width0.14cm $\,$ \medskip Combining Lemma 3.1 with Theorem 2.9, it follows that \begin{eqnarray} i(c_k^{m+1})-i(c_k^m)-\nu(c_k^m)\ge i(c_k)-\frac{e(P_{c_k})}{2}\ge 0,\quad\forall\ m\in{\bf N},\ 1\le k\le q.\label{3.5.0} \end{eqnarray} Here the last inequality holds by the fact that $e(P_{c_k})\le 6$ and $i(c_k)\ge 3$. It follows from (\ref{3.4.0}), Theorem 4.3 in \cite{LoZ} and Theorem 2.8 that for any fixed integer $\bar{m}\in {\bf N}$, there exist infinitely many $(q+1)$-tuples $(N, m_1,\cdots, m_q)\in{\bf N}^{q+1}$ such that for any $1\le k\le q$ and $1\le m\le \bar{m}$, there holds \begin{eqnarray} i(c_k^{2m_k -m})+\nu(c_k^{2m_k-m})&=& 2N-i(c_k^{m})-\left(2S^+_{P_{c_k}}(1)+2Q_{k}(m)-\nu(c_k^{m})\right), \label{3.6.0}\\ i(c_k^{2m_k})&\ge& 2N-\frac{e(P_{c_k})}{2},\label{3.7.0}\\ i(c_k^{2m_k})+\nu(c_k^{2m_k})&\le& 2N+\frac{e(P_{c_k})}{2},\label{3.8.0}\\ i(c_k^{2m_k+m})&=&2N+i(c_k^{m}),\label{3.9.0} \end{eqnarray} where (\ref{3.7.0}) and (\ref{3.8.0}) follow from (4.32) and (4.33) in Theorem 4.3 in \cite{LoZ} respectively. 
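For later use, we also record the first values given by the estimate (\ref{3.3.0}): since $[\frac{5}{3}]=1$, $[\frac{10}{3}]=3$, $[\frac{15}{3}]=5$ and $[\frac{20}{3}]=6$, there hold \begin{eqnarray} i(c_k)\ge 3,\quad i(c_k^2)\ge 9,\quad i(c_k^3)\ge 15,\quad i(c_k^4)\ge 18,\qquad 1\le k\le q,\nonumber\end{eqnarray} and these values will be used repeatedly in the estimates (\ref{3.15.0})-(\ref{3.18.0}) below and in the proof of Lemma 3.2.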
Note that by List 9.1.12 of \cite{Lon3}, (\ref{2.26}), (\ref{2.19}) and $\nu(c_k)=p_{k,-}+2p_{k,0}+p_{k,+}$, we have \begin{eqnarray} &&S^+_{P_{c_k}}(1)=p_{k,-}+p_{k,0},\label{3.10.0}\\ &&C(P_{c_k})=q_{k,0}+q_{k,+}+r_{k,1}+r_{k,2}+2r_{k,3}+2r_{k,4},\label{3.11.0}\\ &&Q_{k}(m)=\frac{1+(-1)^m}{2}(q_{k,0}+q_{k,+})+(r_{k,1}+r_{k,3}) -\sum_{j=1}^{r_{k,1}}{\varphi}\left(\frac{m\theta_{k,j}}{2\pi}\right)-\sum_{j=1}^{r_{k,3}}{\varphi}\left(\frac{m{\alpha}_{k,j}}{2\pi}\right),\label{3.12.0}\\ &&\nu(c_k^m)=(p_{k,-}+2p_{k,0}+p_{k,+})+\frac{1+(-1)^m}{2}(q_{k,-}+2q_{k,0}+q_{k,+})+2(r_{k,1}+r_{k,3}+r_{k,5})\nonumber\\ &&\qquad\qquad -2\left(\sum_{j=1}^{r_{k,1}}{\varphi}\left(\frac{m{\theta}_{k,j}}{2\pi}\right)+\sum_{j=1}^{r_{k,3}}{\varphi}\left(\frac{m{\alpha}_{k,j}}{2\pi}\right) +\sum_{j=1}^{r_{k,5}}{\varphi}\left(\frac{m{\beta}_{k,j}}{2\pi}\right)\right).\label{3.13.0} \end{eqnarray} By (\ref{3.10.0}), (\ref{3.12.0}) and (\ref{3.13.0}), we obtain \begin{eqnarray} 2S^+_{P_{c_k}}(1)+2Q_k(m) -\nu(c_k^{m})=p_{k,-}-p_{k,+}-\frac{1+(-1)^m}{2}(q_{k,-}-q_{k,+}) -2r_{k,5}+2\sum_{j=1}^{r_{k,5}}{\varphi}\left(\frac{m{\beta}_{k,j}}{2\pi}\right),\nonumber \end{eqnarray} which, together with (\ref{3.6.0}), gives \begin{eqnarray} i(c_k^{2m_k -m})+\nu(c_k^{2m_k-m})&=& 2N-i(c_k^{m})-p_{k,-}+p_{k,+}+\frac{1+(-1)^m}{2}(q_{k,-}-q_{k,+})\nonumber\\ &&\quad +2r_{k,5}-2\sum_{j=1}^{r_{k,5}}{\varphi}\left(\frac{m{\beta}_{k,j}}{2\pi}\right),\quad\forall\ 1\le m\le \bar{m}.\label{3.14.0} \end{eqnarray} By (\ref{3.7.0})-(\ref{3.9.0}), (\ref{3.14.0}), (\ref{3.2.0}), (\ref{3.3.0}) and the fact $e(P_{c_k})\le 6$, there holds \begin{eqnarray} i(c_k^{2m_k-m})+\nu(c_k^{2m_k-m})&\le& 2N+3-3\left[\frac{5m}{3}\right], \quad\forall\ 1\le m\le \bar{m},\label{3.15.0}\\ 2N-3 &\le& 2N-\frac{e(P_{c_k})}{2}\le i(c_k^{2m_k}),\label{3.16.0}\\ i(c_k^{2m_k})+\nu(c_k^{2m_k})&\le& 2N+\frac{e(P_{c_k})}{2}\le 2N+3,\label{3.17.0}\\ 2N+3\left[\frac{5m}{3}\right]&\le& i(c_k^{2m_k+m}), \qquad\forall\ 1\le m\le \bar{m}.\label{3.18.0} \end{eqnarray} Note that by (\ref{3.5.0}), we have \begin{eqnarray*} &&i(c_k^m)\le i(c_k^{m+1}), \qquad i(c_k^m)+\nu(c_k^m)\le i(c_k^{m+1})+\nu(c_k^{m+1}),\quad\forall m\in {\bf N}, \end{eqnarray*} which, together with (\ref{3.15.0}) and (\ref{3.18.0}), implies \begin{eqnarray} &&i(c_k^m)+\nu(c_k^m)\le i(c_k^{2m_k -\bar{m}})+\nu(c_k^{2m_k-\bar{m}})\le 2N+3-3\left[\frac{5\bar{m}}{3}\right], \,\forall\ 1\le m\le 2m_k-\bar{m},\label{3.19.0}\\ &&2N+3\left[\frac{5\bar{m}}{3}\right]\le i(c_k^{2m_k+\bar{m}})\le i(c_k^{m}),\,\forall\ m\ge 2m_k+\bar{m}.\label{3.20.0} \end{eqnarray} In addition, by (\ref{2.25}), (\ref{3.10.0}), (\ref{3.11.0}) and (\ref{3.13.0}), the precise formula for $i(c_k^{2m_k})+\nu(c_k^{2m_k})$ can be computed as follows: \begin{eqnarray} i(c_k^{2m_k})+\nu(c_k^{2m_k}) &=&2N+2\Delta_k-(p_{k,-}+p_{k,0}+q_{k,0}+q_{k,+}+r_{k,1}+r_{k,2}+2r_{k,3}+2r_{k,4})\nonumber\\ &&+p_{k,+}+2p_{k,0}+p_{k,-}+q_{k,+}+2q_{k,0}+q_{k,-}+2r_{k,1}+2r_{k,3}+2r_{k,5}\nonumber\\ &=&2N+p_{k,0}+p_{k,+}+q_{k,-}+q_{k,0}\nonumber\\ &&\qquad +r_{k,1}+2r_{k,5}-r_{k,2}-2r_{k,4}+2\Delta_k,\quad k=1,\cdots,q, \label{3.21.0} \end{eqnarray} where, by (\ref{2.26}) and List 9.1.12 of \cite{Lon3}, \begin{eqnarray} \Delta_k \equiv \sum_{0<\{m_k{\theta}/\pi\}<\delta}S^-_{M_k}(e^{\sqrt{-1}{\theta}})\le r_{k,2}+r_{k,4}. \label{3.22.0}\end{eqnarray}
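As an illustration of (\ref{3.21.0}) and (\ref{3.22.0}), which will reappear in the proof of Claim 3 in Section 3.2, suppose that some $c_k$ is irrationally elliptic, i.e., $r_{k,2}=3$ and all the other parameters in (\ref{3.1.0}) vanish. Then (\ref{3.21.0}) reduces to \begin{eqnarray} i(c_k^{2m_k})+\nu(c_k^{2m_k})=2N-3+2\Delta_k,\qquad 0\le\Delta_k\le r_{k,2}=3,\nonumber\end{eqnarray} so that this quantity can only take one of the values $2N-3$, $2N-1$, $2N+1$ and $2N+3$.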
\medskip

{\bf Lemma 3.2.} {\it Under the assumption (FCG), for $k=1,\cdots,q$, we have \begin{eqnarray} i(c_k^{2m_k -1})+\nu(c_k^{2m_k -1})&\le& 2N-3,\label{3.23.0}\\ i(c_k^{2m_k -2})+\nu(c_k^{2m_k -2})&\le& 2N-9.\label{3.24.0}\end{eqnarray}}

{\bf Proof.} By (\ref{2.18}), we have \begin{eqnarray} \hat i(c_{k})&=&i(c_{k})+p_{k,-}+p_{k,0}-r_{k,1}-r_{k,2}+\sum_{j=1}^{r_{k,1}} \frac{\theta_{k,j}}{\pi}+\sum_{j=1}^{r_{k,2}} \frac{\tilde \theta_{k,j}}{\pi}\nonumber\\ &<& i(c_{k})+p_{k,-}+ p_{k,0}+r_{k,1}+r_{k,2}.\label{3.25.0} \end{eqnarray} Combining (\ref{3.25.0}) with (\ref{3.4.0}), there holds \begin{eqnarray} i(c_{k})+p_{k,-}+p_{k,0}+r_{k,1}+r_{k,2}\ge 6. \label{3.26.0} \end{eqnarray} Then by (\ref{3.14.0}) with $m=1$ (note that the last two terms in (\ref{3.14.0}) cancel for $m=1,2$ since ${\varphi}(\frac{m{\beta}_{k,j}}{2\pi})=1$, the numbers $\frac{m{\beta}_{k,j}}{2\pi}$ being non-integers), (\ref{3.26.0}) and (\ref{3.2.0}), we obtain \begin{eqnarray} i(c_k^{2m_k -1})+\nu(c_k^{2m_k -1})&=& 2N-i(c_k)-p_{k,-}+p_{k,+} \nonumber\\ &\le & 2N-6+p_{k,0}+r_{k,1}+r_{k,2}+p_{k,+}\le 2N-3.\label{3.27.0} \end{eqnarray} By (\ref{3.14.0}) with $m=2$, (\ref{3.3.0}) and (\ref{3.2.0}), we get \begin{eqnarray} i(c_k^{2m_k -2})+\nu(c_k^{2m_k -2})= 2N-i(c_k^{2})-p_{k,-}+p_{k,+}+q_{k,-}-q_{k,+}\le 2N-9+3=2N-6.\label{3.28.0} \end{eqnarray} Now assume by contradiction that $ i(c_k^{2m_k -2})+\nu(c_k^{2m_k -2})\ge 2N-8 $ for some $k\in\{1,\cdots,q\}$, which, together with (\ref{3.28.0}), gives $ i(c_k^{2m_k -2})+\nu(c_k^{2m_k-2})\in \{2N-6, 2N-7, 2N-8\}$.

\medskip

We continue the proof by distinguishing three cases.

\medskip

{\bf Case 1:} $i(c_k^{2m_k -2})+\nu(c_k^{2m_k -2})= 2N-6 $.

In this case, by (\ref{3.3.0}) and (\ref{3.28.0}), we know that $p_{k,+}+q_{k,-}=3$ and $ i(c_k^2)=9$. It follows from (\ref{2.18}) and (\ref{3.2.0}) that $i(c_k^2)=2i(c_k)\in 2{\bf N}$ since $p_{k,+}+q_{k,-}=3$, thus Case 1 cannot happen.

\medskip

{\bf Case 2:} $ i(c_k^{2m_k -2})+\nu(c_k^{2m_k -2})= 2N-7 $.

In this case, by (\ref{3.3.0}) and (\ref{3.28.0}), one of the following cases may happen.

(i) $ i(c_k^2)=10 $ and $p_{k,+}+q_{k,-}=3$.

(ii) $ i(c_k^2)=9 $, $p_{k,+}+q_{k,-}=2 $ and $ p_{k,-}+q_{k,+}=0 $.

For (i), by (\ref{2.18}), we have $i(c_k^2)=2i(c_k)$ and $ \hat{i}(c_k)=i(c_{k}) $ since $p_{k,+}+q_{k,-}=3$. However, there holds $ \hat i(c_{k})>5 $ by (\ref{3.4.0}). So we have $i(c_k^2)=2i(c_k)=2\hat{i}(c_k)>10$, thus (i) of Case 2 cannot happen.

For (ii), by (\ref{3.2.0}), there holds \begin{eqnarray} p_{k,0}+q_{k,0}+r_{k,1}+r_{k,2}+h_k=1.\label{3.29.0} \end{eqnarray} It follows from (\ref{2.18}) and (\ref{1.2}) that \begin{eqnarray} i(c_k^2)&=&2i(c_k)+p_{k,0}-q_{k,0}-3(r_{k,1}+r_{k,2})+2\sum_{j=1}^{r_{k,1}}E\left(\frac{\theta_{k,j}}{\pi}\right)+2\sum_{j=1}^{r_{k,2}}E\left(\frac{\tilde{\theta}_{k,j}}{\pi}\right)\nonumber\\ &>&2i(c_k)+p_{k,0}-q_{k,0}-3(r_{k,1}+r_{k,2})+2\sum_{j=1}^{r_{k,1}}\frac{\theta_{k,j}}{\pi}+2\sum_{j=1}^{r_{k,2}}\frac{\tilde{\theta}_{k,j}}{\pi}.\label{3.30.0} \end{eqnarray} Combining (\ref{3.30.0}) and (\ref{3.29.0}), it yields \begin{eqnarray} \hat i(c_{k})&=& i(c_{k})+p_{k,0}-r_{k,1}-r_{k,2}+\sum_{j=1}^{r_{k,1}} \frac{\theta_{k,j}}{\pi}+\sum_{j=1}^{r_{k,2}} \frac{\tilde \theta_{k,j}}{\pi}\nonumber\\ &<&\frac{1}{2}\left(i(c_k^2)+p_{k,0}+q_{k,0}+r_{k,1}+r_{k,2}\right)\le \frac{1}{2}\left(i(c_k^2)+1\right),\label{3.31.0} \end{eqnarray} which, together with (\ref{3.4.0}), yields $i(c_k^2)>9$, thus (ii) of Case 2 cannot happen.

\medskip

{\bf Case 3:} $ i(c_k^{2m_k -2})+\nu(c_k^{2m_k -2})= 2N-8 $.

In this case, by (\ref{3.3.0}) and (\ref{3.28.0}), one of the following cases may happen.

(i) $ i(c_k^2)=11 $ and $p_{k,+}+q_{k,-}=3$.

(ii) $ i(c_k^2)=10 $, $p_{k,+}+q_{k,-}=2 $ and $ p_{k,-}+q_{k,+}=0 $.
(iii) $ i(c_k^2)=9 $, $p_{k,+}+q_{k,-}=2 $ and $ p_{k,-}+q_{k,+}=1 $.

(iv) $ i(c_k^2)=9 $, $p_{k,-}+q_{k,+}=0 $ and $p_{k,+}+q_{k,-}=1 $.

For (i), similar to the arguments in Case 1, it can be shown that this case cannot happen.

For (ii), similar to (\ref{3.29.0}) and the first equality in (\ref{3.30.0}), we have \begin{eqnarray} && p_{k,0}+q_{k,0}+r_{k,1}+r_{k,2}+h_k=1, \label{3.32.0}\\ && i(c_k^2)=p_{k,0}+q_{k,0}+r_{k,1}+r_{k,2}\quad ({\rm mod}\ 2),\label{3.33.0} \end{eqnarray} which yields \begin{eqnarray} i(c_k^2)=1+h_k\quad ({\rm mod}\ 2).\label{3.34.0} \end{eqnarray} Then we get $h_k=1$ by $ i(c_k^2)=10$, (\ref{3.34.0}) and (\ref{3.32.0}). By (\ref{2.18}) and (\ref{3.2.0}), we have $i(c_k^2)=2i(c_k)$ and $ \hat{i}(c_k)=i(c_{k}) $ since $p_{k,+}+q_{k,-}=2$ and $h_k=1$. However $ \hat i(c_{k})>5 $ by (\ref{3.4.0}), so that $i(c_k^2)=2\hat{i}(c_k)>10$, thus (ii) of Case 3 cannot happen.

For (iii), by (\ref{2.18}) and (\ref{3.2.0}), we have $i(c_k^2)=2i(c_k)+p_{k,-}-q_{k,+}$ and $ \hat{i}(c_k)=i(c_k)+p_{k,-}$ since $ p_{k,-}+q_{k,+}=1 $ and $p_{k,+}+q_{k,-}=2 $. Then we obtain $2\hat{i}(c_k)=i(c_k^2)+1=10$. However $ \hat i(c_{k})>5 $ by (\ref{3.4.0}), thus (iii) of Case 3 cannot happen.

For (iv), similar to the proof of (\ref{3.33.0}) in (ii) of Case 3, we have \begin{eqnarray} i(c_k^2)=p_{k,0}+q_{k,0}+r_{k,1}+r_{k,2}\quad ({\rm mod}\ 2).\label{3.35.0} \end{eqnarray} Therefore by (\ref{3.2.0}), (\ref{3.35.0}), $i(c_k^2)=9$ and $p_{k,+}+q_{k,-}=1$, we get \begin{eqnarray} p_{k,0}+q_{k,0}+r_{k,1}+r_{k,2}=1.\label{3.36.0} \end{eqnarray} Then similar to the proof of (\ref{3.31.0}) in (ii) of Case 2, we have \begin{eqnarray} \hat i(c_{k})<\frac{1}{2}\left(i(c_k^2)+p_{k,0}+q_{k,0}+r_{k,1}+r_{k,2}\right)= \frac{1}{2}\left(i(c_k^2)+1\right),\label{3.37.0} \end{eqnarray} which, together with (\ref{3.4.0}), yields $i(c_k^2)>9$, thus (iv) of Case 3 cannot happen.

This completes the proof of Lemma 3.2.\hfill\vrule height0.18cm width0.14cm $\,$

\medskip

Under the assumption (FCG), Theorem 1.1 of \cite{Dua2} shows that there exist at least two elliptic closed geodesics $c_1$ and $c_2$ on $(S^4,F)$ whose flag curvature satisfies $\left(\frac{{\lambda}}{1+{\lambda}}\right)^2<K\le 1$. The following lemma gives some properties of these two closed geodesics which will be useful in the proof of Theorem 1.1.

\medskip

{\bf Lemma 3.3.} (cf. Lemma 3.1 and Lemma 3.3 of \cite{Dua1} and Section 3 of \cite{Dua2}) {\it Under the assumption (FCG), there exist at least two prime elliptic closed geodesics $c_1$ and $c_2$ on $(S^4,F)$ whose flag curvature satisfies $\left(\frac{{\lambda}}{1+{\lambda}}\right)^2<K\le 1$.
Moreover, there exist infinitely many pairs of $(q+1)$-tuples $(N, m_1, m_2,\cdots, m_q)\in{\bf N}^{q+1}$ and $(N', m_1', m_2',\cdots, m_q')\in{\bf N}^{q+1}$ such that \begin{eqnarray} && i(c_1^{2m_1})+\nu(c_1^{2m_1})=2N+3,\quad \overline{C}_{2N+3}(E,c_1^{2m_1})={\bf Q},\label{3.38.0}\\ && i(c_2^{2m_2'})+\nu(c_2^{2m_2'})=2N'+3,\quad \overline{C}_{2N'+3}(E,c_2^{2m_2'})={\bf Q},\label{3.39.0}\\ && p_{k,-}=q_{k,+}=r_{k,3}=r_{k,4}=r_{k,6}=h_k=0,\quad k=1,2,\label{3.40.0}\\ && r_{1,2}=\Delta_1\ge 1,\qquad r_{2,2}=\Delta_2'\ge 1, \label{3.41.0}\\ && \Delta_k+\Delta_k'=r_{k,2},\quad k=1,2,\label{3.42.0}\end{eqnarray} where we can require $3|N$ or $3|N'$ as remarked in Theorem 2.8 and \begin{eqnarray} \Delta_k' \equiv \sum_{0<\{m_k'{\theta}/\pi\}<\delta}S^-_{M_k}(e^{\sqrt{-1}{\theta}}),\quad k=1,2.\label{3.43.0}\end{eqnarray} In addition, for these two closed geodesics $c_1$ and $c_2$, there holds \begin{eqnarray} k_{\nu(c_k^{n(c_k)})}^{{\epsilon}(c_k^{n(c_k)})}(c_k^{n(c_k)})=1,\quad k_{j}^{{\epsilon}(c_k^{n(c_k)})}(c_k^{n(c_k)})=0,\quad\forall\ 0\le j<\nu(c_k^{n(c_k)}),\ k=1,2. \label{3.44.0}\end{eqnarray}}

\subsection{Proof of Theorem 1.1}

In this subsection, let $(S^4,F)$ be a Finsler sphere of dimension $4$ with its reversibility $\lambda$ and flag curvature $K$ satisfying $\frac{25}{9}\left(\frac{{\lambda}}{1+{\lambda}}\right)^2<K\le 1$. In order to prove Theorem 1.1, according to Lemma 3.3 and Theorem 1.1 of \cite{Dua1}, we make the following assumption.

\smallskip

{\bf (TCG)} {\it Suppose that on such $(S^4,F)$ there exist exactly three prime closed geodesics: two prime elliptic closed geodesics $c_1$ and $c_2$ possessing all the properties listed in Lemma 3.3, and a third prime closed geodesic $c_3$.}

\smallskip

In order to count the contribution of $c_k^m$ to the Morse-type number $M_q$, for the sake of convenience, we set \begin{eqnarray} M_q(k,m)=\dim{\overline{C}}_q(E, c^m_k),\quad \forall\ 1\le k\le 3,\ m\ge1,\ q\in{\bf N}_0. \label{4.0.1} \end{eqnarray} Next we fix $\bar{m}=4$ in Theorem 2.8. Before proving Theorem 1.1, we first establish several crucial lemmas.

\medskip

{\bf Lemma 3.4.} {\it For an integer $q$ satisfying $2N-9\le q\le 2N+17$, there holds} \begin{eqnarray} M_q=\left\{\matrix{ \sum_{1\le k\le 3, \atop 1\le m\le 2}M_q(k,2m_k-m),\quad &&{\it if}\quad q=2N-9, \cr \sum_{1\le k\le 3}M_q(k,2m_k-1),\quad &&{\it if}\quad 2N-8\le q\le 2N-4, \cr \sum_{1\le k\le 3, \atop 0\le m\le 1}M_q(k,2m_k-m),\quad &&{\it if}\quad q=2N-3, \cr \sum_{1\le k\le 3}M_q(k,2m_k),\quad &&{\it if}\quad 2N-2\le q\le 2N+2, \cr \sum_{1\le k\le 3, \atop 0\le m\le 1}M_q(k,2m_k+m),\quad &&{\it if}\quad q=2N+3, \cr \sum_{1\le k\le 3}M_q(k,2m_k+1),\quad &&{\it if}\quad 2N+4\le q\le 2N+8, \cr \sum_{1\le k\le 3, \atop 1\le m\le 2}M_q(k,2m_k+m),\quad &&{\it if}\quad 2N+9\le q\le 2N+14, \cr \sum_{1\le k\le 3, \atop 1\le m\le 3}M_q(k,2m_k+m),\quad &&{\it if}\quad 2N+15\le q\le 2N+17. \cr}\right.
\end{eqnarray}

\medskip

{\bf Proof.} According to Lemma 2.1, (\ref{2.4}) and (i) of Lemma 2.2, we have \begin{eqnarray} M_q&=&\sum_{1\le k\le 3, \atop m\ge 1}M_q(k,m)=\sum_{1\le k\le 3, \atop m\ge 1}k_{q-i(c_k^m)}^{{\epsilon}(c_k^m)}(c_k^m)\nonumber\\ &=&\sum_{1\le k \le 3, \atop m\in\left\{m\in{\bf N}|i(c_k^m)\le q \le i(c_k^m)+\nu(c_k^m)\right\}}k_{q-i(c_k^m)}^{{\epsilon}(c_k^m)}(c_k^m)=\sum_{1\le k \le 3, \atop m\in\left\{m\in{\bf N} |i(c_k^m)\le q \le i(c_k^m)+\nu(c_k^m)\right\}}M_q(k,m).\label{3.47.0} \end{eqnarray} On one hand, by (\ref{3.19.0}), (\ref{3.15.0}), (\ref{3.24.0}), (\ref{3.23.0}) and (\ref{3.17.0}), it yields \begin{eqnarray} i(c_k^m)+\nu(c_k^m)\le\left\{\matrix{ 2N-15,&&\quad {\it if}\quad 1\le m\le 2m_k-4, \cr 2N-12,&&\quad {\it if}\quad m=2m_k-3, \cr 2N-9,&&\quad {\it if}\quad m=2m_k-2, \cr 2N-3,&&\quad {\it if}\quad m=2m_k-1, \cr 2N+3,&&\quad {\it if}\quad m=2m_k. \cr}\right.\label{3.48.0} \end{eqnarray} On the other hand, by (\ref{3.16.0}), (\ref{3.18.0}) and (\ref{3.20.0}), it yields \begin{eqnarray} i(c_k^m)\ge\left\{\matrix{ 2N-3,&&\quad {\it if}\quad m=2m_k, \cr 2N+3,&&\quad {\it if}\quad m=2m_k+1, \cr 2N+9,&&\quad {\it if}\quad m=2m_k+2, \cr 2N+15,&&\quad {\it if}\quad m=2m_k+3, \cr 2N+18,&&\quad {\it if}\quad m\ge 2m_k+4. \cr}\right.\label{3.49.0} \end{eqnarray} Combining (\ref{3.47.0})-(\ref{3.49.0}), we get Lemma 3.4. \hfill\vrule height0.18cm width0.14cm $\,$

\medskip

{\bf Lemma 3.5.} {\it For some tuple $(k,m)$ with $k=1,2,3$ and $m\in{\bf N}$, if there exist some integers $q_1, q_2\in{\bf N}$ satisfying \begin{eqnarray} q_1\le i(c_k^m) \quad \mbox{and} \quad i(c_k^m)+\nu(c_k^m)\le q_2,\nonumber \end{eqnarray} then there holds \begin{eqnarray} M_{q_1}(k,m)+M_{q_2}(k,m)\le 1. \end{eqnarray} Furthermore, if $M_{q_1}(k,m)+M_{q_2}(k,m)=1$, then \begin{eqnarray} M_q(k,m)=0,\quad \forall\ q\neq q_1,q_2. \end{eqnarray}}

{\bf Proof.} This follows directly from Lemma 2.1, (\ref{2.4}), (i) and (ii) of Lemma 2.2. \hfill\vrule height0.18cm width0.14cm $\,$

\medskip

{\bf Lemma 3.6.} {\it For some $k\in \{1,2,3\}$, if either $M_{2N-q\pm1}(k,2m_k-1)\ge 1$ or $M_{2N+q\pm1}(k,2m_k+1)\ge 1$ for some even $q\in {\bf N}$, then there exists a continuous path $f_k\in C([0,1],{\Omega}^0(P_{c_k}))$ such that $f_k(0)=P_{c_k}$ and $f_k(1)$ belongs to one of the following five cases: (i) $I_4\,{\diamond}\,H(2)$, (ii) $N_1(1,1)\,{\diamond}\,I_2\,{\diamond}\,N_1(1,-1)$, (iii) $I_4\,{\diamond}\,N_1(1,-1)$, (iv) $N_1(1,1)\,{\diamond}\, I_4$, (v) $I_6$. In each of these cases, the index iteration formula of $c_k^m$ can be written as follows: \begin{eqnarray} i(c_k^m)=qm-p_{k,-}-p_{k,0},\quad \forall\ m\ge 1. \label{3.52.0} \end{eqnarray} }

{\bf Proof.} We only give the proof under the assumption $M_{2N-q\pm1}(k,2m_k-1)\ge 1$. The proof under the assumption $M_{2N+q\pm1}(k,2m_k+1)\ge 1$ is similar. First, by Lemma 3.5 and the assumption $M_{2N-q\pm1}(k,2m_k-1)\ge 1$ in Lemma 3.6, we have \begin{eqnarray} i(c_k^{2m_k-1}) \le 2N-q-2, \qquad i(c_k^{2m_k-1})+\nu(c_k^{2m_k-1}) \ge 2N-q+2, \label{3.53.0} \end{eqnarray} which, together with $\nu(c_k^{2m_k-1})=\nu(c_k)$ by (\ref{2.22}), implies $\nu(c_k)=p_{k,-}+2p_{k,0}+p_{k,+}\in\{4,5,6\}$. If $\nu(c_k)=4$, by (\ref{3.53.0}), we have $i(c_k^{2m_k-1})= 2N-q-2$ and $i(c_k^{2m_k-1})+\nu(c_k^{2m_k-1})=2N-q+2$. And we also have $p_{k,-}+p_{k,0}+p_{k,+}=2$ or $3$ by (\ref{3.2.0}).
Since $i(c_k^{2m_k-1})= 2N-q-2\in 2{\bf N}$, we get $i(c_k)\in 2{\bf N}$ by (\ref{2.24}), then by Proposition 2.7 and the symplectic additivity of symplectic paths (cf. Theorem 6.2.6 of \cite{Lon3}), we must have $p_{k,+}+h_{k,+}=1$. Therefore, if $p_{k,-}+p_{k,0}+p_{k,+}=2$, we must have $p_{k,0}=2$ since $\nu(c_k)=4$ and $h_{k,+}=1$, i.e., $f_k(1)=I_4\,{\diamond}\,H(2)$. If $p_{k,-}+p_{k,0}+p_{k,+}=3$, we must have $p_{k,0}=1$, $p_{k,+}=1$ and $p_{k,-}=1$, i.e., $f_k(1)=N_1(1,1)\,{\diamond}\,I_2\,{\diamond}\,N_1(1,-1)$. If $\nu(c_k)=5$, we have $p_{k,-}+p_{k,0}+p_{k,+}=3$. Then we must have $p_{k,0}=2$ and $p_{k,-}+p_{k,+}=1$. So either $i(c_k^{2m_k-1})= 2N-q-2$ when $p_{k,+}=1$, or $i(c_k^{2m_k-1})= 2N-q-3$ when $p_{k,-}=1$ by Proposition 2.7, the symplectic additivity and (\ref{3.53.0}), i.e., $f_k(1)=I_4\,{\diamond}\,N_1(1,-1)$ or $N_1(1,1)\,{\diamond}\, I_4$. If $\nu(c_k)=6$, we must have $p_{k,0}=3$, and then we have $i(c_k^{2m_k-1})= 2N-q-3$ by Proposition 2.7, the symplectic additivity and (\ref{3.53.0}), i.e., $f_k(1)=I_6$. Note that by (\ref{2.24}), (\ref{3.10.0}) and (\ref{3.12.0}), we get $$ i(c_k^{2m_k-1})=2N-i(c_k)-2p_{k,-}-2p_{k,0}, $$ and by the above arguments in either case, we have $i(c_k)=q-p_{k,-}-p_{k,0}$. Then by (\ref{2.18}), we have $$ i(c_k^m)=qm-p_{k,-}-p_{k,0}. $$ This completes the proof of Lemma 3.6. \hfill\vrule height0.18cm width0.14cm $\,$ \medskip {\bf Proof of Theorem 1.1.} \medskip At first, we consider the contribution of $c_k^m$ with $k=1,2,3$ and $m\in{\bf N}$ to the Morse-type numbers in Claim 1 and Claim 2. And then we use Claim 3 and Claim 4 to complete the proof of Theorem 1.1. \medskip {\bf Claim 1:} {\it For $2N-2\le q\le 2N+2$, there holds: (i) $M_q(1,m)=0$ for any $m\in{\bf N}$, and (ii) $M_q(k,m)=0$ for $k=2,3$ and $m\neq 2m_k$. In addition, $M_q(2,2m_2)=0$ for $q=2N-2,2N,2N+2$ and $M_{2N-1}(2,2m_2)+M_{2N+1}(2,2m_2)\le 1$.} \medskip {\bf Proof.} By Lemma 3.4, we know that $M_q(k,m)=0$ for $2N-2\le q\le 2N+2$, $m \neq 2m_k$ and $k=1,2,3$. Also note that by (\ref{3.38.0}), (\ref{3.44.0}), Lemma 2.1 and (\ref{2.4}), we have $$ M_q(1,2m_1)=0\quad \mbox{for}\quad 2N-2\le q\le 2N+2. $$ On one hand, there holds $$\nu(c_2^{2m_2})=\nu(c_2^{2m'_2})$$ by the choices of $m_2$ and $m_2'$ in (\ref{2.27}) of Theorem 2.8. On the other hand, it yields $$i(c_2^{2m_2} ) = i(c_2^{2m_2'} ) \quad({\rm mod}\ 2) $$ by (\ref{2.18}) of Theorem 2.6. So, $i(c_2^{2m_2} ) + \nu(c_2^{2m_2} )$ is odd since $i(c_2^{2m'_2} ) + \nu(c_2^{2m'_2} )$ is odd by (\ref{3.39.0}) of Lemma 3.3, which implies that $M_q(2,2m_2)=0$ for $q=2N-2, 2N, 2N+2$ and $M_{2N-1}(2,2m_2)+M_{2N+1}(2,2m_2)\le 1$ by (\ref{3.44.0}), Lemma 2.1 and (\ref{2.4}). Hence, Claim 1 holds. \medskip {\bf Claim 2:} {\it $M_{2N-1}(2,2m_2)=M_{2N+1}(2,2m_2)=0$.} \medskip {\bf Proof.} Otherwise, by Claim 1 we have \begin{eqnarray} M_{2N-1}(2,2m_2)+M_{2N+1}(2,2m_2)=1. \label{3.54.0} \end{eqnarray} Then by (\ref{3.16.0}) and Lemma 3.5, it yields \begin{eqnarray} M_{2N-3}(2,2m_2)=0. \label{3.55.0} \end{eqnarray} By (\ref{2.8}) and Theorem 2.5, we have $M_{2N-1}\ge b_{2N-1}=1$ and $M_{2N+1}\ge b_{2N+1}=1$, then we get $M_{2N-1}(3,2m_3)+M_{2N+1}(3,2m_3)\ge 1$ by Lemma 3.4, (i) of Claim 1 and (\ref{3.54.0}). Thus by (\ref{3.16.0}), (\ref{3.17.0}) and Lemma 3.5, it yields \begin{eqnarray} M_{2N-3}(3,2m_3)=M_{2N+3}(3,2m_3)=0. 
\label{3.56.0} \end{eqnarray} So, by Lemma 3.4, Claim 1, (\ref{3.54.0}) and (\ref{3.56.0}), we obtain \begin{eqnarray} \sum_{q=2N-2}^{2N+2}(-1)^q M_q=\sum_{q=2N-2}^{2N+2}(-1)^q \sum_{1\le k \le 3}M_q(k,2m_k)=\sum_{q=2N-3}^{2N+3}(-1)^q M_q(3,2m_3)-1.\label{3.57.0} \end{eqnarray} And by (\ref{3.16.0}), (\ref{3.17.0}), Lemma 2.1 and (\ref{2.4}), we have \begin{eqnarray} \sum_{q=2N-3}^{2N+3}(-1)^q M_q(3,2m_3) =\sum_{0\le l \le 6}(-1)^{i(c_3^{2m_3})+l}k_{l}^{{\epsilon}(c_3^{2m_3})}(c_3^{2m_3}).\label{3.58.0} \end{eqnarray} On the other hand, by (\ref{2.8}) and Theorem 2.5, we have \begin{eqnarray} \sum_{q=2N-2}^{2N+2}(-1)^q M_q\ge \sum_{q=2N-2}^{2N+2}(-1)^q b_q=-2.\label{3.59.0} \end{eqnarray} Combining (\ref{3.57.0})-(\ref{3.59.0}), we get \begin{eqnarray} \chi(c_3^{2m_3})=\sum_{0\le l \le 6}(-1)^{i(c_3^{2m_3})+l}k_{l}^{{\epsilon}(c_3^{2m_3})}(c_3^{2m_3}) \ge -1.\label{3.60.0} \end{eqnarray} Note that since $n(c_3)|2m_3$ and $\nu(c_3^{2m_3}) = \nu(c_3^{n(c_3)})$ by (\ref{2.7}) and (\ref{2.27}), there holds \begin{eqnarray} \chi(c_3^{n(c_3)})=\chi(c_3^{2m_3})\ge -1.\label{3.61.0} \end{eqnarray} By (\ref{3.38.0}), (\ref{3.44.0}), Lemma 2.1 and (\ref{2.4}), it yields $M_{2N-3}(1,2m_1)=0$. Together with (\ref{3.55.0}) and (\ref{3.56.0}), we have \begin{eqnarray} \sum_{1\le k\le 3}M_{2N-3}(k,2m_k)=0.\label{3.62.0} \end{eqnarray} Then combining Lemma 3.4 and (\ref{3.62.0}), we obtain \begin{eqnarray} M_{2N-3}=\sum_{1\le k\le 3}M_{2N-3}(k,2m_k-1).\label{3.63.0} \end{eqnarray} On one hand, by (\ref{3.23.0}) and Lemma 3.5 it yields \begin{eqnarray} M_{2N-3}(k,2m_k-1)\le 1,\quad \forall\ k=1,2,3,\label{3.64.0} \end{eqnarray} then it follows from (\ref{3.63.0}) and (\ref{3.64.0}) that $M_{2N-3}\le 3$. On the other hand, we have $M_{2N-3}\ge b_{2N-3}=2$ by (\ref{2.8}) and Theorem 2.5. So it yields $M_{2N-3}\in \{2,3\}$. We continue the proof by distinguishing two cases.

\medskip

{\bf Case 1:} $M_{2N-3}=3$.

\medskip

In this case, it follows from (\ref{3.63.0}) and (\ref{3.64.0}) that $M_{2N-3}(k,2m_k-1)=1$ for $k=1,2,3$. Then according to (\ref{3.23.0}) and Lemma 3.5, there holds \begin{eqnarray} M_{2N-5}(k,2m_k-1)=0, \quad\forall\ k=1,2,3. \label{3.65.0} \end{eqnarray} Combining (\ref{3.65.0}) and Lemma 3.4, we get $M_{2N-5}=0$. But by (\ref{2.8}) and Theorem 2.5, we have $M_{2N-5} \ge b_{2N-5}=1$. This is a contradiction.

\medskip

{\bf Case 2:} $M_{2N-3}=2$.

\medskip

In this case, it follows from (\ref{3.63.0}) and (\ref{3.64.0}) that one of the following cases may happen:

(i) $M_{2N-3}(3,2m_3-1)=0$ and $M_{2N-3}(1,2m_1-1)=M_{2N-3}(2,2m_2-1)=1$.

(ii) $M_{2N-3}(3,2m_3-1)=1$ and $M_{2N-3}(1,2m_1-1)+M_{2N-3}(2,2m_2-1)=1$.

For (i), by (\ref{3.23.0}) and Lemma 3.5, there holds \begin{eqnarray} M_q(k,2m_k-1)=0,\quad \forall\ q\neq 2N-3,\ k=1,2.\label{3.67.0} \end{eqnarray} So, according to Lemma 3.4 and (\ref{3.67.0}), we have \begin{eqnarray} M_{2N-5}&=&\sum_{1\le k\le 3}M_{2N-5}(k,2m_k-1)=M_{2N-5}(3,2m_3-1),\label{3.68.0}\\ M_{2N-7}&=&\sum_{1\le k\le 3}M_{2N-7}(k,2m_k-1)=M_{2N-7}(3,2m_3-1).\label{3.69.0} \end{eqnarray} By (\ref{2.8}) and Theorem 2.5, we have $M_{2N-5} \ge b_{2N-5}=1$ and $M_{2N-7} \ge b_{2N-7}=1$, then it follows from (\ref{3.68.0}) and (\ref{3.69.0}) that \begin{eqnarray} M_{2N-5}(3,2m_3-1)\ge 1,\quad M_{2N-7}(3,2m_3-1)\ge 1.
\label{3.70.0} \end{eqnarray} So the assumption with $q=6$ in Lemma 3.6 is satisfied, and then by (\ref{3.52.0}) and (\ref{2.7}), we have \begin{eqnarray} i(c_3^m) &=& 6m-p_{3,-}-p_{3,0},\label{3.71.0}\\ n(c_3) &=& 1.\label{3.72.0} \end{eqnarray} Notice that $\nu(c_3^{2m_3-1})=\nu(c_3^{2m_3-2})=\nu(c_3)$ by (\ref{2.19}) for each of the five cases in Lemma 3.6. Together with Lemma 2.1, (\ref{2.4}), (iv) of Lemma 2.2, (\ref{3.71.0}), (\ref{3.72.0}) and (i) of Case 2, it yields \begin{eqnarray} M_{2N-9}(3,2m_3-2)&=&k_{2N-9-i(c_3^{2m_3-2})}^{{\epsilon}(c_3^{2m_3-2})}(c_3^{2m_3-2})\nonumber\\ &=&k_{2N-9-i(c_3^{2m_3-2})}^{{\epsilon}(c_3^{2m_3-1})}(c_3^{2m_3-1})\nonumber\\ &=&M_{2N-9-i(c_3^{2m_3-2})+i(c_3^{2m_3-1})}(3,2m_3-1)\nonumber\\ &=&M_{2N-3}(3,2m_3-1)=0.\label{3.73.0} \end{eqnarray} Comparing (\ref{3.9.0}) with (\ref{3.71.0}), i.e., $i(c_3^{2m_3+m})=2N+i(c_3^m)$ with $i(c_3^{2m_3+m})=6(2m_3+m)-p_{3,-}-p_{3,0}=12m_3+i(c_3^m)$, we get $2N=12m_3$. Then by (\ref{3.71.0}) and (\ref{3.2.0}) it yields \begin{eqnarray} i(c_3^{2m_3-1})=12m_3-6-p_{3,-}-p_{3,0}\ge 2N-9.\label{3.74.0} \end{eqnarray} Then by (\ref{3.70.0}) and Lemma 3.5, there holds \begin{eqnarray} M_{2N-9}(3,2m_3-1)=0.\label{3.75.0} \end{eqnarray} Finally, combining (\ref{3.67.0}), (\ref{3.73.0}), (\ref{3.75.0}) and Lemma 3.4, we obtain \begin{eqnarray} M_{2N-9}=\sum_{1\le k\le 3, \atop 1\le m\le 2}M_{2N-9}(k,2m_k-m)=M_{2N-9}(1,2m_1-2)+M_{2N-9}(2,2m_2-2).\label{3.76.0} \end{eqnarray} By (\ref{3.24.0}) and (ii) of Lemma 2.2, we get $M_{2N-9}(k,2m_k-2)\le 1$ for $k=1,2$. Then it follows from (\ref{3.76.0}) that $M_{2N-9}\le 2$. However by (\ref{2.8}) and Theorem 2.5, we have $M_{2N-9}\ge b_{2N-9}=2$. Thus we get $M_{2N-9}= 2$. Now, in this case, we have \begin{eqnarray} M_{2N-3}=b_{2N-3},\quad M_{2N-9}=b_{2N-9}.\label{3.77.0} \end{eqnarray} Combining (\ref{2.8}), Theorem 2.5 and (\ref{3.77.0}), we obtain \begin{eqnarray} \sum_{q=2N-8}^{2N-4} (-1)^q M_{q}=\sum_{q=2N-8}^{2N-4} (-1)^q b_{q}=-2.\label{3.78.0} \end{eqnarray} Note that for $2N-8\le q\le 2N-4$, it follows from (\ref{3.23.0}) and (i) of this case that $M_q(k,2m_k-1)=0$ for $k=1,2$. So we get $M_q=M_q(3,2m_3-1)$ by Lemma 3.4, and then it yields \begin{eqnarray} \sum_{q=2N-8}^{2N-4} (-1)^q M_{q}=\sum_{q=2N-8}^{2N-4} (-1)^q M_{q}(3,2m_3-1).\label{3.79.0} \end{eqnarray} Note that $2N-9\le i(c_3^{2m_3-1})\le i(c_3^{2m_3-1})+\nu(c_3^{2m_3-1}) \le 2N-3$ by (\ref{3.74.0}) and (\ref{3.23.0}). Then, according to (\ref{3.75.0}), (i) of this case, Lemma 2.1 and (\ref{2.4}), we obtain \begin{eqnarray} \sum_{q=2N-8}^{2N-4} (-1)^q M_{q}(3,2m_3-1)&=&\sum_{q=2N-9}^{2N-3} (-1)^q M_{q}(3,2m_3-1)\nonumber\\ &=&\sum_{0\le l \le 6}(-1)^{i(c_3^{2m_3-1})+l}k_{l}^{{\epsilon}(c_3^{2m_3-1})}(c_3^{2m_3-1}).\label{3.80.0} \end{eqnarray} Combining (\ref{3.78.0})-(\ref{3.80.0}), we get \begin{eqnarray} \chi(c_3^{2m_3-1})=\sum_{0\le l \le 6}(-1)^{i(c_3^{2m_3-1})+l}k_{l}^{{\epsilon}(c_3^{2m_3-1})}(c_3^{2m_3-1})=-2.\label{3.81.0} \end{eqnarray} However, since $n(c_3)=1$ by (\ref{3.72.0}), it follows from (iv) of Lemma 2.2 and (\ref{3.81.0}) that $\chi(c_3)=\chi(c_3^{2m_3-1})=-2$, which contradicts (\ref{3.61.0}); thus (i) of Case 2 cannot happen.

For (ii), without loss of generality, we assume that $M_{2N-3}(1,2m_1-1)=0$. Then by (\ref{3.23.0}) and Lemma 3.5, $M_q(k,2m_k-1)=0$ for $q\neq 2N-3$ and $k=2,3$. So, according to Lemma 3.4, we have \begin{eqnarray} M_{2N-5}=M_{2N-5}(1,2m_1-1),\quad M_{2N-7}=M_{2N-7}(1,2m_1-1).
\label{3.82.0} \end{eqnarray} By (\ref{2.8}) and Theorem 2.5, we have $M_{2N-5} \ge b_{2N-5}=1$ and $M_{2N-7} \ge b_{2N-7}=1$, then it follows from (\ref{3.82.0}) that $M_{2N-5}(1,2m_1-1)\ge 1$ and $M_{2N-7}(1,2m_1-1)\ge 1$. Thus by Lemma 3.6, there exists a continuous path $f_1\in C([0,1],{\Omega}^0(P_{c_1}))$ such that $f_1(0)=P_{c_1}$ and $f_1(1)$ belongs to one of the five cases in Lemma 3.6, which contradicts (\ref{3.40.0}) in Lemma 3.3.

This completes the proof of Claim 2.

\medskip

{\bf Claim 3:} {\it $c_1$ and $c_2$ are irrationally elliptic.}

\medskip

{\bf Proof.} By (\ref{3.41.0}) and (\ref{3.42.0}), there holds $\Delta_2 = 0$. Then, together with the fact that $r_{2,3}=r_{2,4} = 0$ from (\ref{3.40.0}), it follows from (\ref{3.41.0}) and (\ref{3.21.0}) that \begin{eqnarray} 2N+1 &\ge& i(c_2^{2m_2}) + \nu(c_2^{2m_2} )\label{3.84.0} \\ &=&2N+(p_{2,0} +p_{2,+} +q_{2,-} +q_{2,0} +r_{2,1}+2r_{2,5}-r_{2,2})\nonumber\\ &\ge& 2N - 3, \label{3.85.0} \end{eqnarray} where (\ref{3.84.0}) holds by the fact that $p_{2,0} +p_{2,+} +q_{2,-} +q_{2,0} +r_{2,1}+2r_{2,5}\le 2$ from (\ref{3.2.0}) and (\ref{3.41.0}), and $r_{2,2}\ge 1$ from (\ref{3.41.0}), and the equality in (\ref{3.85.0}) holds if and only if $r_{2,2}=3$. On the other hand, by Claim 2, we have $i(c_2^{2m_2} ) + \nu(c_2^{2m_2} ) \notin \{2N-1, 2N+1\}$. Note that $i(c_2^{2m_2} )+\nu(c_2^{2m_2} )\equiv i(c_2^{2m'_2} )+\nu(c_2^{2m'_2})\equiv 1\ ({\rm mod}\ 2)$ by (\ref{3.39.0}). Thus, by (\ref{3.84.0}), we obtain $i(c_2^{2m_2} ) + \nu(c_2^{2m_2} ) \le 2N-3$, which together with (\ref{3.85.0}) implies $r_{2,2} = 3$, i.e., $c_2$ is irrationally elliptic. By the symmetric properties of $c_1$ and $c_2$ in Lemma 3.3 (or, more precisely, replacing $N$ with $N'$ in the above arguments), we conclude that $c_1$ is also irrationally elliptic.

This completes the proof of Claim 3.

\medskip

{\bf Claim 4:} {\it $c_3$ is non-hyperbolic.}

\medskip

{\bf Proof.} Assume that $c_3$ is hyperbolic, which, together with the assumption (TCG) and Claim 3, implies that the Finsler metric $F$ on $S^4$ is bumpy. Then by Theorem 1.1 in \cite{DLW}, we know that there exist at least four distinct non-hyperbolic prime closed geodesics, which contradicts the assumption (TCG). Thus $c_3$ is non-hyperbolic.

Therefore, by the assumption (TCG), Claims 3 and 4 complete the proof of Theorem 1.1. \hfill\vrule height0.18cm width0.14cm $\,$

\setcounter{figure}{0}
\setcounter{equation}{0}
\section{Some further information about the third closed geodesic}

In this section, under the assumption (TCG), we further study the third closed geodesic $c_3$ and obtain some more precise information about it (cf. Theorem 4.2 below). First, we establish a result similar to Lemma 3.6.
\medskip

{\bf Lemma 4.1.} {\it For some $k\in \{1,2,3\}$, if either $M_{2N-q\pm1}(k,2m_k-2)\ge 1$ or $M_{2N+q\pm1}(k,2m_k+2)\ge 1$ for some even $q\in {\bf N}$, then there exists a continuous path $f_k\in C([0,1],{\Omega}^0(P_{c_k}))$ such that $f_k(0)=P_{c_k}$ and $f_k(1)$ belongs to one of the following cases:

(i) $I_{2p_{k,0}}\,{\diamond}\,(-I_{2q_{k,0}})\,{\diamond}\,H(2)$ with $p_{k,0}+q_{k,0}=2$,

(ii) $N_1(1,1)^{{\diamond} p_{k,-}}\,{\diamond}\,N_1(-1,-1)^{{\diamond} q_{k,+}}\,{\diamond}\,I_{2p_{k,0}}\,{\diamond}\,(-I_{2q_{k,0}})\,{\diamond}\,N_1(1,-1)^{{\diamond} p_{k,+}}\,{\diamond}\,N_1(-1,1)^{{\diamond} q_{k,-}}$ with $p_{k,-}+q_{k,+}=1$, $p_{k,0}+q_{k,0}=1$ and $p_{k,+}+q_{k,-}=1$,

(iii) $I_{2p_{k,0}}\,{\diamond}\,(-I_{2q_{k,0}})\,{\diamond}\,N_1(1,-1)^{{\diamond} p_{k,+}}\,{\diamond}\,N_1(-1,1)^{{\diamond} q_{k,-}}$ with $p_{k,0}+q_{k,0}=2$ and $p_{k,+}+q_{k,-}=1$,

(iv) $N_1(1,1)^{{\diamond} p_{k,-}}\,{\diamond}\,N_1(-1,-1)^{{\diamond} q_{k,+}}\,{\diamond}\,I_{2p_{k,0}}\,{\diamond}\,(-I_{2q_{k,0}})$ with $p_{k,0}+q_{k,0}=2$ and $p_{k,-}+q_{k,+}=1$,

(v) $I_{2p_{k,0}}\,{\diamond}\,(-I_{2q_{k,0}})$ with $p_{k,0}+q_{k,0}=3$.

In each of these cases, the index iteration formula of $c_k^m$ can be written as follows: \begin{eqnarray} i(c_k^m)=\frac{q}{2}m-(p_{k,-}+p_{k,0})-\frac{1+(-1)^m}{2}(q_{k,0}+q_{k,+}). \end{eqnarray}}

{\bf Proof.} We only give the proof under the assumption $M_{2N+q\pm1}(k,2m_k+2)\ge 1$. The proof under the assumption $M_{2N-q\pm1}(k,2m_k-2)\ge 1$ is similar. First, by Lemma 3.5 and the assumption $M_{2N+q\pm1}(k,2m_k+2)\ge 1$ in Lemma 4.1, we have \begin{eqnarray} i(c_k^{2m_k+2})\le 2N+q-2,\quad i(c_k^{2m_k+2})+\nu(c_k^{2m_k+2})\ge 2N+q+2,\label{5.2} \end{eqnarray} which, together with $\nu(c_k^{2m_k+2})=\nu(c_k^2)$ by (\ref{2.22}), implies $\nu(c_k^2)=p_{k,-}+2p_{k,0}+p_{k,+}+q_{k,-}+2q_{k,0}+q_{k,+}\in\{4,5,6\}$.

If $\nu(c_k^2)=4$, by (\ref{5.2}), we have $i(c_k^{2m_k+2})= 2N+q-2$ and $i(c_k^{2m_k+2})+\nu(c_k^{2m_k+2})=2N+q+2$. And by (\ref{3.2.0}) we also have \begin{eqnarray} p_{k,-}+p_{k,0}+p_{k,+}+q_{k,-}+q_{k,0}+q_{k,+}\in\{2,3\}. \label{5.3} \end{eqnarray} Since $i(c_k^{2m_k+2})= 2N+q-2\in 2{\bf N}$, we get $i(c_k^2)\in 2{\bf N}$ by (\ref{3.9.0}). Note that by (\ref{2.18}) and (\ref{3.2.0}), we have \begin{eqnarray} i(c_k^2)&=&p_{k,-}+p_{k,0}+q_{k,0}+q_{k,+}+r_{k,1}+r_{k,2}\quad({\rm mod}\ 2)\nonumber\\ &=&1+p_{k,+}+q_{k,-}+h_k\quad({\rm mod}\ 2). \label{5.4} \end{eqnarray} So, by (\ref{5.4}), (\ref{5.3}) and (\ref{3.2.0}), we get $p_{k,+}+q_{k,-}+h_{k}=1$ since $\nu(c_k^2)=4$. Therefore, if $p_{k,-}+p_{k,0}+p_{k,+}+q_{k,-}+q_{k,0}+q_{k,+}=2$, we must have $p_{k,0}+q_{k,0}=2$ and $h_k=1$; if $p_{k,-}+p_{k,0}+p_{k,+}+q_{k,-}+q_{k,0}+q_{k,+}=3$, we must have $p_{k,0}+q_{k,0}=1$, $p_{k,+}+q_{k,-}=1$ and $p_{k,-}+q_{k,+}=1$.

If $\nu(c_k^2)=5$, we have $p_{k,-}+p_{k,0}+p_{k,+}+q_{k,-}+q_{k,0}+q_{k,+}=3$. Then we must have $p_{k,0}+q_{k,0}=2$ and $p_{k,-}+q_{k,+}+p_{k,+}+q_{k,-}=1$. And by (\ref{3.9.0}) and (\ref{5.4}) we have $i(c_k^{2m_k+2})= 2N+q-2$ when $p_{k,+}+q_{k,-}=1$, and $i(c_k^{2m_k+2})= 2N+q-3$ when $p_{k,-}+q_{k,+}=1$.

If $\nu(c_k^2)=6$, we must have $p_{k,0}+q_{k,0}=3$, and then we have $i(c_k^{2m_k+2})= 2N+q-3$ by (\ref{5.4}) and (\ref{3.9.0}).
Note that by (\ref{3.9.0}), we get $i(c_k^{2m_k+2})=2N+i(c_k^2)$, and in each case, we have \begin{eqnarray} i(c_k^2)=q-p_{k,-}-p_{k,0}-q_{k,+}-q_{k,0}.\label{5.5} \end{eqnarray} Then by (\ref{2.18}) and according to the precise cases in Lemma 4.1, we have \begin{eqnarray} i(c_k^2)=2i(c_k)+p_{k,-}+p_{k,0}-q_{k,0}-q_{k,+}.\label{5.6} \end{eqnarray} Combining (\ref{2.18}), (\ref{5.5}) and (\ref{5.6}), it yields $$ i(c_k^m)=\frac{q}{2}m-(p_{k,-}+p_{k,0})-\frac{1+(-1)^m}{2}(q_{k,0}+q_{k,+}). $$ This completes the proof of Lemma 4.1. \hfill\vrule height0.18cm width0.14cm $\,$

\medskip

{\bf Theorem 4.2.} {\it For every Finsler metric $F$ on $S^4$ with reversibility ${\lambda}$ and flag curvature $K$ satisfying $\frac{25}{9}(\frac{{\lambda}}{{\lambda}+1})^2 < K \le 1$, suppose that there exist precisely three prime closed geodesics $c_1$, $c_2$ and $c_3$. Then both $c_1$ and $c_2$ are irrationally elliptic with $i(c_1)=3$ and $ i(c_2)=9$, and $c_3$ is non-hyperbolic and must belong to one of the following precise classes:

(i) $i(c_3)=3$ and $P_{c_3}\approx N_1(1,1)\,{\diamond}\, I_4$,

(ii) $i(c_3)=3$ and $P_{c_3}\approx I_6$,

(iii) $i(c_3)=4$ and $P_{c_3}\approx N_1(1,1)\,{\diamond}\,I_2\,{\diamond}\,N_1(1,-1)$,

(iv) $i(c_3)=4$ and $P_{c_3}\approx I_4\,{\diamond}\,N_1(1,-1)$,

(v) $i(c_3)=4$ and $P_{c_3}\approx I_4\,{\diamond}\,H(2)$,

\noindent where and below, ``$P_{c_k}\approx A$'' means that there exists a continuous path $f_k\in C([0,1],{\Omega}^0(P_{c_k}))$ such that $f_k(0)=P_{c_k}$ and $f_k(1)=A$ as in Theorem 2.6. }

\medskip

{\bf Proof.} Under the assumption (TCG), it follows from Theorem 1.1 that both $c_1$ and $c_2$ are irrationally elliptic. Then there holds $P_{c_k}\approx R(\tilde{\theta}_{k,1})\,{\diamond}\,R(\tilde{\theta}_{k,2})\,{\diamond}\,R(\tilde{\theta}_{k,3})$ for some $\frac{\tilde{\theta}_{k,1}}{2\pi}, \frac{\tilde{\theta}_{k,2}}{2\pi}, \frac{\tilde{\theta}_{k,3}}{2\pi}\in (0,1)\setminus{\bf Q}$, for $k=1,2$ respectively.
Then by (\ref{2.18}), we have \begin{eqnarray} i(c_k^m)&=&m\left(i(c_k)-3\right)+2\sum_{j=1}^{3}E\left(\frac{m\tilde{\theta}_{k,j}}{2\pi}\right)-3,\quad \nu(c_k^m)=0,\quad k=1,2,\label{4.2}\\ \hat{i}(c_k)&=&i(c_k)-3+\sum_{j=1}^{3}\frac{\tilde{\theta}_{k,j}}{\pi},\quad k=1,2,\label{4.3} \end{eqnarray} and then by (\ref{2.7}), we get \begin{eqnarray} n(c_k)=1,\quad k=1,2.\label{4.4} \end{eqnarray} By (\ref{2.6}), (iii) of Lemma 2.2, (\ref{4.2}) and (\ref{4.4}), for the average Euler numbers of $c_1$ and $c_2$ we have \begin{eqnarray} \hat{\chi}(c_1)=-1,\quad\hat{\chi}(c_2)=-1.\label{4.5} \end{eqnarray} Noticing that $\nu(c_k^m)=0,\,\forall\ m\in{\bf N}$ and $k=1,2$, so by Lemma 2.1, (\ref{2.4}), and (iii) of Lemma 2.2, we have \begin{eqnarray} M_q(k,m)=\left\{\matrix{ 1,&&\quad {\it if}\quad q=i(c_k^m), \cr 0,&&\quad {\it if}\quad q\neq i(c_k^m), \cr}\right.\quad \mbox{for}\ k=1,2,\ m\in{\bf N}.\label{4.6} \end{eqnarray} By Claim 3 of the proof of Theorem 1.1, (\ref{4.2}) and (\ref{4.6}), we have \begin{eqnarray} i(c_2^{2m_2})+\nu(c_2^{2m_2})=i(c_2^{2m_2})=2N-3,\quad M_{2N-3}(2,2m_2)=1.\label{4.7} \end{eqnarray} By (\ref{3.38.0}) and (\ref{4.2}), we have \begin{eqnarray} i(c_1^{2m_1})+\nu(c_1^{2m_1})=i(c_1^{2m_1})=2N+3,\quad M_{2N+3}(1,2m_1)=1.\label{4.8} \end{eqnarray} It follows from (\ref{4.6})-(\ref{4.8}) and Lemma 3.4 that \begin{eqnarray} M_q=\sum_{1\le k\le 3}M_q(k,2m_k)=M_q(3,2m_3)\quad \mbox{for}\ 2N-2\le q\le 2N+2\label{4.9} \end{eqnarray} and \begin{eqnarray} M_{2N+3}(2,2m_2)=0.\label{4.10} \end{eqnarray} By (\ref{2.8}), Theorem 2.5 and (\ref{4.9}), we have $M_{2N-1}=M_{2N-1}(3,2m_3)\ge b_{2N-1}=1$ and $M_{2N+1}=M_{2N+1}(3,2m_3)\ge b_{2N+1}=1$, then by (\ref{3.16.0}), (\ref{3.17.0}) and Lemma 3.5, there holds \begin{eqnarray} M_{2N-3}(3,2m_3)=M_{2N+3}(3,2m_3)=0.\label{4.11} \end{eqnarray} By (\ref{4.8}), (\ref{4.10}), (\ref{4.11}) and Lemma 3.4, we have \begin{eqnarray} M_{2N+3}=\sum_{1 \le k\le 3}M_{2N+3}(k,2m_k+1)+1.\label{4.12} \end{eqnarray} By (\ref{3.18.0}) and Lemma 3.5, $M_{2N+3}(k,2m_k+1)\le 1$ for $k=1,2,3$. Then it follows from (\ref{4.12}) that $M_{2N+3}\le 4$. On the other hand, by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+3}\ge b_{2N+3}=2$. Thus $M_{2N+3}\in\{2,3,4\}$. We continue the proof by distinguishing three cases. \medskip {\bf Case 1:} $M_{2N+3}=4$. \medskip In this case, by (\ref{4.12}), it yields \begin{eqnarray} M_{2N+3}(k,2m_k+1)=1,\quad\forall\ k=1,2,3.\label{4.13} \end{eqnarray} Then by (\ref{3.18.0}) and Lemma 3.5, we have \begin{eqnarray} M_{2N+5}(k,2m_k+1)=0,\quad\forall\ k=1,2,3.\label{4.14} \end{eqnarray} It follows from (\ref{4.14}) and Lemma 3.4 that $M_{2N+5}=0$. However by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+5}\ge b_{2N+5}=1$, which is a contradiction. \medskip {\bf Case 2:} $M_{2N+3}=3$. \medskip In this case, by (\ref{4.12}), one of the following cases may happen. (i) $M_{2N+3}(3,2m_3+1)=0$ and $M_{2N+3}(1,2m_1+1)=M_{2N+3}(2,2m_2+1)=1$. (ii) $M_{2N+3}(3,2m_3+1)=1$ and $M_{2N+3}(1,2m_1+1)+M_{2N+3}(2,2m_2+1)=1$. For (i), by (\ref{4.6}), we have $i(c_1^{2m_1+1})=2N+3$ and $i(c_2^{2m_2+1})=2N+3$, then by (\ref{3.9.0}), it yields $i(c_1)=i(c_2)=3$. 
So, by (\ref{3.4.0}) and (\ref{4.3}), we get \begin{eqnarray} 5<\hat{i}(c_1)< 6,\quad 5<\hat{i}(c_2)< 6.\label{4.15} \end{eqnarray} By (\ref{3.18.0}), Lemma 3.5 and (i) of Case 2, there holds \begin{eqnarray} M_q(k,2m_k+1)=0\quad \mbox{for}\ q\neq 2N+3\ \mbox{and}\ k=1,2.\label{4.16} \end{eqnarray} So, according to Lemma 3.4 and (\ref{4.16}), we have \begin{eqnarray} M_{2N+5}&=&\sum_{1\le k\le 3}M_{2N+5}(k,2m_k+1)=M_{2N+5}(3,2m_3+1),\label{4.17}\\ M_{2N+7}&=&\sum_{1\le k\le 3}M_{2N+7}(k,2m_k+1)=M_{2N+7}(3,2m_3+1).\label{4.18} \end{eqnarray} By (\ref{2.8}) and Theorem 2.5, we have $M_{2N+5} \ge b_{2N+5}=1$ and $M_{2N+7} \ge b_{2N+7}=1$, then it follows from (\ref{4.17}) and (\ref{4.18}) that \begin{eqnarray} M_{2N+5}(3,2m_3+1)\ge 1,\quad M_{2N+7}(3,2m_3+1)\ge 1. \end{eqnarray} Thus the assupmtion with $q=6$ in Lemma 3.6 is satisfied, and then by (\ref{3.52.0}) we have \begin{eqnarray} i(c_3^m)=6m-p_{3,-}-p_{3,0}.\label{4.20} \end{eqnarray} Then we have $n(c_3)=1$ in either case of $P_{c_3}$ by (\ref{2.7}). By (\ref{4.20}), we have \begin{eqnarray} \hat{i}(c_3)=6.\label{4.21} \end{eqnarray} By (\ref{2.5}), we have the following identity \begin{eqnarray} \sum_{k=1}^{3}\frac{\hat{\chi}(c_k)}{\hat{i}(c_k)}=B(4,1)=-\frac{2}{3}.\label{4.22} \end{eqnarray} Combining (\ref{4.5}), (\ref{4.15}), (\ref{4.21}) and (\ref{4.22}), we obtain $-2<\hat{\chi}(c_3)=-4+\frac{6}{\hat{i}(c_1)}+\frac{6}{\hat{i}(c_2)}<-\frac{8}{5}$, which contradicts to $\hat{\chi}(c_3)\in {\bf Z}$, where the latter is due to $n(c_3)=1$ and the definition of $\hat{\chi}(c_3)$. For (ii), without loss of generality, we assume that $M_{2N+3}(1,2m_1+1)=0$ and $M_{2N+3}(2,2m_1+1)=1$. Thus, similarly, by (\ref{3.18.0}) and Lemma 3.5 and Lemma 3.4, we have \begin{eqnarray} M_{2N+5}=M_{2N+5}(1,2m_1+1),\quad M_{2N+7}=M_{2N+7}(1,2m_1+1).\label{4.23} \end{eqnarray} By (\ref{2.8}) and Theorem 2.5, we have $M_{2N+5} \ge b_{2N+5}=1$ and $M_{2N+7} \ge b_{2N+7}=1$. Then it follows from (\ref{4.23}) that $M_{2N+5}(1,2m_1+1)\ge 1$ and $M_{2N+7}(1,2m_1+1)\ge 1$. Thus by Lemma 3.6, there are five cases for $P_{c_1}$, which contradicts the fact that $c_1$ is irrationally elliptic. \medskip {\bf Case 3:} $M_{2N+3}=2$. \medskip In this case, by (\ref{4.12}), one of the following cases may happen: (i) $M_{2N+3}(3,2m_3+1)=1$ and $M_{2N+3}(1,2m_1+1)=M_{2N+3}(2,2m_2+1)=0$. (ii) $M_{2N+3}(3,2m_3+1)=0$ and $M_{2N+3}(1,2m_1+1)+M_{2N+3}(2,2m_2+1)=1$. For (i), by (\ref{3.18.0}) and Lemma 3.5, there holds \begin{eqnarray} M_q(3,2m_3+1)=0\quad \mbox{for}\ q\neq 2N+3,\label{4.25} \end{eqnarray} and then by Lemma 3.4, for $q=2N+5, 2N+7$, we have \begin{eqnarray} M_q=\sum_{1\le k \le 3}M_q(k,2m_k+1)=M_q(1,2m_1+1)+M_q(2,2m_2+1). \end{eqnarray} On the other hand, by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+5}\ge b_{2N+5}=1$ and $M_{2N+7}\ge b_{2N+7}=1$, which, together with (\ref{4.6}), implies \begin{eqnarray} M_q(1,2m_1+1)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+5, \cr 0,\quad {\it if}\quad q\neq 2N+5, \cr}\right. \quad M_q(2,2m_2+1)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+7, \cr 0,\quad {\it if}\quad q\neq 2N+7, \cr}\right.\label{4.31.0} \end{eqnarray} or \begin{eqnarray} M_q(1,2m_1+1)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+7, \cr 0,\quad {\it if}\quad q\neq 2N+7, \cr}\right. \quad M_q(2,2m_2+1)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+5, \cr 0,\quad {\it if}\quad q\neq 2N+5. 
\cr}\right.\label{4.32.0} \end{eqnarray} So it follows from Lemma 3.4, (\ref{4.25}) and (\ref{4.31.0})-(\ref{4.32.0}) that \begin{eqnarray} M_{2N+9}= \sum_{1\le k\le 3, \atop 1\le m\le 2}M_{2N+9}(k,2m_k+m)= \sum_{1\le k\le 3}M_{2N+9}(k,2m_k+2).\label{4.27} \end{eqnarray} Without loss of generality, we assume that (\ref{4.31.0}) holds. So we get $i(c_1^{2m_1+1})=2N+5$ and $i(c_2^{2m_2+1})=2N+7$ by (\ref{4.6}). Then by (\ref{3.9.0}), it yields $i(c_1)=5$ and $i(c_2)=7$. Since $i(c_2)=7$, by (\ref{4.2}), we have the index iteration formula of $c_2$ as follows \begin{eqnarray} i(c_2^m)=4m-3+2\sum_{j=1}^{3}E\left(\frac{m\tilde{\theta}_{2,j}}{2\pi}\right).\label{4.29} \end{eqnarray} By (\ref{4.29}), it yields $i(c_2^{2})\ge 11$, then by (\ref{3.9.0}), we get $i(c_2^{2m_2+2})\ge 2N+11$. Thus by (\ref{4.6}), we have \begin{eqnarray} M_{2N+9}(2,2m_2+2)=0.\label{4.30} \end{eqnarray} By (\ref{3.18.0}) and Lemma 3.5, we have \begin{eqnarray} M_{2N+9}(k,2m_k+2)\le 1,\quad\forall\ k=1,2,3, \label{4.31} \end{eqnarray} which, together with (\ref{4.27}) and (\ref{4.30}), implies $M_{2N+9}\le 2$. However by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+9}\ge b_{2N+9}=2$, which implies $M_{2N+9}=2$. So we have \begin{eqnarray} M_{2N+9}(1,2m_1+2)=M_{2N+9}(3,2m_3+2)=1.\label{4.32} \end{eqnarray} It follows from (\ref{4.6}) and (\ref{4.32}) that \begin{eqnarray} M_q(1,2m_1+2)=M_q(3,2m_3+2)=0\quad \mbox{for}\ q=2N+11,2N+13.\label{4.33} \end{eqnarray} Combining (\ref{4.25}), (\ref{4.31.0}), (\ref{4.33}) and Lemma 3.4, we obtain \begin{eqnarray} M_q=\sum_{1\le k\le 3, \atop 1\le m\le 2}M_q(k,2m_k+m)=M_q(2,2m_2+2)\quad \mbox{for}\ q=2N+11,2N+13. \end{eqnarray} Then by (\ref{4.6}), we obtain that $M_{2N+11}=0$ or $M_{2N+13}=0$, which contradicts to $M_{2N+11}\ge b_{2N+11}=1$ and $M_{2N+13}\ge b_{2N+13}=1$ by (\ref{2.8}) and Theorem 2.5. For (ii), without loss of generality, we assume that \begin{eqnarray} M_{2N+3}(2,2m_2+1)=0,\quad M_{2N+3}(1,2m_1+1)=1.\label{4.35} \end{eqnarray} By (\ref{4.6}), we have $i(c_1^{2m_1+1})=2N+3$ and \begin{eqnarray} M_q(1,2m_1+1)=0,\quad \forall\ q\neq 2N+3.\label{4.36} \end{eqnarray} Then by (\ref{3.9.0}), it yields $i(c_1)=3$. Then by (\ref{4.2}), we obtain \begin{eqnarray} i(c_1^m)=2\sum_{j=1}^{3}E\left(\frac{m\tilde{\theta}_{1,j}}{2\pi}\right)-3.\label{4.37} \end{eqnarray} By (\ref{4.37}), it yields $i(c_1^2)\le 9$. Then by (\ref{3.3.0}), we get $i(c_1^2)=9$. And then $i(c_1^{2m_1+2})=2N+9$ by (\ref{3.9.0}). Thus by (\ref{4.6}), we have \begin{eqnarray} M_q(1,2m_1+2)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+9, \cr 0,\quad {\it if}\quad q\neq 2N+9. \cr}\right.\label{4.38} \end{eqnarray} \medskip {\bf Claim 1:} {\it $i(c_2^{2m_2+1})\le 2N+9$, or equivalently, $i(c_2)\le 9$ by (\ref{3.9.0}).} \medskip If $i(c_2^{2m_2+1})>2N+9$, by (\ref{3.5.0}) we get $i(c_2^{2m_2+2})\ge i(c_2^{2m_2+1})>2N+9$. Then by (\ref{4.6}) we know \begin{eqnarray} M_q(2,2m_2+1)=M_q(2,2m_2+2)=0\quad \mbox{for}\ q=2N+5,2N+7,2N+9. \label{4.39} \end{eqnarray} Combining Lemma 3.4, (\ref{4.36}) and (\ref{4.39}), we have \begin{eqnarray} M_{2N+5}&=&\sum_{1\le k\le 3}M_{2N+5}(k,2m_k+1)=M_{2N+5}(3,2m_3+1),\label{4.40}\\ M_{2N+7}&=&\sum_{1\le k\le 3}M_{2N+7}(k,2m_k+1)=M_{2N+7}(3,2m_3+1).\label{4.41} \end{eqnarray} On the other hand, by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+5}\ge b_{2N+5}=1$ and $M_{2N+7}\ge b_{2N+7}=1$. 
Then by (\ref{4.40}) and (\ref{4.41}), we know that \begin{eqnarray} M_{2N+5}(3,2m_3+1)\ge 1,\quad M_{2N+7}(3,2m_3+1)\ge 1.\label{4.42} \end{eqnarray} Thus the assumption with $q=6$ in Lemma 3.6 is satisfied, and then we have the index iteration formula of $c_3$ as follows \begin{eqnarray} i(c_3^m)=6m-p_{3,-}-p_{3,0}.\label{4.43} \end{eqnarray} Then by the fact $\nu(c_3)=p_{3,-}+2p_{3,0}+p_{3,+}$ and (\ref{3.2.0}), we get \begin{eqnarray} i(c_3)+\nu(c_3)=6+p_{3.0}+p_{3,+}\le 9.\label{4.44} \end{eqnarray} Then by (\ref{3.9.0}) and (\ref{2.22}), it yields $i(c_3^{2m_3+1})+\nu(c_3^{2m_3+1})\le 2N+9$. So, according to Lemma 3.5 and (\ref{4.42}), there holds \begin{eqnarray} M_{2N+9}(3,2m_3+1)=0.\label{4.45} \end{eqnarray} Similar to (\ref{3.73.0}), we obtain \begin{eqnarray} M_{2N+9}(3,2m_3+2)=M_{2N+3}(3,2m_3+1)=0.\label{4.46} \end{eqnarray} Combining (\ref{4.36}), (\ref{4.38}), (\ref{4.39}), (\ref{4.45}), (\ref{4.46}) and Lemma 3.4, we obtain that \begin{eqnarray} M_{2N+9}= \sum_{1\le k\le 3, \atop 1\le m\le 2}M_{2N+9}(k,2m_k+m)=M_{2N+9}(1,2m_1+2)=1,\label{4.47} \end{eqnarray} which gives a contradiction $1=M_{2N+9}\ge b_{2N+9}=2$ by (\ref{2.8}) and Theorem 2.5. This finished the proof of Claim 1. Note that $i(c_2)\neq 3$ by (\ref{4.35}), (\ref{4.6}) and (\ref{3.9.0}), it yields $i(c_2)\in\{5,7,9\}$ by Claim 1 since $i(c_2)$ is odd. Next we have three subcases according to the value of $i(c_2)$. \medskip {\bf Subcase 3.1:} $i(c_2)=5$. \medskip In this subcase, by (\ref{4.6}) and (\ref{3.9.0}), we have \begin{eqnarray} M_q(2,2m_2+1)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+5, \cr 0,\quad {\it if}\quad q\neq 2N+5. \cr}\right.\label{4.48} \end{eqnarray} By Lemma 3.4, (\ref{4.36}) and (\ref{4.48}), we obtain that \begin{eqnarray} M_{2N+7}=\sum_{1\le k\le 3}M_{2N+7}(k,2m_k+1)=M_{2N+7}(3,2m_3+1).\label{4.49} \end{eqnarray} Together with $M_{2N+7}\ge b_{2N+7}=1$ by (\ref{2.8}) and Theorem 2.5, we get \begin{eqnarray} M_{2N+7}(3,2m_3+1)\ge 1.\label{4.50} \end{eqnarray} \medskip {\bf Claim 2:} {\it $M_q(3,2m_3+1)=0$ for $q\ge 2N+11$.} \medskip If $M_{q_0}(3,2m_3+1)\ge 1$ for some $q_0\ge 2N+11$, then by (\ref{4.50}) and Lemma 3.5, we know that $i(c_3^{2m_3+1})\le 2N+6$ and $i(c_3^{2m_3+1})+\nu(c_3^{2m_3+1})\ge q_0+1\ge 2N+12$, which, together with the fact $\nu(c_3^{2m_3+1})\le 6$, implies $i(c_3^{2m_3+1})=2N+6$ and $\nu(c_3^{2m_3+1})=6$ and $q_0$ only can be $2N+11$. Now by (\ref{3.9.0}) and (\ref{2.22}), we have $i(c_3)=6$ and $\nu(c_3)=6$, which implies that $P_{c_3}\approx I_6$ by (\ref{3.1.0}), (\ref{3.2.0}) and the fact $\nu(c_3)=p_{3,-}+2p_{3,0}+p_{3,+}$. Then $i(c_3)$ must be odd by Proposition 2.7 and the symplectic additivity of symplectic paths. This contradicts to $i(c_3)=6$ and completes the proof of Claim 2. 
\medskip In summary, by Lemma 3.4, (\ref{4.36}), (\ref{4.48}), Claim 2 and (\ref{4.38}), there holds {\small \begin{eqnarray} &&M_q=\sum_{1\le k\le 3}M_q(k,2m_k+1)=M_q(3,2m_3+1)\quad\mbox{for}\ 2N+4\le q \le 2N+8,\ q\neq 2N+5,\label{4.51}\\ &&M_{2N+5}=\sum_{1\le k\le 3}M_{2N+5}(k,2m_k+1)=1+M_{2N+5}(3,2m_3+1),\label{4.52}\\ &&M_{2N+9}=\sum_{1\le k\le 3, \atop 1\le m\le 2}M_{2N+9}(k,2m_k+m)=1+M_{2N+9}(3,2m_3+1)+\sum_{2\le k\le 3}M_{2N+9}(k,2m_k+2),\label{4.53}\\ &&M_{2N+10}=\sum_{1\le k\le 3, \atop 1 \le m\le 2}M_{2N+10}(k,2m_k+m)=M_{2N+10}(3,2m_3+1)+\sum_{2\le k\le 3}M_{2N+10}(k,2m_k+2),\label{4.54}\\ &&M_q=\sum_{1\le k\le 3, \atop 1\le m\le 2}M_q(k,2m_k+m)=M_q(2,2m_2+2)+M_q(3,2m_3+2),\ 2N+11\le q \le 2N+14.\label{4.55} \end{eqnarray}} Note that by (\ref{4.2}) and $i(c_2)=5$, we have the index iteration formula of $c_2$ as follows \begin{eqnarray} i(c_2^m)=2m-3+2\sum_{j=1}^{3}E\left(\frac{m\tilde{\theta}_{2,j}}{2\pi}\right).\label{4.56} \end{eqnarray} It follows from (\ref{4.56}) that $i(c_2^2)\le 13$. Then by (\ref{3.3.0}) and the fact that $i(c_2^2)$ is odd, we get $i(c_2^2)\in \{9,11,13\}$. We continue the proof by distinguishing three values of $i(c_2^2)$. \medskip {\bf Subcase 3.1.1:} $i(c_2^2)=9$. \medskip In this subcase, by (\ref{3.9.0}), we have $i(c_2^{2m_2+2})=2N+9$, then by (\ref{4.6}), we have \begin{eqnarray} M_q(2,2m_2+2)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+9, \cr 0,\quad {\it if}\quad q\neq 2N+9. \cr}\right.\label{4.57} \end{eqnarray} It follows from (\ref{4.55}) and (\ref{4.57}) that \begin{eqnarray} M_{2N+11}=M_{2N+11}(3,2m_3+2),\quad M_{2N+13}=M_{2N+13}(3,2m_3+2),\label{4.58} \end{eqnarray} which, together with $M_{2N+11}\ge b_{2N+11}=1$ and $M_{2N+13}\ge b_{2N+13}=1$ by (\ref{2.8}) and Theorem 2.5, implies \begin{eqnarray} M_{2N+11}(3,2m_3+2)\ge 1,\quad M_{2N+13}(3,2m_3+2)\ge 1.\label{4.59} \end{eqnarray} Thus the assumption with $q=12$ in Lemma 4.1 is satisfied, and then we have \begin{eqnarray} i(c_3^m) &=& 6m-(p_{3,-}+p_{3,0})-\frac{1+(-1)^m}{2}(q_{3,0}+q_{3,+}),\label{4.60}\\ \nu(c_3^m) &=& p_{3,-}+2p_{3,0}+p_{3,+}+\frac{1+(-1)^m}{2}(q_{3,-}+2q_{3,0}+q_{3,+}).\label{4.61} \end{eqnarray} Then we can know that \begin{eqnarray} \hat{i}(c_3)=6\label{4.62} \end{eqnarray} and \begin{eqnarray} n(c_3)\in\{1,2\}.\label{4.63} \end{eqnarray} By (\ref{3.2.0}), we obtain $i(c_3^2)+\nu(c_3^2)=12+p_{3,0}+p_{3,+}+q_{3,-}+q_{3,0}\le 15$, which implies \begin{eqnarray} i(c_3^{2m_3+2})+\nu(c_3^{2m_3+2})\le 2N+15\label{4.64} \end{eqnarray} by (\ref{3.9.0}) and (\ref{2.22}). Then by Lemma 3.5 and (\ref{4.59}), there holds \begin{eqnarray} M_{2N+15}(3,2m_3+2)=0,\quad M_{2N+17}(3,2m_3+2)=0. \label{4.65} \end{eqnarray} It follows from Lemma 3.4, (\ref{4.36}), (\ref{4.38}), (\ref{4.48}), Claim 2, (\ref{4.57}) and (\ref{4.65}) that \begin{eqnarray} M_{2N+15}&=&\sum_{1\le k\le 3, \atop 1\le m\le 3}M_{2N+15}(k,2m_k+m)=\sum_{1\le k\le 3}M_{2N+15}(k,2m_k+3),\label{4.66}\\ M_{2N+17}&=&\sum_{1\le k\le 3, \atop 1\le m\le 3}M_{2N+17}(k,2m_k+m)=\sum_{1\le k\le 3}M_{2N+17}(k,2m_k+3).\label{4.67} \end{eqnarray} By (\ref{3.18.0}) and Lemma 3.5, we have \begin{eqnarray} M_{2N+15}(k,2m_k+3)\le 1,\quad\forall\ k=1,2,3.\label{4.68} \end{eqnarray} Then by (\ref{4.66}) we get $M_{2N+15}\le 3$. We claim that $M_{2N+15}\neq 3$. In fact, if $M_{2N+15}=3$, we have $M_{2N+15}(k,2m_k+3)= 1,\, k=1,2,3$. Then by (\ref{3.18.0}) and Lemma 3.5, there holds $M_{2N+17}(k,2m_k+3)=0,\, k=1,2,3$, which, together with (\ref{4.67}), implies $M_{2N+17}=0$. 
This gives a contradiction $0=M_{2N+17}\ge b_{2N+17}=1$ by (\ref{2.8}) and Theroem 2.5. Hence $M_{2N+15}\le 2$. However, again by (\ref{2.8}) and Theroem 2.5, we have $M_{2N+15}\ge b_{2N+15}=2$. So we get $M_{2N+15}=2$. In summary, in Case 3, we have \begin{eqnarray} M_{2N+3}=b_{2N+3},\quad M_{2N+15}=b_{2N+15}.\label{4.69} \end{eqnarray} Combining (\ref{4.69}), (\ref{2.8}) and Theorem 2.5, we obtain that \begin{eqnarray} \sum_{q=2N+4}^{2N+14}(-1)^{q}M_{q}=\sum_{q=2N+4}^{2N+14}(-1)^{q}b_{q}=-6.\label{4.70} \end{eqnarray} By (\ref{4.51})-(\ref{4.55}), (ii) of Case 3, Claim 2, (\ref{4.57}) and (\ref{4.65}), we have \begin{eqnarray} \sum_{q=2N+4}^{2N+14}(-1)^{q}M_{q}&=&-2+\sum_{q=2N+4}^{2N+10}(-1)^{q}M_{q}(3,2m_3+1)+ \sum_{2N+9}^{2N+14}(-1)^{q}\sum_{2\le k\le 3}M_q(k,2m_k+2)\nonumber\\ &=&-3+\sum_{q=2N+3}^{2N+15}(-1)^{q}M_{q}(3,2m_3+1)+ \sum_{2N+9}^{2N+15}(-1)^{q}M_q(3,2m_3+2).\label{4.71} \end{eqnarray} Note that $2N+3\le i(c_3^{2m_3+1})\le i(c_3^{2m_3+1})+\nu(c_3^{2m_3+1})\le 2N+15$ and $2N+9\le i(c_3^{2m_3+2})\le i(c_3^{2m_3+2})+\nu(c_3^{2m_3+2})\le 2N+15$ by (\ref{3.18.0}), (\ref{3.5.0}) and (\ref{4.64}), then according to Lemma 2.1 and (\ref{2.4}), we obtain \begin{eqnarray} &&\sum_{q=2N+3}^{2N+15}(-1)^{q}M_{q}(3,2m_3+1)+ \sum_{2N+9}^{2N+15}(-1)^{q}M_q(3,2m_3+2)\nonumber\\ &=&\sum_{0\le l \le 6}(-1)^{i(c_3^{2m_3+1})+l}k_l^{{\epsilon}(c_3^{2m_3+1})}(c_3^{2m_3+1})+\sum_{0\le l \le 6}(-1)^{i(c_3^{2m_3+2})+l}k_l^{{\epsilon}(c_3^{2m_3+2})}(c_3^{2m_3+2}).\label{4.72} \end{eqnarray} Combining (\ref{4.70}), (\ref{4.71}) and (\ref{4.72}), by (\ref{2.6}), we get \begin{eqnarray} \chi(c_3^{2m_3+1})+\chi(c_3^{2m_3+2})&=&\sum_{0\le l \le 6}(-1)^{i(c_3^{2m_3+1})+l}k_l^{{\epsilon}(c_3^{2m_3+1})}(c_3^{2m_3+1})\nonumber\\ &&+\sum_{0\le l \le 6}(-1)^{i(c_3^{2m_3+2})+l}k_l^{{\epsilon}(c_3^{2m_3+2})}(c_3^{2m_3+2})=-3.\label{4.73} \end{eqnarray} Note that by (\ref{4.63}) and (iv) of Lemma 2.2, there holds \begin{eqnarray} k_j^{{\epsilon}(c_3^{2m_3+1})}(c_3^{2m_3+1})=k_j^{{\epsilon}(c_3)}(c_3),\quad k_j^{{\epsilon}(c_3^{2m_3+2})}(c_3^{2m_3+2})=k_j^{{\epsilon}(c_3^{2})}(c_3^{2}),\quad\forall\ 0\le j\le 6.\label{4.74} \end{eqnarray} Then by (\ref{2.6}) it yields \begin{eqnarray} \chi(c_3)=\chi(c_3^{2m_3+1}),\quad\chi(c_3^2)=\chi(c_3^{2m_3+2}),\label{4.75} \end{eqnarray} which, together with (\ref{4.73}) and (\ref{4.63}), implies \begin{eqnarray} \chi(c_3)\neq\chi(c_3^2)\label{4.76} \end{eqnarray} since $\chi(c_3^m)\in{\bf Z}$, and \begin{eqnarray} n(c_3)=2.\label{4.77} \end{eqnarray} It follows from (\ref{2.6}), (\ref{4.73}), (\ref{4.75}) and (\ref{4.77}) that \begin{eqnarray} \hat{\chi}(c_3)=\frac{1}{2}\left(\chi(c_3)+\chi(c_3^2)\right) =\frac{1}{2}\left(\chi(c_3^{2m_3+1})+\chi(c_3^{2m_3+2})\right)=-\frac{3}{2}.\label{4.78} \end{eqnarray} Note that $\hat{i}(c_k)>5$, $k=1,2$ by (\ref{3.4.0}), so it follows from (\ref{4.5}), (\ref{4.62}) and (\ref{4.78}) that \begin{eqnarray} \frac{\hat{\chi}(c_1)}{\hat{i}(c_1)}=-\frac{1}{\hat{i}(c_1)}>-\frac{1}{5},\quad \frac{\hat{\chi}(c_2)}{\hat{i}(c_2)}=-\frac{1}{\hat{i}(c_2)}>-\frac{1}{5},\quad \frac{\hat{\chi}(c_3)}{\hat{i}(c_3)}=-\frac{1}{4},\label{4.79} \end{eqnarray} which, together with (\ref{2.5}), yields \begin{eqnarray} -\frac{2}{3}=\frac{\hat{\chi}(c_1)}{\hat{i}(c_1)}+\frac{\hat{\chi}(c_2)}{\hat{i}(c_2)}+\frac{\hat{\chi}(c_3)}{\hat{i}(c_3)}>-\frac{13}{20}.\label{4.80} \end{eqnarray} This is a contradiction. \medskip {\bf Subcase 3.1.2:} $i(c_2^2)=11$. 
\medskip In this subcase, by (\ref{3.9.0}), it yields $i(c_2^{2m_2+2})=2N+11$ and then by (\ref{4.6}), we have \begin{eqnarray} M_q(2,2m_2+2)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+11, \cr 0,\quad {\it if}\quad q\neq 2N+11, \cr}\right.\label{4.81} \end{eqnarray} which, together with (\ref{4.55}), implies \begin{eqnarray} M_{2N+13}=M_{2N+13}(3,2m_3+2).\label{4.82} \end{eqnarray} However by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+13}\ge b_{2N+13}=1$, so we have \begin{eqnarray} M_{2N+13}(3,2m_3+2)\ge 1.\label{4.83} \end{eqnarray} Thus by (\ref{3.18.0}) and Lemma 3.5, there holds \begin{eqnarray} M_{2N+9}(3,2m_3+2)=0.\label{4.84} \end{eqnarray} Comnining (\ref{4.53}), (\ref{4.81}) and (\ref{4.84}), we obtain that \begin{eqnarray} M_{2N+9}=1+M_{2N+9}(3,2m_3+1).\label{4.85} \end{eqnarray} Since $M_{2N+9}\ge b_{2N+9}=2$ by (\ref{2.8}) and Theorem 2.5, by (\ref{4.85}), we know that \begin{eqnarray} M_{2N+9}(3,2m_3+1)\ge 1.\label{4.86} \end{eqnarray} Noticing that we also have $M_{2N+7}(3,2m_3+1)\ge 1$ by (\ref{4.50}), then by Lemma 3.6, we obtain that \begin{eqnarray} i(c_3^m)=8m-p_{3,-}-p_{3,0}.\label{4.87} \end{eqnarray} and \begin{eqnarray} n(c_3)=1.\label{4.88} \end{eqnarray} Similar to (\ref{3.73.0}), we have \begin{eqnarray} M_{2N+5}(3,2m_3+1)&=&k_{2N+5-i(c_3^{2m_3+1})}^{{\epsilon}(c_3^{2m_3+1})}(c_3^{2m_3+1})\nonumber\\ &=&k_{2N+5-i(c_3^{2m_3+1})}^{{\epsilon}(c_3^{2m_3+2})}(c_3^{2m_3+2})\nonumber\\ &=&M_{2N+5-i(c_3^{2m_3+1})+i(c_3^{2m_3+2})}(3,2m_3+2)\nonumber\\ &=&M_{2N+13}(3,2m_3+2).\label{4.89} \end{eqnarray} On the other hand, by (\ref{4.87}) and (\ref{3.2.0}), it yields $i(c_3)\ge 5$, then by (\ref{3.9.0}), we have $i(c_3^{2m_3+1})\ge 2N+5$. So, by (\ref{4.86}) and Lemma 3.5, there holds \begin{eqnarray} M_{2N+5}(3,2m_3+1)=0,\label{4.90} \end{eqnarray} which, together with (\ref{4.89}), contradicts to (\ref{4.83}). \medskip {\bf Subcase 3.1.3:} $i(c_2^2)=13$. \medskip In this subcase, by (\ref{3.9.0}), it yields $i(c_2^{2m_2+2})=2N+13$ and then by (\ref{4.6}), we have \begin{eqnarray} M_q(2,2m_2+2)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+13, \cr 0,\quad {\it if}\quad q\neq 2N+13, \cr}\right.\label{4.91} \end{eqnarray} which, together with (\ref{4.55}), implies \begin{eqnarray} M_{2N+11}=M_{2N+11}(3,2m_3+2).\label{4.92} \end{eqnarray} However by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+11}\ge b_{2N+11}=1$, so we have \begin{eqnarray} M_{2N+11}(3,2m_3+2)\ge 1.\label{4.93} \end{eqnarray} Thus by (\ref{3.18.0}) and Lemma 3.5, there holds \begin{eqnarray} M_{2N+9}(3,2m_3+2)=0.\label{4.94} \end{eqnarray} Similar to Subcase 3.1.2, we have \begin{eqnarray} i(c_3^m)=8m-p_{3,-}-p_{3,0},\label{4.95} \end{eqnarray} and we obtain a contradiction \begin{eqnarray} 0=M_{2N+3}(3,2m_3+1)=M_{2N+11}(3,2m_3+2)\ge 1.\label{4.96} \end{eqnarray} \medskip {\bf Subcase 3.2:} $i(c_2)=7$. 
\medskip In this subcase, by (\ref{3.9.0}), it yields $i(c_2^{2m_2+1})=2N+7$ and then by (\ref{4.6}), we have \begin{eqnarray} M_q(2,2m_2+1)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+7, \cr 0,\quad {\it if}\quad q\neq 2N+7, \cr}\right.\label{4.97} \end{eqnarray} which, together with (\ref{4.36}) and Lemma 3.4, implies \begin{eqnarray} M_{2N+5}= \sum_{1\le k\le 3}M_{2N+5}(k,2m_k+1)=M_{2N+5}(3,2m_3+1).\label{4.98} \end{eqnarray} However by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+5}\ge b_{2N+5}=1$, so we have \begin{eqnarray} M_{2N+5}(3,2m_3+1)\ge 1.\label{4.99} \end{eqnarray} \medskip {\bf Claim 3:} {\it $M_q(3,2m_3+1)=0$, $\forall\ q\ge 2N+9$.} \medskip If $M_{q_0}(3,2m_3+1)\ge 1$ for some $q_0\ge 2N+9$, then by (\ref{4.99}) and Lemma 3.5, we know that $i(c_3^{2m_3+1})\le 2N+4$ and $i(c_3^{2m_3+1})+\nu(c_3^{2m_3+1})\ge q_0+1\ge 2N+10$, which, together with the fact $\nu(c_3^{2m_3+1})\le 6$, implies $i(c_3^{2m_3+1})=2N+4$ and $\nu(c_3^{2m_3+1})=6$ and $q_0$ only can be $2N+9$. Now by (\ref{3.9.0}) and (\ref{2.22}), we have $i(c_3)=4$ and $\nu(c_3)=6$, which implies that $P_{c_3}\approx I_6$ by (\ref{3.1.0}), (\ref{3.2.0}) and the fact $\nu(c_3)=p_{3,-}+2p_{3,0}+p_{3,+}$. So $i(c_3)$ must be odd by Proposition 2.7 and the symplectic additivity of symplectic paths. This contradicts to $i(c_3)=4$ and completes the proof of Claim 3. Since $i(c_2)=7$ in this subcase, similar to the proof of (\ref{4.30}), we have \begin{eqnarray} M_{2N+9}(2,2m_2+2)=0.\label{4.100} \end{eqnarray} Then by Lemma 3.4, (\ref{4.36}), (\ref{4.97}), Claim 3, (\ref{4.38}) and (\ref{4.100}), we obtain \begin{eqnarray} M_{2N+9}=\sum_{1\le k\le 3, \atop 1\le m\le 2}M_{2N+9}(k,2m_k+m)=1+M_{2N+9}(3,2m_3+2),\label{4.101} \end{eqnarray} which, together with $M_{2N+9}\ge b_{2N+9}=2$ by (\ref{2.8}) and Theorem 2.5, implies that \begin{eqnarray} M_{2N+9}(3,2m_3+2)\ge 1.\label{4.102} \end{eqnarray} Then by (\ref{3.18.0}) and Lemma 3.5 we get \begin{eqnarray} M_q(3,2m_3+2)=0,\quad \forall\ q\neq 2N+9. \label{4.103} \end{eqnarray} By Lemma 3.4, (\ref{4.36}), (\ref{4.97}), Claim 3, (\ref{4.38}) and (\ref{4.103}), we obtain \begin{eqnarray} M_q=\sum_{1\le k\le 3, \atop 1\le m\le 2}M_q(k,2m_k+m)=M_q(2,2m_2+2), \quad\mbox{for}\ q=2N+11, 2N+13.\label{4.104} \end{eqnarray} So there holds $M_{2N+11}=0$ or $M_{2N+13}=0$ by (\ref{4.6}), which contradicts to $M_{2N+11}\ge b_{2N+11}=1$ and $M_{2N+13}\ge b_{2N+13}=1$ by (\ref{2.8}) and Theorem 2.5. \medskip {\bf Subcase 3.3:} $i(c_2)=9$. \medskip In this subcase, by (\ref{3.9.0}), it yields $i(c_2^{2m_2+1})=2N+9$ and then by (\ref{4.6}), we have \begin{eqnarray} M_q(2,2m_2+1)=\left\{\matrix{ 1,\quad {\it if}\quad q=2N+9, \cr 0,\quad {\it if}\quad q\neq 2N+9, \cr}\right.\label{4.105} \end{eqnarray} which, together with (\ref{4.36}) and Lemma 3.4, implies \begin{eqnarray} M_q=\sum_{1\le k\le 3}M_q(k,2m_k+1)=M_q(3,2m_3+1),\quad\mbox{for}\ q=2N+5, 2N+7.\label{4.106} \end{eqnarray} However by (\ref{2.8}) and Theorem 2.5, we have $M_{2N+5}\ge b_{2N+5}=1$ and $M_{2N+7}\ge b_{2N+7}=1$, so we have $M_{2N+5}(3,2m_3+1)\ge 1$ and $M_{2N+7}(3,2m_3+1)\ge 1$. Thus the assumption with $q=6$ of Lemma 3.6 is satisfied, and then by Lemma 3.6 we conclude that $c_3$ must belong to one of the classes in Theorem 4.2 and the index of $c_3$ is the following \begin{eqnarray} i(c_3)=6-p_{3,-}-p_{3,0}.\label{4.107} \end{eqnarray} This completes the proof of Theorem 4.2. 
\hfill\vrule height0.18cm width0.14cm $\,$ \medskip \noindent {\bf Acknowledgments} \medskip The authors would like to thank sincerely Professor Yiming Long for his careful reading of the manuscript and valuable suggestions. \bibliographystyle{abbrv}
2024-02-18T23:39:57.933Z
2022-07-07T02:07:22.000Z
algebraic_stack_train_0000
958
19,377
proofpile-arXiv_065-4828
\section*{Introduction} \section*{Photometric data} We obtained $\rm {UBV(RI)_{\rm C}}$ photometry of YY~Her, using a 60\,cm Cassegrain telescope at Piwnice Observatory near Toru\'n (Poland), equipped with an EMI~9558B photomultiplier (1991--1999), a RCA~C31034 photomultiplier (2001--2004) and a SBIG STL 1001 CCD camera (2005--2008). Additionally, we used data published by Hric et al. (2006), Tatarnikova et al. (2001), Miko\l{}ajewska et al. (2002) and data from ASAS (Pojmanski 2002). To eliminate the systematic shifts between the different photometric systems, some data sets were corrected as follows: Tatarnikova et al. ($-0\fm52$, $-0\fm16$, $+0\fm3$, $+0\fm5$) and Miko\l{}ajewska et al.($-0\fm22$, $-0\fm1$, $+0\fm27$, $+0\fm53$) in $B, V, R_{\rm C}, I_{\rm C}$ respectively. \section*{Analysis of the light curves and the orbital period estimation} The multicolor photometric observations of YY~Her are presented in Figure~1 (left panel). Fast Fourier Transform was used to search for the orbital period. The corresponding periodograms for all pass-bands are shown in Figure~1 (middle panel). Two peaks at about $\sim 575^{\rm d}$ and $\sim 284^{\rm d}$ dominate in these periodograms. The first one is not visible in the $I_{\rm C}$ filter and the second one is absent in the $U$ filter. Assuming that the highest peak corresponds to the orbital period, to estimate its mean value ($575\fd75$) we used the peaks in $U, B, V, R_{\rm C}$ filters. Using this mean value and measuring the moments of the primary minima from the $V$ light curve, we constructed the O-C diagram which allowed us to introduce a correction of $0\fd23$ for $\Delta T_{\rm 0}$ and $-1\fd17$ for $P_{\rm orb}$. Finally, we adopted the ephemeris $JD(I)_{\rm min} = 2450702\fd75 + 574\fd58 \times E $. The photometric data in all filters, phased with our ephemeris are shown in Figure~1 (right panel). The $U$ light curve shows a pure sine wave shape with a large amplitude $> 0\fm8$ and probably reflects the eclipse of the ionized HII zone by red giant and neutral HI region in the system. The second minimum appears in the $B$ light curve at orbital phase $0.5$ and is very well seen in the $V, (RI)_{\rm C}$ ones. We estimated its mean value $284\fd47$ using the peaks in the $V, (RI)_{\rm C}$ periodograms. We suggest that this second minimum is caused by ellipsoidal changes of the red giant. To measure the moments of the secondary minima, we used the $I_{\rm C}$ light curve in which these minima are best visible. Assuming $T_{\rm 0}=2450693\fd2$ and using the secondary period mean value ($284\fd47$) we estimated the corrections $\Delta T_{\rm 0}=+6\fd4$ and $\Delta P_{\rm orb}=-0\fd32$. Our final ephemeries for the second minimum is $JD(II)_{\rm min} = 2450699\fd6 + 284\fd15 \times E$. Double secondary period ($568\fd3$) is $\sim 6.3$ days shorter than the orbital period. This is a significant difference which will be analysing in the future. \begin{figure}[!htp] \centering \includegraphics[width=1.0\textwidth, angle=0]{wiecek_pic1.eps} \caption{Left panel: Multicolour light curves of YY~Her, Middle panel: Power spectrum for $UBV(RI)_{\rm C}$ filters data of YY~Her, Right panel: The residual $UBV(RI)_{\rm C}$ light curves of YY~Her phased with $574\fd58$ period.} \label{rys1} \end{figure} \acknowledgements This work is supported by the Polish MNiSW Grant N203~018~32/2338 and European ZPORR project in Kujawsko-Pomorskie Region "exhibitions for PhD students".
2024-02-18T23:39:58.418Z
2010-03-02T14:59:58.000Z
algebraic_stack_train_0000
979
612
proofpile-arXiv_065-4840
\section{Introduction} The present paper is devoted to the the study of a stochastic process followed by a particle moving through a scattering thermal bath while accelerated by an external field. The field prevents the particle from acquiring the Maxwell distribution of the bath. Our aim here is not only to establish the precise form of the stationary velocity distribution, as it was e.g. the case in the analysis presented in \cite{GP86}, but also to answer the physically relevant question of the dynamics of approach towards the long-time asymptotic state. The evolution of the distribution in position space will be thus also discussed. \bigskip We consider a one-dimensional dynamics described by the Boltzmann kinetic equation \begin{equation} \label{I1} \left( \frac{\partial}{\partial t}+v\frac{\partial}{\partial r} + a\frac{\partial}{\partial v} \right)f(r,v;t)=v_{\text{\tiny int}}^{1-\gamma} \rho \int \hbox{d} w |v-w|^{\gamma}[\, f(r,w;t)\,\phi(v)-f(r,v;t)\,\phi(w)\,] \end{equation} Here $f(r,v;t)$ is the probability density for finding the propagating particle at point $r$ with velocity $v$ at time $t$. The thermal bath particles are not coupled to the external field. Before binary encounters with the accelerated particle they are assumed to be in an equilibrium state with uniform temperature $T$ and density $\rho$ \begin{equation} \label{I2} \rho \,\phi(v) = \rho \sqrt{\frac{m}{2\pi k_{B}T}}\exp\left(-\frac{mv^{2}}{2k_{B}T} \right)= \frac{\rho}{v_{\text{\tiny th}}\sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{v}{v_{\text{\tiny th}}} \right)^{2}\right] \end{equation} Here $\phi(v)$ is the Maxwell distribution, and \begin{equation} \label{I3} v_{\text{\tiny th}} = \sqrt{\frac{k_{B}T}{m}} \,. \end{equation} denotes the corresponding thermal velocity. The differential operator on the left-hand side of (\ref{I1}) generates motion with a constant acceleration $a$. The accelerated motion is permanently perturbed by instantaneous exchanges of velocities with thermalized bath particles. This is modeled by the Boltzmann collision term on the right hand side of equation (\ref{I1}), which accounts for elastic encounters between equal mass particles. The collision frequency depends therein on the absolute relative velocity $|v-w|$ through a simple power law with exponent $\gamma$. Finally $v_{\text{\tiny int}}$ is some characteristic velocity of the underlying interparticle interaction. \bigskip In the case of hard rods ($\gamma = 1$) the factor $|v-w|$ is the main source of difficulties in the attemps to rigorously determine the evolution of $f(r,v;t)$, since it prevents the effective use of Laplace and Fourier transformations. It was thus quite remarkable that a stationary velocity distribution could be analytically determined in that case, leading in particular to an explicit expression for the current at any value of the external acceleration \cite{GP86}. In that case, kinetic equation (\ref{I1}) has been solved exactly only at zero temperature where $\phi (v)|_{T=0} = \delta(v)$ \cite{JP83}. Also, when $\phi(v)$ is replaced by the distribution $[\delta(v-v_{0})+\delta(v+v_{0})]/2$ with a discrete velocity spectrum $\pm v_{0}$, an explicit analytic solution has been derived and analyzed in \cite{JP1986} and \cite{JPRS2006}. 
The physically relevant conclusions from those works can be summarized as follows \begin{itemize} \item[(i)] the approach to the asymptotic stationary velocity distribution is exponentially fast \item[(ii)] in the reference system moving with average velocity, the hydrodynamic diffusion mode governs the spreading of the distribution in position space \item[(iii)] the Green-Kubo autocorrelation formula for the diffusion coefficient applies in the non-equilibrium steady state \end{itemize} Our aim is to show that the general features (i)-(iii) persist when $\phi(v)$ is the Maxwell distribution with temperature $T>0$. However, in the present study, we restrict the analysis to cases $\gamma =0$ and $\gamma =2$, which are much simpler than the hard-rod one. Indeed, it turns out that the Fourier-Laplace transformation can then be effectively used to solve the initial value problem for equation (\ref{I1}). The simplifications occuring when $\gamma =0$ or $\gamma =2$ have been already exploited in other studies: for recent applications to granular fluids, see e.g. \cite{BCG2000}-\cite{MP2007} and references quoted therein. \bigskip In terms of dimensionless variables \begin{equation} \label{I4} w = v/v_{\text{\tiny th}}, \;\;\;\; x=r \, \rho \left(v_{\text{\tiny th}}/v_{\text{\tiny int}}\right)^{\gamma-1}, \;\;\;\; \tau = t \,\rho \, v_{\text{\tiny th}} \left(v_{\text{\tiny th}}/v_{\text{\tiny int}}\right)^{\gamma-1}\, , \end{equation} the kinetic equation (\ref{I1}) takes the form \begin{equation} \left( \frac{\partial}{\partial \tau}+w\frac{\partial}{\partial x} + \epsilon\frac{\partial}{\partial w} \right)F(x,w;\tau) = \int \hbox{d} u |w-u|^{\gamma} [F(x,u;\tau)\Phi(w)-F(x,w;\tau)\Phi(u)] \, , \label{I5} \end{equation} where $\Phi(w)$ is the dimensionless normalized gaussian \begin{equation} \label{I7} \Phi(w)= \frac{1}{\sqrt{2\pi}}e^{-w^{2}/2} \, , \end{equation} and $\epsilon$ is the dimensionless parameter \begin{equation} \label{I6} \epsilon = \left(v_{\text{\tiny th}}/v_{\text{\tiny int}}\right)^{1-\gamma} \, \frac{am\rho^{-1}}{ k_{B}T}\ \end{equation} proportional to the ratio between the energy $am\rho^{-1}$ provided to the particle on a mean free path, and thermal energy $k_{B}T$. That parameter can thus be looked upon as a measure of the strength of the field. Integration of (\ref{I5}) over the position space yields the kinetic equation for the velocity distribution \[ G(w;\tau)=\int \hbox{d} x F(x,w;\tau) \; ,\] which reads \begin{equation} \left( \frac{\partial}{\partial \tau} + \epsilon\frac{\partial}{\partial w} \right)G(w;\tau) = \int \hbox{d} u |w-u|^{\gamma} [G(u;\tau)\Phi(w)-G(w;\tau)\Phi(u)] \, . \label{I8} \end{equation} \bigskip The paper is organized as follows. In Section II, we consider the so-called Maxwell gas ($\gamma =0$). The explicit solution of the kinetic equation (\ref{I5}) enables a thorough discussion of the approach to the stationary state, together with a study of the structure of the stationary velocity distribution. In Section III, we proceed to a similar analysis for the very hard particle model ($\gamma =2$). Section IV contains conclusions. Some calculations have been relegated to Appendices. \section{The Maxwell gas} We consider here the simple version $\gamma =0$ of equation (\ref{I5}). One usually then refers to the Maxwell gas dynamics, in which the collision frequency does not depend on the speed of approach (see e.g. \cite{UFM63}). 
This case can be viewed upon as a very crude approximation to the hard rod dynamics ($\gamma=1$) obtained by replacing the relative speed $|v-c|$ of colliding particles by constant thermal velocity $v_{\text{\tiny th}}$, while $v_{\text{\tiny int}}$ is identified with $v_{\text{\tiny th}}$. Here, kinetic equation (\ref{I5}) takes the form \begin{eqnarray} \left( \frac{\partial}{\partial \tau}+w\frac{\partial}{\partial x} + \epsilon\frac{\partial}{\partial w} \right)F(x,w;\tau) & = & \int \hbox{d} u [F(x,u;\tau)\Phi(w)-F(x,w;\tau)\Phi(u)] \nonumber \\ & = & M_{0}(x;\tau)\Phi(w) - F(x,w;\tau) \label{II1} \end{eqnarray} where $M_{0}(x;\tau )$ denotes the zeroth moment \begin{equation} \label{II2} M_{0}(x;\tau) = \int \hbox{d} u F(x,u;\tau) \; . \end{equation} \bigskip Equation (\ref{II1}) can be conveniently rewritten as an integral equation \begin{multline} F(x,w;\tau) = e^{-\tau}F(x-w\tau+\epsilon\tau^{2}/2, w-\epsilon\tau;0) \\ + \int_{0}^{\tau}d\eta e^{-\eta}\Phi(w-\epsilon\eta)M_{0}(x-w\eta+\epsilon\eta^{2}/2; \tau-\eta) \, , \label{II3} \end{multline} with an explicit dependence on the initial condition $F(x,w;0)$. Integration of equation (\ref{II3}) over $x$ yields \begin{equation} \label{II4} G(w;\tau) = \int \hbox{d} x F(x,w;\tau) = e^{-\tau}G_{\text{\tiny in}}( w-\epsilon\tau) + N_{0} \int_{0}^{\tau} \hbox{d} \eta e^{-\eta}\Phi(w-\epsilon\eta) \, , \end{equation} where $G_{\text{\tiny in}}(w) = G(w;0)$ is the initial condition, and $N_{0}=\int \hbox{d} w \int \hbox{d} x F(x,w;\tau) = \int \hbox{d} w G(w;\tau)$ is the conserved normalization factor. \subsection{Stationary solution and relaxation of the velocity distribution} Putting $N_{0}=1$ in formula (\ref{II4}) yields the evolution law for the normalized velocity distribution \begin{equation} \label{IIG} G(w;\tau) = \int \hbox{d} x F(x,w;\tau) = e^{-\tau}G_{\text{\tiny in}}( w-\epsilon\tau) + \int_{0}^{\tau} \hbox{d} \eta e^{-\eta}\Phi(w-\epsilon\eta) \, , \end{equation} The first term on the right hand side of (\ref{IIG}) describes the decaying memory of the initial distribution : $G_{\text{\tiny in}}(w)$ propagates in the direction of the field with constant velocity $\epsilon$, while its amplitude is exponentially damped. Clearly, for times $\tau \gg 1$ that term can be neglected. \bigskip The second term in formula (\ref{IIG}) describes the approach to the asymptotic stationary distribution \begin{eqnarray} \label{II5} G_{\text{\tiny st}}(w) = G(w;\infty) & = & \int_{0}^{\infty} \hbox{d} \eta \; e^{-\eta}\;\Phi(w-\epsilon\eta) \nonumber \\ &=& \frac{1}{2\epsilon}\exp{\left(\frac{1}{2\epsilon^{2}}-\frac{w}{\epsilon} \right) } \left( 1+\text{Erf}\left(\frac{w\epsilon-1} {\epsilon\sqrt{2}}\right)\right)\, , \end{eqnarray} where \[ \text{Erf}(\xi)=\frac{2}{\sqrt{\pi}}\int_0^\xi \hbox{d} u \; \exp(-u^2) \] is the familiar error function. It is interesting to compare the decay-law of $G_{\text{\tiny st}}(w)$ at large velocities, to that corresponding to the case of hard-rod collisions. Using expression (\ref{II5}) we find the asymptotic formula \begin{equation} \label{II16} G_{\text{\tiny st}}(w) \sim \frac{1}{\epsilon} \exp{\left(\frac{1}{2\epsilon^{2}}-\frac{w}{\epsilon} \right) } \end{equation} when $w \to +\infty$. In contradistinction to the hard-rod case governed by an $\epsilon$-dependent gaussian law (see \cite{GP86}) we find here a purely exponential decay. The thermal bath is unable to impose via collisions its own gaussian decay because of insufficient collision frequency. 
The replacement of the relative speed in the Boltzmann collision operator by thermal velocity implies thus qualitative changes in the shape of the stationary velocity distribution. The plot of $G_{\text{\tiny st}}(w)$ for different values of $\epsilon$ is shown in Fig.~\ref{AP09a}. \begin{figure} \includegraphics[width=0.9\textwidth]{AP09a.eps} \caption{\label{AP09a} Stationary velocity distribution $G_{\text{\tiny st}}(w)$ for three values of $\epsilon$.} \end{figure} \bigskip Basic properties (i)-(iii) discussed in the Introduction turn out to be valid. Indeed, the inequality \begin{equation} \label{II6} G_{\text{\tiny st}}(w) - \int_{0}^{\tau}\hbox{d} \eta \; e^{-\eta}\; \Phi(w-\epsilon\eta) =\int_{\tau}^{\infty}\hbox{d} \; \eta e^{-\eta}\; \Phi(w-\epsilon\eta) < \frac{e^{-\tau}}{\epsilon} \end{equation} displays an uniform exponentially fast approach towards the stationary state. In particular, using formula (\ref{IIG}), we find that the average velocity $<w>(\tau)$ approaches the asymptotic value \begin{equation} \label{II7} <w>_{\text{\tiny st}} = \epsilon \end{equation} according to \begin{equation} \label{II8} <w>(\tau) = \int \hbox{d} w \, w\, G(w;\tau) = \epsilon + e^{-\tau}[ <w>_{\text{\tiny in}} - \epsilon ] \end{equation} We encounter here an exceptional situation where the linear response is exact for any value of the external field. \bigskip Equation (\ref{II4}) with $N_{0}$ put equal to zero can be used for the evaluation of the time-displaced velocity autocorrelation function \begin{equation} \label{II9} \Gamma(\tau) = <[ w(\tau) - <w>_{\text{\tiny st}} ] [w(0) - <w>_{\text{\tiny st}} ]>_{\text{\tiny st}} \, . \end{equation} where $<...>_{\text{\tiny st}}$ denotes the average over stationary state (\ref{II5}). The calculation presented in Appendix~\ref{B} provides the formula \begin{equation} \label{II10} \Gamma(\tau) = e^{-\tau} [ 1 + \epsilon^{2} ] \, , \end{equation} which yields a remarkably simple field dependence of the diffusion coefficient \begin{equation} \label{II11} D(\epsilon) = \int_{0}^{\infty} \hbox{d} \tau \; \Gamma(\tau) = 1 + \epsilon^{2} \, . \end{equation} \subsection{Relaxation of density: appearence of a hydrodynamic mode} Let us turn now to the analysis of the evolution of the normalized density $n(x;\tau)=M_{0}(x;\tau)$ in position space. It turns out that one can solve the complete integral equation (\ref{II3}) by applying to both sides Fourier and Laplace transformations. If we set \begin{equation} \label{II12bis} \tilde{F}(k,w;z) = \int_0^{\infty} \hbox{d} \tau\, e^{-z\tau} \int \hbox{d} x \, e^{-ikx}\, F(x,w;\tau) \, , \end{equation} we find \begin{multline} \label{II12} \tilde{F}(k,w;z) = \int_{0}^{\infty}\hbox{d} \tau \; {\rm exp} \left[ -ik\left( w\tau - \epsilon\frac{\tau^{2}}{2} \right) - (z+1)\tau \right] \\ \left\lbrace \hat{F}_{\text{\tiny in}}(k,w-\epsilon\tau) + \tilde{n}(k;z) \Phi(w-\epsilon\tau) \right\rbrace \; , \end{multline} where $\tilde{n}(k;z)$ is the Fourier-Laplace transform of $n(x;\tau)$, and \[ \hat{F}_{\text{\tiny in}}(k,w)= \int \hbox{d} x \, e^{-ikx}\, F(x,w;0) \] denotes the spatial Fourier transform of the initial condition. 
Equation (\ref{II12}) when integrated over the velocity space yields the formula \begin{equation} \label{II13} \tilde{n}(k;z) =\frac{1}{\zeta(k;z)}\int \hbox{d} w \int_{0}^{\infty}\hbox{d} \tau \; {\rm exp} \left[ -ik\left( w\tau - \epsilon\frac{\tau^{2}}{2} \right) - (z+1)\tau \right] \hat{F}_{\text{\tiny in}}(k,w-\epsilon\tau) \end{equation} with \begin{equation} \label{II14} \zeta(k;z) = 1 - \int_{0}^{\infty}\hbox{d} \tau \; {\rm exp} \left[ - (z+1)\tau - (ik\epsilon + k^{2})\frac{\tau^{2}}{2} \right] \, . \end{equation} The insertion of (\ref{II13}) into (\ref{II12}) provides a complete solution for $\tilde{F}(k,w;z)$ corresponding to a given initial condition. \bigskip Formula (\ref{II13}) shows that the time-dependence of the spatial distribution is defined by roots of the function $\zeta(k;z)$. In order to find the long-time hydrodynamic mode $z_{\rm{hy}}(k)$, we have to look for the root of $\zeta(k;z)$ which approaches $0$ when $k\to 0$. If we assume the asymptotic form \[ z_{\rm{hy}}(k) = c_{1}k + c_{2}k^{2} +o(k^2) \;\;\; \text{when} \;\;\; k\to 0\; , \] we find a unique self-consistent solution to equation $\zeta(k;z)=0$ of the form \begin{equation} \label{II15} z_{\rm{hy}}(k)= -i\epsilon k - (1+\epsilon^{2})k^{2} + o(k^2) = -i\epsilon k - D(\epsilon) k^{2} + o(k^2) \, . \end{equation} It has the structure of a propagating diffusive mode. It is important to note that the diffusion coefficient $D(\epsilon)$ equals $(1+\epsilon^{2})$ in accordance with the Green-Kubo result (\ref{II11}). We thus see that, in the reference system moving with constant velocity $\epsilon$, a classical diffusion process takes place in position space. \bigskip It has been argued in the literature that, in general, $z_{\rm{hy}}(k)$ is not an analytic function of $k$ at $k=0$ (see \textsl{e.g.} Ref.~\cite{ED75}). Here, that question can be precisely investigated as follows. According to the integral expression (\ref{II14}) of $\zeta(k;z)$, the hydrodynamic mode is a function of $\xi=ik\epsilon + k^2$. By combining differentiations with respect to $\xi$ under the integral sign with integration by parts, we find that $z_{\rm{hy}}(\xi)$ satisfies the second order differential equation \begin{equation} \label{IIhyddiff} \xi \frac{\hbox{d}^2 z_{\rm{hy}}^2}{\hbox{d} \xi^2} = 1+ \frac{\hbox{d} z_{\rm{hy}}}{\hbox{d} \xi} \; . \end{equation} Then, since $z_{\rm{hy}}(0)=0$, we find that $z_{\rm{hy}}(\xi)$ can be formally represented by an infinite entire series in $\xi$, \begin{equation} \label{IIhydTaylor} z_{\rm{hy}}(\xi) = \sum_{n=1}^{\infty} c_n \xi^n \; , \end{equation} with $c_1=-1$, $c_2=1$ and \[ |c_{n+1}| \geq 2^{n-1} \; n! \;\;\; \text{for} \;\;\; n \geq 2 \; .\] Thus, the radius of convergence of Taylor series (\ref{IIhydTaylor}) is zero, so $\xi=0$ is a singular point of function $z_{\rm{hy}}(\xi)$, as well as $k=0$ is a singular point of function $z_{\rm{hy}}(k)$. The nature of that singularity can be found by rewriting the root equation defining $z_{\rm{hy}}(\xi)$ as the implicit equation \begin{equation} \label{IIhydimp} 1-\text{Erf}\left(\frac{z_{\rm{hy}}+1}{\sqrt{2\xi}}\right) = \sqrt{\frac{2\xi}{\pi}} \; \exp\left(-\frac{(z_{\rm{hy}}+1)^2}{2\xi}\right) \; . \end{equation} The introduction of function $\sqrt{\xi}$ requires to define cut-lines ending at points $k=0$ and $k=-i\epsilon$ which are the two roots of equation $\xi(k)=0$. Since the integral in the r.h.s. 
of expression (\ref{II14}) diverges for $k$ imaginary of the form $k=iq$ with $q > 0$ or $q < -\epsilon$, it is natural to define such cut-lines as $[i0, i\infty[$ and $]-i\infty, -i\epsilon]$. The corresponding choice of determination for $\sqrt{\xi}$ is defined by $\sqrt{\xi(k^+)}= i\sqrt{q\epsilon+q^2}$ for $k^+=0^+ + iq$ with $q>0$, where $\sqrt{q\epsilon+q^2}$ is the usual real positive square root of the real positive number $(q\epsilon+q^2)$. Notice that, when complex variable $k$ makes a complete tour around point $k=0$ starting from $k^+=0^+ + iq$ on one side of the cut-line and ending at $k^-=0^- + iq$ on the other side (with vanishing difference $k^+-k^-$), $\sqrt{\xi(k)}$ changes sign from $\sqrt{\xi^+}$ to $\sqrt{\xi^-}=-\sqrt{\xi^+}$ with obvious notations. As shown by adding both implicit equations (\ref{IIhydimp}) for $k^+$ and $k^-$ respectively, $z_{\rm{hy}}^+$ does not reduce to $z_{\rm{hy}}^-$. The difference $(z_{\rm{hy}}^+-z_{\rm{hy}}^-)$ is of order $\exp(-1/(2|k|\epsilon))$, so $k=0$ is an essential singularity. \section{Very hard particles} Another interesting case is that of the so-called very hard particle model, where the collision frequency is proportional to the kinetic energy of the relative motion of the colliding pair. The corresponding exponent in the collision term of the Boltzmann equation (\ref{I1}) is now $\gamma=2$. This allows us to simplify the resolution of the kinetic equation. Owing to this fact, the very hard particle model, similarly to the Maxwell gas, has been studied in numerous works (see e.g. \cite{MHE1984}-\cite{CDT2005}, and references given therein). \bigskip Using dimensionless variables (\ref{I4}), we thus write the kinetic equation as \begin{equation} \left( \frac{\partial}{\partial \tau}+w\frac{\partial}{\partial x} + \epsilon\frac{\partial}{\partial w} \right)F(x,w;\tau) = \int \hbox{d} u |w-u|^{2} [F(x,u;\tau)\Phi(w)-F(x,w;\tau)\Phi(u)] \label{III1} \end{equation} \[ = [ w^2 M_{0}(x;\tau)-2w M_{1}(x;\tau)+M_{2}(x;\tau)]\Phi(w)-(w^2+1)F(x,w;\tau)\] where the moments $M_{j}(x;\tau)$ ($j=1,2,...$) are defined by \begin{equation} \label{IIIM} M_{j}(x;\tau) = \int \\d w w^j F(x,w;\tau) \; . \end{equation} The evolution equation of the velocity distribution $G(w;\tau)$ becomes \begin{equation} \label{III2} \left( \frac{\partial}{\partial \tau}+\epsilon\frac{\partial}{\partial w} \right)G(w;\tau) = [ N_{2}(\tau)-2wN_{1}(\tau)+w^{2}N_{0}]\Phi(w)-(w^{2}+1)G(w;\tau) \, , \end{equation} with the integrated moments \begin{equation} \label{IIIN} N_{j}(\tau)=\int \hbox{d} x\,M_{j}(x;\tau), \;\; j=0,1,2 \; . \end{equation} Notice that the integrated zeroth moment does not depend on time since the evolution conserves the initial normalization condition \[N_{0}(\tau)=\int \hbox{d} w \int \hbox{d} x F(x,w;\tau) = N_{0}\; .\] Hence, when $F(x,w;\tau)$ is a normalized probability density $N_{0}(\tau)=N_{0}=1$. \bigskip The simplification related to the choice $\gamma=2$, and more generally when $\gamma$ is an even integer, concerns the collision term in the general kinetic equation (\ref{I1}) which can be expressed in such cases in terms of a finite number of moments of the distribution function. The resolution of that equation becomes then straightforward within standard methods (see Appendix~\ref{A}). 
\subsection{Laplace transform of the velocity distribution} The expression for the Laplace transform of the normalized velocity distribution follows directly from the general formula (\ref{C3}) derived in Appendix~\ref{A} by putting $k=0$, and choosing $\tilde{M}_{0}(0,z)= \tilde{N}_{0}(z) = 1/z$. Within definition \begin{equation} \label{III7} S(w;z) =(z+1) w + \frac{w^3}{3} \, \end{equation} for the function $S(k,w;z)$ evaluated at $k=0$ (see definition (\ref{S})), we find \begin{multline} \label{III6} \epsilon \tilde{G}(w;z) = \frac{\epsilon}{z}\Phi(w) + \int_{-\infty}^{w}\hbox{d} u \exp \{ [S(u;z)-S(w;z)]/\epsilon \}\; \{ G_{\text{\tiny in}}(u) \\ + [\tilde{N}_{2}(z)-2u\tilde{N}_{1}(z)+ \frac{(\epsilon u -z-1)}{z}] \Phi(u) \} \; . \end{multline} \bigskip The two functions $\tilde{N}_{1}(z)$ and $ \tilde{N}_{2}(z)$ satisfy the system of equations \begin{eqnarray} \label{III8} 0 & = & A^{\text{\tiny (in)}}_{00}(0;z) + [ \tilde{N}_{2}(z) - (z+1)/z]A_{00}(0;z)+[\epsilon/z -2\tilde{N}_{1}(z)]A_{01}(0;z) \nonumber \\ \epsilon \tilde{N}_{1}(z) & = & A^{\text{\tiny (in)}}_{10}(0;z) + [ \tilde{N}_{2}(z) - (z+1)/z]A_{10}(0;z)+[\epsilon/z-2\tilde{N}_{1}(z)]A_{11}(0;z) \end{eqnarray} which is identical to (\ref{C7}) taken at $k=0$, while \begin{equation} \label{IIIA} A_{jl}(0;z)= \int \hbox{d} w \int_{-\infty}^{w}\hbox{d} u\; \exp \{ [S(u;z)-S(w;z)]/\epsilon \}\, w^{j}\, u^{l} \Phi(u) \; . \end{equation} Analogous formula holds for $A^{\text{\tiny (in)}}_{jl}(0;z)$ with the Maxwell distribution $\Phi(u)$ replaced by the initial condition $G(u;0)=G_{\text{\tiny in}}(u)$. Once system (\ref{III8}) has been solved, the insertion of the resulting expressions for $\tilde{N}_{1}(z)$ and $\tilde{N}_{2}(z)$ into formula (\ref{III6}) yields eventually an explicit solution of the kinetic equation for the velocity distribution \begin{multline} \label{III9} \tilde{G}(w;z) = \frac{\Phi(w)}{z} + \frac{1}{\epsilon}\int_{-\infty}^w \hbox{d} u \, \exp \left\{ [S(u;z)-S(w;z)]/{\epsilon}\right\} \\ \times\left\{ G_{\text{\tiny in}}(u) + [A_{\epsilon}(z)\, u - B_{\epsilon}(z) ]\Phi(u) \right\} \; . \end{multline} With the shorthand notations $A_{jl}(z) = A_{jl}(0;z)$ and $A^{\text{\tiny (in)}}_{jl}(z)=A^{\text{\tiny (in)}}_{jl}(0;z)$, the formulae for coefficients $A_{\epsilon}(z)$ and $ B_{\epsilon}(z)$ read \begin{equation} \label{Aepsilon} A_{\epsilon}(z)= \frac{1}{\Delta(z)} \left[\frac{\epsilon^2}{z} A_{00}(z)-2A_{00}(z)A_{10}^{\text{\tiny (in)}}(z) +2A_{10}(z)A_{00}^{\text{\tiny (in)}}(z)\right] \end{equation} and \begin{equation} \label{Bepsilon} B_{\epsilon}(z) = \frac{1}{\Delta(z)} \left[\frac{\epsilon^2}{z} A_{01}(z)+ \epsilon A_{00}^{\text{\tiny (in)}}(z) + 2A_{11}(z)A_{00}^{\text{\tiny (in)}}(z) -2A_{01}(z) A_{10}^{\text{\tiny (in)}}(z)\right] \; , \end{equation} where $\Delta(z)$, in accordance with the definition given in (\ref{C9}), is \begin{equation} \label{III22} \Delta (z)= \epsilon A_{00}(z) + 2\,\left( A_{00}(z)A_{11}(z)-A_{10}(z)A_{01}(z)\right) \, . \end{equation} \subsection{Stationary solution} At large times, $\tau \to \infty$, we expect the velocity distribution to reach some stationary state $G_{\text{\tiny st}}(w) = G(w;\infty)$. This can be easily checked by investigating the behaviour of $\tilde{G}(w;z)$ in the neighbourhood of $z=0$ at fixed velocity $w$. \bigskip All integrals over $u$ in formula (\ref{III9}) do converge for any complex value of $z$. Moreover, all their derivatives with respect to $z$ are also well defined, as shown by differentiation under the integral sign. 
Thus, such integrals are entire functions of $z$. The sole quantities in expression (\ref{III9}) which become singular at $z=0$ are the coefficients $A_{\epsilon}(z)$ and $B_{\epsilon}(z)$, and obviously the term $\Phi(w)/z$. In fact, both $A_{\epsilon}(z)$ and $B_{\epsilon}(z)$ exhibit simple poles at $z=0$. Hence, the stationary solution of the kinetic equation (\ref{III2}) does emerge when $\tau \to \infty$, and it is given by the residue of the simple pole of $\tilde{G}(w;z)$ at $z=0$, namely \begin{equation} \label{III10} G_{\text{\tiny st}}(w) = \Phi(w) + \frac{\epsilon }{\Delta(0)} \int_{-\infty}^w \hbox{d} u \exp \left[\frac{S(u;0)-S(w;0)}{\epsilon}\right][ A_{00}(0)\, u\, - A_{01}(0) ]\Phi(u) \, . \end{equation} In that expression, $A_{ij}(0)$ and $\Delta(0)$ are the non-zero values at $z=0$ of the analytic functions $A_{ij}(z)=A_{ij}(0;z)$ and $\Delta(z)=\Delta(0;z)$. Formula (\ref{III10}) does not depend on initial condition $G_{\text{\tiny in}}$. All initial conditions evolve towards the same unique stationary distribution (\ref{III10}). It can be checked that the direct resolution of the static version of kinetic equation (\ref{III2}) obtained by setting $\partial G/\partial \tau = 0$ does provide formula (\ref{III10}). \bigskip Since the external field accelerates the particle, the stationary solution is asymmetric with respect to the reflection $w \rightarrow -w$, and positive velocities are favoured. This leads to a finite current \begin{equation} \langle w \rangle_{\text{\tiny st}} = \int \hbox{d} w\, w \, G_{\text{\tiny st}}(w) = \frac{\epsilon }{\Delta(0)} [A_{00}(0)A_{11}(0)-A_{01}(0)A_{10}(0) ] \; . \label{III11} \end{equation} The asymptotic expansion at large velocities of $G_{\text{\tiny st}}(w)$, inferred from formula (\ref{III10}), reads \begin{equation} G_{\text{\tiny st}}(w) = \frac{1}{\sqrt{2\pi}}e^{-w^{2}/2} \left[ 1 + \frac{\epsilon^{2} A_{00}(0)}{\Delta(0)w} + O(\frac{1}{w^2}) \right] \,\,\,\,\, \text{when} \,\,\,\,\, |w| \to \infty \, . \label{III12} \end{equation} Therefore, the external field does not influence the leading large-velocity behaviour of $G_{\text{\tiny st}}(w)$, which is identical to that of the thermal bath. Its effects only arise in the first correction to the leading behaviour which is smaller by a factor of order $1/w$. The stationary distribution is drawn in Fig.~\ref{AP09b} for several increasing field strengths, $\epsilon = 1$, $\epsilon = 10$ and $\epsilon = 100$. \begin{figure} \includegraphics[width=0.9\textwidth]{AP09b.eps} \caption{\label{AP09b} Stationary velocity distribution $G_{\text{\tiny st}}(w)$ for three values of $\epsilon$.} \end{figure} \bigskip Let us study now the limit $\epsilon \to 0$ which corresponds to a weak external field. The main contributions to the integrals over $u$ in (\ref{III9}) arise from the region close to $w$. That observation motivates the use of a new integration variable $y=(w-u)/\epsilon$. The Taylor expansions of the resulting integrands in powers of $\epsilon$ generate then entire series in $\epsilon$, the first terms of which read \begin{equation} \label{III13} \int_{-\infty}^w \hbox{d} u \, u\, \Phi(u) \exp \left[\frac{S(u;0)-S(w;0)}{\epsilon}\right] = \epsilon \, \frac{w \Phi(w)}{1+w^2} + O(\epsilon^2) \end{equation} and \begin{equation} \label{III14} \int_{-\infty}^w \hbox{d} u \, \Phi(u) \exp \left[\frac{S(u,0)-S(w,0)}{\epsilon}\right] = \epsilon \, \frac{\Phi(w)}{1+w^2} + O(\epsilon^2) \, . 
\end{equation} Consequently, also functions $A_{ij}(0)$ and $\Delta(0)$ can be represented by power series in $\epsilon$ as they are obtained by calculating appropriate moments of expansions (\ref{III13}) and (\ref{III14}) over the velocity space. The corresponding small-$\epsilon$ expansion of the stationary velocity distribution reads \begin{equation} \label{III15} G_{\text{\tiny st}}(w) = \Phi(w) + \epsilon \left[ \frac{b\, w }{1+w^2} \right]\Phi(w) + O(\epsilon^2) \, , \end{equation} where \[ b = \left[ 1+2 \int \hbox{d} w\, \frac{w^2}{1+w^2}\Phi(w) \right]^{-1} \; . \] Of course, at $\epsilon = 0$, $G_{\text{\tiny st}}(w)$ reduces to the Maxwell distribution. The first correction is of order $\epsilon$, as expected from linear response theory. The corresponding current (\ref{III10}) reduces to \begin{equation} \langle w \rangle_{\text{\tiny st}} = \sigma \epsilon + O(\epsilon^2) \, , \label{III16} \end{equation} where the conductivity $\sigma$ is given by \begin{equation} \sigma = \frac{1}{2}( 1 - b ) \label{III17} \end{equation} It will be shown in the sequel that $\sigma = D_0= D(\epsilon = 0)$, where $D(\epsilon) $ is the diffusion coefficient given by the Green-Kubo formula. \bigskip Consider now the strong field limit $\epsilon \to \infty$. The corresponding behaviours of $A_{ij}(0)$ and $\Delta(0)$ are derived from the integral representations obtained in Appendix~\ref{C}. We then find at fixed $w$ \begin{equation} \label{III18} G_{\text{\tiny st}}(w) = \frac{\epsilon^{-1/3}}{\int_0^\infty \hbox{d} y \exp (-y^3/3)} \int_{-\infty}^w \hbox{d} u \, \Phi(u) \exp \left[\frac{S(u,0)-S(w,0)}{\epsilon}\right] + O(\epsilon^{-2/3}) \end{equation} For $w$ of order 1, the dominant term in the large-$\epsilon$ expansion of the integral in (\ref{III18}) reduces to \[ \int_{-\infty}^{w}\hbox{d} u \, \Phi(u)=\frac{1}{2}\left(1+\text{Erf}\left(\frac{w}{\sqrt{2}}\right)\right) \] and thus varies from $0$ to $1$ around the origin $w=0$. For larger values of the velocity, $w \sim \epsilon^{1/3}$, that integral behaves as $\exp (-w^3/(3\epsilon)$. The next term in the expansion (\ref{III18}) remains of order $\epsilon^{-2/3}$. Thus, when $\epsilon \to \infty$ at fixed $\epsilon^{-1/3} w$ the stationary solution is given by \begin{equation} \label{III19} G_{\text{\tiny st}}(w) \sim \theta(w)\, \frac{\epsilon^{-1/3}}{\int_0^\infty \hbox{d} y \exp (-y^3/3)} \, \exp \left[-(\epsilon^{-1/3}w)^3/3 \right] \, , \end{equation} where $\theta$ is the Heaviside step function. The whole distribution is shifted toward high velocities $w \sim \epsilon^{1/3}$, so that the resulting current (\ref{III11}) is of the same order of magnitude, \textsl{i.e.} \begin{equation} \langle w \rangle_{\text{\tiny st}} \sim \frac{3^{1/3} \Gamma(2/3)}{\Gamma(1/3)} \; \epsilon^{1/3} \,\,\,\, \text{when} \,\,\,\, \epsilon \to \infty \, , \label{III20} \end{equation} where $\Gamma$ is the Euler Gamma function. That behavior can be recovered within the following simple interpretation. At strong fields, the average velocity of the particle becomes large compared to the thermal velocity of scatterers. Since at each collision the particle exchanges its velocity with a thermalized scatterer, the variation of particle velocity between two successive collisions is of the order of $\langle v \rangle_{\text{\tiny st}}$. 
On the other hand, in the stationary state the same velocity variation is due to the acceleration $a$ coming from the external field, so it is of the order $a \tau_{\text{\tiny coll}}$, where $\tau_{\text{\tiny coll}}$ is the mean time between two successive collisions. This time can be reasonably estimated as the inverse collision frequency for a relative velocity $|v-c|$ of order $\langle v \rangle_{\text{\tiny st}}$. The consistency of those estimations requires the relation \begin{equation} \langle v \rangle_{\text{\tiny st}} \sim a \; \frac{v_{\text{\tiny int}}} {\rho \, \langle v \rangle_{\text{\tiny st}}^2} \, \label{III21} \end{equation} which indeed implies the $\epsilon^{1/3}$-behaviour (\ref{III20}) of the average velocity in dimensionless units. Contrary to the Maxwell case where the current remains linear in the applied field, here the current deviates from its linear-response form when the field increases: it grows more slowly because collisions are more efficient in dissipating the energy input of the field. In Fig.~\ref{AP09c}, we plot $\langle w \rangle_{\text{\tiny st}}$ as a function of $\epsilon$. \begin{figure} \includegraphics[width=0.9\textwidth]{AP09c.eps} \caption{\label{AP09c} Average current $\langle w \rangle_{\text{st}}$ as a function of $\epsilon$. The dashed line represents the linear Kubo term in the small-$\epsilon$ expansion (\ref{III16}) with conductivity $\sigma \simeq 0.2039$. The dotted line describes the asymptotic formula (\ref{III20}) with $3^{1/3} \Gamma(2/3)/\Gamma(1/3) \simeq 0.7290$ valid in the limit $\epsilon \to \infty$.} \end{figure} \subsection{Relaxation towards the stationary solution} Let us now study the relaxation of the velocity distribution $G(w;\tau)$ towards the stationary solution $G_{\text{\tiny st}}(w)$. The decay of $[ G(w;\tau) - G_{\text{\tiny st}}(w) ]$ when $\tau \to \infty$ is controlled by the singularities of $\tilde{G}(w;z)$ in the complex plane, different from the pole at $z=0$. As already mentioned, all integrals in expression (\ref{III9}) are entire functions of $z$, so the singularities at $z \neq 0$ arise only in the coefficients $A_{\epsilon}(z)$ and $B_{\epsilon}(z)$. Thus, the first important conclusion is that the relaxation is uniform for the whole velocity spectrum. \bigskip According to expressions (\ref{Aepsilon}) and (\ref{Bepsilon}) defining $A_{\epsilon}(z)$ and $B_{\epsilon}(z)$ respectively, the singularities of those coefficients at points $z\neq 0$ correspond to zeros of the function $\Delta(z)$ given by expression (\ref{III22}). Since the analytic functions $A_{ij}(z)$ and $\Delta (z)$ do not depend on the initial condition $G_{\text{\tiny in}}$, the relaxation is an intrinsic dynamical process, as expected. \bigskip After some algebra detailed in Appendix~\ref{C}, we find that $\Delta (z)$ reduces to the Laplace transform \begin{equation} \label{III23} \Delta (z)=\epsilon^{2} \, \int_0^{\infty} \hbox{d} y f_{\epsilon}(y) \exp (-zy) \end{equation} of the real, positive, and monotonically decreasing function \begin{equation} \label{III24} \epsilon^{2} f_{\epsilon}(y)= \frac{\epsilon^{2} (1+3y)}{(1+y)(1+2y)^{1/2}} \exp \left( -y -\epsilon^2 \frac{y^3(2+y)}{6(1+2y)} \right) \, . \end{equation} Owing to the fast decay of $f_{\epsilon}(y)$, the integral (\ref{III23}) converges for any $z$, so $\Delta (z)$ is an entire function of $z$. Also, the monotonic decay of $f_{\epsilon}(y)$ and its positivity imply some general properties for the roots of $\Delta (z)$.
First of all, $\Delta (z)$ cannot vanish for $\Re (z) \geq 0$. Moreover, as $\Delta (z)$ is strictly positive for $z$ real, the zeros of $\Delta (z)$ appear in complex conjugate pairs, while they are isolated with strictly negative real parts and nonvanishing imaginary parts. Consequently, the long-time relaxation of the velocity distribution is governed by the pair of zeros which is closest to the imaginary axis. Denoting them by $z^{\pm}=-\lambda \pm i \omega$ with $\omega \neq 0$ and $\lambda > 0$, we conclude that $G(w;\tau)$ relaxes towards $G_{\text{\tiny st}}(w)$ via exponentially damped oscillations \begin{equation} \label{III25} G(w;\tau) - G_{\text{\tiny st}}(w) \sim C(w) \cos [\omega \tau + \eta(w)] \exp (-\lambda \tau), \,\,\,\, \text{when} \,\,\,\, \tau \to \infty \, \end{equation} where $C(w)$ and $\eta(w)$ are an amplitude and a phase respectively. It should be noticed that both functions $C(w)$ and $\eta(w)$ depend on initial conditions. \bigskip At a given value of $\epsilon$, the zeros $z^\pm$ are found by solving numerically the equation $\Delta (z^\pm)=0$. In the weak- or strong-field limits, we can derive asymptotic formulae for such zeros as follows. First, as indicated by numerically computing $z^\pm$ for small values of $\epsilon$, $z^\pm$ collapse to $z_0=-1$ when $\epsilon \to 0$. The corresponding asymptotic behaviour can be derived by noting that, for $z$ close to $z_0$, the leading contributions to $\Delta(z)$ in integral (\ref{III23}) arise from large values of $y$. Then, we set $y=\xi/\epsilon^{2/3}$ and $z=-1 +s\; \epsilon^{2/3}$, which provides \begin{equation} \label{III26bis} \Delta (-1 +s\; \epsilon^{2/3}) \sim \frac{3\;\epsilon^{5/3}}{\sqrt{2}} \, \int_0^\infty \hbox{d} \xi \; \xi^{-1/2} \exp(-s\;\xi-\xi^3/12) \end{equation} when $\epsilon \to 0$ at fixed $s$. By numerical methods, we find the pair of complex conjugate zeros $s_0^\pm $ of the integral \[ \int_0^\infty \hbox{d} \xi \; \xi^{-1/2} \exp(-s\;\xi-\xi^3/12) \] which are the closest to the imaginary axis. Therefore, when $\epsilon \to 0$, the damping factor $\lambda(\epsilon)$ goes to $1$ according to \begin{equation} \label{III26ter} \lambda(\epsilon)=1-\Re(s_0^\pm)\;\epsilon^{2/3} +o(\epsilon^{2/3}) \end{equation} with $\Re(s_0^\pm) \simeq -1.169$, while the frequency $\omega(\epsilon)$ vanishes as $\Im(s_0^+)\;\epsilon^{2/3}$ with $ \Im(s_0^+) \simeq 2.026$. Notice that for fixed $z$, not located on the real half-axis $]-\infty,-1]$, $\Delta(z)$ behaves as \begin{equation} \label{III26} \Delta (z) \sim \epsilon^{2} \, \Delta_0(z) \end{equation} when $\epsilon \to 0$, with \begin{multline} \label{III27} \Delta_0(z) = \sqrt{\frac{\pi}{2(z+1)}} \, e^{(z+1)/2} \left[ 1 - \text{Erf}\left(\sqrt{(z+1)/2}\right)\right] \\ \times \left[3-\sqrt{2\pi(z+1)} e^{(z+1)/2} \left(1 - \text{Erf}\left(\sqrt{(z+1)/2}\right)\right)\right] \, . \end{multline} Here, $\sqrt{(z+1)/2}$ is defined as the usual real positive square root $\sqrt{(x+1)/2}$ for real $z=x$ belonging to the half axis $x > -1$, while the complementary half-axis $z=x \leq -1$ is a cut-line ending at the branch point $z=-1$. That point is the singular point of $1/\Delta_0(z)$ closest to the imaginary axis, as strongly suggested by a numerical search of the zeros of $\Delta_0(z)$. Therefore, both $\lambda(\epsilon)$ and $\omega(\epsilon)$ are continuous functions of $\epsilon$ at $\epsilon=0$ with $\lambda(0)=1$ and $\omega(0)=0$.
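\bigskip The numerical values of $s_0^\pm$ quoted above are easy to reproduce with a few lines of complex root finding. The following minimal sketch (in Python with the \texttt{mpmath} library, a tooling choice of ours and not part of the original computation) evaluates the integral by adaptive quadrature and applies a secant iteration in the complex plane:
\begin{verbatim}
import mpmath as mp

def I0(s):
    # I0(s) = int_0^infty xi^(-1/2) exp(-s*xi - xi^3/12) d(xi)
    f = lambda xi: mp.exp(-s * xi - xi**3 / 12) / mp.sqrt(xi)
    return mp.quad(f, [0, 5, mp.inf])  # the split point helps the quadrature

# complex secant iteration started near the expected zero
s0 = mp.findroot(I0, mp.mpc(-1.2, 2.0))
print(s0)  # approximately -1.169 + 2.026i
\end{verbatim}
The conjugate zero $s_0^-$ follows by symmetry, since the integral is real for real $s$.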
At $\epsilon=0$, the exponentially damped oscillating decay (\ref{III25}) becomes an exponentially damped monotonic decay multiplied by the power law $\tau^{-3/2}$. That power law arises from the presence of a singular term of order $\sqrt{(z+1)/2}$ in the expansion of $\tilde{G}(w;z)$ around the branch point $z=-1$. \bigskip When $\epsilon \to \infty$, the zeros of $\Delta (z)$ are obtained by simultaneously changing $y$ to $\xi/\epsilon^{2/3}$ in the integral (\ref{III23}) and by rescaling $z$ as $\epsilon^{2/3}s$. This provides \begin{equation} \label{III28} \Delta (\epsilon^{2/3}s) \sim \epsilon^{4/3} \, \Delta_\infty (s) \,\,\,\,\text{when}\,\,\,\, \epsilon \to \infty\,\,\,\,\text{at fixed}\,\,\,\, s\, , \end{equation} with \begin{equation} \label{III29} \Delta_\infty (s)= \int_0^{\infty} \hbox{d} \xi \exp \left(-s\, \xi -\xi^3/3 \right)\, . \end{equation} Therefore, when $\epsilon \to \infty$, $z^{\pm}$ behave as $z^{\pm} \sim \epsilon^{2/3}s_\infty^\pm$, where $s_\infty^\pm$ are the zeros of $\Delta_\infty (s)$ closest to the imaginary axis. The corresponding large-$\epsilon$ asymptotic behaviour of the damping factor $\lambda(\epsilon)$ is \begin{equation} \label{III28bis} \lambda(\epsilon)=-\Re(s_\infty^\pm)\;\epsilon^{2/3} + o(\epsilon^{2/3}) \end{equation} with $\Re(s_\infty^\pm) \simeq -2.726$, while the frequency $\omega(\epsilon)$ diverges as $\Im(s_\infty^+)\;\epsilon^{2/3}$ with $ \Im(s_\infty^+) \simeq 6.260$. Notice that the relaxation time $\lambda^{-1}(\epsilon)$ goes to zero as $\epsilon^{-2/3}$, like the average time between collisions $\tau_{\text{\tiny coll}} \sim \langle v \rangle_{\text{\tiny st}}/a$ used in our simple heuristic derivation of the $\epsilon$-dependence of the stationary current in the strong field limit. In Fig.~\ref{AP09e}, we draw the damping factor $\lambda(\epsilon)$ as a function of $\epsilon$. \begin{figure} \includegraphics[width=0.9\textwidth]{AP09e.eps} \caption{\label{AP09e} Damping factor $\lambda(\epsilon )$ as a function of $\epsilon$. The dashed and dotted lines represent the asymptotic behaviours (\ref{III26ter}) and (\ref{III28bis}) at small and large $\epsilon$ respectively.} \end{figure} \subsection{Relaxation of density in position space} In Appendix~\ref{A} we derive an explicit formula for the zeroth moment $\tilde{M}_{0}(k;z)$ of the distribution $\tilde{F}(k,w;z)$ which contains all information on the evolution of the spatial density of the propagating particle. The formula (\ref{C8}) clearly reveals the presence of a hydrodynamic pole in $\tilde{M}_{0}(k;z)$, namely the root of equation \begin{equation} \label{D1} z + (k^2 + i\epsilon k)\; U(k;z) = 0 \end{equation} where \begin{equation} \label{U} U(k;z) = \frac{A_{11}(k;z)A_{00}(k;z)-A_{10}(k;z)A_{01}(k;z)}{\epsilon A_{00}(k;z)+ 2[A_{11}(k;z)A_{00}(k;z)-A_{10}(k;z)A_{01}(k;z)]} \; . \end{equation} \bigskip If we consider the small-$k$ limit and if we assume the asymptotic form \begin{equation} \label{D2} z_{\rm{hy}}(k) = -ic k - D(\epsilon)\, k^2 + O(k^3) \end{equation} for the hydrodynamic root, we find immediately from equation (\ref{D1}) the formula \begin{equation} \label{D3} c = \epsilon\; U(0;0) \; . \end{equation} This shows that the mode propagates with the average stationary velocity $ \langle w \rangle_{\text{\tiny st}} = \epsilon \, U(0;0) $ derived in expression (\ref{III11}).
\bigskip In order to infer the formula for the diffusion coefficient $D(\epsilon)$, it is necessary to calculate the term linear in the variable $k$ in the expansion of the function $U(k;z)$ at $z=-ick$. Indeed, equation (\ref{D1}) implies the equality \begin{equation} \label{D4} D(\epsilon ) = U(0;0) + i\epsilon \, \frac{\hbox{d}}{\hbox{d} k}U(k;-ick)|_{k=0} \; . \end{equation} Taking into account the structure (\ref{U}) of $U(k;z)$ we find the formula \begin{equation} \label{D5} D(\epsilon ) = \frac{\langle w \rangle_{\text{\tiny st}}}{\epsilon} + \frac{ A_{00}[{A}^{\prime}_{11}A_{00}- {A}^{\prime}_{01}A_{10}]+ A_{01}[ {A}^{\prime}_{00}A_{10} -{A}^{\prime}_{10}A_{00}]}{\Delta^{2}} \end{equation} where all $A_{jl}$ and $\Delta$ are taken at $k=z=0$, and where \begin{equation} \label{D6} {A}^{\prime}_{jl} = i\epsilon \frac{\hbox{d}}{\hbox{d} k}A_{jl}(k;-ick)|_{k=0} \; . \end{equation} A particularly useful representation of the derivative appearing in expression (\ref{D6}) can be deduced from formulae (\ref{S}) and (\ref{C5}) defining the functions $A_{jl}(k;z)$. An integration by parts yields \begin{equation} \label{D7} {A}^{\prime}_{jl}=\int \hbox{d} w \int^{w}_{-\infty} \hbox{d} u\, (u-c) \int^{u}_{-\infty} \hbox{d} v \, w^j\, v^l\exp \{ [S(0,v;0)-S(0,w;0)]/\epsilon \}\Phi(v) \; . \end{equation} It is quite remarkable that equation (\ref{D7}) allows us to establish a relation between the quantities ${A}^{\prime}_{jl} $ and the stationary velocity distribution $G_{\text{\tiny st}}(w)$. Indeed, using equation (\ref{III10}), we readily obtain the equalities \begin{multline} \int \hbox{d} w \int_{-\infty}^{w} \hbox{d} u \exp \{ [S(0,u;0)-S(0,w;0)]/\epsilon \}(u-c)\; G_{\text{\tiny st}}(u) \\ = A_{01}-c A_{00} + \frac{1}{\Delta} [A_{00}{A}^{\prime}_{01} - A_{01}{A}^{\prime}_{00} ] \equiv J_{01} \label{D8a} \end{multline} and \begin{multline} \int \hbox{d} w \int_{-\infty}^{w} \hbox{d} u \exp \{ [S(0,u;0)-S(0,w;0)]/ \epsilon \}\,w\, (u-c)\; G_{\text{\tiny st}}(u) \\ = A_{11}-c A_{10} + \frac{1}{\Delta} [A_{00}{A}^{\prime}_{11} - A_{01}{A}^{\prime}_{10} ]\equiv J_{11} \; . \label{D9} \end{multline} Then, we find that the linear combination $(A_{00}J_{11}-A_{10}J_{01})$ of the integrals $J_{11}$ and $J_{01}$ reduces to \begin{equation} \label{D10} A_{11}A_{00}-A_{10}A_{01} + \frac{1}{\Delta} \left\{ A_{00}[A_{00}{A}^{\prime}_{11} - A_{01}{A}^{\prime}_{10}]- A_{10}[A_{00}{A}^{\prime}_{01} - A_{01}{A}^{\prime}_{00} ] \right\} \; . \end{equation} The comparison of that expression with equation (\ref{D5}) leads to the compact final result \begin{equation} \label{D11} D(\epsilon ) = \frac{A_{00}J_{11}-A_{10}J_{01}}{\Delta} \; . \end{equation} The above formula involves, \textsl{via} the coefficients $J_{11}$ and $J_{01}$, averages over the stationary velocity distribution. In fact, we show in Appendix~\ref{B} that expression (\ref{D11}) follows by extending, to the present out-of-equilibrium stationary state, the familiar Green-Kubo relation between the diffusion coefficient and the velocity fluctuations. That important fact is one of the main observations of the present study. \bigskip When $\epsilon \to 0$, the behaviour of $D(\epsilon)$ is easily inferred by inserting the small-$\epsilon$ expansion (\ref{III15}) of the stationary velocity distribution $G_{\text{\tiny st}}(w)$ into formula (\ref{D11}). We find that $D(\epsilon)$ goes to the conductivity $\sigma$ (\ref{III17}) as quoted above, with a negative $\epsilon^2$-correction.
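\bigskip As a simple numerical cross-check of that limit, the value $\sigma \simeq 0.2039$ quoted in the caption of Fig.~\ref{AP09c} can be recovered directly from (\ref{III17}); a minimal sketch (in Python with \texttt{scipy}, again a tooling assumption on our part):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# dimensionless Maxwell distribution Phi(w)
Phi = lambda w: np.exp(-w**2 / 2.0) / np.sqrt(2.0 * np.pi)

# b = [1 + 2 int dw w^2/(1+w^2) Phi(w)]^(-1) and sigma = (1 - b)/2
I, _ = quad(lambda w: w**2 / (1.0 + w**2) * Phi(w), -np.inf, np.inf)
b = 1.0 / (1.0 + 2.0 * I)
print(0.5 * (1.0 - b))  # ~0.2039
\end{verbatim}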
When $\epsilon \to \infty$, we can use the large-$\epsilon$ form (\ref{III19}) of $G_{\text{\tiny st}}(w)$ for evaluating the coefficients $J_{11}$ and $ J_{01}$. Using also the corresponding behaviours of the coefficients $A_{00}$ and $A_{10}$, we eventually obtain that $D(\epsilon)$ goes to the finite value \begin{equation} \label{D12} D_\infty = \frac{\Gamma^3(1/3)-9\Gamma(1/3)\Gamma(2/3)+6\Gamma^3(2/3)}{2 \Gamma^3(1/3)} \simeq 0.0384 \; . \end{equation} The external field dependence of the diffusion coefficient $D(\epsilon)$ is shown in Fig.~\ref{AP09d}. \bigskip The expansion (\ref{D2}) of $z_{\rm{hy}}(k)$ can be pursued beyond the $k^2$-diffusion term, by expanding the function $U(k;z)$ as a double entire series with respect to $z$ and $k$. According to the integral expression of the functions $A_{jl}(k;z)$ derived in Appendix~\ref{C}, all coefficients of those double series are finite. This implies that the hydrodynamic root $z_{\rm{hy}}(k)$ of equation (\ref{D1}) can be formally represented by an entire series in $k$, namely \[ z_{\rm{hy}}(k)=\sum_{n=1}^\infty \alpha_n k^n \; ,\] with $\alpha_1=-ic$ and $\alpha_2=-D(\epsilon)$. The coefficient $\alpha_n$ ($n \geq 3$) can be straightforwardly computed once the lower-order coefficients $\alpha_p$ with $1 \leq p \leq n-1$ have been determined, and that calculation shows that all the $\alpha_n$ are finite. Therefore, and similarly to what happens in the Maxwell case, only positive integer powers of $k$ appear in the small-$k$ expansion of $z_{\rm{hy}}(k)$. However, we are not able to determine the radius of convergence of that expansion, so we cannot draw conclusions about the analyticity of the function $z_{\rm{hy}}(k)$. We notice that, contrary to the Maxwell case, the integrals defining $A_{jl}(k;z)$ remain well-defined for any complex value of $k$, as soon as $\epsilon \neq 0$ (see Appendix~\ref{C}). This suggests that $z_{\rm{hy}}(k)$ might be an analytic function of $k$ at $k=0$, except for $\epsilon = 0$, in which case $k=0$ should be a singular point. \begin{figure} \includegraphics[width=0.9\textwidth]{AP09d.eps} \caption{\label{AP09d} Diffusion coefficient $D(\epsilon )$ as a function of $\epsilon$. The dotted line represents the constant asymptotic value $D_{\infty}$.} \end{figure} \section{Concluding comments} The idea of this work was to perform a detailed study of the approach to an out-of-equilibrium stationary state, by considering systems for which analytic solutions can be derived. To this end we solved, within Boltzmann's kinetic theory, the one-dimensional initial value problem for the distribution of a particle accelerated by a constant external field and suffering elastic collisions with thermalized bath particles. Our exact results for the Maxwell model and for the very hard particle model support the general picture mentioned in the Introduction: \begin{itemize} \item a uniform exponentially fast relaxation of the velocity distribution \item diffusive spreading in space in the reference system moving with stationary flow \item equality between the diffusion coefficient appearing in the hydrodynamic mode and the one given by the generalized Green-Kubo formula \end{itemize} \bigskip Although both models display the same phenomena listed above, the variations of the respective quantities of interest with respect to $\epsilon$ are different. First, we notice that, as far as deformations of the equilibrium Maxwell distribution are concerned, the external field is much less efficient for very hard particles.
This is well illustrated by comparing figures \ref{AP09a} and \ref{AP09b}: for the Maxwell system, a significant deformation of $\Phi$ is found for $\epsilon=5$, while for the very hard particle model a similar deformation is observed only for $\epsilon=100$. This can be easily interpreted as follows. The collision frequency for very hard particles becomes much larger than its Maxwell gas counterpart when the external field increases, so it costs more energy to maintain a stationary distribution far from the equilibrium one. That mechanism also explains various related observations. For instance, the large-velocity behaviour of $G_{\text{\tiny st}}(w)$ is identical to the equilibrium Gaussian for very hard particles, while it takes an exponential form in the Maxwell gas. Also, the average current $\langle w \rangle_{\text{\tiny st}}$ increases more slowly when $\epsilon \to \infty$ for very hard particles, and the corresponding relaxation time $\lambda^{-1}(\epsilon)$ vanishes instead of remaining constant as for the Maxwell gas. \bigskip Among the above phenomena, the emergence of a symmetric diffusion process in the moving reference frame is quite remarkable. In such a frame, there is some kind of cancellation between the action of the external field and the effects of collisions induced by the counterflow of bath particles with velocity $u_{\text{\tiny bath}}^{\ast}=- \langle v \rangle_{\text{\tiny st}}$. The corresponding diffusion coefficient $D(\epsilon)$ increases with $\epsilon$ for the Maxwell gas (case $\gamma=0$), while it decreases and saturates to a finite value for very hard particles (case $\gamma=2$). Therefore, beyond the previous cancellation, it seems that the large number of collisions for $\gamma=2$ shrinks equilibrium fluctuations. On the contrary, for $\gamma=0$, since $D(\epsilon)$ diverges when $\epsilon \to \infty$, the residual effect of collisions in the reference frame seems to vanish and particles tend to behave as if they were free. \bigskip We expect that the same qualitative picture should be valid in the hard rod case which corresponds to the intermediate value $\gamma = 1$ of the exponent $\gamma$ in equation (\ref{I1}). The quantitative behaviours should interpolate between those described for $\gamma=0$ and $\gamma=2$. For instance, the stationary distribution $G_{\text{\tiny st}}(w)$ computed in Ref.~\cite{GP86} displays a large-velocity asymptotic behaviour which is indeed intermediate between those derived here for $\gamma=0$ and $\gamma=2$. Also, the average current $\langle v \rangle_{\text{\tiny st}}$ is of order $\epsilon^{1/2}$ for $\epsilon$ large, which lies between the $\epsilon$- and $\epsilon^{1/3}$-behaviours found for $\gamma=0$ and $\gamma=2$ respectively. Notice that the $\epsilon^{1/3}$-behaviour for $\gamma=2$ can be retrieved within a self-consistent argument, which uses in an essential way the existence of the velocity scale related to the particle-particle interaction. Whereas the thermal velocity scale becomes irrelevant when $\epsilon\to\infty$, the interaction scale remains important. In the case of hard rods such an interaction scale does not appear in the kinetic equation, and the unique combination of parameters having the dimension of velocity is $\sqrt{a/\rho}$, which does provide a different strong-field behaviour of $\langle v \rangle_{\text{\tiny st}}$, of order $\epsilon^{1/2}$.
\section{Introduction} The unprecedented angular resolution of the X-ray telescope {\it Chandra} led to the discovery of several new phenomena within various astrophysical systems. One of these is the cold front phenomenon detected in galaxy clusters. Initially observed in merging clusters, the prototypes are found in A2142 \citep{Maxim:2000}, A3667 \citep{Vikhlinin1:2001, Vikhlinin2:2001, Vikhlinin:2002} and 1E0657-56 \citep{Maxim:2002}. All these systems feature very sharp discontinuities in their X-ray images where the drop of the surface brightness (and correspondingly of the gas density) is accompanied by a jump in the gas temperature, with the denser region colder than the more rarefied region, unlike shock fronts. For this reason, these features have been dubbed ``cold fronts'' \citep{Vikhlinin1:2001}. The density and the temperature discontinuities have similar amplitude so that pressure is approximately continuous across the front. Cold fronts were initially interpreted as the edge of the cool core of a merging substructure which has survived the merger and is rapidly moving through the ambient gas \citep{Maxim:2000}. Cold fronts have subsequently been detected in the cores of some relaxed clusters (e.g. A1795: \citealp{Maxim:2001}; RX J1720.1+2638: \citealp{Mazzotta:2001}; A496: \citealp{Dupke:2003}; 2A 0335+096: \citealp{Mazzotta:2003}) and to date a large number of relaxed systems are known to host one. Since the presence of cold fronts in cool cores provides evidence of gas motions and possibly of departures from hydrostatic equilibrium, understanding the nature of such a widespread phenomenon is mandatory to characterize the dynamics of galaxy clusters. High-resolution hydrodynamical simulations are, at present, the main technique to investigate the mechanisms generating cold fronts. Indeed, cold front features could already be detected in simulations published prior to the launch of {\it Chandra} \citep{Roettiger:1997, Roettiger:1998}. After the discovery of cold fronts, several hydrodynamical simulations have been developed to model the effect of ram-pressure stripping in a merger event and the formation of the cold front feature in merging clusters \citep{Heinz:2003, Nagai:2003,Mathis:2005}. Several simulations have also been employed to understand the origin of cold fronts in relaxed non-merging clusters \citep[e.g.][]{Churazov:2003, TH:2005, AM06}. The emerging picture (\citealp{AM06}; see also \citealp{MM_review:2007} for a review) is that cold fronts arise during major merging events through ram-pressure stripping mechanisms which induce the discontinuity between the dense merging subcluster and the less dense surrounding ICM. In relaxed clusters, the cold front features are induced by minor merger events which produce a disturbance of the gas in the core, displace it from the center of the potential well and decouple it from the underlying dark matter through ram-pressure. Subsequently, a sloshing mechanism sets in, generating cold fronts. The necessary condition for triggering this mechanism is the presence of a steep entropy profile for the central gas, a condition which is generally fulfilled at the center of relaxed cool core clusters. Cold fronts are at present observed in a large number of galaxy clusters. \citet{Maxim:2003} analyzed a sample of 37 relaxed clusters observed with {\it Chandra}, showing that cold fronts are present in the majority of the cores of relaxed clusters.
Recently, \citet{Owers:2009} characterized a sample of nine cold fronts with quantitative measurements of the thermodynamic discontinuities across the edges and associated the presence of a cold front with evidence of merger activity.\\ While many objects have been studied in detail to understand the nature of cold fronts, we still lack a systematic investigation of the characteristics of these phenomena and of their host clusters through a large sample. The aim of this paper is to perform a systematic search for cold fronts in a representative sample and to investigate the properties of their parent clusters. Such a study is necessary to inspect the nature and origin of cold fronts and ultimately to test the reliability of the picture emerging from the simulations. The sample is selected starting from the B55 flux limited sample by \citet{Edge:B55}. We use for our analysis {\it XMM-Newton} data. In spite of its limited spatial resolution with respect to {\it Chandra}, {\it XMM-Newton} has the positive attribute of having a large field of view, allowing a significant coverage of most of the clusters. In most cases, the clusters are inside the EPIC field of view up to a radius $\simgt 0.3 r_{180}$, allowing the characterization of the main thermodynamical properties well beyond the core regions. Additionally, the {\it XMM-Newton} large collecting area provides good statistics for a large number of objects. Among the several physical properties characterizing the intracluster medium, we focus our attention on entropy. Entropy plays a key role in describing the thermodynamical state of the ICM: its distribution is a signature of the thermodynamical history of the cluster, and it is also intimately related to the non-gravitational processes which may have occurred \citep{Voit_entropy:2002, Voit_entropy:2005}. Moreover, as previously stressed, simulations highlight how a steep gas entropy profile is a necessary condition for the onset of the sloshing mechanism and therefore for the presence of cold fronts in cool core clusters \citep{AM06}. The structure of the paper is the following. In \S\ \ref{sec:CF_sample} we describe the sample of clusters that we have analyzed and in \S\ \ref{sec:data_red} we provide details about the data reduction. Then we describe (see \S\ \ref{sec:search_CF}) the algorithm used for the systematic search for cold fronts in the cluster sample. We present our results about the occurrence and the origin of cold fronts in \S\ \ref{sec:occurr} and we discuss them in \S\ \ref{sec:disc}. We summarize our findings in \S\ \ref{sec:summary}. We adopt a $\Lambda$CDM cosmology with $\Omega_{\rm{m}} = 0.3$, $\Omega_\Lambda = 0.7$, and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$. \section{The sample} \label{sec:CF_sample} We use as a reference starting sample the flux limited sample by \cite{Edge:B55}. It includes 55 objects with flux $f_x > 1.7 \cdot 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ in the 2 - 10 keV energy band and is 90\% complete. All the clusters are located at redshift $z <0.2$.
\begin{table*} \caption{The list of the 45 clusters belonging to our selected sample.} \label{table:sample} \centering \begin{tabular} {|l|c||l|c| } \hline {\bf cluster name} & {\bf redshift} & {\bf cluster name} & {\bf redshift} \\ \hline Centaurus & 0.0114 & A85 & 0.0551 \\ A1060 & 0.0126 & A3532 & 0.0554 \\ A262 & 0.0163 & A3667 & 0.0556 \\ AWM7 & 0.0172 & A2319 & 0.0557 \\ Perseus & 0.0176 & Cygnus A & 0.0561 \\ A1367 & 0.0220 & A2256 & 0.0581 \\ A4038 & 0.0300 & A3266 & 0.0589 \\ A2199 & 0.0302 & A3158 & 0.0597 \\ A496 & 0.0329 & A1795 & 0.0625 \\ 2A0335 & 0.0349 & A399 & 0.0718 \\ A2063 & 0.0349 & A2065 & 0.0726 \\ A2052 & 0.0355 & A401 & 0.0737 \\ A576 & 0.0389 & A3112 & 0.0750 \\ A3571 & 0.0391 & A2029 & 0.0773 \\ A119 & 0.0442 & A2255 & 0.0806 \\ MKW3s & 0.0450 & A1650 & 0.0838 \\ A1644 & 0.0473 & A1651 & 0.0849 \\ A4059 & 0.0475 & A2597 & 0.0852 \\ A3558 & 0.0480 & A478 & 0.0881 \\ A3562 & 0.0490 & PKS0745 & 0.1028 \\ Triang. Aus. & 0.0510 & A2204 & 0.1523 \\ Hydra A & 0.0538 & A1689 & 0.1832 \\ A754 & 0.0542 & & \\ \hline \end{tabular} \end{table*} Starting from this sample, we select all the clusters available in the {\it XMM-Newton} public archive. At the time of writing, Ophiuchus was not publicly available and the objects A2244, A644 had not been observed with {\it XMM-Newton}. For A754 we use the long observations we obtained from AO7 (P.I.: A. Leccardi). Observations for 3C129, A2142, A2147, A1736, A3391 are highly affected by soft protons: since their final good exposure time after cleaning procedures is generally below 5 ks for MOS and $<0.5$ ks for {\it pn}, these clusters have been excised from the sample. For clusters having more than one observation, we eliminate those observations with high soft proton contamination. We also exclude Virgo and Coma: their extension does not allow a significant coverage within the EPIC field of view. Our final sample is reduced to 45 objects. In Table \ref{table:sample} we list the clusters belonging to our sample. The excision of a number of clusters invalidates the completeness of the sample. To verify if the final sample is representative of the cluster population, we inspect the distribution of the main cluster observables. We build the histograms for redshift, X-ray luminosity and temperature (see Fig. \ref{fig: compl}) both for the starting sample (light grey) and for the excluded clusters (dark grey). For X-ray luminosities ($L_X$) we refer to \citet{Reiprich:2002}, while temperatures are taken from \citet{Peres:1998}. The histograms show that excluded clusters do not introduce any obvious bias. We conclude that, even if the final adopted sample is not complete, it is representative of the cluster population. \begin{figure} \centering \includegraphics[angle=90,width=9.5 truecm] {completezza2.ps} \caption{Distributions of redshifts (a), X-ray luminosities (b) and temperatures (c) for the starting cluster sample \citep{Edge:B55,Peres:1998} (light grey) and for the excised objects (dark grey). The rejection of these objects does not introduce any obvious bias in the remaining subsample.} \label{fig: compl}% \end{figure} \section{Data reduction} \label{sec:data_red} Observation Data Files (ODF) were retrieved from the {\it XMM-Newton} archive and processed in a standard way with the Science Analysis System (SAS) v6.1. We apply the standard selection \verb|#XMMEA_EM| to the MOS event list (\verb|#XMMEA_EP| for {\it pn}) to automatically filter out artefact events.
The soft proton cleaning was performed using a double filtering process \citep[see][]{Leccardi_temp:2008, Pratt_arnaud_3sigma}. The adoption of a threshold level and the exclusion of light curve intervals above the selected threshold allows the rejection of most flare events. In practice, we extract the light curve in the 10 - 12 keV (10 - 13 keV) energy band for MOS ({\it pn}) using 100 second bins. We apply a threshold of 0.20 cts s$^{-1}$ for MOS and 0.50 cts s$^{-1}$ for {\it pn} to generate the filtered event file. However, softer flares may exist such that their contribution above 10~keV is negligible. To remove this flare contamination, we apply the 3$\sigma$ clipping method \citep[see][]{Marty_3sigma:2003}: we extract a histogram of the light curve in the 2-5 keV band and fit this histogram with a Gaussian distribution. Since most flares have already been rejected in the previous step, the fit is usually very good. We then apply a threshold at the $+3 \sigma$ level and generate the filtered event file. After soft proton cleaning, we filter the event file according to \verb|FLAG| (\verb|FLAG|==0) and \verb|PATTERN| criteria (\verb|PATTERN|$\leq$12). To systematically search for surface brightness discontinuities, we build, for each cluster, the EPIC flux map: MOS1 + MOS2 + {\it pn} ({\it pn} images are corrected for out of time events). This flux image is computed in the 0.4 - 2 keV band following a procedure similar to the one described in \citet[see also \citealp{Rossetti_A3558:2007}]{Baldi_flux_map:2002}. We sum up the MOS1, MOS2 and {\it pn} source images to obtain the total source map, $S_{EPIC}$, and we compute the EPIC source exposure map, $exp_{EPIC}$, by summing the source exposure maps of each detector. The EPIC count rate image is then defined as $cr_{EPIC} = S_{EPIC}/exp_{EPIC}$. Count rates are then converted to flux through the total conversion factor $cf_{EPIC}$ derived following the formula: $${{exp_{EPIC}}\over{cf_{EPIC}}} = {exp_{MOS1}\over{cf_{MOS1}}} + {exp_{MOS2}\over {cf_{MOS2}}} + {exp_{pn} \over { cf_{pn}}} $$ where $cf_{MOS1}, cf_{MOS2}, cf_{pn}$ and $exp_{MOS1}, exp_{MOS2},exp_{pn}$ are the conversion factors and the exposure maps of the three instruments. The EPIC source flux image is obtained using the relation $fx_{EPIC} = cf_{EPIC} \cdot cr_{EPIC}$. To remove the quiescent particle induced background and the cosmic background component we need EPIC background flux images. We use a large collection of background data, such as long observations of blank sky fields. We use 9 blank fields (for a total exposure time of $\sim$ 300 ks) selected by our own group \citep{Leccardi_temp:2008}. We compute the EPIC background flux image from the MOS1, MOS2 and {\it pn} background images and exposure maps using the same method applied to the EPIC source flux image. By subtracting the EPIC background flux image from the source flux image we derive an EPIC net flux image in units of $10^{-15}$ erg cm$^{-2}$s$^{-1}$pixel$^{-1}$ (one pixel is $8.5 \times 8.5$ arcsec$^2$). The net maps are used for the construction of the surface brightness profiles. To search for cold fronts, we also need the temperature maps of all the clusters of our sample. We adopt a modified version of the ``adaptive binning + Broad Band Fitting'' algorithm described in \citet{Rossetti_A3558:2007}, where we have replaced the \citet{Cappellari:2003} adaptive binning algorithm with its modified version by \citet{Diehl:2006}.
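For concreteness, the flux-map combination defined earlier in this section reduces to a few array operations. The following sketch (in Python with \texttt{numpy}; the function and array names are ours and purely illustrative) mirrors the steps described above:
\begin{verbatim}
import numpy as np

def epic_flux_map(S, expo, cf):
    # S    : MOS1, MOS2 and pn source images (2D count arrays)
    # expo : the corresponding exposure maps
    # cf   : the corresponding count-rate-to-flux conversion factors
    S_epic = sum(S)                               # total source map
    exp_epic = sum(expo)                          # EPIC exposure map
    denom = sum(e / c for e, c in zip(expo, cf))  # = exp_EPIC / cf_EPIC
    flux = np.zeros_like(exp_epic, dtype=float)
    good = (exp_epic > 0) & (denom > 0)
    # fx_EPIC = cf_EPIC * cr_EPIC with cr_EPIC = S_EPIC / exp_EPIC
    flux[good] = (exp_epic[good] / denom[good]) \
        * (S_epic[good] / exp_epic[good])
    return flux  # in the units fixed by cf, e.g. 1e-15 erg/cm^2/s/pixel
\end{verbatim}
The same combination is applied to the blank-field data, and the resulting background flux image is subtracted pixel by pixel to obtain the net map.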
\section{Systematic search for cold fronts} \label{sec:search_CF} \subsection{The detection algorithm} \label{sub:detect} In this section, we identify a suitable method to detect cold fronts, well aware that any general selection criterion will have some limitations and can introduce spurious effects, so that some cold fronts may be missed and some features may be classified as cold fronts although they are not. We will address this point in detail in \S\ \ref{sub:notes} and in \S\ \ref{sub:occ_general}. A cold front is characterized by the presence of a sharp decrease in the surface brightness ($S\!B$) profile, typically accompanied by a rise of the gas temperature. We initially perform a systematic search for surface brightness discontinuities in the cluster sample and generate a list of candidate cold fronts. Subsequently, we examine the gas temperature behavior across the discontinuity, in order to rule out the hypothesis that the detected discontinuity is a shock front. We developed an algorithm to perform the systematic search for surface brightness discontinuities. We start from the EPIC flux maps (see \S\ \ref{sec:data_red}) that we have built for our sample and we divide each cluster map into 30$^{\circ}$\ wide sectors centered on the $S\!B$ peak. In most cases, we detect $S\!B$ discontinuities in several consecutive sectors, suggesting that most cold fronts have an angular extension significantly larger than 30$^{\circ}$. Consequently, unless the statistics is particularly low, fixed angular ranges can be used to find discontinuities. A possible bias in the detectability of cold fronts due to the width of the angular sectors will be discussed in \S\ \ref{sec:occurr}. For clusters with low statistics or with a possible cold front located near the cluster center (e.g. A262, A1795, A2199), we use an ``ad hoc'' choice of the sectors (45$^{\circ}$\ wide or even larger) to reveal the discontinuity in the surface brightness profiles. We build the surface brightness profile for each sector using the cluster X-ray emission peak as center. Sometimes, for merging or irregular clusters, a visual inspection of the images may suggest a different center, better suited to detect a sharp decrease of the surface brightness. In Fig. \ref{fig:center} we show the surface brightness images of A2319 and A1367 as an example. The black circles mark the centers we adopted to build the profiles. \begin{figure} \centering \includegraphics[angle=0,width=8truecm]{A2319_center_lr.ps} \includegraphics[angle=0,width=8truecm]{A1367_center_lr.ps} \caption{Surface brightness map for A2319 (top panel) and A1367 (bottom panel). The maps have been smoothed for a better visual inspection. The black circles show the position of the centers we chose for the radial profiles. They roughly match the centers of curvature of the candidate discontinuity (SE-E for A2319 and S for A1367) and do not match the X-ray emission peak. This choice allows a better characterization of the jump in the surface brightness profiles. Full resolution figures are available at: http://www.iasf-milano.inaf.it/$\sim$simona/pub/coldfronts/ghizzardi.pdf} \label{fig:center} \end{figure} For each cluster a set of profiles is obtained. While for some clusters (e.g. Centaurus, A496, 2A0335+096) the presence of a surface brightness discontinuity is apparent, in other clusters (e.g. A262) the surface brightness profile is not as sharp (see Fig. \ref{fig:discont}).
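To make the first step of the procedure concrete, the extraction of a profile in a given angular sector amounts to a radial binning of the flux map. The sketch below (in Python with \texttt{numpy}; names, conventions and binning are hypothetical) also includes the log-log slope estimate that feeds the comparison described next:
\begin{verbatim}
import numpy as np

def sector_profile(flux, cx, cy, ang_min, ang_max, rbins):
    # flux   : 2D EPIC net flux image; (cx, cy): adopted center (pixels)
    # ang_*  : sector limits in degrees (sector assumed not to wrap)
    # rbins  : radial bin edges in pixels
    y, x = np.indices(flux.shape)
    r = np.hypot(x - cx, y - cy)
    ang = np.degrees(np.arctan2(y - cy, x - cx)) % 360.0
    in_sector = (ang >= ang_min) & (ang < ang_max)
    sb = [flux[in_sector & (r >= r1) & (r < r2)].mean()
          for r1, r2 in zip(rbins[:-1], rbins[1:])]
    return np.asarray(sb)

def slope(rmid, sb):
    # power-law index alpha for SB ~ r^(-alpha), from a log-log fit
    return -np.polyfit(np.log(rmid), np.log(sb), 1)[0]
\end{verbatim}
The quantity $\Delta \alpha$ introduced below is then simply the difference between two such slopes, one fitted inside and one outside the putative discontinuity.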
Projection effects and resolution limits smooth the profiles: the surface brightness discontinuity will appear as a steepening of the profile in the radial range around the jump radius. In the approximation where profiles are described by power laws, the slopes measure the steepness of the profile. We use the power law slopes to characterize the surface brightness discontinuities. We identify for each cluster the possible discontinuity region with a visual inspection of the profile and of the image. We mark this region with the letter D and we set $S\!B \propto r^{-\alpha_D}$ in the corresponding radial range (see the upper right panel of Fig. \ref{fig:discont} as an example). We compare the slope we find in this region with the slope obtained fitting the profile with the power law $S\!B \propto r^{-\alpha_{ND}}$ in the nearby (inner and/or outer) region or in other sectors of the cluster where no irregularities in the surface brightness profiles are present. The difference of the slopes $\Delta \alpha = \alpha_D - \alpha_{ND} $ quantifies the steepening of the profile. We require that $\Delta \alpha \ge 0.4$ to classify a region as discontinuous. The choice of this threshold relies on phenomenological considerations. All the jumps we measured have $\Delta \alpha$ values well above 0.5, while for regions without discontinuities $\Delta \alpha$ values are below 0.2. Examples of different surface brightness profiles are reported in Fig. \ref{fig:discont}. Centaurus cluster profiles (top left panel in the figure) steepen significantly between 50$^{\prime\prime}$\ -- 100$^{\prime\prime}$\ in the W-NW sector and between 170$^{\prime\prime}$\ -- 210$^{\prime\prime}$\ in the SE sector. In A262 and in A1060 (top right panel and bottom left panel respectively) the discontinuities are not as apparent as in Centaurus. Finally, the profile for AWM7 (bottom right panel) does not show any irregularity. We fit all the profiles in the different radial ranges with power laws. In Table \ref{table:delta_s} we report, for each region of the four clusters shown in Fig. \ref{fig:discont}, the ranges used for the fits, the slopes of the best fits, and the associated $\Delta \alpha$ values. \begin{figure*} \centering \includegraphics[angle=90,width=15truecm] {sb_discont.ps} \caption{Surface brightness profiles for some sectors of four clusters of our sample. For the Centaurus cluster we plot two interesting sectors: SE (filled circles) and W-NW (open diamonds) of the cluster core. The figure shows that profiles may have different behaviors. While discontinuities are apparent in some clusters (e.g. Centaurus cluster), in others they are not as sharp. Some systems (e.g. AWM7) have a regular profile. In the upper right panel (A262) we plot the ranges used to fit the profile with power laws. The flag D marks the discontinuity region and ND marks the adjacent (inner and outer) regions. In all the panels the solid lines represent power law best fits (see text and Table \protect\ref{table:delta_s} for details). In these plots, surface brightness is given in $10^{-15}$ erg cm$^{-2}$ s$^{-1}$arcsec$^{-2}$ units.} \label{fig:discont}% \end{figure*} \begin{table*} \caption{Cold front candidates for the four clusters plotted in Fig. \ref{fig:discont}.
We report the cluster name, the position angle (measured anticlockwise from East) of the candidate cold front, the different radial ranges considered, the slopes of the power laws that we find in the given radial range and the $\Delta \alpha$ values for the discontinuities in the surface brightness profiles. In the last column, a flag indicates which features are classified as discontinuities.} \label{table:delta_s} \centering \begin{tabular}{l c c c c c c } \hline\hline cluster & position angle & radial range & $\alpha$ & ranges for $\Delta \alpha$ & $\Delta \alpha$ & discont. \\ & (deg) & (arcsec) & & & & \\ \hline Centaurus (internal)& [120,150] & 10 - 50 & 0.41 & & & \\ Centaurus (external) & [120,150] & 100 - 240 & 0.84 & & & \\ Centaurus (possib disc) & [120,150] & 50 - 100 & 2.59 & $\alpha_{[50-100]}-\alpha_{[10-50]}$ & 2.19 & $\surd$ \\ Centaurus (possib disc) & [120,150] & 50 - 100 & 2.59 & $\alpha_{[50-100]}-\alpha_{[100-240]}$ & 1.75 & $\surd$ \\ \hline Centaurus (internal)& [30,60] & 10 - 170 & 0.88 & & & \\ Centaurus (external) & [30,60] & 210 - 600 & 1.16 & & & \\ Centaurus (possib disc) & [30,60] & 170 - 210 & 2.52 & $\alpha_{[170-210]}-\alpha_{[210-600]}$ & 1.64 & $\surd$ \\ Centaurus (possib disc) & [30,60] & 170 - 210 & 2.52 & $\alpha_{[170-210]}-\alpha_{[10-170]}$ & 1.36 & $\surd$ \\ \hline A262 (internal) & [-45,0] & 10 - 38 & 0.84 & & & \\ A262 (external) & [-45,0] & 70 - 145 & 0.72 & & & \\ A262 (possib disc) & [-45,0] & 38 - 70 & 1.66 & $\alpha_{[38-70]}-\alpha_{[10-38]} $ & 0.82 & $\surd$ \\ A262 (possib disc) & [-45,0] & 38 - 70 & 1.66 & $\alpha_{[38-70]}-\alpha_{[70-145]}$ & 0.94 & $\surd$ \\ \hline A1060 (possib disc) & [120,150] & 80 - 135 & 0.94 & & & \\ A1060 (external) & [120,150] & 135 - 250 & 0.92 & $\alpha_{[80-135]}-\alpha_{[135-250]}$ & 0.02 & $\times$ \\ \hline AWM7 & [-90,-60] & 20 - 200 & 0.76 & & 0.00 & $\times$ \\ \hline \end{tabular} \end{table*} For the Centaurus cluster, we consider two interesting sectors, SE and W-NW of the cluster core. The slopes we find for the W-NW sector (specifically, 120$^{\circ}$\ -- 150$^{\circ}$ , where the angles are measured in an anticlockwise direction from East) are $0.41$, $2.59$ and $0.84$ for the radial ranges: 10$^{\prime\prime}$\ -- 50$^{\prime\prime}$, 50$^{\prime\prime}$\ -- 100$^{\prime\prime}$, 100$^{\prime\prime}$\ -- 240$^{\prime\prime}$\ respectively. The $\Delta \alpha$ for the intermediate region is 2.19 with respect to the innermost region and 1.75 with respect to the outer one. This is obviously classified as a discontinuity and is a candidate cold front. A similar analysis allows us to assess that there is a discontinuity in the 170$^{\prime\prime}$\ -- 210$^{\prime\prime}$\ radial range in the 30$^{\circ}$\ -- 60$^{\circ}$\ sector. The quality of the Centaurus cluster profiles is extremely high thanks to its proximity and to the long observations (170 ks in the public archive at the time of writing), so that, even at the margins of this cold front, which spans the range from -60$^{\circ}$\ to 60$^{\circ}$, the discontinuity is still visible in the profile. The $\Delta \alpha$ we find for this case, 1.60, is smaller than in the previous case, but still high. In the top right panel, we show the profile for the SW-W region (-45$^{\circ}$\ -- $\,\,$0$^{\circ}$) in A262, where a discontinuity at $\sim 60$$^{\prime\prime}$, albeit not very sharp, is identifiable ($\Delta \alpha$ is slightly smaller than 1). In A1060 (bottom left panel) the putative discontinuity is around 2$^\prime$.
As shown in Table \ref{table:delta_s}, the analysis of the slopes provides $\Delta \alpha = 0.02$. As a consequence, this feature is not classified as a discontinuity. However, a different result could be obtained with a slightly different choice of the radial ranges used for the fits. If we restrict the range of the discontinuity region to 110$^{\prime\prime}$\ - 130$^{\prime\prime}$\ (3 points for the fit) and we choose 150$^{\prime\prime}$\ - 250$^{\prime\prime}$\ for the outer region (disregarding the 4 points immediately after the discontinuity where the profile is flat), $\Delta \alpha$ increases to 0.55 and this feature could be considered as a discontinuity. We believe that this choice of the radial range is rather extreme. In addition, we note that the temperature profile shows no variations in the same region. As a general rule, if $\Delta \alpha$ satisfies the discontinuity condition only for an \emph{ad hoc} choice of the radial range, we exclude it from the list of candidate cold fronts. Finally, in the bottom-right panel in Fig. \ref{fig:discont}, AWM7 shows a regular behavior. In the figure we report one region, but AWM7 is regular in all its sectors and all profiles are similar; for this cluster we find no discontinuities. \vskip0.3truecm \noindent The procedure described thus far detects surface brightness discontinuities and provides a list of candidate cold fronts. To upgrade a discontinuity to a cold front, we need to verify the behavior of the temperature profile across the surface brightness jump. To this aim, we build the binned temperature maps (see \S\ \ref{sec:data_red} and \citealt{Rossetti_A3558:2007}) for all the clusters of our sample. From these maps we derive the temperature profiles by plotting all the bins whose barycentre is inside the sector hosting the surface brightness discontinuity. In none of the candidate cold fronts do we observe a sharp decrease in the temperature profile as would be expected for a shock front. Almost all the surface brightness discontinuities that we find feature a sharp gas temperature rise. In some cases the temperature rises smoothly with no jumps. We remark that the cold front feature does not necessarily require a temperature jump, since the thermal pressure of the gas inside the cold front is balanced by the sum of the thermal and ram pressures outside. This can be achieved also with a slow continuous rise of the temperature across the discontinuity. \subsection{Notes on individual clusters} \label{sub:notes} In Table \ref{table:CF_sample}, we list the clusters hosting one or more cold fronts. For each cluster, the table provides the center used to build the $S\!B$ profiles, the radial and azimuthal position of the cold front, and $\Delta \alpha$. The $\Delta \alpha$ reported for each cold front is the mean value obtained averaging over the different sectors hosting the discontinuities. As already remarked, the procedure adopted to classify cold fronts can fail in finding the discontinuities or provide some doubtful cases. Hence, comments are required for some individual systems. \begin{itemize} \item{{\bf A4059:} a visual inspection of the surface brightness map of A4059 hints at a possible cold front in the SW sector $\sim 30$$^{\prime\prime}$\ from the peak, but the discontinuity is barely visible in the profiles and we did not succeed in fitting it with power laws and deriving $\Delta \alpha$. We consider this as an unclassified case.} \item{{\bf A85:} This object has two possible cold fronts.
The first lies in the NW sector $ \sim 80$$^{\prime\prime}$\ from the X-ray peak. In this sector $\Delta \alpha = 0.75$, above our threshold. However, the sector of the cold front is very narrow (30$^{\circ}$) and no discontinuities are detected in the nearby sectors. Table \ref{table:CF_sample} shows that the cold front widths generally range from 60$^{\circ}$\ to 120$^{\circ}$. Such a tiny extension for a cold front is unusual and we prefer to consider this as an unclassified case. Another cold front is present in a small subclump located 8$^\prime$\ south of the cluster and moving north towards the main structure \citep{Durret:98, Kempner:2002}. This is labeled as {\bf A85$^*$} in Table \ref{table:CF_sample} to distinguish it from the unclassified cold front of the main cluster.} \item{{\bf A2052:} the analysis of the discontinuities in A2052 is complex because of the presence of the bright shells surrounding the X-ray cavities \citep{Blanton_A2052:2003, Blanton_A2052:2001}. Remarkably, sharp decrements of the surface brightness profiles are detected just outside the rims, at distances of about 40$^{\prime\prime}$ -50$^{\prime\prime}$\ (rims are at 10$^{\prime\prime}$ -30$^{\prime\prime}$\ from the center). It is difficult to establish whether such sharp drops are real discontinuities or if they are associated with the bright shells. Recent results obtained from a deep {\it Chandra} observation \citep{Blanton_A2052_ripples:2007} show that some cavities and ripples are present in this cluster, similarly to what has been observed in the Perseus cluster and M87 \citep{Fabian_Perseus:2006, Fabian_Perseus:2003, Forman_M87:2007}. Some weak shocks may also be present. The presence of cold fronts is unclear and we consider this cluster as unclassified.} \item{{\bf Hydra A:} in Hydra A, we detect a cold front in the 300$^{\circ}$\ -- 330$^{\circ}$\ sector $\sim 50$$^{\prime\prime}$\ from the core, inside the region of the weak shock which is at $\sim 200$$^{\prime\prime}$\ \citep{McNamara_HyA_shock:2005,Nulsen_HyA_shock:2005}. Similarly to A85, the sector of the cold front is narrow (30$^{\circ}$). The jump is located at the bending of the south radio lobe \citep[visible in the Hydra A radio maps at 1.4 GHz; see for example][]{Lane_HyA_1.4GHz:2004} towards the east, near the SW cavity. In that region the temperature map shows several cold blobs, and one of these produces the temperature rise coincident with the surface brightness steepening, but no clear front in the temperature map is present. The structure of Hydra A is very complex, with a strong interaction between the radio lobes and the ICM gas (\citealp{Nulsen_HyA_radio:2002, McNamara_HyA_radio:2000}; see also \citealp{McNamara_review:2007}). The cluster exhibits a number of cavities in the central regions \citep{Wise_HyA_core:2007}. The discontinuity we find is probably a result of such a complicated morphology and likely it is not a cold front. Even if this region satisfies all the required conditions, we consider this as an unclassified case.} \end{itemize} Since the existence of a cold front in A2052, A4059 and Hydra A cannot be definitively established, we exclude these systems from the sample. \begin{table*} \caption{List of the cold fronts detected in the sample. For each cluster we report the center (RA and Dec) used to build surface brightness profiles, the azimuthal and radial positions of the cold front and the mean value of $\Delta \alpha$.
Bold faced fonts mark clusters having a merger cold front, while italic fonts mark clusters whose merger geometry is not clear and the origin of the cold front is not as obvious (see \S \ref{sec:occur}).} \label{table:CF_sample} \centering \begin{tabular}{l c c c c } \hline\hline Cluster name & center & position angle & jump radius & $\Delta \alpha$ \\ & & (deg) & (arcsec) & (mean value)\\ \hline Centaurus & 12:48:49.173 -41:18:45.65 & [-60 , 60] & 170 - 210 & 1.60\\ Centaurus & 12:48:49.173 -41:18:45.65 & [90 , 210] & 50 - 100 & 1.76\\ A262& 01:52:46.117 36:09:05.79 & [-110, 0] & 38 - 70 & 1.06 \\ A262& 01:52:46.117 36:09:05.79 & [30, 135] & 40 - 50 & 1.40 \\ Perseus & 03:19:48 41:30:40 & [-60, 0] & 250 - 300 & 1.72 \\ Perseus & 03:19:48 41:30:40 & [30, 150] & 140 - 200 & 1.80 \\ A2199 & 16:28:38.193 39:33:02.70 & [-75, -25] & 25 - 30 & 0.50 \\ A496 & 04:33:38.067 -13:15:40.91 & [-120, -30] & 40 - 55 & 0.79\\ A496 & 04:33:38.067 -13:15:40.91 & [-120, -75] & 180 - 280 & 1.33\\ A496 & 04:33:38.067 -13:15:40.91 & [30, 120] & 80 - 100 & 1.86\\ 2A0335+096 & 03:38:40.879 09:58:01.20 & [-120, -30] & 50 - 70 & 1.16 \\ A1644 & 12:57:12.231 -17:24:32.67 & [-180, -90] & 20 - 35 & 1.58\\ A3558 & 13:27:56.989 -31:29:50.00 & [-30, 120] & 80 - 120 & 1.04 \\ A1795 & 13:48:52.879 +26:35:27.80 & [-130 ,-80] & 60 - 70 & 1.14 \\ A2065 & 15:22:29.455 +27:42:23.81 & [-150, -90] & 80 - 100 & 1.26\\ \hline {\it A576} & 07:21:30.495 +55:45:45.32 & [-120, -60] & 80 - 100 & 0.72 \\ {\it A3562} & 13:33:36.766 -31:40:20.45 & [-150, -60] & 60 - 100 & 1.06\\ \hline {\bf A1367} & 11:44:53.5 19:44:19.12 & [-120, -60] & 350 ($\sim$70 from the peak) & 0.84\\ {\bf A754} & 09:09:20.098 -09:40:52.22 & [120, 240]& 80 - 150 & 1.14 \\ {\bf A85} $^*$ & 00:41:42.733 -09:26:33.10 & [0, 90] & 25 - 80 & 1.40 \\ {\bf A3667} & 20:12:41.653 -56:50:52.94 & [-180, -90] & 250 - 280 & 3.68 \\ {\bf A2319}& 19:21:11.097 +43:56:08.00 & [-180, -60] & 100 - 150 ($\sim$160 from the peak) & 1.85\\ {\bf A2256} & 17:02:33.009 +78:38:23.59 & [-135, -90] & 50 - 75 ($\sim$100 from the peak) & 2.00\\ {\bf A3266} & 04:31:13.951 -61:27:26.41 & [-90, -30] & 60 - 100 & 0.87\\ \hline \hline \end{tabular} \end{table*} \section{Cold front occurrence} \label{sec:occurr} \subsection{Cold front occurrence: a general view} \label{sub:occ_general} The exclusion of three unclassified objects (namely A2052, A4059, Hydra A) reduces the sample to 42 objects, of which 19 host a cold front, corresponding to a fraction of 0.45. Note that the cold front in the A85 subcluster (dubbed A85$^*$) is included. The list of the detected cold fronts, with their main properties, is given in Table \ref{table:CF_sample}. Some clusters (e.g. Centaurus, A496, Perseus, A262) host more than one cold front. This phenomenon is not rare in cool core clusters. {\it Chandra} observations revealed multiple cold fronts in several systems such as A2204, A2029, Ophiuchus \citep{Sanders_A2204:2005, Clarke_A2029:2004, AM06,MM_review:2007}. The presence of multiple cold fronts in such clusters is likely related to the origin and development of this phenomenon in cool cores \citep{AM06,MM_review:2007}. We do not detect any cold front in some objects (e.g. A2204, A2029), which are well-known cold front systems \citep{Sanders_A2204:2005, Clarke_A2029:2004}. For these clusters, the cold front is located in very central regions (14$^{\prime\prime}$\ and 30$^{\prime\prime}$\ from the center for A2029 and A2204, respectively).
Therefore the discontinuity is well-resolved by {\it Chandra}, but it is hard to detect with {\it XMM-Newton}. A last comment concerns the bias in the measure of the occurrence of cold fronts due to instrumental and observational limits: projection effects induce a smoothing on the surface brightness and temperature profiles and can hide a non-negligible fraction of cold fronts. Projection also completely hides cold fronts having an inclination larger than about 30$^{\circ}$\ with respect to the plane of the sky. In addition, resolution limitations prevent the detection of cold fronts lying in the very central regions or cold fronts in distant clusters. Moreover, our detection algorithm may fail to detect some cold fronts with angular extension $<30$$^{\circ}$. All these effects significantly reduce the capability of detecting cold fronts. The frequency we find in our sample is therefore a lower limit of the true occurrence. \subsection{Cold front occurrence: relation with redshift} \label{sub:redsh} In our sample, no cold fronts are detected in systems at redshifts greater than about $0.075$. We already remarked that A2204 and A2029 have been classified as cold front clusters from {\it Chandra} data analysis, but we fail to detect their discontinuities because they lie at small distances ($ \simlt 30$$^{\prime\prime}$ ) from the X-ray peak, below the {\it XMM-Newton} resolution. For both A2204 and A2029 $z > 0.075$. This suggests that the lack of detection of cold fronts at the highest redshifts of the sample is most likely related to a resolution limit rather than to a real evolution. This effect is clearly shown in Fig. \ref{fig:rcf-z} where we plot the distances (in arcsec) from the cluster center of all the cold fronts detected in our sample as a function of the redshift of the systems they belong to. Red points label merging clusters (see Table \ref{table:CF_sample} and \S \ref{sub:merging}) and black points label the remaining systems. Dot-dashed lines plot fixed physical distances (20, 50, 80, 150 kpc) at the various redshifts. From this figure, it is evident that cold fronts lying at $\sim$ 20-80 kpc from the cluster center are observed only in nearby systems ($ z \simlt 0.05$). Moving towards higher redshifts, where these physical distances progressively fall below the resolution limit (30$^{\prime\prime}$, red solid line in the figure), cold fronts cannot be detected anymore. For $z > 0.05$, we have found only cold fronts at distances $ \simgt 80-100$ kpc from the cluster center, with a prominent presence of merging systems whose cold fronts are generally located at large distances from the core (see \S \ref{sub:merging}). \begin{figure} \centering \includegraphics[angle=0,width=9.2 truecm]{rfc_wcurve_z_merg.ps} \caption{Distances from the cluster center of the cold fronts detected in the sample plotted as a function of the redshift of their hosting systems. Red points label merging clusters (see Table \ref{table:CF_sample}) and black points label the remaining systems. We omit A85$^*$ from this figure since its cold front lies in a subclump. Dot-dashed lines plot fixed physical distances at the various redshifts. The red solid line marks the {\it XMM-Newton} resolution limit at 30$^{\prime\prime}$ .} \label{fig:rcf-z}% \end{figure} On the basis of the analysis of Fig. \ref{fig:rcf-z}, we decided to apply a further selection to our sample, namely we cut the maximum redshift at $ z= 0.075$ (i.e. the redshift above which we no longer detect cold fronts).
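The redshift at which a front of given physical size drops below the resolution limit follows directly from the adopted cosmology. As an illustrative sketch (in Python with \texttt{astropy}, a tooling assumption), the proper size subtended by the 30$^{\prime\prime}$\ limit is:
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # cosmology adopted in this paper

for z in (0.02, 0.05, 0.075, 0.15):
    # proper size subtended by the ~30 arcsec resolution limit
    kpc = 30.0 / cosmo.arcsec_per_kpc_proper(z).value
    print(z, round(kpc, 1))
\end{verbatim}
At $z \simeq 0.05$ the 30$^{\prime\prime}$\ limit already corresponds to $\sim 30$ kpc, consistent with the disappearance of the innermost cold fronts in Fig.~\ref{fig:rcf-z}.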
The resulting sample is reduced to 32 objects with a cold front occurrence of $59\%$. We note that the sample may be biased against clusters having cold fronts at small distances from the center, inducing an underestimation of the cold front occurrence we measure. However, in the case of cold fronts lying at distances $r \simgt 40$ kpc, the sample can be considered, to a first approximation, unaffected by a redshift bias. \subsection{Occurrence and origin of cold fronts} \label{sec:occur} In this section, we investigate what discriminates clusters without cold fronts from clusters hosting one or more. \subsubsection{Merger cold fronts} \label{sub:merging} We start by focussing our attention on merger cold fronts. Some of our systems are well known merging clusters. The morphology of these systems is generally complex and no unique center can be identified in their surface brightness maps. In many of these systems \citep[e.g. A3667;][]{Vikhlinin1:2001, Vikhlinin2:2001, Vikhlinin:2002,Briel:2004} the merger process is occurring close to the plane of the sky and the geometry of the event is clear. The origin and the evolution of cold fronts in these systems is most likely related to the merger process. More specifically, the motion of the cold dense core of a subsystem within the atmosphere of the main cluster during a merger event induces the formation of a cold front feature. Typically, the subcluster is stripped of its outermost gas, and the ram pressure exerted on the surviving dense cloud by the less dense surrounding gas produces the contact discontinuity between the two subsystems, generating the cold front \citep{MM_review:2007}. In other objects, such as A3562 and A576, where the X-ray merger geometry is not as clear, the nature of the cold fronts we observe is not as obvious. A3562 is a cluster lying in the core of the Shapley supercluster, one of the largest mass concentrations in the local Universe. The presence of a radio halo in this cluster \citep{Giacintucci:2005} provides an indication of merger activity, since radio halos have been found only in interacting systems. Evidence of interaction also comes from {\it Beppo}-SAX data \citep{Bardelli:2002}. Using {\it XMM-Newton} data, \citet{Finoguenov:2004} suggest that the SC 1329-313 group southwest of A3562 has passed to the north of A3562 and the cluster core is likely oscillating in response to the passage of the group. A576 is another peculiar system, whose discontinuity has also been observed with {\it Chandra} by \citet{Kempner:2004}, who propose that the core of the cluster is the remnant of a merging subcluster. This picture was also suggested by \citet{Mohr:1996} from an analysis of the galaxy population. Recently, \citet{Dupke:2007} found that the system is consistent with a line-of-sight merger. According to this picture, the cold front we find in A576 is likely a merger cold front. The clusters of our sample hosting merger cold fronts are listed in the last part of Table \ref{table:CF_sample} and marked with a bold-faced font. We include in this class A85$^*$, the A85 subclump falling onto the main structure. A576 and A3562, which are merging clusters where the cold front origin is not as readily associated with the merger event, are placed in a separate category and marked with an italic font in Table \ref{table:CF_sample}.
\subsubsection{Non-merger cold fronts: the entropy profile} \label{sub:non-merging} In this subsection we investigate what discriminates clusters without cold fronts from clusters hosting at least one, once we exclude the clusters having a merger cold front (see above \S\ \ref{sub:merging}) and clusters where the X-ray merger geometry is not clear. We focus on the remaining subsample (23 clusters), which includes both clusters undergoing a merger event which is not lying in the plane of the sky and clusters which do not present any sign of merging processes. In this subsample only 10 of the 23 clusters host a cold front (their main properties are listed in the first part of Table \ref{table:CF_sample}). We try to understand what determines the presence of cold fronts in these systems by studying the radial entropy profile of each of these clusters. As is conventional in X-ray astronomy, we quantify the entropy using the adiabatic constant $K = kT n_e^{-2/3}$ ($T$ and $n_e$ are the gas temperature and density respectively and $k$ is the Boltzmann constant), following \citet{Voit_entropy:2005}. The specific entropy $s$ is related to $K$ through the relation $s \propto {\rm ln} K$. We will refer to $K$ as ``entropy'' throughout the paper. To obtain the entropy profiles, we derive the radial profiles of the electron density $n_e$ and the temperature $T$ by deprojecting the observed surface brightness and temperature, under the assumption of spherical symmetry. The projected temperature and the surface brightness have been derived through a spectral analysis of the clusters using concentric annuli \citep[see][for details]{Rossetti:2010}. To perform the deprojection, we adopted the procedure described in \citet{Ghizzardi_M87:2004}. \begin{figure} \centering \includegraphics[angle=0,width=9.3 truecm] {all_entropy_group_ok_r_low_065.ps} \caption{Scaled entropy profiles for the subsample described in \S \ref{sub:non-merging}. Each curve is a locally weighted fit to the data points (see text for details) to reduce the scatter. Red solid lines are the profiles of clusters hosting cold fronts, while green solid lines are the profiles of clusters without cold fronts. The red dot-dashed curve is the profile for A3558. Cygnus A is omitted (see text for details).} \label{fig:ent_prof_low}% \end{figure} In Fig. \ref{fig:ent_prof_low}, we plot all the derived entropy profiles. Radii are scaled to $r_{180}$, the radius within which the mean density is 180 times the critical density\footnote{ $M_{180}=180\rho_c(z)(4\pi/3)r_{180}^3$, where $\rho_c(z)=h^2(z)3H_0^2/8\pi G$ and $h^2(z)=\Omega_{\rm{m}}(1+z)^3+\Omega_{\Lambda}$ }. The values of $r_{180}$ have been derived using its relationship with the cluster mean temperature as in \citet{Arnaud_r180:2005} \citep[see also][]{Leccardi_temp:2008}. The entropy is scaled using the empirical entropy scaling law $K \propto h(z)^{-4/3} T_{10}^{0.65}$, $h^2(z) = \Omega_{\rm{m}}(1 + z)^3 + \Omega_\Lambda$ \citep{Pratt_entropy:2006, Ponman_entropy_sc:2003}; $T_{10}$ is the mean temperature of each cluster in units of 10 keV, as in \citet{Pratt_entropy:2006}. The curves plotted in Fig. \ref{fig:ent_prof_low} are actually a locally weighted regression \citep[LOWESS regression, see][]{Sanderson_lowess:2006,Sanderson_lowess:2005} in the log-log space to reduce the scatter and provide a better view of the behavior of the profiles. Clusters hosting a cold front (hereafter CF clusters) are denoted with red lines, while clusters without cold fronts (hereafter NCF clusters) are denoted with green lines.
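For reproducibility, the entropy scaling applied to the profiles of Fig. \ref{fig:ent_prof_low} can be summarized in the following minimal sketch (the function name and the toy input profiles are illustrative assumptions; the actual profiles come from the deprojection described above): \begin{verbatim}
import numpy as np

Om, OL = 0.3, 0.7    # assumed density parameters entering h^2(z)

def scaled_entropy(r_kpc, T_keV, n_e, z, r180_kpc):
    """Return r/r180 and K h(z)^{4/3} / T10^{0.65}."""
    K = T_keV * n_e**(-2.0 / 3.0)     # K = kT n_e^{-2/3} [keV cm^2]
    h2 = Om * (1.0 + z)**3 + OL       # h^2(z)
    T10 = np.mean(T_keV) / 10.0       # mean temperature / 10 keV
    return r_kpc / r180_kpc, K * h2**(2.0 / 3.0) / T10**0.65

# toy power-law profiles, only to exercise the function
r = np.logspace(0.5, 3.0, 40)         # radii [kpc]
x, Ks = scaled_entropy(r, 3.0 * np.ones_like(r),
                       1.0e-2 * (r / 100.0)**(-1.5), 0.03, 2000.0)
\end{verbatim}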
We mark A3558 as a particular case (red dot-dashed curve). The behavior of this cluster will be discussed in Sec. \ref{sec:disc}. The cluster Cygnus A has been discarded here because of the strong contamination by the central AGN: the presence of the two hot spots invalidates the assumption of spherical symmetry and does not allow us to deproject the temperature and surface brightness radial profiles. Fig. \ref{fig:ent_prof_low} shows that all the profiles have a similar trend at large radii ($r \simgt 0.08 r_{180}$). Moving towards the innermost regions, the profiles spread out and we observe a large scatter. More precisely, the NCF clusters (green solid curves) typically have central entropies higher than those of CF clusters (red solid curves). Moreover, systems hosting cold fronts seem to have a steeper profile than clusters without cold fronts. \begin{figure} \centering \includegraphics[angle=0,width=9.5 truecm] {mean_prof_entropy_wr_fit_065_nopc.ps} \caption{Thin lines are the mean scaled entropy profiles for clusters hosting cold fronts (red area) and clusters without cold fronts (green area). The shaded areas represent the standard deviation from the mean profiles. The thick blue line is the power law obtained by fitting all the profiles in the radial range [0.08-0.3]$r_{180}$. Blue points are the distances of the detected cold fronts (excluding A3558) from the cluster center.} \label{fig:entropy_shade}% \end{figure} In Fig. \ref{fig:entropy_shade} we plot the two averaged profiles (thin solid curves) for CF and NCF clusters. The shaded areas represent the standard deviation from the mean profiles. As in Fig. \ref{fig:ent_prof_low}, color codes label CF clusters (red area) and NCF clusters (green area). We find that the mean profiles are similar at large radii. Fitting all the entropy profiles with a power law in the radial range [0.08 - 0.3]$r_{180}$, we find a slope $\alpha=0.95 \pm 0.01$. For comparison, \citet{Pratt_entropy:2006} found a slope $\alpha = 1.14 \pm 0.06$, while \citet{Pratt_Arnaud:2005} find $\alpha = 0.94\pm 0.14$ and \citet{Piffaretti:2005} find $\alpha = 0.95 \pm 0.02$. Restricting to $[0.1-0.3] r_{180}$, we find a slightly steeper power law with $\alpha=1.08 \pm 0.02$, in accordance with the theoretical value of 1.1 predicted by \citet{Tozzi_Norman:2001} \citep[see also][]{Voit_Ponman_entropy:2003,Borgani:2002}. Moving towards the innermost regions, for $r \simlt 0.08 r_{180}$, the two mean profiles decouple. The NCF mean cluster profile exhibits a central entropy excess with respect to the outer power law model. The slope in the $[0.01 - 0.08] r_{180}$ range is $\alpha = 0.64 \pm 0.01$, significantly lower than the outer power law slope. On the contrary, the CF cluster profiles become steeper in the same radial range, reaching lower central values. Fitting the CF mean entropy profile with a power law in the $[0.01-0.08] r_{180} $ range, we find a slope of $1.22 \pm 0.01$, significantly higher than the external power law slope. The mean entropy values in the innermost bin ($r=0.005 r_{180}$) are $94.5 \pm 5.5 $ keV cm$^2$ and $10.8 \pm 1.6$ keV cm$^2$ for NCF and CF clusters respectively. Excluding the particular case A3558 (see \S \ref{sec:disc}) does not significantly change the best fit values. Since the steepness of the entropy profile is an indicator of the presence of a ``cool core'' (e.\,g.\, \citealt{Cavagnolo:2009}), clusters with a steep entropy profile likely also feature a temperature decrement and a brightness excess in their internal regions.
Therefore, one may argue that we do not detect cold fronts in objects with a flat entropy profile just because their surface brightness is lower than that of clusters where we do detect cold fronts. However, as can be seen from the profiles reported in the Appendix, we detect cold fronts in the centers of cool core objects (e.\,g.\, A262) where the $S\!B$ is high, but also in the outer regions of merging clusters where the $S\!B$ is much lower (e.g. A3667, A85$^*$). In Fig.\, \ref{fig:discont}, we show an example of a cluster where we do not detect a cold front, AWM7, which has a $S\!B$ comparable to those of the clusters where we do detect cold fronts. This is the case for almost all the clusters of our remaining sample where we do not detect cold fronts. \section{Discussion} \label{sec:disc} The origin of cold fronts in clusters manifestly undergoing a merger event can be related to the motion of a dense cold cloud of gas within the atmosphere of another subcluster. Conversely, the presence (or the absence) of these features in the subsample of non-merging clusters and clusters where the merger is not close to the plane of the sky is not clearly understood. Fig. \ref{fig:entropy_shade} provides some hints to help understand what determines the presence of cold fronts in these systems. The general picture emerging here is that the entropy profile discriminates between the two classes (CF and NCF) of clusters. While at large radii the (scaled) entropy profiles of these clusters are very similar, in the innermost regions ($r \simlt 0.08r_{180}$) their behaviors differ. Our finding of a steep entropy gradient in CF clusters is in agreement with theoretical expectations. Indeed, simulations by \citet{AM06} show that cold fronts can arise and develop in the cores of clusters if the entropy sharply decreases towards the center (as typically occurs in the center of cool core clusters). According to these simulations, cold fronts develop as a consequence of minor merger events; during its passage near the center of a cluster, a merging subclump induces some disturbance in the low entropy gas of the core and displaces it from the center of the potential well. If the entropy profile is steep, the cool gas starts sinking towards the minimum of the gravitational potential, a sloshing mechanism sets in and cold fronts arise. If the entropy profile is not steep, the entropy contrast is insufficient for the cool gas to flow back and for the sloshing mechanism to set in. In agreement with this picture, we find that cold fronts form only in regions where the entropy profile sharply decreases. In Fig. \ref{fig:entropy_shade}, we plot (blue filled circles) the positions of the cold fronts measured for the non-merging clusters of our sample. Excluding the outermost cold front of A496 (this cluster hosts three cold fronts), which lies at a distance of $\sim 0.08 r_{180}$ from the peak, all the cold fronts we detect lie at distances smaller than $\sim 0.05 r_{180}$, where entropy profiles steepen, and greater than $\sim 0.01r_{180}$ where, in many systems, {\it Chandra} detects a flattening \citep{Donahue_entropy:2006}. \begin{figure} \centering \includegraphics[angle=0,width=9.5 truecm] {mean_prof_entropy_wr_fit_065_pc.ps} \caption{Mean scaled entropy profiles as in Fig. \ref{fig:entropy_shade}. The red dot-dashed curve is the profile of A3558. The blue filled square is the A3558 cold front distance from the cluster center.
} \label{fig:entropy_outliers}% \end{figure} The large majority of clusters of the final subsample obey the general rule that cold fronts are hosted by systems with a steep entropy profile in their centers. However, as already pointed out, A3558 is a peculiar case: although its entropy profile is similar to the NCF cluster profiles, it hosts a cold front. Some comments are needed to understand why this outlier does not follow the general behavior. The cold front of A3558 (blue square in Fig. \ref{fig:entropy_outliers}) is located at a larger distance ($ r \sim 0.05 r_{180}$) than all the other cold fronts we detect, in a region where a weak entropy gradient, not as sharp as in the other CF cluster profiles, is present. This cluster lies at the center of the Shapley supercluster, and its special behavior is likely related to the unique environment in which it is embedded. To understand why cold fronts can arise in such a system, we refer once more to the simulations of \citet{AM06}. When the entropy profile of the cluster does not sharply decrease in the center, the central cold gas is easily pushed away from the dark matter peak at the passage of the merging subclump. Accordingly, the cold front emerges at a large distance from the core. However, there is no entropy contrast to trigger the sloshing mechanism and this cold front will not develop further. Cold fronts arising in these systems are short-lived and therefore rare phenomena (see Fig.\, 12 in \citealt{AM06}). A3558 is embedded in a very unrelaxed environment where merging events are frequent, and therefore the probability of forming (and observing) such fronts is higher. Alternatively, this cold front might be a merger cold front that we failed to recognize, due to the fact that A3558 cannot be classified easily as a merging cluster. As discussed in \citet{Rossetti_A3558:2007}, it presents some features similar to those of cool core clusters and other properties that are more common in merging clusters. One of the main findings of our paper is that we detect at least one cold front in all steep entropy gradient clusters in the final subsample. \citet{AM06} show that once the sloshing mechanism sets in, cold fronts can be recognized in all the projection planes, even if they are more prominent on the merger plane (see Fig. 19 of their paper). However, the limited resolution of our instruments allows us to recognize only the most apparent brightness discontinuity. Indeed, we have performed some simulations of cold front projection with the {\it XMM-Newton} PSF and we found that cold fronts can only be observed if they lie within some 30$^{\circ}$\ of the plane of the sky. This means that our 100\% detection rate implies that most steep entropy clusters must host more than one ``prominent'' cold front. This abundance of cold fronts suggests that, whatever the triggering mechanism might be, it must have a high occurrence rate. Since the prominent cold fronts that we can detect are located on the merger plane, the detection of one or more cold fronts in all our steep entropy systems seems to indicate that a sizeable fraction of them are currently experiencing more than one minor merger. Assuming that, crudely speaking, cold fronts are visible for a timescale of about 3 Gyr (this is the case for the dark matter + gas simulation in \citealt{AM06}, while for the dark matter only simulation this timescale is longer), our cold front detection rate translates into a minimum merger frequency of 1/3 merger event per halo per Gyr.
If we further assume a minimum mass ratio of 1/10, we can compare our rate with the rates expected from cosmological simulations. Using Fig. 8 in a recent paper by \citet{Fakhouri:2008}, we find a merger rate of $\sim 0.2$ mergers per halo per Gyr for mass ratios larger than 1/10. This is somewhat smaller than the minimum rate implied by our observed cold front rates; however, given the numerous simplifications we have applied in our calculation, we deem it to be in acceptable agreement. Gas sloshing may provide an important contribution to the cooling-heating problem in cool core clusters \citep{ZuHone:2009}. The sloshing gas typically moves at sub-(or trans-)sonic velocities, carrying a kinetic energy comparable to the thermal energy, but the dissipation of this kinetic energy to thermal energy is too slow compared to cooling \citep{Maxim:2001}. However, the sloshing mechanism also brings the outer high entropy gas into the core, mixing it with the cooling gas and resulting in a heat inflow which can prevent the formation of a ``cooling flow'' for periods of 1-3 Gyr \citep{ZuHone:2009}. If subcluster encounters are frequent enough, as suggested by our high detection rate, the sloshing mechanism can efficiently offset cooling. Intriguingly, the sloshing mechanism operates preferentially in steep entropy profile clusters, i.e. precisely those which require heating to offset the cooling. With the coming into operation of the first space-borne micro-calorimeter, quite likely the one onboard the ASTRO-H mission \citep{Taka_NEXT:2008}, it will be possible to investigate gas motions along the line of sight, i.e. orthogonal to those in the plane of the sky sampled with cold fronts. The combination of the two pieces of information will afford a reliable estimate of the motions of the ICM in cluster cores and of their role in offsetting cooling. \section{Summary} \label{sec:summary} We have performed a systematic search for cold fronts using {\it XMM-Newton} data for a sample of 45 objects extracted from the B55 flux limited sample \citep{Edge:B55}. The main results of our work are the following: \begin{itemize} \item{Excluding three unclassified cases, we find that 19 clusters out of 42 host at least one cold front.} \item{We do not detect any cold front in systems having redshift greater than about 0.075. This is most likely related to the {\it XMM-Newton} resolution limit. By cutting our sample at $z = 0.075$, we restrict it to 32 objects, with a cold front occurrence of 59\%.} \item{Cold fronts are easily detected in systems that are manifestly undergoing a merger event in (or close to) the plane of the sky.} \item{Out of the 23 clusters of the remaining subsample (systems undergoing a merger event which is not lying in the plane of the sky and non-merging clusters), 10 objects exhibit a cold front. For this final subsample, the entropy profile of systems hosting cold fronts is found to be steeper than that of clusters without them. The difference is observed at radii smaller than about $0.08 r_{180}$, where all our cold fronts are found.} \item{Our findings are in agreement with simulation-based predictions. As shown by \citet{AM06}, an entropy gradient is a necessary ingredient to trigger gas sloshing.} \item{Since projection effects highly limit the capability of detecting cold fronts, the finding that all the clusters with a steep entropy profile host a cold front implies that most clusters with a steep entropy profile must have more than one cold front.
} \item{Under the assumption that cold fronts in cool core clusters are triggered by minor mergers, we estimate a minimum of 1/3 events per halo per Gyr, which is somewhat larger than that expected from cosmological simulations \citep{Fakhouri:2008}.} \item{Gas sloshing may provide an important contribution to the cooling-heating problem in cool core clusters. A robust assessment of the gas motions associated with the sloshing phenomenon will become possible with the coming into operation of the first space-borne micro-calorimeter.} \end{itemize} \begin{acknowledgements} The authors thank the referee for useful comments. The authors are pleased to acknowledge Sabrina De Grandi and Fabio Gastaldello, whose suggestions have significantly improved the paper. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} We assume that the audience is familiar with the concept of a Courant-Friedrichs-Lewy (CFL) condition \cite{wikipedia-CFL-condition}. Loosely speaking, the CFL condition states: when a partial differential equation, for example the wave equation \begin{eqnarray} \label{eq:wave} \partial_t^2 u & = & c^2\, \Delta u \quad\textrm{,} \end{eqnarray} is integrated numerically, the time step size $\delta t$ is limited by the spatial resolution $\delta x$ and the maximum propagation speed $c$ by \begin{eqnarray} \label{eq:cfl} \delta t & < & Q\, \frac{\delta x}{c} \quad\textrm{.} \end{eqnarray} Here $Q$ is a constant of order $1$ that depends on the time integration method (and details of the spatial discretisation). Choosing a time step size larger than this is unstable and must therefore be avoided. (There are time integration methods that do not have such a stability limit, but these are expensive and not commonly used in numerical relativity, so we will ignore them here.) \section{Example: Exponential Decay} In real-world equations, there are also other restrictions which limit the time step size, and which may be independent of the spatial resolution. One simple example of this is the exponential decay \begin{eqnarray} \label{eq:decay} \partial_t u & = & - \lambda\, u \end{eqnarray} where $\lambda > 0$ is the decay constant. Note that this equation is an ordinary differential equation, as there are no spatial derivatives. The solutions of (\ref{eq:decay}) are given by \begin{eqnarray} u(t) & = & A\, \exp\{ - \lambda t \} \end{eqnarray} with amplitude $A$. The decay constant $\lambda$ has dimension $1/T$. The time step size is limited by \begin{eqnarray} \delta t & < & Q'\, \frac{1}{\lambda} \end{eqnarray} where $Q'$ is a constant of order $1$ that depends on the time integration method. Choosing a time step size larger than this is unstable and must therefore be avoided. (As with the CFL criterion, there are time integration methods that do not have such a stability limit.) As an example, let us consider the forward Euler scheme with a step size $\delta t$. This leads to the discrete time evolution equation \begin{eqnarray} \frac{u^{n+1} - u^n}{\delta t} & = & - \lambda\, u^n \end{eqnarray} or \begin{eqnarray} u^{n+1} & = & (1 - \delta t\, \lambda)\, u^n \quad\textrm{.} \end{eqnarray} This system is unstable e.g.\ if $|u^{n+1}| > |u^n|$ (there are also other definitions of stability), or if \begin{eqnarray} |1 - \delta t\, \lambda| & > & 1 \quad\textrm{,} \end{eqnarray} which is the case for $\delta t > 2 / \lambda$ (and also for $\delta t < 0$). In this case, the solution oscillates between positive and negative values with an exponentially growing amplitude. \section{Gamma Driver} The BSSN \cite{Alcubierre99d} Gamma Driver condition is a time evolution equation for the shift vector $\beta^i$, given by (see e.g.\ (43) in \cite{Alcubierre02a}) \begin{eqnarray} \label{eq:gamma-driver} \partial_t^2 \beta^i & = & F\, \partial_t \tilde \Gamma^i - \eta\, \partial_t \beta^i \quad\textrm{.} \end{eqnarray} There exist variations of the Gamma Driver condition, but the fundamental form of the equation remains the same. The term $F\, \partial_t \tilde \Gamma^i$ contains second spatial derivatives of the shift $\beta^i$ and renders this a hyperbolic, wave-type equation for the shift. The parameter $\eta>0$ is a damping parameter, very similar to $\lambda$ in (\ref{eq:decay}) above.
It drives $\partial_t \beta^i$ to zero, so that the shift $\beta^i$ will tend to a constant in stationary spacetimes. (This makes this a \emph{symmetry-seeking} gauge condition, since $\partial_t$ will then tend to the corresponding Killing vector.) Let us now consider a simple spacetime which is spatially homogeneous, i.e.\ where all spatial derivatives vanish. In this case (see e.g.\ (40) in \cite{Alcubierre02a}), $\partial_t \tilde \Gamma^i = 0$, and only the damped oscillator equation \begin{eqnarray} \partial_t^2 \beta^i & = & - \eta\, \partial_t \beta^i \end{eqnarray} remains. As we have seen above, solving this equation numerically still imposes a time step size limit, even though there is no length scale introduced by the spatial discretisation, so the grid spacing can be chosen arbitrarily large; there is therefore no CFL limit. This demonstrates that the damping time scale set by the parameter $\eta$ introduces a resolution-independent time step size limit. This instability was e.g.\ reported in \cite{Sperhake:2006cy}, below (13) there, without explaining its cause. The authors state that the choice $\eta=2$ is unstable near the outer boundary, and they therefore choose $\eta=1$ instead. Decreasing $\eta$ by a factor of $2$ increases the time step size limit correspondingly. The explanation presented above was first brought forth by Carsten Gundlach \cite{Gundlach2008a} and Ian Hawke \cite{Hawke2008a}. To our knowledge, it has not yet been discussed elsewhere in the literature. Harmonic formulations of the Einstein equations have driver parameters similar to the BSSN Gamma Driver parameter $\eta$. Spatially varying parameters were introduced in harmonic formulations to simplify the gauge dynamics in the wave extraction zone far away from the origin (see e.g.\ (8) in \cite{Scheel:2008rj}). \cite{Palenzuela:2009hx} uses a harmonic formulation with mesh refinement, and describes using this spatial dependence also to avoid time stepping instabilities (see (45) there). \section{Mesh Refinement} When using mesh refinement to study compact objects, such as black holes, neutron stars, or binary systems of these, one generally uses a grid structure that has a fine resolution near the centre and successively coarser resolutions further away from the centre. With full Berger-Oliger AMR that uses sub-cycling in time, the CFL factors on all refinement levels are the same, and thus the time step sizes increase as one moves away from the centre. This makes it possible that the time step size on the coarsest grids does not satisfy the stability condition for the Gamma Driver damping parameter $\eta$ any more. One solution to this problem is to omit sub-cycling in time for the coarsest grids by choosing the same time step size for some of the coarsest grids. This was first advocated by \cite{Bruegmann:2003aw}, although it was introduced there to allow large shift vectors near the outer boundary, as necessary for a co-rotating coordinate system. It was later used in \cite{Brugmann:2008zz} (see section IV there) to avoid an instability near the outer boundary, although the instability is not attributed there to the Gamma Driver. Omitting sub-cycling in time on the coarsest grids often increases the computational cost only marginally, since most of the computation time is spent on the finest levels.
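The effect is easy to reproduce in isolation. The following minimal Python sketch (purely illustrative, not taken from any evolution code; production codes typically use Runge-Kutta integrators, which have a stability limit of the same $O(1/\eta)$ form) applies the forward Euler scheme of the previous section to the homogeneous-limit equation, written for $v = \partial_t \beta^i$: \begin{verbatim}
# Forward Euler for dv/dt = -eta v, the homogeneous limit of the
# Gamma Driver damping term (v stands for the shift velocity).
def evolve(eta, dt, nsteps=200, v=1.0):
    for _ in range(nsteps):
        v = (1.0 - dt * eta) * v
    return v

eta = 2.0
for dt in (0.5, 0.9, 1.1):   # stability requires dt < 2/eta = 1.0
    print("dt = %.1f : |v| = %.3e" % (dt, abs(evolve(eta, dt))))
\end{verbatim} For $\eta = 2$, the step sizes below the limit $2/\eta = 1$ yield a decaying solution, while $\delta t = 1.1$ produces the oscillating, exponentially growing mode described above, independently of any grid spacing, since no spatial scale enters the problem.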
Another solution is to choose a spatially varying parameter $\eta$, e.g.\ based on the coordinate radius and mimicking the temporal resolution of the grid structure, which may grow linearly with the radius. This follows the interpretation of $\eta$ setting the damping timescale, which must not be smaller than the timescale set by the time discretisation. One possible spatially varying definition for $\eta$ could be \begin{eqnarray} \label{eq:varying} \eta(r) & := & \eta^*\; \frac{R^2}{r^2 + R^2} \quad, \end{eqnarray} where $r$ is the coordinate distance from the centre of the black hole. The parameter $R$ defines a transition radius between an inner region, where $\eta$ is approximately equal to $\eta^*$, and an outer region, where $\eta$ gradually decreases to zero. This definition is simple, smooth, and differentiable, and mimics a ``typical'' mesh refinement setup, where the resolution $h$ grows approximately linearly with the radius $r$. Another, simpler definition for $\eta$ (which is not smooth -- but smoothness is not necessary; $\eta$ could even be discontinuous) is \begin{eqnarray} \label{eq:varying-simple} \eta(r) & := & \eta^*\; \left\{ \begin{array}{llll} 1 & \mathrm{for} & r \le R & \textrm{(near the origin)} \\ \frac{R}{r} & \mathrm{for} & r \ge R & \textrm{(far away)} \end{array} \right. , \end{eqnarray} which is e.g.\ implemented in the \texttt{McLachlan} code \cite{ES-mclachlanweb}. If there are multiple black holes, possibly with differing resolution requirements, then prescriptions such as (\ref{eq:varying}) or (\ref{eq:varying-simple}) need to be suitably generalised, e.g.\ via \begin{eqnarray} \label{eq:multiple} \frac{1}{\eta(r)} & := & \frac{1}{\eta_1(r_1)} + \frac{1}{\eta_2(r_2)} \quad, \end{eqnarray} where $\eta_1$ and $\eta_2$ are the contributions from the individual black holes, with $r_1$ and $r_2$ the distances to their centres. This form of (\ref{eq:multiple}) is motivated by the dimension of $\eta$, which is $1/M$, so that two superposed black holes of masses $m_1$ and $m_2$ lead to the same definition of $\eta$ as a single black hole with mass $m_1+m_2$. Another prescription for a spatially varying $\eta$ has been suggested in \cite{Mueller:2009jx}. In this prescription, $\eta$ depends on the determinant of the three-metric, and it thus automatically takes the masses of the black hole(s) into account. This prescription is motivated by binary systems of black holes with unequal masses, where $\eta$ near the individual black holes should be adapted to the individual black holes' masses, and it may be more suitable to use this instead of (\ref{eq:multiple}). There can be other limitations of the time step size near the outer boundary, coming e.g.\ from the boundary condition itself. In particular, radiative boundary conditions impose a CFL limit that may be stricter than the CFL condition from the time evolution equations in the interior. \begin{acknowledgements} We thank Peter Diener, Christian D. Ott, and Ulrich Sperhake for valuable input, and for suggesting and implementing the spatially varying beta driver in (\ref{eq:varying-simple}). We also thank Bernd Brügmann for his comments. This work was supported by the NSF awards \#0721915 and \#0905046. It used computational resources provided by LSU, LONI, and the NSF TeraGrid allocation TG-MCA02N014. \end{acknowledgements} \bibliographystyle{bibtex/unsrt-url}
\section{Introduction} New constructive Bosonic field theory methods have been proposed recently \cite{R1,MR1,GMR}. The method, called the loop vertex expansion or cactus expansion \cite{R1,MR1,MNRS}, is based on applying a canonical forest formula to repackage perturbation theory in a better way. This allows one to compute the connected quantities of the theory by the same formula, but summed over trees rather than forests. Combining the forest formula with the intermediate field method leads to a convenient resummation of $\phi^4$ perturbation theory. The main advantage of this formalism over previous cluster and Mayer expansions is that connected functions are captured by a single formula; e.g. a Borel summability theorem for matrix $\phi^4$ models can be obtained which scales correctly with the size of the matrix. In this paper we extend this method, which at first sight looks limited to $\phi^4$ interactions, to show that it is in fact suitable for any stable quantum field theory. For simplicity we restrict ourselves to interactions of the $\lambda \phi^{2k}$ type in zero dimension. We introduce several intermediate fields instead of the single one used for the $\phi^4$ model. We also take care of the integration contours to bound the integral over the intermediate fields. We prove Borel-Le Roy summability of the right order for this class of theories. The extension to quantum field theories in more than zero dimensions, along the lines of \cite{MR1}, is left to a future publication, but should follow from the method of this paper and the local nature of the interaction. \section{The Forest Formula} This formula, a key tool in constructive theory, was perfected over the years by many authors \cite{BK,AR1}. It is shown here as a Taylor-Lagrange expansion, in which a function of many link variables is expanded around the origin in a careful and symmetric way which stops with an integral remainder before the derivatives create any cycles. Consider $n$ points. The set ${\cal{P}}_n$ of pairs of such points has $n(n-1)/2$ elements $\ell = (i,j)$ for $1\le i < j \le n$. Consider a smooth function $f$ of $n(n-1)/2$ variables $x_\ell$, $\ell \in {\cal{P}}_n$. Writing $\partial_\ell$ for $\frac{\partial}{\partial x_\ell}$, the forest formula is \begin{theorem} \begin{equation}\label{treeformul1} f(1,\dots ,1) = \sum_{{\cal{F}}} \big[ \prod_{\ell\in {\cal{F}}} \int_0^1 dw_\ell \big] \big( [ \prod_{\ell\in {\cal{F}}} \partial_\ell ] f \big) \cdot [ X^{\cal{F}} (\{ w_{\ell'}\} ) ] \end{equation} where \begin{itemize} \item the sum over ${\cal{F}}$ is over forests over the $n$ vertices, including the empty one, \item $x^{\cal{F}}_\ell (\{ w_{\ell'}\} )$ is the infimum of the $w_{\ell'}$ for $\ell'$ in the unique path from $i$ to $j$ in ${\cal{F}}$, where $\ell = (i,j)$. If there is no such path, $x^{\cal{F}}_\ell (\{ w_{\ell'}\} ) = 0$ by definition. \item The symmetric $n$ by $n$ matrix $X^{\cal{F}} (\{w\})$ defined by $X^{\cal{F}}_{ii} = 1$ and $X^{\cal{F}}_{ij} =x^{\cal{F}}_{ij} (\{ w_{\ell'}\} ) $ for $1\le i < j \le n$ is positive. \end{itemize} \end{theorem} \begin{proof} We do not reproduce here the many proofs of formula (\ref{treeformul1}) \cite{BK,AR1}, but we recall why the matrix $X^{\cal{F}} (\{w\})$ is positive. It is because, for any ordering of the $\{ w\}$ parameters, it can be written as a (different!) convex combination of positive block matrices of the $I_q$ type. \begin{definition} A block $I_q$ of dimension $q$ is defined as a $q\times q$ matrix with all the elements equal to $1$.
For example, a block of dimension $3$ is: \begin{equation} I_3=\begin{pmatrix} 1 & 1 & 1\\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{pmatrix} . \end{equation} \end{definition} Consider indeed a forest ${\cal{F}}$ with $p\le n-1$ elements and an ordering \begin{equation} 0= w_{p+1} \leq w_{p}\leq w_{p-1} \leq...\leq w_{1} \leq w_{0}=1, \end{equation} then \begin{equation}\label{convexkey} X^{\cal{F}}(\{w\}) =\sum_{k=1}^{p+1}(w_{k-1}-w_{k})X^{{\cal{F}},k} \end{equation} where $X^{{\cal{F}},k}_{ij} $ is 1 if $i$ and $j$ are connected by the first $k-1$ lines of the forest, and is 0 otherwise. We have \begin{equation} \sum_{k=1}^{p+1}(w_{k-1}-w_{k}) = 1 . \end{equation} Therefore $X^{{\cal{F}},k} $ is a matrix obtained by gluing the blocks corresponding to the connected components of the forest ${\cal{F}}^k$, where ${\cal{F}}^k$ is the subforest of ${\cal{F}}$ made of the first $k-1$ lines of the forest in the ordering. \end{proof} We will need later the fact that the Gaussian measure $d\mu_{I_q} (a_1, \ldots, a_q)$ with covariance $I_q$ really corresponds to a single Gaussian variable, say $a_1$, with covariance 1, plus $q-1$ delta functions: \begin{equation}\label{blockvari} d\mu_{I_q} (a_1, \ldots, a_q) = \frac{d a_1}{\sqrt{2 \pi}} e^{- a_1^2/ 2} \prod_{i=2}^q \delta(a_{1} - a_i ) da_{i} \; . \end{equation} \section{$\phi^6$ constructive theory in zero dimension} We consider a massless $\phi^6$ scalar theory in zero dimension, where $\phi$ is simply a number. The Lagrangian reads: \begin{equation} {\cal{L}}=-\frac{1}{2}\phi^2-\lambda \phi^6 \end{equation} and the partition function is \begin{equation} Z(\lambda)= \int \frac{d\phi}{\sqrt{2\pi}} e^{-\frac{1}{2}\phi^2} e^{-\lambda\phi^6} . \end{equation} The covariance of the normalized Gaussian measure $\frac{d\phi}{\sqrt{2\pi}} e^{-\frac{1}{2}\phi^2}$ is simply \begin{equation} < \phi^2> =1 . \end{equation} \subsection{Intermediate Field Representation} We introduce a real intermediate field $\sigma$ to rewrite the interaction. This leads to \begin{equation} Z(\lambda)= \int\frac{d\phi}{\sqrt{2\pi}} e^{-\frac{1}{2}\phi^2} e^{-\lambda\phi^6}=\int \frac{d\phi}{\sqrt{2\pi}} e^{-\frac{1}{2}\phi^2} \int \frac{d\sigma}{\sqrt{2\pi}} e^{-\frac{1}{2}\sigma^2} e^{i\sqrt{2\lambda}\phi^3\sigma}. \end{equation} The induced interaction term can be further transformed as \begin{equation} \sqrt{2}\phi^3\sigma= \frac{1}{\sqrt 2} [(\phi\sigma+\phi^2)^2-\phi^2\sigma^2-\phi^4] . \end{equation} We then introduce three more intermediate fields to write the partition function as \begin{eqnarray} Z(\lambda) &=&\int \frac{d\phi}{\sqrt{2\pi}} e^{-\frac{1}{2}\phi^2} \int \frac{d\sigma}{\sqrt{2\pi}} e^{-\frac{1}{2}\sigma^2} \int \frac{da \sqrt{i}}{\sqrt{2\pi}} e^{i[(2\lambda)^{1/4}(\phi\sigma+\phi^2)a-a^2/2]}\nonumber\\ &\times& \int \frac{db}{\sqrt{2i\pi}}e^{-i[(2\lambda)^{1/4}\phi\sigma b-b^2/2]}\int \frac{dc}{\sqrt{2i\pi}} e^{-i[(2\lambda)^{1/4}\phi^2 c-c^2/2]} . \end{eqnarray} Integrating out the fields $\phi$ and $\sigma$ we get: \begin{equation} \label{imagauss} Z(\lambda) = \int \frac{da \sqrt{i}}{\sqrt{2\pi}} \frac{db}{\sqrt{2i\pi}} \frac{dc}{\sqrt{2i\pi}} e^{i(b^2+c^2-a^2)/2}e^{V} \end{equation} where \begin{equation} V=-\frac{1}{2}{\rm Tr} \ln[{\mathds{1}}+i(2\lambda)^{1/4}\begin{pmatrix} c-a & b-a \\ b-a & 0\\ \end{pmatrix}]=-\frac{1}{2}{\rm Tr}\ln({\mathds{1}}+iH) , \end{equation} where \begin{equation} {\mathds{1}}=\begin{pmatrix} 1 & 0\\ 0 & 1 \\ \end{pmatrix} , \quad H=(2\lambda)^{1/4}\ \begin{pmatrix} c-a & b-a \\ b-a & 0\\ \end{pmatrix} .
\end{equation} Obviously $H$ defined above is Hermitian for $\lambda \ge 0$. The resulting integrals (\ref{imagauss}) over $a$, $b$ and $c$ are oscillating and still formal, and we have to slightly change the contours of integration to make them well-defined, but this is postponed to the next section. We use the replica method to write the exponential as: \begin{equation} e^{V}=\sum_{n}\frac{V^n}{n!}=\sum_{n}\frac{1}{n!}\prod_{v=1}^n V_v \end{equation} where \begin{equation} V_v=V_v (a^v,b^v,c^v) . \end{equation} Then, applying the forest formula, the connected function can be written as a sum over trees ${\cal{T}}$ whose nodes are loop vertices, and whose lines are of three different types, corresponding to Wick contractions of $a$, $b$ and $c$. Calling ${\cal{T}}_a$, ${\cal{T}}_b$ and ${\cal{T}}_c$ the three corresponding subsets of lines of the tree we have \begin{theorem} \begin{eqnarray}\label{treeformul} \log Z(\lambda) &=& \sum_{n=1}^{\infty}\frac{1}{n!}\ \sum_{{\cal{T}} \; {\rm with }\; n\; {\rm vertices}}\ Y_{\cal{T}} \\ Y_{\cal{T}} &=& \bigg\{ \prod_{\ell\in {\cal{T}}} \big[ \int_0^1 dw_\ell \big]\bigg\} \int d\nu_{\cal{T}} (\{a^v,b^v,c^v\}, \{ w \}) \nonumber \\ &\times& \bigg\{ \prod_{\ell\in {\cal{T}}_a} \big[ \frac{\delta}{\delta a^{v(\ell)}} \frac{\delta}{\delta a^{v'(\ell)}} \big] \bigg\} \bigg\{ \prod_{\ell\in {\cal{T}}_b} \big[ \frac{\delta}{\delta b^{v(\ell)}} \frac{\delta}{\delta b^{v'(\ell)}} \big] \bigg\}\nonumber\\ &\times& \bigg\{ \prod_{\ell\in {\cal{T}}_c} \big[ \frac{\delta}{\delta c^{v(\ell)}} \frac{\delta}{\delta c^{v'(\ell)}} \big] \bigg\}\prod_{v=1}^n V_v \end{eqnarray} where \begin{itemize} \item each line $\ell$ of the tree joins two different loop vertices $V^{v(\ell)}$ and $V^{v'(\ell)}$, \item the sum is over trees joining $n$ loop vertices, which have therefore $n-1$ lines. These lines can be of type $a$, $b$ or $c$. \item the normalized ``imaginary'' Gaussian measure $d\nu_T (\{a^v, b^v, c^v\}, \{ w \}) $ over the three intermediate fields $a^v$, $b^v$ and $c^v$ has covariance \begin{eqnarray}<a^v, a^{v'}>&=& - i w^T (v, v', \{ w\}),\\ <b^v, b^{v'}>&=& i w^T (v, v', \{ w\}),\\ <c^v, c^{v'}>&=& i w^T (v, v', \{ w\}),\\ <a^v, b^v>&=&<b^v, c^v>\; = \; <a^v, c^v>\;=\; 0 \end{eqnarray} where $w^T (v, v', \{ w\})$ is 1 if $v=v'$, and the infimum of the $w_\ell$ for $\ell$ running over the unique path from $v$ to $v'$ in $T$ if $v\ne v'$. This measure is well-defined, since the matrix $w^T$ is positive, once appropriate contour deformations are performed (see below). \end{itemize} \end{theorem} If we distinguish the matrix indices which correspond to the former $\phi$ and $\sigma$ fields, there are in fact four kinds of half-vertices in the loop vertex expansion and five different kinds of lines. The coupling constant for each half-vertex is $(2\lambda)^{1/4}$, and the coupling constant for each vertex (namely each line of the loop vertex tree) is therefore $(2\lambda)^{1/2}$. \begin{figure}[!htb]\label{halfv} \centering \includegraphics[scale=0.8]{vertex.pdf} \caption{The 4 half-vertices} \label{vertex} \end{figure} \begin{figure}[!htb]\label{fulver} \centering \includegraphics[scale=0.8]{fulver.pdf} \caption{The 5 vertices} \label{vertex5} \end{figure} \subsection{Contour Deformation} \label{contourdefo} The integral over the fields $a$, $b$ and $c$ is not absolutely convergent, so we have to choose the right contour to make it well-defined.
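Recall (a standard fact, stated here only to make the difficulty explicit) that such oscillating Gaussian integrals are Fresnel integrals, which converge only conditionally: \begin{equation} \int_{-\infty}^{\infty} e^{\pm i a^2/2}\, da = \sqrt{2\pi}\, e^{\pm i\pi/4} \; , \end{equation} so that no bound in terms of $\sup \vert f \vert$ can follow from absolute convergence, and a contour deformation is needed.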
As the covariances for the three fields are quite similar, we will consider $a$ first and deform its integration contour. The idea is that we first use formula (\ref{convexkey}) to write the field $a$ as an independent sum of $p+1$ fields $a_{k}$ according to the blocks: \begin{equation} a=\sum_{k=1}^{p+1}\sum_{v=1}^n a_{k,v} \end{equation} whose covariance is \begin{equation} <a_{k,v}, a_{k,v'}> = (w_{k-1} - w_k) X^{{\cal{F}}, k}_{v,v'} . \end{equation} Precisely because the covariance of $a_k$ is made of blocks of the $I_q$ type, we should perform a single contour deformation for each block. We have a formula similar to (\ref{blockvari}) for each block with variables $a_1, \ldots, a_q$, but now we should remember that the covariances are $iI_q$, not $I_q$. Hence we have \begin{equation} \label{singlevar} d\mu_{iI_q} = \frac{d a_1}{\sqrt{2 \pi}} e^{- i a_1^2/ 2} \prod_{i=2}^q \delta (a_{1} - a_i ) da_{i} . \end{equation} In the partition function we have an integration of the type \begin{equation} \int_{-\infty}^{\infty} da f(a) e^{-ia^2 /2} \end{equation} where $f$ is the product of the resolvents, which are analytic and bounded in an open neighborhood of the band ${\cal{B}}= \{ |\Im a| \le A^{-1} \}$ around the real axis, where $A$ is large. This integral is not absolutely convergent. Nevertheless we can bound it in terms of $\sup_{\cal{B}} \vert f\vert$. Indeed we can deform the integration contour so that the new contour remains in the band ${\cal{B}}$ and the new variable is: \begin{eqnarray} a'_1&=&a_1-i\frac{a_1}{A|a_1|+1}, \ a'_1 \to a_1 - i {\rm sgn}\; a_1/A \ {\rm if}\ a_1\to \pm\infty \end{eqnarray} Then the bound of the integral over $a_1$ becomes: \begin{eqnarray} &&\bigg| \int d a_1 f(a'_1) e^{-ia_1^2/2(w_{k-1}-w_k)-\frac{2 a_1^2}{2(w_{k-1}-w_k)(A|a_1|+1)} +i \frac{a_1^2}{2(w_{k-1}-w_k)(A|a_1|+1)^2}}\bigg| \nonumber \\ \nonumber && \le \sup_{\cal{B}}\vert f \vert \int d a_1 e^{-\frac{2 a_1^2}{2(w_{k-1}-w_k)(A|a_1|+1)} } \\ && \le 2 (w_{k-1}-w_k)A\; \sup_{\cal{B}}\vert f \vert . \label{mainwbound} \end{eqnarray} So each time we integrate out an intermediate field we get $\sup_{\cal{B}} \vert f \vert$ times a factor $2(w_{k-1}-w_k)A$ in the bound. Then for the integration of all the intermediate fields $a_k$ we would have at order $n$ a total factor in the bound: \begin{eqnarray} &&\prod_{k=1}^{p+1}\prod_{v=1}^n 2A(w_{k-1}-w_k)\le\prod_{k=1}^{p+1} \prod_{v=1}^n e^{ 2A(w_{k-1}-w_k)}=\prod_{v=1}^n e^{\sum_{k=1}^{p+1} 2A(w_{k-1}-w_k)}\nonumber\\ &\le&\prod_{v=1}^n e^{2A}\le (e^{2A})^n \label{contourbound} \end{eqnarray} where we have used the fact that \begin{equation} \sum_k (w_{k-1}-w_k)\le 1 . \end{equation} \begin{figure}[!htb] \centering \includegraphics[scale=0.8]{contout.pdf} \caption{The integration contour for $a$.} \label{contour} \end{figure} As the signs for $b$ and $c$ in the covariance differ from that of $a$, the integration contours for $b$ and $c$ are also different. The contour for $b$ can be chosen as: \begin{eqnarray} b'_1&=&b_1+i\frac{b_1}{A|b_1|+1}, \ b'_1 \to b_1 + i {\rm sgn}\; b_1/A \ {\rm if}\ b_1\to \pm\infty \nonumber \\ \end{eqnarray} and the integration contour for $c$ is the same as that for $b$. Then the bounds proceed exactly as in (\ref{mainwbound}).
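For completeness, the elementary computation behind these bounds, written for unit variance (a direct check, not an additional assumption), is the following: setting $\epsilon(a_1) = 1/(A|a_1|+1)$, so that $a'_1 = a_1(1 - i\epsilon(a_1))$, we have \begin{equation} -\frac{i a_1'^2}{2} = -\frac{i a_1^2}{2}(1 - i\epsilon)^2 = -\frac{i a_1^2}{2} - a_1^2\,\epsilon + \frac{i a_1^2\,\epsilon^2}{2} \; , \qquad \Big\vert e^{-i a_1'^2/2} \Big\vert = e^{-a_1^2/(A \vert a_1 \vert + 1)} \; , \end{equation} which is exactly the decaying factor exploited in (\ref{mainwbound}), up to the variance factors $(w_{k-1}-w_k)$.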
\begin{figure}[!htb] \centering \includegraphics[scale=0.8]{order2.pdf} \caption{The analyticity domain $C^2_R$} \label{borel1} \end{figure} \begin{figure}[!htb] \centering \includegraphics[scale=0.8]{ana1.pdf} \caption{The analyticity domain ${\cal{D}}^2$} \label{borel2} \end{figure} \begin{figure}[!htb] \centering \includegraphics[scale=0.8]{analytic2.pdf} \caption{The analytic continuation} \label{borel3} \end{figure} The function $f$ is a product of resolvents of the type $(1+iH)^{-1}$ turning around the tree after using the tree formula \cite{R1}. On the real axis $\Vert (1+iH)^{-1}\Vert \le 1$. But after contour deformation the bound is slightly altered. $1+iH$ will also be changed into \begin{equation} 1+iH-(2\lambda)^{1/4}\begin{pmatrix} \epsilon(a) & \epsilon(a) \\ \epsilon(a) & 0\\ \end{pmatrix} \end{equation} with $\epsilon = 1/A$ a small number. As $\lambda\ll 1$, $\Vert {\lambda}^{1/4}\begin{pmatrix} \epsilon(a) & \epsilon(a) \\ \epsilon(a) & 0\\ \end{pmatrix}\Vert \ll 1$. So after we change the integration contours, the denominators are still bounded by $K= 1 + O(1/A)$. This bound changes to $\sqrt2 + O(1/A)$ if we take $-\pi < {\rm Arg}\, \lambda <+\pi$, that is $-\pi/4 < {\rm Arg}\, \lambda^{1/4} <+\pi/4$. Since the factor $O(1/A)$ essentially does not change the bound on the resolvent, hence the power counting of the connected function, we shall neglect it in the rest of this paper. As $H$ is a linear function of $a$, $b$ and $c$, we can use the same method for $b$ and $c$, and the resulting integral is finite. \subsection{Borel summability} Let us introduce the $N$-th order Taylor remainder operator $R^N$ which acts on a function $f(\lambda)$ through \begin{eqnarray} R^N f = f(\lambda)-\sum_{n=0}^{N} a_n \lambda^n = \lambda^{N+1} \int_0^1 \frac{(1-t)^{N}}{N!} f^{(N+1)} ( t \lambda) dt . \end{eqnarray} \begin{theorem}(Nevanlinna-Le Roy)\cite{Borel, Sok} A series $\sum_{n=0}^\infty a_n\lambda^n$ is Borel summable to the function $f(\lambda)$ of order $k$ if the following conditions are met: \begin{itemize} \item For some rational number $k>0$, $f(\lambda)$ is analytic in the domain $C^k_R=\{\lambda\in C: \Re \lambda^{-1/k}> R^{-1}\}$. $C_R$ is a disk for $k=1$. \item The function $f(\lambda)$ admits $\sum_{n=0}^\infty a_n \lambda^n$ as a strong asymptotic expansion to all orders as $|\lambda|$ $\rightarrow 0$ in $C_R$ with uniform estimate in $C^k_R$: \begin{equation} \left| R^N f \right|\leqslant A B^N \Gamma(kN+1)|\lambda|^{N+1}, \end{equation} where $A$ and $B$ are some constants. \end{itemize} Then the Borel-Le Roy transform of order $k$, \begin{equation} B^{(k)}_f(u)=\sum_{n=0}^\infty \frac{a_n}{\Gamma(kn+1)}u^n, \end{equation} is holomorphic for $|u|<B^{-1}$, it admits an analytic continuation to the strip $\{u\in C: |\Im u|< R, \Re u>0\}$, and for $\lambda \in C^k_R$ one has \begin{equation} f(\lambda)=\frac{1}{k\lambda}\int_{0}^{\infty}B^{(k)}_f (u) \exp[-(u/\lambda)^{1/k}]\,(u/\lambda)^{1/k-1}\,du . \end{equation} \end{theorem} \begin{theorem} The partition function $Z(\lambda)$ for the $\phi^6$ theory is Borel-Le Roy summable of order 2.\label{th1} \end{theorem} \begin{proof} The Taylor remainder of $Z(\lambda)$ at order $N$ reads: \begin{equation} R^N Z(\lambda)=(-\lambda)^{N+1} \int_0^1dt\int d\phi\frac{(1-t)^N}{N!}\phi^{6(N+1)}e^{-t\lambda\phi^6-\frac{\phi^2}{2}} .
\end{equation} We use the Cauchy-Schwarz inequality: \begin{eqnarray} && |R^{N} Z(\lambda)| \leqslant \vert \lambda \vert^{N+1} \int_0^1dt\int d\phi\frac{(1-t)^N}{N!}[\phi^{12(N+1)}e^{-2t\lambda\phi^6-{\phi^2}}]^{1/2} \\&\leqslant&\vert \lambda \vert^{N+1} \int_0^1dt\frac{(1-t)^N}{N!}(\int d\phi \phi^{12(N+1)} e^{-\phi^2/2})^{1/2} (\int d\phi e^{-2t\lambda \phi^6}e^{-\phi^2/2})^{1/2} .\nonumber \end{eqnarray} The first term is bounded by $[(12(N+1))!!]^{1/2}/N!\sim {(6N)!!}/N!\sim (2N)!$, where $\sim \cdots$ means $\le K^N \times \cdots$. Now consider the second term. We perform a scaling on $\phi$ as \begin{equation} \lambda^{1/6}\phi=u \end{equation} then \begin{equation}\label{analytrem} \int d\phi e^{-2t\lambda \phi^6}e^{-\phi^2/2}=\int_{-\infty}^{\infty}e^{-2t u^6-\lambda^{-1/3}u^2/2}\frac{du}{\lambda^{1/6}} . \end{equation} For $-\pi < {\rm Arg}\,{\lambda}< \pi$, we have $-\pi/3 < {\rm Arg}(\lambda^{1/3})< \pi/3$. Let us define ${\cal{D}}^2 = \{ \lambda \vert -\pi < {\rm Arg}\,{\lambda}< \pi \}$. We have $C^2_R \subset {\cal{D}}^2 $. The corresponding analyticity domains are shown in Figs.\ \ref{borel1} and \ref{borel2}. We shall prove analyticity and Taylor remainder bounds in ${\cal{D}}^2$ rather than $C^2_R$. In ${\cal{D}}^2$ the integrand of (\ref{analytrem}) is analytic in $\lambda$ and we always have $\Re \lambda^{-1/3}>0$; more precisely, $\Re \lambda^{-1/3} \ge \vert\lambda\vert^{-1/3}\cos(\pi/3) = \vert\lambda\vert^{-1/3}/2$. Moreover the integral is uniformly bounded: \begin{equation} \vert \int_{-\infty}^{\infty}e^{-2t u^6-\lambda^{-1/3}u^2/2}\frac{du}{\lambda^{1/6}}\vert \le \int_{-\infty}^{\infty}e^{- (\Re \lambda^{-1/3} ) u^2/2}\frac{du}{ \vert \lambda^{1/6} \vert } \le 2\sqrt \pi . \end{equation} This proves that the partition function is Borel-Le Roy summable of order $2$. \end{proof} The rest of this section is devoted to proving the following more difficult result: \begin{theorem}\label{theoborel3} The connected function $\log Z(\lambda)$ with potential $\lambda\phi^{6}$ is Borel-Le Roy summable of order $2$. \end{theorem} \begin{proof} We use the loop vertex representation (\ref{treeformul}) of $\log Z(\lambda)$. We shall first prove uniform convergence of this loop vertex representation in the domain ${\cal{D}}^2_{\epsilon}= \{ \lambda \vert -\pi < {\rm Arg}\,{\lambda}< \pi \ {\rm and} \ \vert \lambda \vert < \epsilon \}$ and then prove the Taylor remainder bound. \begin{lemma} In the domain ${\cal{D}}^2_{\epsilon}$ each term $Y_{\cal{T}} (\lambda)$ is bounded by $ \epsilon^{(n-1)/2} K^n$.\label{lem1} \end{lemma} \begin{proof} In the loop vertex expansion, recall that there are 4 different kinds of half-vertices, as shown in Figure \ref{halfv}, and five different types of tree lines after contraction of the $a$, $b$ or $c$ intermediate fields, as shown in Figure \ref{fulver}. We shall first prove that the resolvents are bounded. Consider \begin{equation} \frac{1}{1+i H}=\frac{1}{\begin{pmatrix} 1 & 0 \\ 0 & 1\\ \end{pmatrix}+i(2\lambda)^{1/4} \begin{pmatrix} c-a & b-a \\ b-a & 0\\ \end{pmatrix}} . \end{equation} The denominator can always be diagonalized and the result reads: \begin{equation} \frac{1}{1+i H}=\frac{1}{\begin{pmatrix} 1+i{(2\lambda)}^{1/4}\omega_{+}& 0 \\ 0 & 1+i{(2\lambda)}^{1/4}\omega_{-} \\ \end{pmatrix}}, \end{equation} where \begin{eqnarray} \omega_{+}&=&\big(c-a+\sqrt{(c-a)^2+4(b-a)^2}\,\big)/2 >0\nonumber\\ \omega_{-}&=&\big(c-a-\sqrt{(c-a)^2+4(b-a)^2}\,\big)/2<0 . \end{eqnarray} The analyticity domain for $\lambda$ contains at least ${\cal{D}}^2$. Hence \begin{equation}-\pi/4< {\rm Arg} (\lambda^{1/4})<\pi/4.
\end{equation} This implies \begin{equation} |(1+i{\lambda}^{1/4}\omega_{+})^{-1}|<\sqrt{2}\; , \;\; |(1+i{\lambda}^{1/4}\omega_{-})^{-1}|<\sqrt{2}. \end{equation} So each resolvent is bounded as \begin{equation} \Vert \frac{1}{1+iH} \Vert \le \sqrt{2}(1+O(1/A)) . \end{equation} Again we shall neglect the inessential factor $O(1/A)$. Now we know that the resolvents multiply around the tree in each contribution $Y_{{\cal{T}}}$ \cite{R1}. Hence for a tree of order $n$, the product of all the $2(n-1)$ resolvents in the tree is bounded by $\sqrt2^{2(n-1)}$. The global trace adds a factor 2 to the bound, so that \begin{equation} \left| {\rm Tr} \prod_{{\rm around}\ {\cal{T}}} \frac{1}{1+iH} \right|\le 2 \cdot \sqrt2^{2(n-1)} = 2^{n}. \end{equation} Now we consider the vertices. A tree ${\cal{T}}$ at order $n$ has $n-1$ vertices (the lines of the loop vertex tree). Each vertex contributes a factor $\sqrt\lambda$, hence we have a factor $\lambda^{(n-1)/2}$ in $Y_{{\cal{T}}}$. There are $5$ different kinds of vertices in the loop vertex expansion, but when considering the trace over the products of the resolvents, we only have 3 choices each time, corresponding to whether the intermediate field is $a$, $b$ or $c$. So the choice over the type of the vertices is bounded by an additional factor $3^{n-1}$. Recall also that we have a factor $(e^{2A})^n$ from the contour deformation. So, combining this with the resolvent bound, we have \begin{equation} \vert Y_{\cal{T}}(\lambda) \vert\le2 ^{n} 3^{n-1}|\lambda|^{(n-1)/2}(e^{2A})^n\le\epsilon^{(n-1)/2} K^n \end{equation} where $K=6e^{2A}$. This proves the Lemma. \end{proof} Cayley's theorem states that the number of labeled trees over $n$ vertices is $n^{n-2}$. Hence combining it with the Lemma we get convergence and analyticity of the loop vertex representation in the domain ${\cal{D}}^2_{\epsilon}$: \begin{eqnarray} \sum_{n=1}^{\infty}\frac{1}{n!}\ \sum_{{\cal{T}} \; {\rm with }\; n\; {\rm vertices}}\ \vert Y_{\cal{T}}(\lambda) \vert &\le& \sum_{n=1}^{\infty}\frac{n^{n-2}}{n!} \epsilon^{(n-1)/2} K^n \nonumber \\ &\le & \sum_{n=1}^{\infty} \epsilon^{(n-1)/2} (eK)^n \end{eqnarray} where we used Stirling's formula. This converges for small enough $\epsilon$. Actually, since $K=6e^{2A}$, $\epsilon = e^{-4A-2}/36$ works. We now turn to the Taylor remainder bound. The remainder formula reads: \begin{eqnarray} &&R^N\log Z(\lambda)= \sum_{n=1}^{\infty}\frac{1}{n!}\ \sum_{{\cal{T}} \; {\rm with }\; n\; {\rm vertices}}\ R^N Y_{\cal{T}}(\lambda). \label{sumtree1} \end{eqnarray} For trees ${\cal{T}}$ with $n \ge 2N+3$ we have $R^N [Y_{\cal{T}} (\lambda)] = Y_{{\cal{T}}}$, hence inserting the estimate of the previous Lemma \begin{eqnarray}\label{sumtree2} \vert \sum_{n=2N+3}^\infty \frac{1}{n!}\ \sum_{{\cal{T}} \; {\rm with }\; n\; {\rm vertices}} R^N Y_{\cal{T}}(\lambda) \vert &\le& \vert \lambda\vert ^{N+1} \sum_{n=2N+3}^{\infty} \epsilon^{(n-1)/2 - (N+1)} (eK)^n \nonumber\\ &\le&\vert\lambda\vert^{N+1} K^N \end{eqnarray} for $\lambda \in {\cal{D}}^2_{\epsilon}$. So we now only need to consider trees with $n \le 2N+2$ vertices.
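Before turning to the remaining trees, we note as an aside that Cayley's count, which controls the combinatorial factor in the convergence estimate above, is easy to check numerically for small $n$ (an illustrative brute-force enumeration, not part of the proof): \begin{verbatim}
from itertools import combinations

def count_labeled_trees(n):
    # graphs on n labeled vertices with n-1 edges and no cycle
    edges = list(combinations(range(n), 2))
    count = 0
    for choice in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        ok = True
        for u, v in choice:
            ru, rv = find(u), find(v)
            if ru == rv:       # this edge would close a cycle
                ok = False
                break
            parent[ru] = rv
        count += ok            # n-1 acyclic edges <=> a tree
    return count

for n in range(2, 7):
    print(n, count_labeled_trees(n), n ** (n - 2))  # counts agree
\end{verbatim}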
Defining $\bar Y$ through $Y_{\cal{T}} (\lambda) = \lambda^{(n-1)/2} \bar Y_{\cal{T}} (\lambda)$, we have for such trees \begin{equation} R^N Y_{\cal{T}} = \lambda^{(n-1)/2} R^{N- (n-1)/2} \bar Y_{\cal{T}}, \end{equation} and the following bound: \begin{lemma} In the domain ${\cal{D}}^2_{\epsilon}$ we have for trees ${\cal{T}}$ with $n \le 2N+2$ \begin{eqnarray} \vert \lambda^{(n-1)/2} R^{N- (n-1)/2} \bar Y_{\cal{T}} \vert \le \vert \lambda \vert^{N+1} K^N \Gamma(2N - n+1) . \label{lem2} \end{eqnarray} \end{lemma} \begin{proof} The $R^{N- (n-1)/2}$ operator now acts on the product of resolvents \begin{equation} {\rm Tr} \prod_{ {\rm around}\ {\cal{T}}} \frac{1}{1+iH}. \end{equation} We can evaluate it through a Taylor-Lagrange integral formula, and this formula brings intermediate fields $a$, $b$ or $c$ to the numerator. The choice of which resolvent is derived generates a factorial, which is however compensated by the factorial in the Taylor formula itself, so that these choices contribute only $K^N$ to the bound. Since each half-vertex contributes a coupling constant $\lambda^{1/4}$, the number of such fields brought to the numerator by the Taylor-Lagrange formula must obey \begin{equation} (n_a+n_b+n_c)/4=N -(n-1)/2 \end{equation} as this should be compatible with the fact that we expand to order $\lambda^{N+1}$. Therefore we have: \begin{eqnarray} \vert R^{N- (n-1)/2} \bar Y_{\cal{T}} \vert &\le& K^N \vert \lambda\vert^{N- (n-1)/2} \\ &&\sum_{n_a,n_b,n_c \atop n_a+n_b+n_c =4N -2n + 2} \hskip-.7cm \Big\vert \int d\mu(a)d\mu(b)d\mu (c) a^{n_a}b^{n_b}c^{n_c} \Big\vert \nonumber \end{eqnarray} where $d\mu(a)d\mu(b)d\mu (c)$ are the oscillating Gaussian measures with contour deformation for the fields $a$, $b$ and $c$ respectively. After using the usual bound on the resolvents and Wick contractions for the intermediate fields we get: \begin{eqnarray} \vert \lambda^{(n-1)/2} R^{N- (n-1)/2} \bar Y_{\cal{T}} \vert \le \vert \lambda\vert ^{N+1}K^N (n_a+n_b+n_c)!! \end{eqnarray} So the remainder is bounded, in the worst case, by: \begin{eqnarray} &&\vert \lambda\vert ^{N+1}K^N (n_a+n_b+n_c)!!\\ &=&\vert \lambda\vert ^{N+1}K^N(4N-2n+2)!!\le \vert \lambda \vert^{N+1}K^N(4N+2)!! \le \vert \lambda \vert^{N+1}K^N(2N)!\nonumber \end{eqnarray} where $K$ is a generic name for a constant. \end{proof} Combining lemmas \ref{lem1} and \ref{lem2} together with (\ref{sumtree1}) and (\ref{sumtree2}) proves that $\log Z (\lambda)$ is analytic in some ${\cal{D}}^2_{\epsilon}$ domain, hence in some $C^2_R$ domain, and that the Taylor remainder at order $N$ is bounded by $\vert \lambda\vert ^{N+1}K^N \Gamma(2N+1)$. This completes the proof of Theorem \ref{theoborel3}. \end{proof} \section{$\phi^{2k}$ theory in zero dimension} \subsection{The intermediate fields for $\phi^{2k}$ theory} In the general case of a $\lambda\phi^{2k}$ interaction, we can introduce the intermediate fields inductively; in each step we attribute a coupling constant $\lambda^{\frac{1}{2k}}$ to each interaction of the field $\phi$ with an intermediate field. In the first step we introduce a first intermediate field $\sigma_1$; forgetting the inessential normalizing factor, the result reads: \begin{equation} e^{- \lambda\phi^{2k}}=\int{d\sigma_1}e^{-\sigma_1^2/2+i\sqrt{\lambda}\phi^k\sigma_1} \end{equation} and \begin{equation} \label{gene} 2\sqrt{\lambda}\phi^k \sigma_1=[(\lambda^{\frac{1}{2k}}\phi \sigma_1+\lambda^{\frac{k-1}{2k}}\phi^{k-1})^2-\lambda^{\frac{1}{k}}\phi^2\sigma_1^2-\lambda^{\frac{k-1}{k}}\phi^{2k-2} ] . \end{equation} For the first term in the r.h.s.
we shall introduce another intermediate field $\sigma_2$ and we have: \begin{equation} e^{i(\lambda^{\frac{1}{2k}}\phi \sigma_1+\lambda^{\frac{k-1}{2k}}\phi^{k-1})^2}=\int d \sigma_2 e^{i(\lambda^{\frac{1}{2k}}\phi \sigma_1+\lambda^{\frac{k-1}{2k}}\phi^{k-1})\sigma_2} e^{-i\sigma_2^2 /2}. \end{equation} For the second term we have simply \begin{equation} e^{- i \lambda^{\frac{1}{k}}\phi^2\sigma_1^2}=\int d\sigma_3 e^{- i\lambda^{\frac{1}{2k}}\sigma_3 \phi \sigma_1} e^{i \sigma_3^2/2}. \end{equation} The third term has the potential $\phi^{2k-2}$, which means that we have the same type of interaction but with the degree of the potential lowered by $2$ and the coupling constant lowered by a factor $\lambda^{\frac{1}{k}}$. We can repeat this process inductively until the final form is at most linear in each of the final $[3(k-2) +1]$ intermediate fields $\sigma_i$, at most quadratic in $\phi$, and trilinear in all fields taken together, that is, in the field $\phi$ and all the intermediate fields. Remark that we can maintain imaginary factors throughout the induction, by using imaginary Gaussian integrals. Again we integrate out some of the intermediate fields and the initial field $\phi$. The result can always be written in the form (up to inessential normalization constants) \begin{equation} Z(\lambda) = \int \prod_r da_r \prod_s db_s\prod dc\ e^{i(a_1^2- a_2^2- a_3^2+ b_1^2- b_2^2- b_3^2 \pm c^2 \cdots)/2}e^{V} \end{equation} where \begin{equation} V=-\frac{1}{2}{\rm Tr}\ln[A+iH(\{a\},\{b\},\{c\},\ldots)]=-\frac{1}{2}{\rm Tr}\ln[G]. \label{fulmatrix} \end{equation} Here $A={\rm diag}(1,1, i, -i,\ldots)$ where the number of $1$'s depends on whether $k$ is even or odd: if $k$ is even, there is only one $1$ in $A$ and the other diagonal elements are $\pm i$; if $k$ is odd the first two diagonal elements are $1$'s and the other diagonal elements are $\pm i$. $a_i$, $b_i$, $c_i,\ldots$ represent the remaining intermediate fields. $H$ is a symmetric matrix with nonzero elements appearing only in the first row and the first column, for example: \begin{equation} H=\lambda^{\frac{1}{2k}}\begin{pmatrix} \lambda^{\frac{1}{2k}}g_1( a_i, b_i, c_i...) & g_2 ( a_i, b_i, c_i...) & g_3 ( a_i, b_i, c_i...) &... \\ g_2 ( a_i, b_i, c_i...) & 0 & 0 & ...\\ g_3( a_i, b_i, c_i...) & 0 & 0 & ...\\ ...&... &... & ...\\ \end{pmatrix}. \label{fulllinearmatrix} \end{equation} Here $g_j( a_i)$ is a sum of {\emph{linear}} terms in the intermediate fields that appear in the determinant. We take the $e^{-\lambda\phi^{8}}$ model for example. In this case $k=4$, so we associate to each field $\phi$ a coupling constant $\lambda^{\frac{1}{2k}}=\lambda^{1/8}$. The interaction form can also be written as \begin{equation} \int d \sigma d b_i d X e^{-X G X^t} e^{-\frac{1}{2}\sigma^2-\frac{i}{2}(a_1^2-a_2^2-a_3^2+b_1^2-b_2^2-b_3^2)} \end{equation} where \begin{equation} X=\begin{pmatrix} \phi, a_1, a_2, a_3 \end{pmatrix} \end{equation} and \begin{eqnarray} &&G=A+iH= \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & i & 0 & 0\\ 0 & 0 & -i & 0\\ 0 & 0 & 0 & -i\\ \end{pmatrix} \nonumber\\&+&i\lambda^{1/8}\begin{pmatrix} -\lambda^{1/8}(b_1-b_3) & -(b_1+\sigma) & \sigma & b_1-b_2 \\ -(b_1+\sigma) & 0 & 0 & 0\\ \sigma & 0 & 0 & 0\\ b_1-b_2& 0 & 0 & 0\\ \end{pmatrix}\nonumber \\ \end{eqnarray} where $a_i$, $b_i$ and $\sigma$ are the intermediate fields.
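As a quick consistency check of the induction step (\ref{gene}) in this $k=4$ case, one verifies by expanding the square that \begin{equation} 2\sqrt{\lambda}\,\phi^{4}\sigma_1=(\lambda^{1/8}\phi \sigma_1+\lambda^{3/8}\phi^{3})^2-\lambda^{1/4}\phi^2\sigma_1^2-\lambda^{3/4}\phi^{6}, \end{equation} the cross term indeed carrying $\lambda^{1/8+3/8}=\lambda^{1/2}$.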
It is not surprising that we have an element with a coupling constant $\lambda^{1/4}$ in the matrix, as this term corresponds to the interaction term $\phi^2 (b_1-b_3)$, and we associate to each $\phi$ a factor $\lambda^{1/8}$. We have a similar situation for all other $\phi^{2k}$ cases, see (\ref{fulllinearmatrix}). Next we consider a more complicated example, the $\exp(-\lambda\phi^{10})$ model. In this case we have $k=5$ and the coupling constant for each field $\phi$ is $\lambda^{1/10}$. The interaction form can be written as \begin{equation} \int d a_i d c_i d X e^{-X G X^t} e^{-\frac{i}{2}(a_1^2-a_2^2-a_3^2+c_1^2-c_2^2-c_3^2)}, \end{equation} where \begin{equation} X=\begin{pmatrix} \phi, \sigma, b_1, b_2, b_3 \end{pmatrix} \end{equation} and \begin{eqnarray} &&G=A+iH= \begin{pmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & i & 0 & 0\\ 0 & 0 & 0 & -i & 0\\ 0 & 0 & 0 & 0 & -i\\ \end{pmatrix}\\ &+&i\lambda^{1/10}\begin{pmatrix} -\lambda^{1/10}(c_1-c_3) & -(a_1-a_2) & -(a_1+\sigma) & a_1-a_3 & c_1-c_2 \\ -(a_1-a_2) & 0 & 0 & 0 &0\\ -(a_1+\sigma)& 0 & 0 & 0 & 0\\ a_1-a_3& 0 & 0 & 0 & 0\\ c_1-c_2& 0 & 0 & 0 & 0\\ \end{pmatrix},\nonumber \end{eqnarray} where $a_i$, $b_i$, $c_i$ and $\sigma$ are the intermediate fields. \begin{lemma} The inverse of the matrix $G$ exists and is bounded in norm by $\sqrt{2}$. \end{lemma} \begin{proof} The matrix $G=A+iH$ is a symmetric matrix that has nonzero elements only in the first row, the first column and on the diagonal. We have \begin{equation} G=A+iH=A({\mathds{1}}+iA^{-1}H) . \end{equation} As $A$ is a diagonal matrix whose elements are either $1$ or $\pm i$, the inverse of $A$ is bounded and has a similar structure. So in the following we consider only the inverse of the matrix ${\mathds{1}}+iA^{-1}H$. For a general $\phi^{2k}$ theory we have \begin{equation} iA^{-1}H= i \lambda^{\frac{1}{2k}}\begin{pmatrix} \lambda^{\frac{1}{2k}}d_1 & d_2 & d_3 &...&...&d_n\\ \pm i d_2 & 0 & 0 & ...&...&0\\ \pm i d_3 & 0 & 0 & 0 & ...& 0\\ ... & ... & ..&.... & ...&...\\ \pm i d_n & 0 & 0 & 0 &0 & 0\\ \end{pmatrix}, \end{equation} where $d_i$ denotes a generic element of $A^{-1}H$, which has nonvanishing elements only in the first row and first column. The matrix $A^{-1}H$ has only two nonvanishing eigenvalues, each of multiplicity 1 (for such an ``arrowhead'' matrix the characteristic polynomial is a power of $\omega$ times a quadratic polynomial in $\omega$). The exact formula for the eigenvalues depends also on whether $k$ is even or not. For $k$ even, we have \begin{eqnarray} \omega_{\pm}=\frac{-\lambda^{\frac{1}{2k}} d_1(1\pm \sqrt{1-\frac{4i\lambda^{-1/k}B}{d_1^2}})}{2} \end{eqnarray} where \begin{equation} B=\pm d_2^2 \pm d_3^2 \pm...\pm d_n^2 \end{equation} is a combination of the squares of the intermediate fields. When $k$ is odd, we have \begin{equation} \omega_{\pm}=\frac{-\lambda^{\frac{1}{2k}} d_1(1\pm \sqrt{1+\frac{4\lambda^{-1/k}}{d_1^2}(d_2^2-iB')})}{2} \end{equation} and in this case \begin{equation} B'= \pm d_3^2 \pm...\pm d_n^2 . \end{equation} Through some basic calculation we easily find that in both cases we have \begin{equation} | 1+i\omega_{\pm}|\geq \frac{1}{\sqrt{2}} . \end{equation} So we have \begin{equation} {\mathds{1}}+iA^{-1}H= P \begin{pmatrix} 1+i\omega_{+} & 0 & 0 & 0 &...& 0\\ 0 & 1+i\omega_{-} & 0 & 0 & ...&0\\ 0 & 0 & 1 & 0 &...& 0\\ ...& ... & ...& ... & ...&...\\ 0 & 0 & 0 & 0 &...& 1\\ \end{pmatrix} P^{-1} \end{equation} where $P$ is the diagonalizing matrix.
Therefore $ {\mathds{1}}+iA^{-1}H$ is invertible and its inverse has eigenvalues $(1+i\omega_{\pm})^{-1}$ and $1$, so that we have \begin{equation} \parallel [{\mathds{1}}+iA^{-1}H]^{-1} \parallel \leqslant \sqrt{2} . \end{equation} This proves the lemma. \end{proof} \subsection{The analytic domain and contour deformation} In the $\lambda\phi^{2k}$ theory, the analytic domain for the coupling constant $\lambda$ is \begin{equation} -\frac{(k-1)\pi}{2}\leqslant {\rm Arg\; } \lambda\leqslant\frac{(k-1)\pi}{2}. \end{equation} As for each $\phi$ we have a coupling constant $\lambda^{\frac{1}{2k}}$, and as each element of the matrix $G$ is linear in $\phi$, we have for each element $a_i$ in $G$ the relation: \begin{equation} -\frac{(k-1)\pi}{4k} \leqslant {\rm Arg\; } {a_i}\leqslant \frac{(k-1)\pi}{4k}. \end{equation} Similarly we can prove that the inverse of the matrix $G$ is bounded for all $\lambda$ in its analytic domain. To be more precise, we have \begin{equation} \vert 1+i\omega_{\pm} \vert \geq c\sin {\frac{\pi}{2k}} \end{equation} for $k$ either even or odd, with $c$ a small constant. So we have proved that the inverse of the matrix $G$ exists and is bounded. The contour deformations then proceed as in the previous section, since all the intermediate field integrals are of the same type, and we get again a bound of the type (\ref{contourbound}). \subsection{Borel summability} The proof of the Borel summability for the $\phi^{2k}$ theory is quite similar to that for the $\phi^6$ theory. We shall first of all consider the Borel summability for the partition function $Z$ and then for the connected function $\log Z$. \begin{theorem} The partition function of a field theory with potential $\lambda\phi^{2k}$ is Borel summable of order $k-1$. \end{theorem} \begin{proof} This theorem is easy and does not need any loop vertex expansion. In this case the analytic region for $\lambda$ becomes ${\cal{D}}^{k-1}=\{-\frac{(k-1)\pi}{2}<{\rm Arg}(\lambda)<\frac{(k-1)\pi}{2}\}$. The argument for analyticity and Taylor bounds is the same as above, replacing $2$ by $(k-1)$. \end{proof} \begin{theorem} The connected function $\log Z(\lambda)$ for the theory with potential $\lambda \phi^{2k}$ is Borel-Le Roy summable of order $k-1$. \end{theorem} \begin{proof} The argument is quite similar to the $\lambda\phi^6$ case and we need the loop vertex expansion. In (\ref{gene}) we have shown that the general potential $\lambda\phi^{2k}$ can always be expressed in terms of intermediate fields; after each step we have a new potential of the type $\phi^{2k-2}$. We can get the bound for the connected function with the same method as in the $\phi^6$ case. Now we consider the factorials. In the intermediate field expression, each intermediate field interacts linearly with a field $\phi$ carrying a coupling constant $\lambda^{\frac{1}{2k}}$, so after the expansion to $N$-th order in the coupling constant $\lambda$, and Wick contraction, we get a factor \begin{equation} \frac{(2kN)!!}{N!}\sim K^N\,[(k-1)N]!. \end{equation} Combining all the arguments above we find that the remainder of the Taylor expansion is bounded by \begin{equation} |\lambda|^{N+1}K^N\Gamma[(k-1)N+1], \end{equation} so the connected function is Borel-Le Roy summable of order $k-1$. \end{proof} \section{Conclusion and Perspectives} It is now clear that the traditional constructive tool of decomposing space into an \emph{ad hoc} lattice of cubes and performing cluster expansions is not fundamental and can be replaced by better techniques.
The loop vertex expansion \cite{R1,MR1} is the first of these. A different but related approach is proposed in \cite{GMR}. The fundamental idea of the loop vertex expansion is to decompose an interaction of arbitrary degree until trilinear or ``three body'' interactions are reached, since these are the most ``basic''. The basic objects are loops made out of a subfamily of the corresponding fields. The loops are made of resolvents, which are uniformly bounded in the case of stable interactions, and they are joined by explicit propagators into cacti structures. This technique reconciles constructive theory and the spirit of Feynman's perturbative theory. The essential mathematical problem of field theory is to iteratively compute connected functions in order to perform renormalization. In Feynman's graphical representation of field theory, connected functions were very easy to compute since they were written as sums of connected graphs, but the corresponding formulas have no mathematical meaning since the expansion diverges. In the loop vertex expansion formalism connected functions are still very easy to identify as they are written as sums of connected cacti, but these sums are now convergent, hence the corresponding formulas are mathematically meaningful. It will become increasingly necessary in our opinion to develop advanced constructive techniques such as loop vertex expansions to understand nonperturbatively new field theories such as non commutative field theories or group field theories of quantum gravity. Indeed these theories include non-local aspects which, to our knowledge, cannot be treated through lattice of cubes decompositions and traditional cluster expansions. A long road still lies ahead to validate this new constructive philosophy and to push it up to the level where we can reproduce with it all the previous results of the constructive literature of the last decades. This paper accomplished a small but significant step in showing that the decomposition into trilinear interactions works not solely for the $\phi^4$ interaction but also for interactions of any degree. But clearly the limitation to zero dimension must now be lifted. The three main steps ahead are the construction of models in a single renormalization group slice, the construction of matrix models with arbitrarily high degree interaction and correct scaling as the size of the matrix gets large, and finally the inclusion of renormalization. \subsection{Sliced $\phi^{2k}$ model in any dimension} We can easily generalize the loop vertex expansion method to a $\phi^{2k}$ theory in any dimension in a single renormalization group slice, by following \cite{MR1}. We only sketch the general idea in this paper, details being deferred to a future publication. For instance, in $D$ dimensions the propagator in a single renormalization group slice reads: \begin{equation} C_j(x, y)=\int_{M^{-2j}}^{M^{-2j+2}}e^{-\alpha m^2}e^{-(x-y)^2/{4\alpha}}\alpha^{-D/2} d\alpha\leqslant KM^{(D-2)j}e^{-cM^j|x-y|}. \end{equation} In $\phi^{2k}$ we associate to each field $\phi$ a coupling constant $\lambda^{1/2k}$ and an operator $D_j=C_j^{1/2}$.
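For the record, the stated bound on $C_j$ follows from two elementary estimates (a sketch, with generic constants $K$ and $c$): since $\alpha\le M^{-2j+2}$ one has $e^{-(x-y)^2/4\alpha}\le K e^{-cM^j|x-y|}$, while \begin{equation} \int_{M^{-2j}}^{M^{-2j+2}}\alpha^{-D/2}\, d\alpha \le K M^{-2j}\,(M^{-2j})^{-D/2}= K M^{(D-2)j}. \end{equation}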
We still have \begin{equation} \int d\mu_{C_j}(\phi)\,e^{-\lambda\int\phi^{2k}}=\int d \nu(\sigma_i)\,e^{-\frac{1}{2}{\rm Tr}\ln(A+iH)} \end{equation} where $A$ is the same as in the formula (\ref{fulmatrix}), and \begin{equation} H=\lambda^{\frac{1}{2k}}\begin{pmatrix} \lambda^{\frac{1}{2k}} D_j g_1( a_i, b_i, c_i...)D_j & g_2 ( a_i, b_i, c_i...)D_j & g_3 ( a_i, b_i, c_i...)D_j &... \\ D_jg_2 ( a_i, b_i, c_i...) & 0 & 0 & ...\\ D_jg_3( a_i, b_i, c_i...) & 0 & 0 & ...\\ ...&... &... & ...\\ \end{pmatrix}. \label{ful2d} \end{equation} We find that the form of $H$ is quite similar to the formula (\ref{fulllinearmatrix}), except that to each factor $\lambda^{\frac{1}{2k}}$ is now associated a factor $D_j$, and we require that the first column is the transposed conjugate of the first row, as the $D_j$ are now operators. Then the proof of the uniform Borel summability and the decay of connected functions should follow in the same way as in \cite{MR1}. \subsection{Matrix models} A very interesting property of loop vertex expansions is to allow uniform Borel summability theorems on matrix models with the right scaling of the Borel radius as the matrix gets large \cite{R1}. To extend this result to a single matrix model with $\phi^{2k}$ interaction and matrix of size $N$ we should prove Borel-Le Roy summability with a radius that scales like $N^{-(k-1)}$. This seems doable, but all the intermediate fields are now matrix-like and one should carefully control the normalization factors associated to the contour deformation of the corresponding fields in section \ref{contourdefo}. \subsection{Renormalization} This is the most difficult part. The first goal should be, e.g., to construct with the loop vertex expansion technique very simple models such as the $\phi^4_2$ model, which requires only Wick ordering. Then we expect the loop vertex expansion to apply to just renormalizable models such as infrared $\phi^4_4$ \cite{GK,FMRS}, and ultimately it should be a key tool for the hoped-for full construction of an interacting field theory in four dimensions, namely the Grosse-Wulkenhaar model \cite{GW1,RVW,DGMR,GW2}. \medskip \noindent{\bf Acknowledgments} We thank Jacques Magnen and Alan Sokal for useful discussions and suggestions.
\section{Introduction} A concept of vertex operator algebras $V=(V,Y,\1,\omega)$ was introduced by Borcherds \cite{B} with the purpose of proving the moonshine conjecture \cite{CN}, and then, as a stage for the conjecture, Frenkel, Lepowsky and Meurman \cite{FLMe} constructed the moonshine vertex operator algebra $V^{\natural}$ using a $\Z_2$-orbifold construction from the Leech lattice vertex operator algebra $V_{\Lambda}$. A vertex operator algebra (VOA for short) is now understood as an algebraic version of a two-dimensional conformal field theory (CFT for short). Among CFTs, a rational CFT has an important meaning because one can determine all representations precisely, where we interpret ``rational'' as meaning that it has only completely reducible modules. Therefore, it is important to find new VOAs whose modules are all completely reducible. One way to construct such candidates is an orbifold theory. It is the theory of the fixed point subVOA $V^{\sigma}$ given by an automorphism $\sigma$ of $V$ of finite order. For example, if all $V$-modules are completely reducible, then $V^{\sigma}$ is expected to have the same property. Furthermore, $V$ has a special module called a twisted module (see \cite{DLiMa}), which is a direct sum of $V^{\sigma}$-modules on which $V$ acts as a permutation in a sense. So there is a natural question whether every simple $V^{\sigma}$-module appears as a submodule of some $\sigma^j$-twisted (or ordinary) $V$-module. These statements are expected to be true \cite{DHVW}. In this paper, we will only treat a simple VOA $V=\oplus_{m=0}^{\infty}V_m$ of CFT-type with a nonsingular invariant bilinear form, that is, $V_0=\C \1$ and $V$ is isomorphic to its restricted dual $V'$. Besides the complete reducibility of modules, another important condition for VOAs is $C_2$-cofiniteness. This is defined by the condition that the subspace $$C_2(V):=<v_{-2}u \mid v,u\in V>_{\C}$$ of $V$ has finite codimension in $V$, where $v_m$ denotes the coefficient of $z^{-1-m}$ in the vertex operator $$Y(v,z)=\sum_{m\in \Z} v_mz^{-1-m}\in {\rm End}(V)[[z,z^{-1}]]$$ of $v\in V$. This assumption was introduced by Zhu \cite{Z} as a technical condition to prove an ${\rm SL}_2(\Z)$-invariance property of the space of trace functions of VOAs whose modules are completely reducible. However, as the author has shown in \cite{Mi1}, this is a natural condition from the viewpoint of representation theory (i.e. it is equivalent to the nonexistence of un-$\N$-gradable modules) and is enough to prove some kind of ${\rm SL}_2(\Z)$-invariance property. Moreover, the author has recently shown in \cite{Mi2} and \cite{Mi3} that if a $C_2$-cofinite VOA $V$ satisfies $V'\cong V$, then the fusion product preserves the exactness of sequences (see Proposition 1 (vii)). Proving $C_2$-cofiniteness is not easy, but it is essentially easier than proving rationality (the complete reducibility of all modules). Furthermore, when we want to prove rationality, we usually first classify all simple modules. However, if $V$ is $C_2$-cofinite and $V'\cong V$, then it is enough to show that $V$ is projective as a $V$-module (see Proposition 1 (vi)). As a corollary, we will prove the following theorem in \S 3.\\ \noindent {\bf Theorem A} \quad {\it Let $V$ be a simple VOA of CFT-type. Assume $V'\cong V$ and all $V$-modules are completely reducible. If $\sigma$ is an automorphism of $V$ of finite order and the fixed point subVOA $V^{\sigma}$ is $C_2$-cofinite, then all $V^{\sigma}$-modules are completely reducible.
Moreover, every simple $V^{\sigma}$-module appears as a $V^{\sigma}$-submodule of some $\sigma^j$-twisted (or ordinary) $V$-module. }\\ Let us give some examples of orbifold models. For every positive definite even lattice $L$, there is a VOA $V_L$ associated with $L$ called a lattice VOA. As is well known, all $V_L$-modules are completely reducible \cite{D}. If $\sigma$ acts on $L$ as $-1$, then all $V_L^{\sigma}$-modules were classified in \cite{Ys}. This result relies heavily on a wonderful result of Dong and Nagatomo \cite{DNa} about the fixed point subVOA of the free bosonic Fock space. Unfortunately, there is no such result for other automorphisms at the present time. The main object in this paper is an automorphism $\sigma$ of $L$ of order three. For a special lattice and automorphism, there is a classification of the simple modules \cite{TYa}. We will treat the general case. \\ \noindent {\bf Theorem B} \quad {\it Let $L$ be a positive definite even lattice and $V_L$ a lattice VOA associated with $L$. Let $\sigma\in {\rm Aut}(L)$ be of order three. We use the same notation for an automorphism of $V_L$ lifted from $\sigma$. Then the fixed point subVOA $V_L^{\sigma}$ is $C_2$-cofinite.}\\ At the end of this paper, as applications of these results, we will give two $\Z_3$-orbifold constructions as examples. One is a VOA which has the same character as $V^{\natural}$, and the other is a new meromorphic $c=24$ VOA, No.~32 in Schellekens' list \cite{S}.\\ \noindent {\bf Theorem C} \quad {\it Let $\Lambda$ be a positive definite even unimodular lattice of rank $N$ with an automorphism $\sigma$ of order three. Let $H=\Lambda^{\sigma}$ be the fixed point sublattice of $\Lambda$ and assume that $N-{\rm rank}(H)$ is divisible by $3$. Then we are able to construct a VOA $\widetilde{V}$ by a $\Z_3$-orbifold construction from the lattice VOA $V_{\Lambda}$. } \section{Preliminary results} \subsection{$C_2$-cofiniteness} In this subsection, we will assume only $C_2$-cofiniteness. All modules in this paper are finitely generated. \vspace{-4mm}\\ \begin{prn}\label{prn:C2} Let $V$ be a $C_2$-cofinite VOA. Then we have the following: \\ (i) Every module is $\Z_+$-gradable and weights of modules are in {\rm $\Q$, \cite{Mi1}}. \\ (ii) The number of inequivalent simple modules is finite, {\rm \cite{GN},\cite{DLiMa}}.\\ (iii) Set $V=B+C_2(V)$ for $B$ spanned by homogeneous elements. Then any module $W$ generated by one element $w$ has a spanning set $\{v^1_{n_1}....v^k_{n_k}w \mid v^i\in B, \quad n_1<\cdots <n_k\}$. Hence every f.g. $V$-module is $C_n$-cofinite for any $n=1,2,...$, {\rm \cite{Mi1},\cite{Bu},\cite{GN}}.\\ (iv) Every $V$-module has a projective cover. \\ (v) For any $V$-modules $W$ and $U$, a fusion product $W\boxtimes U$ is well-defined as a f.g. module. \\ (vi) If $V\cong V'$, then $V$ is projective if and only if all modules are completely reducible. \\ (vii) If $V\cong V'$, then for any exact sequence $0\to A\to B\to C\to 0$ and a module $W$, $0\to W\boxtimes A\to W\boxtimes B \to W\boxtimes C \to 0$ is still exact. See {\rm \cite{Mi2}, \cite{Mi3}} for (iv)$\sim$ (vii). \end{prn} \subsection{Intertwining operators} For $V$-modules $A,B,C$, let ${\cal I}_{A,B}^C$ be the set of (logarithmic) intertwining operators of $A$ from $B$ to $C$.
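For example, the module vertex operator $Y_W(\cdot,z)$ of any $V$-module $W$ is an element of ${\cal I}_{V,W}^{W}$; logarithmic intertwining operators generalize such operators by allowing a non-semisimple action of $L(0)$ and powers of $\log z$.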
Since $V$ is $C_2$-cofinite, each ${\cal Y}\in{\cal I}_{A,B}^C$ satisfies a differential equation with regular singular points and so there is $K\in \N$ such that ${\cal Y}$ has the form $${\cal Y}(a,z)=\sum_{i=0}^K \sum_{n\in \C} a_{(n,i)}z^{-n-1}\log^iz\in {\rm Hom}(B,C)\{z\}[\log z].$$ We note that $L(0)$ may not be semisimple. Let ${\rm wt}$ denote the semisimple part of $L(0)$. Set ${\cal Y}^{(m)}(a,z)=\sum_{n\in \C} a_{(n,m)}z^{-n-1}$. From the $L(-1)$-derivative property for ${\cal Y}$, we have two important properties: $$\begin{array}{rl} \displaystyle{{\cal Y}^{(m)}(a,z)=}&\displaystyle{\frac{1}{m!}(z\frac{d}{dz}-zL(-1))^m{\cal Y}^{(0)}(a,z)}, \qquad \mbox{ and }\cr \displaystyle{(i+1)a_{(n,i+1)}b =}&\displaystyle{-(L(0)\!-\!{\rm wt})a_{(n,i)}b+((L(0)\!-\!{\rm wt})a)_{(n,i)}b+a_{(n,i)}((L(0)\!-\!{\rm wt})b)} \end{array} $$ for $b\in B$. In particular, ${{\cal Y}}^{(K)}(\ast,z)$ is an ordinary intertwining operator (i.e. given by formal power series). One more important result is that $(L(0)-{\rm wt})W$ is a proper submodule of any $V$-module $W\not=0$. As Huang has shown in \cite{H1}, for $d'\in D', a\in A, b\in B, c\in C$ and intertwining operators ${\cal Y}_1\in {\cal I}_{A,E}^D$, ${\cal Y}_2\in {\cal I}_{B,C}^E$, ${\cal Y}_3\in {\cal I}_{F,C}^D$ and ${\cal Y}_4\in {\cal I}_{A,B}^F$, the formal power series (with logarithmic terms) $$\langle d',{\cal Y}_1(a,x){\cal Y}_2(b,y)c\rangle \quad\mbox{ and }\quad \langle d',{\cal Y}_3({\cal Y}_4(a,x-y)b,y)c\rangle $$ are absolutely convergent when $|x|>|y|>0$ and $|y|>|x-y|>0$, respectively, and can all be analytically extended to multi-valued analytic functions on $$M^2=\{(x,y)\in \C^2 \mid xy(x-y)\not=0 \}.$$ As he did, we are able to lift them to single-valued analytic functions $$ E(\langle d',{\cal Y}_1(a,x){\cal Y}_2(b,y)c\rangle ) \quad\mbox{ and }\quad E(\langle d',{\cal Y}_3({\cal Y}_4(a,x-y)b,y)c\rangle ) $$ on the universal covering $\widetilde{M^2}$ of $M^2$. As he remarked, single-valued liftings are not unique, but the existence of such functions is enough for our arguments. The important fact is that if we fix $A,B,C,D$, then these functions are given as solutions of the same differential equations. Therefore, for ${\cal Y}_1\in {\cal I}_{A,E}^D,{\cal Y}_2\in {\cal I}_{B,C}^E$ there are ${\cal Y}_5\in {\cal I}_{A\boxtimes B,C}^D$ and ${\cal Y}_6\in {\cal I}_{B,A\boxtimes C}^D$ such that $$\begin{array}{cl} E(\langle d',{\cal Y}_1(a,x){\cal Y}_2(b,y)c\rangle) =E(\langle d',{\cal Y}_5({\cal Y}_{A,B}^{\boxtimes}(a,x-y)b,y)c\rangle) &\mbox{ and}\cr E(\langle d',{\cal Y}_1(a,x){\cal Y}_2(b,y)c\rangle) =E(\langle d',{\cal Y}_6(b,y){\cal Y}_{A,C}^{\boxtimes}(a,x)c\rangle), & \end{array}$$ where ${\cal Y}_{A,B}^{\boxtimes}$ denotes an intertwining operator defining the fusion product $A\boxtimes B$. \subsection{VOA whose modules are all completely reducible} In this subsection, we will explain very important known properties of a simple $C_2$-cofinite VOA $V$ whose modules are all completely reducible and $V\cong V'$. Let $N$ be the central charge of $V$ and $\{V\cong W^0,\cdots,W^s\}$ the set of all simple $V$-modules.\\ \noindent {\bf 2.3.1 Zhu's modular invariance property}\\ For $v\in V_m$ with $L(1)v=0$, we consider a trace function $$ T_{W^i}(v;\tau)={\rm Tr}_{W^i} o(v)q^{L(0) -N/24}, $$ where $q=e^{2\pi \sqrt{-1}\tau}$ and $o(v)=v_{{\rm wt}(v)-1}$ is a grade-preserving operator of $v$ on $W^i$ (note that ${\rm wt}(v_mw)={\rm wt}(v)+{\rm wt}(w)-m-1$, so $o(v)$ indeed preserves weights).
In particular, the trace function of the vacuum $\1\in V$ $$ T_{W^i}(\1;\tau)=q^{-N/24}\sum_{m} \dim (W^i_m) q^m $$ is called the character of $W^i=\oplus_m W^i_m$ and we denote it by $\ch(W^i)$. As Zhu has shown in \cite{Z}, these functions are well-defined in the upper half plane ${\cal H}=\{\tau\in \C \mid {\rm Im}(\tau)>0\}$. A wonderful property of these VOAs is an ${\rm SL}_2(\Z)$-invariance property. Namely, there are $s_{ij}\in \C$ which do not depend on $v$ such that $$(1/\tau)^{{\rm wt}(v)}T_{W^i}(v;-1/\tau)=\sum_{j=0}^s s_{ij}T_{W^j}(v;\tau).$$ We will call the transformation on the left side the $S$-transformation of $T_{W^i}$ and the matrix $S=(s_{ij})_{i,j=0,\ldots,s}$ the $S$-matrix of $V$. \\ \noindent {\bf 2.3.2 Dong, Li and Mason's modular invariance property}\\ Dong, Li and Mason \cite{DLiMa} extended the above result to the case where they also consider an automorphism $\sigma$ of order $n$. For $g,h\in <\sigma>$, they introduced a concept of $g$-twisted $h$-stable modules $W$. Let us explain a $\sigma$-twisted module briefly. See \cite{DLiMa} for the details. Decompose $$V=\oplus_{i=0}^{n-1} V^{(i)}, \quad \mbox{ where } V^{(i)}= \{v\in V\mid \sigma(v)=e^{2\pi \sqrt{-1} i/n}v\}.$$ Clearly, $V^{(0)}$ is a subVOA. A simple $\sigma$-twisted module $W$ has a grading $W=\oplus_{m=0}^{\infty} W_{r+m/n}$ such that every $W^{(i)}=\oplus_{k=0}^{\infty} W_{r+i/n+k}$ is a simple $V^{(0)}$-module, where $W_t$ denotes the space consisting of elements of weight $t$. Furthermore, there is $r\in \Z$ such that $$v_t(W^{(i)})\subseteq W^{(i+rp)}$$ for $v\in V^{(p)}$ and $t\in \Q$, where we consider $W^{(k)}=W^{(h)}$ if $k\equiv h \pmod{n}$. We will call $W^{(i)}$ a $V^{(0)}$-sector of $W$. On the other hand, the definition of an $h$-stable module $W$ is $h\circ W\cong W$ as $V$-modules, where $v_n(h\circ w)=h\circ (h(v))_n w$ for $w\in W$. This implies the existence of an endomorphism $\phi(h)$ of $W$ such that $\phi(h)^{-1}v_n\phi(h)=(h(v))_n$ for $n\in \Z$ and $v\in V$. For such a module, they consider a trace function $$ T_W(h,v;\tau)={\rm Tr}_{W} \phi(h)o(v)q^{L(0)-c/24} $$ of $\phi(h)o(v)$ on a $g$-twisted $h$-stable module $W$ for $v\in V_m$ with $L(1)v=0$. Then they have shown that for $A=\begin{pmatrix}a&b\cr e&f\end{pmatrix}\in {\rm SL}_2(\Z)$, the $A$-transformation $$ (e\tau+f)^{-m} T_{W}(h,v;\frac{a\tau+b}{e\tau+f})$$ of $T_W(h,v;\tau)$ is a linear sum of trace functions of $\phi(g^bh^f)o(v)$ on $g^ah^e$-twisted $g^bh^f$-stable modules, under the assumption that all twisted modules are completely reducible. In particular, $(\frac{1}{\tau})^mT_V(\sigma,v;-1/\tau)$ is a linear sum of trace functions of $o(v)$ on $\sigma$-twisted modules. An interesting case is when $V$ has exactly one simple module, namely $V$ itself. In this case, $V$ has only one simple $\sigma^i$-twisted module $V(\sigma^i)$ for each $i$ and so they are all $\sigma^j$-stable. If the weights of elements in $V(\sigma^i)$ are in $\frac{1}{n}\Z$, then a homomorphism $\phi(\sigma)$ for a $\sigma$-stable module is given by $\phi(\sigma)=e^{2\pi r\sqrt{-1}L(0)}$, since $v_t(W^{(i)})\subseteq W^{(i+rp)}$ for $v\in V^{(p)}$ for some $r\in \Z$. \\ \noindent {\bf 2.3.3 Moore-Seiberg and Huang's Verlinde formula}\\ By the assumption, $W^i\boxtimes W^j$ decomposes into the direct sum of simple modules: $$W^i\boxtimes W^j\cong \oplus_k (\underbrace{W^k\oplus \cdots \oplus W^k}_{N_{i,j}^k}).$$ We call $N_{i,j}^k$ a fusion rule.
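For example, for a lattice VOA $V_L$ the simple modules are the $V_{L+\gamma}$ with $\gamma\in L^{\circ}/L$, where $L^{\circ}$ denotes the dual lattice of $L$, and the fusion products are simply $V_{L+\gamma}\boxtimes V_{L+\mu}\cong V_{L+\gamma+\mu}$, so that all fusion rules are $0$ or $1$ in this case.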
A mysterious property of a $C_2$-cofinite VOA whose modules are all completely reducible is a relation between the fusion rules and the entries of the $S$-matrix of $V$, which was mentioned by Verlinde \cite{V} and proved by Moore-Seiberg \cite{MoS} and Huang \cite{H2}. For example, the following is Corollary 5.4 in \cite{H2}. \\ \noindent {\bf Theorem} \qquad {\it Let $V$ be a $C_2$-cofinite VOA of CFT-type and assume that $V\cong V'$ and all modules are completely reducible. Then $S$ is symmetric and the square $S^2$ is a permutation matrix which shifts $i$ to $i'$, where $W^0=V$ and $W^{k'}=(W^k)'$. Moreover, we have: $$\mbox{}\qquad \qquad \qquad N_{i,j}^k=\sum_{h=0}^s \frac{S_{ih}S_{jh}S_{hk'}}{S_{0h}} \qquad \qquad \hfill \mbox{{\rm (Verlinde Formula)}}.$$} \section{Proof of Theorem A} Let $\sigma$ be an automorphism of $V$ of order $n$ and set $V^{(i)}=\{v\in V \mid \sigma(v)=e^{2\pi \sqrt{-1}i/n}v \}$. Then $V=\oplus_{i=0}^{n-1} V^{(i)}$, $V^{(0)}$ is a subVOA of $V$, and the $V^{(i)}$ are all simple $V^{(0)}$-modules. We use $v^{(i)}$ to denote an element in $V^{(i)}$ and set $U=V^{(0)}$. We will first show that $U$-modules are completely reducible. It is enough to show that $U$ is projective as a $U$-module by Proposition \ref{prn:C2} (vi). Suppose false and let $0\to B\to P\xrightarrow{\phi} U\to 0$ be a non-split extension of $U$. Let $q\in P$ be such that $\phi(q)=\1\in U$. Viewing $V$ as a $U$-module, we define a fusion product $W=V\boxtimes_U P$ and let $I^{\boxtimes P}\in {\cal I}_{V,P}^{V\boxtimes P}$ and $I^{\boxtimes B}\in {\cal I}_{V,B}^{V\boxtimes B}$ be the intertwining operator defining $V\boxtimes_UP$ and its restriction to $B$, respectively. We may assume $I^{\boxtimes P}(\1,z)p=p$ for $p\in P$. We set $W^{(i)}=V^{(i)}\boxtimes_U P$. We note $W=W^{(0)}\oplus \cdots \oplus W^{(n-1)}$ and $W^{(0)}=P$. Similarly, we set $R=V\boxtimes_U B$ and $R^{(i)}=V^{(i)}\boxtimes_U B$. By the flatness property, $R^{(i)}$ is a submodule of $W^{(i)}$ for each $i$. As we explained in \S 2.2, there is ${\cal Y}\in {\cal I}_{V,W}^W$ such that $$ E(\langle w', {\cal Y}(v,z_1)I^{\boxtimes P}(w,z_2)p\rangle) =E(\langle w', I^{\boxtimes P}(Y(v,z_1-z_2)w,z_2)p\rangle)$$ for $v,w\in V$, $w'\in W'$ and $p\in P$. From the definition of ${\cal Y}$ and the Commutativity of vertex operators of $V$, we have $$ \begin{array}{rl} E(\langle w',{\cal Y}(v^1,z_1){\cal Y}(v^2,z_2)I^{\boxtimes P}(w,z)p\rangle) =&\!E(\langle w',{\cal Y}(v^1,z_1)I^{\boxtimes P}(Y(v^2,z_2-z)w,z)p\rangle)\cr =&\!E(\langle w',I^{\boxtimes P}(Y(v^1,z_1-z)Y(v^2,z_2-z)w,z)p\rangle)\cr =&\!E(\langle w',I^{\boxtimes P}(Y(v^2,z_2-z)Y(v^1,z_1-z)w,z)p\rangle)\cr =&\!E(\langle w',{\cal Y}(v^2,z_2){\cal Y}(v^1,z_1)I^{\boxtimes P}(w,z)p\rangle) \end{array}$$ for $v^1,v^2\in V$, which implies the Commutativity of $\{{\cal Y}(v,z) \mid v\in V\}$. We also have $$ \begin{array}{rl} E(\langle w',{\cal Y}(v^1,z_1){\cal Y}(v^2,z_2)I^{\boxtimes P}(w,z)p\rangle) =&\!E(\langle w',I^{\boxtimes P}(Y(v^1,z_1-z)Y(v^2,z_2-z)w,z)p\rangle)\cr =&\!E(\langle w',I^{\boxtimes P}(Y(Y(v^1,z_1-z_2)v^2,z_2-z)w,z)p\rangle)\cr =&\!E(\langle w',{\cal Y}(Y(v^1,z_1-z_2)v^2,z_2)I^{\boxtimes P}(w,z)p\rangle).
\end{array}$$ Furthermore, taking $w=\1$, we obtain ${\cal Y}(v,z)p=I^{\boxtimes P}(v,z)p$ for $v\in V, p\in P$ since $$\begin{array}{rl} E(\langle w',{\cal Y}(v,z_1)p\rangle)=&E(\langle w',{\cal Y}(v,z_1)I^{\boxtimes P}(\1,z_2)p\rangle) =E(\langle w',I^{\boxtimes P}(Y(v,z_1-z_2)\1,z_2)p\rangle)\cr =&E(\langle w',I^{\boxtimes P}(e^{(z_1-z_2)L(-1)}v,z_2)p\rangle) =E(\langle w',I^{\boxtimes P}(v,z_2+z_1-z_2)p\rangle)\cr =&E(\langle w',I^{\boxtimes P}(v,z_1)p\rangle). \end{array}$$ By definition of ${\cal Y}$, $V\boxtimes_U B$ is a ${\cal Y}$-invariant submodule and ${\cal Y}(v^{(i)},z)W^{(j)}\subseteq W^{(i+j)}\{z\}[\log z]$ for $v^{(i)}\in V^{(i)}$. We first prove one general result. \vspace{-4mm}\\ \begin{prn}\label{prn:simplecurrent} $V^{(i)}$ is a simple current as a $V^{(0)}$-module, that is, $V^{(i)}\boxtimes_{V^{(0)}} D$ is simple for any simple $V^{(0)}$-module $D$. \end{prn} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad Set $Q=V\boxtimes_U D$ and $Q^{(i)}=V^{(i)}\boxtimes_U D$. For $Q$, we will use the same arguments as above. Suppose that $Q^{(i)}$ contains a nonzero proper submodule $S$. Then $S^{\perp}\cap (Q^{(i)})'\not=0$ and so we have $$\begin{array}{rl} E(\langle d',{\cal Y}(v^{(i)},z_1){\cal Y}(v^{(n-i)},z)s\rangle)=&E(\langle d',{\cal Y}(Y(v^{(i)},z_1-z)v^{(n-i)},z)s\rangle) \cr =&E(\langle d',Y(Y(v^{(i)},z_1-z)v^{(n-i)},z)s\rangle)=0 \end{array}$$ for $d'\in S^{\perp}$ and $s\in S$ since $Y(v^{(i)},z)v^{(n-i)}\in U\{z\}[\log z]$. On the other hand, since $E(\langle q',{\cal Y}(v^{(i)},z_1){\cal Y}(v^{(n-i)},z)s\rangle)= E(\langle q',{\cal Y}(Y(v^{(i)},z_1-z)v^{(n-i)},z)s\rangle)\not=0$ for some $q'\in (Q^{(i)})', v^{(i)}\in V^{(i)}$ and $v^{(n-i)}\in V^{(n-i)}$, the coefficients in $\{{\cal Y}(v^{(n-i)},z)s\mid s\in S, v^{(n-i)}\in V^{(n-i)}\}$ span $D$ and so those in $\{ {\cal Y}(v^{(i)},z_1){\cal Y}(v^{(n-i)},z)s\mid v^{(i)}\in V^{(i)}, v^{(n-i)}\in V^{(n-i)} \}$ span $Q^{(i)}$. Therefore, we have a contradiction. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} Let us go back to the proof of Theorem A. We note that $R^{(i)}=V^{(i)}\boxtimes_U B$ is simple by the above proposition. If some $R^{(i)}$ does not have integer weights, say, $R^{(1)}$, then $W^{(1)}=V^{(1)}\oplus R^{(1)}$ as $U$-modules. By the same arguments as above, the space spanned by the coefficients in $\{{\cal Y}(v^{(n-1)},z)V^{(1)}\mid v^{(n-1)}\in V^{(n-1)}\}$ is a nonzero $U$-submodule of $P$, but does not contain $B$, which contradicts the choice of $P$. We have hence obtained that all elements in $R$ and $W$ have integer weights. If ${\cal Y}(v,z)$ has no $\log z$-terms, then ${\cal Y}(v,z)$ is an integer power series and so the above Commutativity implies that $W$ is a $V$-module. However, since all $V$-modules are completely reducible, $0\to V\boxtimes B\to V\boxtimes P\to V\to 0$ has to split and so does $0\to B\to P\to U\to 0$, which contradicts the choice of $P$. Therefore, ${\cal Y}(v,z)$ has $\log z$-terms for some $v\in V$. Then $(L(0)-{\rm wt})W\not=0$ and so $(L(0)-{\rm wt})W=R$. Then $R\cong W/R\cong V$ as $V$-modules and so ${\cal Y}$ has the form $$ {\cal Y}(v, z)=\sum_{n\in \Z} v_nz^{-n-1} + \sum_{n\in \Z} v_{n,1}z^{-n-1} \log z$$ on $W$ with $I(v,z):=\sum v_{n,1}z^{-n-1}\in {\cal I}_{V,V}^V$. By the above arguments, $I(\ast,z)$ is an intertwining operator of $V$ from $W/R\cong V$ to $R\cong V$ and so ${\cal Y}(\1,z)=1+\lambda (L(0)-{\rm wt})\log z$ for some $0\not=\lambda\in \C$, which contradicts the fact that $P$ is a $U$-module.
It remains to show the following: \vspace{-4mm}\\ \begin{lmm}\label{lmm:allappear} Every simple $U$-module appears in a $\sigma^j$-twisted (or ordinary) $V$-module as a $U$-submodule for some $j$. \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad Let $\{W^0,...,W^s\}$ be the set of all simple $U$-modules. For $u\in U_m$ with $L(1)u=0$, let us consider trace functions $$T_{V}(\sigma^i,u;\tau)= {\rm Tr}_V \sigma^i o(u) q^{L(0)-c/24}. $$ Then, by the modular invariance property of the space of trace functions on twisted modules \cite{DLiMa}, $(1/\tau)^mT_V(\sigma^i,u;-1/\tau)$ is a linear combination of trace functions on $\sigma^i$-twisted modules. On the other hand, we clearly have $ \sum_{i=0}^{n-1} T_{V}(\sigma^i,u;\tau)=nT_U(u;\tau)$ and Theorem 5.5 in \cite{H2} implies that there are $0\not=\lambda_i\in \C$ for every $i=0,\ldots,s$ such that $$(1/\tau)^mT_U(u;-1/\tau)=\sum_{i=0}^s \lambda_i T_{W^i}(u;\tau). $$ Since, as Zhu has shown in \cite{Z}, $\{T_{W^i} \mid i=0,\ldots,s \}$ is a linearly independent set, the above facts imply that the set of $U$-composition factors of the (twisted) $V$-modules covers all simple $U$-modules. This completes the proof of Theorem A. \section{Proof of Theorem B} We will show that $V_L^{\sigma}$ is $C_2$-cofinite for any triality automorphism $\sigma$ of $L$. Abusing notation, we use the same letter to denote an automorphism of $V_L$ lifted from $\sigma$. \subsection{Preliminary results} Let us explain the definition of the lattice VOA $V_L$; we will give only the properties necessary for the proof of Theorem B. See \cite{FLMe} for the precise definition. We first reduce the problem. One advantage in proving $C_2$-cofiniteness is that the setting can be simplified. For example, it is enough to show that $V_H^{\sigma}$ is $C_2$-cofinite for some $\sigma$-invariant full-sublattice $H$ of $L$, since $V_L^{\sigma}$ is a $V_H^{\sigma}$-module with a composition series of finite length, see Proposition \ref{prn:C2}. Since every lattice VOA is $C_2$-cofinite and $V^1\otimes V^2$ is $C_2$-cofinite when both $V^i$ are $C_2$-cofinite, it is enough to prove the $C_2$-cofiniteness of $V_L^{\sigma}$ for $L=\Z x+\Z y$ with $y=\sigma(x)$ and $-x-y=\sigma^2(x)$. By taking a sublattice, we may assume that $\langle x,x\rangle=-2\langle x,y\rangle=18M>72$ and $M$ is even. Let us recall the construction of the lattice VOA $V_L$ for $L=\Z x+\Z y$. Using the $2$-dimensional vector space $\C L=\C\otimes_{\Z}L$ with a nonsingular bilinear form $\langle\cdot,\cdot\rangle$, we first define a VOA $M_2(1):=S(\C L\otimes \C[t^{-1}]t^{-1})$ of free bosonic Fock space as follows: Considering $\C L$ as a commutative Lie algebra with a non-degenerate symmetric bilinear form, we construct the corresponding affine Lie algebra $\C L[t,t^{-1}]\oplus \C$ with the product $$ [v\otimes t^n, u\otimes t^m]=\delta_{n+m,0}n\langle v,u\rangle, $$ for $v,u\in \C L$. Hereafter we use $v(n)$ to denote $v\otimes t^n$. We then consider its universal enveloping algebra $U(\C L[t,t^{-1}])$. It has a commutative subalgebra $U(\C L[t])$. Using a one-dimensional $U(\C L[t])$-module $\C e^{\gamma}$ with $$ \mu(0)e^{\gamma}=\langle \mu,\gamma\rangle e^{\gamma}, \quad \mbox{ and }\quad \mu(n)e^{\gamma}=0 \mbox{ for }n>0 $$ for $\mu\in \C L$, we define a $U(\C L[t,t^{-1}])$-module: $$ M_2(1)e^{\gamma}:=U(\C L[t,t^{-1}])\otimes_{U(\C L[t])}\C e^{\gamma}.$$ As vector spaces, $M_2(1)e^{\gamma}\cong S(\C L[t^{-1}]t^{-1})$, the symmetric algebra.
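For instance, directly from the commutation relations one computes, for $\mu,\nu\in\C L$ and $n>0$, $$ \mu(n)\nu(-n)e^{\gamma}=[\mu(n),\nu(-n)]e^{\gamma}=n\langle \mu,\nu\rangle e^{\gamma}, \qquad \mu(0)\nu(-n)e^{\gamma}=\langle \mu,\gamma\rangle \nu(-n)e^{\gamma}. $$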
Since $L=\Z x+\Z y$, $M_2(1)e^{\gamma}$ is spanned by $$\{x(-i_1)\cdots x(-i_k)y(-j_1)\cdots y(-j_h)e^{\gamma}\mid i_1\geq \cdots\geq i_k>0, j_1\geq\cdots\geq j_h>0\} $$ and we define the weight of such an element by $\sum i_s+\sum j_t+\frac{\langle \gamma,\gamma\rangle}{2}$. Forgetting about the vertex operators, a lattice VOA associated with $L$ is defined as a vector space $$V_L=\oplus_{\gamma\in L} M_2(1)e^{\gamma}.$$ We introduce an $\N$-gradation on $V_L=\oplus_{m\in \N} (V_L)_m$ by weights and let $(V_L)_m$ be the space of elements of weight $m$. We next define a vertex operator $Y(u,z)=\sum u_mz^{-m-1}\in {\rm End}(V_L)[[z,z^{-1}]]$ of $u\in V_L$ as follows: \\ First, the vertex operator of $v(-1)e^0$ for $v\in \C L$ and that of $e^{\gamma}$ are defined by $$ \begin{array}{rl} Y(v(-1)e^0,z):&=\sum_{n\in \Z} v(n)z^{-n-1} \qquad \qquad \mbox{ and}\cr Y(e^{\gamma},z):&=E^-(-\gamma,z)E^+(-\gamma,z)e^{\gamma}z^{\gamma}, \end{array}$$ where $$\begin{array}{l} E^{\pm}(\gamma,z)=\sum_{n=0}^{\infty}\frac{1}{n!}(\sum_{m\in {\Z}_+} \frac{\gamma(\pm m)}{\pm m}z^{\mp m})^n \in {\rm End}(V_L)[[z^{\mp 1}]], \cr e^{\gamma}e^{\mu}=e^{\gamma+\mu} \qquad \mbox{ and }\qquad z^{\gamma}e^{\mu} =z^{\langle \gamma,\mu\rangle}e^{\mu}. \end{array}$$ The vertex operators of other elements are inductively defined by using the normal products $$(v(-m)u)_n= \displaystyle{\sum_{i=0}^{\infty}}(-1)^i\binom{-m}{i}\{v(-m-i)u_{n+i}\!-\!(-1)^mu_{-m+n-i}v(i)\} \eqno{(2)}$$ for $v\in \C L$ and $u\in M_2(1)e^{\gamma}$, where $\binom{-m}{i}=\frac{(-m)(-m-1)\cdots(-m-i+1)}{i!}$, and then we extend them linearly. We will frequently use this normal product expansion $(2)$. From now on, we use the notation $\1$ to denote $e^0$ and call it the vacuum. Important properties of the vacuum $\1$ are $v_n\1=0$ and $v_{-1}\1=v$ for any $n\geq 0$ and $v\in V$. We also denote $M_2(1)e^0$ by $M_2(1)$. Let $\{x^{\ast},y^{\ast}\}$ be a dual basis of $\C L$ for $\{x,y\}$. Then we have a Virasoro element $\omega=\frac{1}{2}\{x(-1)x^{\ast}(-1)\1+y(-1)y^{\ast}(-1)\1\}$. We denote the operator $\omega_m$ by $L(m-1)$. Important properties of a Virasoro element are $$(L(-1)v)_m=-mv_{m-1} \quad \mbox{ and }\quad L(0)v={\rm wt}(v)v.$$ For a VOA $V$ and its module $W$, we set: $$\begin{array}{l} C_m(V):=<v_{-m}u \mid v,u\in \oplus_{k\geq 1}V_k >_{\C}\qquad \mbox{ and} \cr C_2(V)W:=<v_{-2}w \mid v\in V, w\in W>_{\C}. \end{array}$$ \subsection{In the free bosonic Fock space} Viewing $\C L$ as a $\C[\sigma]$-module, we have $\C L=\C a\oplus \C a'$ with $\sigma(a)=e^{2\pi \sqrt{-1}/3}a$ and $\sigma(a')=e^{-2\pi \sqrt{-1}/3}a'$ and $\langle a,a'\rangle=1$. Then $u=a(-i_1)\cdots a(-i_h)a'(-j_1)\cdots a'(-j_k)\1$ is $\sigma$-invariant if and only if $h-k\equiv 0 \pmod{3}$. We note that $\omega=a(-1)a'=a'(-1)a$ is the Virasoro element. Set $${\cal P}_k=U(\C L[t,t^{-1}])\C L[t]t^k,$$ which are left ideals of $U(\C L[t,t^{-1}])$. Then ${\cal P}_0e^0=0$ and ${\cal P}_1e^{\gamma}=0$ for $\gamma\in L$. From now on, $\equiv$ denotes the congruence relation modulo $C_2(M_2(1)^{\sigma})M_2(1)$. First of all, we note the following lemma, which comes from the normal product (2). \vspace{-4mm}\\ \begin{lmm}\label{lmm:reduction} Let $k=1$ or $2$ and $v\in M_2(1)$. For any $n,m>0$, there are $\lambda_{s,t}\in \Q$ such that $$ a(-n)a'(-m)v-\sum_{s,t>0} \lambda_{s,t}(a(-s)a'(-t)\1)_{-k}v \in {\cal P}_{1-k}v.
$$ For any $l,m,n\in \Z$ with $n\geq k$, there are $\lambda_{s,t,u}\in \Q$ such that $$a(-l)a(-m)a(-n)v -\sum \lambda_{s,t,u}(a(-s)a(-t)a(-u)\1)_{-k}v \in {\cal P}_{1-k}v.$$ \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad We will prove only the second case. Since $$\begin{array}{rl} (a(-i)\gamma)_{-k}v=& \sum_{j=0}^{\infty}\binom{-i}{j}(-1)^{j}\{a(-i-j)\gamma_{-k+i}-(-1)^i\gamma_{-i-k-j}a(j)\}v\cr \in & \sum_{j=0}^{\infty}\binom{-i}{j}(-1)^{j}a(-i-j)\gamma_{-k+i}v+{\cal P}_{0}v, \end{array}$$ there are $\lambda_i, \lambda_{ij}\in \Q$ such that $$\begin{array}{l} (a(-s)a(-t)a(-u)\1)_{-k}\in \sum_{i=0}^{\infty}\lambda_i a(-s-i)\{a(-t)a(-u)\1\}_{-k+i}+{\cal P}_{0} \cr \mbox{}\qquad =\sum_{i=0}^{\infty}\lambda_{i,j} a(-s-i)\sum_{j=0}^{\infty}a(-t-j)\{a(-u)\1\}_{-k+i+j}+{\cal P}_{0} \cr \mbox{}\qquad =\sum_{i=0}^{\infty}\lambda_{i,j} a(-s-i)\sum_{j=0}^{\infty}a(-t-j)a(-u-k+1+i+j)+{\cal P}_{0}. \end{array}$$ We hence have that $a(-l)a(-m)a(-n)v-(a(-l)a(-m)a(-n+k-1)\1)_{-k}v$ is congruent to a $\Q$-linear combination of $$\{a(-s)a(-t)a(-u)v \quad \mbox{ with } 0<u<n, s\geq l, t\geq m\}$$ modulo ${\cal P}_{-k+1}v$. Iterating these steps, we can reduce them to the case where $u<k$. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} We next explain an expression which we will use. As a spanning set of $V_L^{\sigma}$, we usually use elements of the form $$ \mu=\sum_{t=0}^2\sigma^t(a(-i_1)\cdots a(-i_h)a'(-j_1)\cdots a'(-j_k)e^{\gamma}) \qquad \mbox{ with } i_s,j_t>0, $$ but we will also permit the use of $a(0)$ and $a'(0)$, so that $$ \mu=\frac{1}{\langle a,\gamma\rangle^s \langle a',\gamma\rangle^t}a(-i_1)\cdots a(-i_h)a'(-j_1) \cdots a'(-j_k)a(0)^sa'(0)^t(\sum_{i=0}^2e^{\sigma^i(\gamma)}), \eqno{(3)}$$ where $ a(-i_1)\cdots a(-i_h)a'(-j_1)\cdots a'(-j_k)a(0)^sa'(0)^t$ is $\langle \sigma\rangle$-invariant and $a'(m)^n$ denotes $\underbrace{a'(m)\cdots a'(m)}_n$. From now on, $E^{\gamma}$ denotes $\sum_{i=0}^2e^{\sigma^i(\gamma)}$ and we will call $h$ and $k$ the numbers of $a$-terms and $a'$-terms, respectively. \subsection{Modulo $C_2(M_2(1)^{\sigma})$} For $u=a(-i_1)\cdots a(-i_h)a'(-j_1)\cdots a'(-j_k)\1$, if at least one of $\{i_s, j_t\mid s=1,...,h, t=1,...,k\}$ is not $1$, then we call $u$ a weight loss element. Set $${\cal S}_2=\{a(-1)^{i}a'(-1)^j\1, a(-i)a(-j)a, a'(-i)a'(-j)a', a(-i)a'(-1)^2a, a(-i)a', \1 \mid i,j\in \N\}.$$ \begin{prn}\label{prn:reduction} Let $u=a(-i_1)\cdots a(-i_h)a'(-j_1)\cdots a'(-j_k)\1$ be a weight loss element.\\ If $|h-k|\geq 4$, then $u \in C_2(M_2(1)^{\sigma})$. \\ If $h-k=3$, then $u\in <a(-i_1)a(-i_2)a\mid i_1,i_2\in \N>_{\C}+C_2(M_2(1)^{\sigma})$. \\ If $h=k$, then $u\in <a(-{\rm wt}(u)+3)a'(-1)^2a, a(-{\rm wt}(u)+1)a'>_{\C}+C_2(M_2(1)^{\sigma})$. \\ In particular, we have \quad$\displaystyle{M_2(1)^{\sigma}=C_2(M_2(1)^{\sigma})+<{\cal S}_2>_{\C}}$. \end{prn} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad We will prove the last statement. The others come from the same arguments. We first note that $\omega_0\beta=\beta_{-2}\1\in C_2(M_2(1)^{\sigma})$ for $\beta\in M_2(1)^{\sigma}$ and that $a(-h)a(-k)a(-m)\1$ is congruent to a linear sum of elements of type $a(-i)a(-j)a$ since $\omega_0(a(-r_1)\cdots a(-r_k)\1)=\sum_{i=1}^k r_i a(-r_1)\cdots a(-r_i-1)\cdots a(-r_k)\1$. Suppose $h-k\geq 4$ and $u\not\in C_2(M_2(1)^{\sigma})+<{\cal S}_2>_{\C}$. We take $u$ such that the total number $h+k$ is minimal. At least one of $i_s,j_t$ is not $1$. Since $h\geq 4$, by using suitable triple terms of $a$, we may assume $i_1=1$ by Lemma \ref{lmm:reduction}.
Then by choosing other suitable triple $a$-terms, we may also assume $i_2=1$ and then $i_3=2$. Then $$2u -(a(-1)a(-1)a)_{-2}a(-i_4)\cdots a(-i_h)a'(-j_1)\cdots a'(-j_k)\1$$ is congruent to a linear sum of elements whose total number of terms is less than $h+k$, which contradicts the choice of $u$. We next treat the case $h-k=3$. By applying the same arguments to $a(-n)a'(-m)$, we can reduce to the case $h=3$ and $k=0$, as desired. If $h=k$ and $h\geq 3$, then using the same argument as above, we can reduce to $u=a(-n)a'(-m)a(-1)a'(-1)$ with $n\geq 2$. If $m\geq 2$, then $u$ is congruent to a linear sum of $a(-n-m+1)a'(-1)^2a$ and $a(-n-m-1)a'$ by Lemma \ref{lmm:reduction}. Therefore we obtain $M_2(1)^{\sigma}=C_2(M_2(1)^{\sigma})+<{\cal S}_2>_{\C}$. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} \subsection{A subring} We note that $M_2(1)^{\sigma}/C_2(M_2(1)^{\sigma})$ is a commutative ring under the $-1$-normal product since $[\alpha_{-1},\beta_{-1}] =\sum_{i=0}^{\infty}(-1)^i (\alpha_i\beta)_{-2-i}$ for any $\alpha,\beta\in V_L$. Let ${\cal O}$ be the subspace of $M_2(1)^{\sigma}/C_2(M_2(1)^{\sigma})$ spanned by elements with the same number of $a$-terms and $a'$-terms and ${\cal O}^{even}$ the subspace of ${\cal O}$ spanned by elements with even weights. Clearly, ${\cal O}$ and ${\cal O}^{even}$ are subrings of $M_2(1)^{\sigma}/C_2(M_2(1)^{\sigma})$ since ${\rm wt}(\alpha_{-1}\beta)={\rm wt}(\alpha)+{\rm wt}(\beta)$. Let us study the algebraic structure of ${\cal O}^{even}$. Set $\gamma(n)=a(-n+1)a'$. To simplify the notation, we sometimes omit the subscript $-1$ denoting the $-1$-normal product; for example, $\gamma(n)\gamma(m)$ denotes $\gamma(n)_{-1}\gamma(m)$. From $0\equiv \omega_0(a(-n)a'(-m)\1)=na(-n-1)a'(-m)\1+ma(-n)a'(-m-1)\1$, we have: \vspace{-4mm}\\ \begin{lmm}\label{lmm:aa} $\mbox{}\qquad\displaystyle{ a(-n)a'(-m-1)\1\equiv \binom{-n}{m}\gamma(n+m+1)} \pmod{\omega_0V_L}\hfill {\rm (4)}$ \end{lmm} \begin{prn}\label{prn:aaaa} $$\begin{array}{l} a(-r)a(-m)a'(-n)a'\equiv \binom{-m}{n-1}\gamma(r+1)\gamma(m+n) -\frac{(-1)^{n-1}(r+m+n-1)!(m+n+r+1)}{(r-1)!(m-1)!(n-1)!(m+1)(r+n)}\gamma(t) \end{array}$$ modulo $\omega_0V_L$, where $t=r+m+n+1$. In particular, by replacing $r$ with $m$, we have $$\gamma(n+3)\equiv \frac{6}{(n-1)(n-2)(n+3)}\{\gamma(3)_{-1}\gamma(n) -(n-1)\gamma(2)_{-1}\gamma(n+1)\}$$ for $n\geq 3$ and so $\gamma(n)\in C_1(M_2(1)^{\sigma})$ for $n\geq 6$. \end{prn} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad The assertion comes from the direct calculation: $$\begin{array}{l} \binom{-m}{n-1}\gamma(r+1)\gamma(m+n)\equiv (a(-r)a')_{-1}a(-m)a'(-n)\1\cr \equiv \sum_i\binom{-r}{i}(-1)^i\{a(-r-i)a'(-1+i)-(-1)^{-r}a'(-r-1-i)a(i)\} a(-m)a'(-n)\1 \cr \equiv a(-r)a'(-1)a(-m)a'(-n)\1+\binom{r+m}{m+1}ma(-r-m-1)a'(-n)\1 \cr \mbox{}\qquad -(-1)^r\binom{r+n-1}{n}a'(-r-1-n)na(-m)\1 \cr \equiv a(-r)a(-m)a'(-n)a'+\{\binom{r+m}{m+1}m\binom{-r-m-1}{n-1}-(-1)^{n}\binom{r+n-1}{n}n \binom{m+r+n-1}{r+n}\}\gamma(t) \cr \equiv a(-r)a(-m)a'(-n)a'+ \frac{(-1)^{n-1} (r+m+n-1)!(m+r+n+1)}{(r-1)!(m-1)!(n-1)!(m+1)(r+n)}\gamma(t). \hfill \mbox{\hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm}} \end{array}$$ For example, we will use the following: $$\begin{array}{c} 2\gamma(6)\equiv \gamma(3)\gamma(3)-2\gamma(2)\gamma(4), \qquad 7\gamma(7)\equiv \gamma(3)\gamma(4)-3\gamma(2)\gamma(5), \cr 16\gamma(8)\equiv \gamma(3)\gamma(5)-4\gamma(2)\gamma(6), \qquad 30\gamma(8)\equiv \gamma(4)\gamma(4)-6\gamma(2)\gamma(6).
\end{array}$$ \vspace{-2mm} \begin{lmm}\label{lmm:CO}\qquad ${\cal O}=<\gamma(2)^n, \gamma(n+1), \gamma(2)\gamma(m)\1\mid n,m=2,\ldots >_{\C}$. \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad By Proposition \ref{prn:reduction}, ${\cal O}$ is spanned by $\{a(-1)^na'(-1)^n\1, a(-n)a'(-1)^2a, \gamma(m)\}$. By Proposition \ref{prn:aaaa}, we get $a(-n)a'(-1)^2a-\gamma(2)\gamma(n+1)\in \Q\gamma(n+3)$. We also have that $a(-1)^na'(-1)^n\1-\gamma(2)^n$ is congruent to a linear sum of $a(-2n+3)a'(-1)^2a$ and $\gamma(2n)$ modulo $C_2(M_2(1)^{\sigma})$, which proves the desired result. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} Set ${\cal S}_1=\{a(-i_1)a(-i_2)a, a'(-i_1)a'(-i_2)a', a(-i_3)a', \1 \mid i_1,i_2\leq 5, i_3\leq 4 \}$. \vspace{-4mm}\\ \begin{prn}\label{prn:C1} $M_2(1)^{\sigma}=C_1(M_2(1)^{\sigma})+<{\cal S}_1>_{\C}$. In particular, $M_2(1)^{\sigma}$ is $C_1$-cofinite. \end{prn} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad To simplify the notation, set $C_1=C_1(M_2(1)^{\sigma})$ in this proof. Suppose that the proposition is false and let $$u=a(-i_1)\cdots a(-i_h)a'(-j_1)\cdots a'(-j_k)\1\not\in C_1+<{\cal S}_1>_{\C}.$$ We take $u$ such that the number of terms is minimal. By Lemmas \ref{lmm:reduction} and \ref{lmm:aa}, we may assume $u=a(-i_1)a(-i_2)a$ or $u=a(-m)a'$. By Lemma \ref{lmm:aa} and Proposition \ref{prn:aaaa}, we obtain $a(-m)a'\in C_1$ for $m\geq 5$. Since $C_1$ is closed under the $0$-th product, we have: $$\begin{array}{l} (1) \qquad C_1\ni (a(-k+1)a')_0(a(-1)a(-1)a)=3(k-1)a(-k)a(-1)^2\1 \qquad \mbox{ and so}\cr (2) \qquad C_1\ni (a(-n)a')_{0}a(-1)^2a(-k)\1=2a(-n-1)a(-k)a+ka(-n-k)a(-1)a \end{array}$$ for $k\geq 6$ and any $n$. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} We next express ${\cal O}$ as a $\C[\gamma(2)]$-module. We need the following lemma. \vspace{-4mm}\\ \begin{lmm}\label{lmm:gamma8} $\mbox{}\qquad 120\gamma(7)\1\equiv 8\gamma(2)\gamma(5)\1+\gamma(2)^2\gamma(3)\1$ \\ $\mbox{}\qquad \qquad \qquad \qquad 60\gamma(8)\1\equiv 6\gamma(2)\gamma(3)^2\1-13\gamma(2)^2\gamma(4)\1$. \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad Since $\mbox{}\quad 0\equiv (a(-1)a(-1)a)_{-2}a'(-1)a'(-1)a' $\\ $\equiv 3a(-1)^2a(-2)a'(-1)^2a'+18a(-4)a(-1)a'(-1)a'+18a(-3)a(-2)a'(-1)a'+18\gamma(7)$, \\ we have: $$a(-1)^2a(-2)a'(-1)^2a'\equiv -6a(-4)a(-1)a'(-1)a'-6a(-3)a(-2)a'(-1)a'-6\gamma(7).$$ Using Proposition \ref{prn:aaaa} and the above lemma, we obtain the first congruence expression: $$\begin{array}{rl} \multicolumn{2}{l}{\gamma(2)^2\gamma(3)\equiv(a(-1)a')_{-1}\{a(-1)a'(-1)a(-2)a'\}+\gamma(2)\{2a(-4)a'+a'(-3)a(-2)\1\}}\cr \equiv& a(-1)a'(-1)a(-1)a'(-1)a(-2)a'+a(-3)a'(-1)a(-2)a'\cr &+2a(-4)a(-1)a'(-1)a'+2a'(-3)a(-1)a(-2)a'+5\gamma(2)\gamma(5) \cr \equiv& -6a(-4)a(-1)a'(-1)a'-6a(-3)a(-2)a'(-1)a'-6\gamma(7) \cr &+a(-3)a'(-1)a(-2)a'+2a(-4)a'(-1)^2a+2a'(-3)a(-1)a(-2)a'+5\gamma(2)\gamma(5)\cr \equiv& -4a(-4)a(-1)a'(-1)a'-5a(-3)a(-2)a'(-1)a'-6\gamma(7)\cr &+2a'(-3)a(-1)a(-2)a'+5\gamma(2)\gamma(5) \cr \equiv& -4\{\binom{-4}{0}\gamma(2)\gamma(5)-[4+\binom{-4}{2}]\gamma(7)\} -5\{3\gamma(2)\gamma(5)-28\gamma(7)\}-6\gamma(7) \cr &+2\{\binom{-2}{2}\gamma(2)\gamma(5)-[2\binom{-4}{2}+3\binom{-2}{4}]\gamma(7)\} +5\gamma(2)\gamma(5) \cr \equiv& 120\gamma(7)-8\gamma(2)\gamma(5).
\end{array}$$ By expanding $0\equiv (a(-1)a(-1)a)_{-2}a'(-2)a'(-1)a'$, we have $$\begin{array}{rl} \multicolumn{2}{l}{-(a(-2)a(-1)a(-1))a'(-2)a'(-1)a' }\cr \equiv & 4a(-4)a(-1)a'(-2)a' +4a(-3)a(-2)a'(-2)a' +4a(-5)a(-1)a'(-1)a' \cr &+4a(-4)a(-2)a'(-1)a' +2a(-3)a(-3)a'(-1)a'+8\gamma(8) \end{array}$$ and then we obtain: $$\begin{array}{l} 2\gamma(2)\gamma(2)\gamma(4)\equiv -(a(-1)a')_{-1}\{a(-1)a(-2)a'(-2)a'-16\gamma(6)\} \cr \equiv -a(-1)^2a(-2)a'(-1)a'(-2)a'-a(-3)a(-2)a'(-2)a'-2a(-4)a(-1)a'(-1)a'(-2) \cr \mbox{}\quad -a'(-3)a(-1)a(-2)a'(-2)-2a'(-4)a(-1)a'(-1)a(-2)+16\gamma(2)\gamma(6) \cr \equiv 4a(-4)a(-1)a'(-2)a' +4a(-3)a(-2)a'(-2)a' +4a(-5)a(-1)a'(-1)a' \cr \mbox{}\quad+4a(-4)a(-2)a'(-1)a'+2a(-3)a(-3)a'(-1)a'+8\gamma(8)\cr \mbox{}\quad-a(-3)a'(-1)a(-2)a'(-2)-2a(-4)a(-1)a'(-1)a'(-2)-a'(-3)a(-1)a(-2)a'(-2)\cr \mbox{}\quad-2a'(-4)a(-1)a'(-1)a(-2)+16\gamma(2)\gamma(6) \cr \equiv 2a(-3)a(-2)a'(-2)a' +4a(-5)a(-1)a'(-1)a' +4a(-4)a(-2)a'(-1)a' \cr \mbox{}\quad+2a(-3)a(-3)a'(-1)a'+16\gamma(2)\gamma(6)+8\gamma(8) \cr \equiv 2(180\gamma(8)-3\gamma(3)\gamma(5))+4(\gamma(2)\gamma(6)-20\gamma(8)) \cr \mbox{}\quad+4(\gamma(3)\gamma(5)-64\gamma(8))+2(-90\gamma(8)+\gamma(4)\gamma(4))+16\gamma(2)\gamma(6)+8\gamma(8) \cr \equiv -120\gamma(8)+12\gamma(2)\gamma(3)^2-24\gamma(2)^2\gamma(4). \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} \end{array}$$ By the above lemma, the direct calculation shows: $$\begin{array}{l} 2\gamma(4)\gamma(4)\equiv 12\gamma(2)\gamma(6)+60\gamma(8)\equiv 12\gamma(2)\gamma(3)^2-25\gamma(2)^2\gamma(4), \cr 15\gamma(3)\gamma(5)\equiv 60\gamma(2)\gamma(6)+240\gamma(8)\equiv 54\gamma(2)\gamma(3)^2-112\gamma(2)^2\gamma(4), \cr 120\gamma(3)\gamma(4)\equiv 120(7\gamma(7)+3\gamma(2)\gamma(5))\equiv 7\gamma(2)^2\gamma(3)+416\gamma(2)\gamma(5). \end{array}$$ Therefore, ${\cal O}^{even}$ has a subring $\displaystyle{{\cal O}_{\Q}^{even}=\Q[\gamma(2)]\gamma(2)+\Q[\gamma(2)]\gamma(3)\gamma(3)+\Q[\gamma(2)]\gamma(4)}$. \subsection{Elements $a(-1)a(-1)a$} We denote $a(-1)a(-1)a$ and $a'(-1)a'(-1)a'$ by $\alpha$ and $\beta$, respectively. \vspace{-4mm}\\ \begin{lmm}\label{lmm:gamma3} $\mbox{}\qquad \gamma(2)_{-1}\gamma(2)_{-1}\gamma(2) \equiv \alpha_{-1}\beta-264\gamma(2)_{-1}\gamma(4)\1+117\gamma(3)_{-1}\gamma(3)$ \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad From the direct calculation, we have: $$\begin{array}{l} \alpha_{-1}\beta\equiv (a(-1)a(-1)a)_{-1}a'(-1)a'(-1)a' \cr \equiv a(-1)^3a'(-1)^3\1+18a(-3)a(-1)a'(-1)a'+9a(-2)a(-2)a'(-1)a'+18a(-5)a'. \end{array}$$ Therefore, by Proposition \ref{prn:aaaa}, we obtain: $$\begin{array}{rl} \gamma(2)^3\equiv& (a(-1)a')_{-1}\{a(-1)a'(-1)a(-1)a'+2\gamma(4)\} \cr \equiv& a(-1)^3a'(-1)^3\1+2a(-3)a(-1)a'(-1)a'+2a'(-3)a(-1)a'(-1)a+2\gamma(2)\gamma(4) \cr \equiv& \alpha_{-1}\beta-14\{\gamma(2)\gamma(4)-9\gamma(6)\} -9\{\gamma(3)^2-16\gamma(6)\}-18\gamma(6)+2\gamma(2)\gamma(4) \cr \equiv& \alpha_{-1}\beta-264\gamma(2)\gamma(4)+117\gamma(3)^2. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} \end{array}$$ \subsection{The action of $\gamma(4)$} In this subsection, we will consider elements modulo $C_2(M_2(1)^{\sigma})$ and we abuse $=$ to denote $\equiv$. In \S 4.4, we have shown that ${\cal O}_{\Q}^{even}$ is closed under the $-1$-product and that $${\cal A}_{\Q}^{even}=\Q[\gamma(2)]\gamma(4)+\Q[\gamma(2)]\gamma(3)\gamma(3)$$ is an ideal modulo $C_2(M_2(1)^{\sigma})$. Let ${\cal Q}$ be the ideal generated by $\alpha_{-1}\beta$. We note that, rearranging Lemma \ref{lmm:gamma3}, $\alpha_{-1}\beta\equiv \gamma(2)^3+264\gamma(2)\gamma(4)-117\gamma(3)^2$.
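To illustrate the kind of direct calculation invoked above (the resulting relation is used again immediately below), the first of the three displayed relations follows from $30\gamma(8)\equiv\gamma(4)\gamma(4)-6\gamma(2)\gamma(6)$ by substituting the congruences for $60\gamma(8)$ and $2\gamma(6)$: $$2\gamma(4)\gamma(4)\equiv 60\gamma(8)+12\gamma(2)\gamma(6)\equiv \{6\gamma(2)\gamma(3)^2-13\gamma(2)^2\gamma(4)\}+6\gamma(2)\{\gamma(3)^2-2\gamma(2)\gamma(4)\} =12\gamma(2)\gamma(3)^2-25\gamma(2)^2\gamma(4).$$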
We will see the action of $\gamma(4)$ on ${\cal A}_{\Q}^{even}$. \vspace{-4mm}\\ \begin{lmm}\label{lmm:CQ} ${\cal Q}={\cal O}_{\Q}^{even}$. \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad We already know $\gamma(4)^2\equiv 6\gamma(2)\gamma(3)^2-\frac{25}{2}\gamma(2)^2\gamma(4)$. Since $15\gamma(3)\gamma(5)\equiv 54\gamma(2)\gamma(3)^2-112\gamma(2)^2\gamma(4)$, we have: $$\begin{array}{rl} 1800\gamma(4)\gamma(3)^2\equiv &15\gamma(3)\{7\gamma(2)^2\gamma(3)+416\gamma(2)\gamma(5)\} \cr \equiv &105\gamma(2)^2\gamma(3)^2+416\gamma(2)\{54\gamma(2)\gamma(3)^2-112\gamma(2)^2\gamma(4)\}\cr \equiv &22569\gamma(2)^2\gamma(3)^2-46592\gamma(2)^3\gamma(4). \end{array}$$ Therefore the action of $\gamma(4)$ on ${\cal A}_{\Q}^{even}$ is expressed by $\displaystyle{\gamma(2)^2 \left( \begin{array}{cc} \frac{22569}{1800} & \frac{-46592}{1800} \cr & \cr 6 & \frac{-25}{2} \end{array}\right)}$. The characteristic polynomial of $1800\gamma(4)$ is $X^2-69X-4608900$, and the square root of its discriminant is $\sqrt{69^2+4\cdot 4608900}=\sqrt{18440361}=3\sqrt{2048929}$, which is not a rational number since $2048929$ is not a perfect square. Therefore, the action of $\gamma(4)/\gamma(2)^2$ on $\Q\gamma(2)\gamma(4)+\Q\gamma(3)^2$ is irreducible over $\Q$. Furthermore, since $$\begin{array}{l} (\alpha_{-1}\beta)_{-1}\gamma(4)\equiv(\gamma(2)^3-264\gamma(2)\gamma(4)-117\gamma(3)^2)\gamma(4) \cr \mbox{}\quad \equiv \gamma(2)^3\gamma(4)-264\gamma(2)\{6\gamma(2)\gamma(3)^2 -\frac{25}{2}\gamma(2)^2\gamma(4)\}\cr \mbox{}\qquad -117\gamma(3)\{\frac{7}{120}\gamma(2)^2\gamma(3)+\frac{416}{120}\gamma(2)\gamma(5)\} \cr \mbox{}\quad \equiv 3301\gamma(2)^3\gamma(4)-\{1584+\frac{273}{40}\}\gamma(2)^2\gamma(3)^2 -\frac{39\times 52}{5}\gamma(2)\{\frac{54}{15}\gamma(2)\gamma(3)^2-\frac{112}{15}\gamma(2)^2\gamma(4)\} \cr \mbox{}\quad \equiv (3301+\frac{13\times 52\times 112}{25})\gamma(2)^3\gamma(4)-\{1584+\frac{273}{40} +\frac{39\times 52\times 18}{25}\}\gamma(2)^2\gamma(3)^2, \end{array}$$ we have ${\cal Q}\cap {\cal A}_{\Q}^{even}\not=0$ and so $$(<\alpha_{-1}\beta, \gamma(4)\alpha_{-1}\beta,\gamma(4)^2\alpha_{-1}\beta>_{\Q})_n=({\cal O}_{\Q}^{even})_n \qquad \mbox{ for }n\geq 14. \hfill \mbox{\hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm}}$$ \subsection{Nilpotency of $\alpha$ modulo $C_2(V_L^{\sigma})V_L$} From now on, $\equiv$ denotes the congruence modulo $C_2(V_L^{\sigma})V_L$. We next show the following: \vspace{-4mm} \\ \begin{lmm}\label{lmm:nilpotent} $\mbox{}\qquad(a(-i_1)a(-i_2)a)_{-1}$ and $(a'(-j_1)a'(-j_2)a')_{-1}$ are all nilpotent in \\ $M_2(1)^{\sigma}/(C_2(V_L^{\sigma})\cap M_2(1))$ for any $i_1,i_2,j_1,j_2$. \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad Except for $\alpha$ and $\beta$, the squares of the remaining elements are zero by Proposition \ref{prn:reduction}. We will prove that $\alpha_{-1}$ is nilpotent. Since ${\rm wt}(e^x)=9M$, ${\rm wt}(e^{x-y})=27M$ and ${\rm wt}(e^{2x+y})=27M$ for $y=\sigma(x)$, we have $e^y_{-1-k}e^{-x}=e^{-x-y}_{-1-k}e^{x}=0$ for $k<9M$ and so $$E^{x}_{-1-k}E^{-x}=\sum_{i=0}^2\sigma^i(E^x_{-1-k}e^{-x}) =\sum_{i=0}^2\sigma^i(e^x_{-1-k}e^{-x})\in M_2(1)^{\sigma}\cap C_2(V_L^{\sigma}) \quad \mbox{ for } 1<k<9M, $$ where $E^x$ denotes $e^x+e^y+e^{-x-y}$. Multiplying $E^{x}_{-4}e^{-x}$ by $(\alpha_{-1})^{6M+9}$, the number of $a$-terms in $(\alpha_{-1})^{6M+9}E^{x}_{-4}e^{-x}$ exceeds that of $a'$-terms by at least $6$, and so all elements with weight loss vanish.
Hence $$(\alpha_{-1})^{6M+9}E^{x}_{-4}e^{-x}\equiv \frac{1}{(18M+3)!}(\alpha_{-1})^{6M+9}(x(-1))^{18M+3}\1\in C_2(V_L^{\sigma}).$$ Set $x=r a+s a'$, then since we multiply many $a(-1)$, $(\alpha_{-1})^{6M+9+k}$ annihilates all elements except for $a(-1)$ and $a'(-1)$ by Proposition \ref{prn:reduction} and so we have: $$\begin{array}{l} (\alpha_{-1})^{6M+9}(x(-1))^{18M+3}\1\equiv a(-1)^{18M+27}(r a(-1)+s a'(-1))^{18M+3}\1 \cr \mbox{}\quad \qquad\equiv \sum_{i=0}^{18M+3}\binom{18M+3}{i}r^{18M+3-i}s^i a(-1)^{36M+30-i}\gamma(2)^i \cr \mbox{}\quad\qquad\equiv \sum_{i=0}^{6M+1}\binom{18M+3}{3i}r^{18M+3-3i}s^{3i}(\alpha_{-1})^{12M+10-i}\gamma(2)^{3i}\cr \mbox{}\qquad\qquad+\sum_{i=0}^{6M}\binom{18M+3}{3i+1}r^{18M+2-3i}s^{3i+1}(\alpha_{-1})^{12M+9-i}a(-1)a(-1)\gamma(2)^{3i+1}\cr \mbox{}\qquad\qquad+\sum_{i=0}^{6M}\binom{18M+3}{3i+2}r^{18M+1-3i}s^{3i+2}(\alpha_{-1})^{12M+9-i}a(-1)\gamma(2)^{3i+2}. \end{array}$$ Similarly, since we obtain $$\begin{array}{l} (\alpha_{-1})^{6M+9}E^{x}_{-4}a(-1)e^{-x}= \alpha_{-1}^{6M+9}(x(-1))^{18M+3}a(-1)\1+\alpha_{-1}^{6M+9}\langle a,x\rangle(x(-1))^{18M+4}\1 \cr \mbox{}\qquad=\alpha_{-1}^{6M+9}(x(-1))^{18M+3}a(-1)\1+\alpha_{-1}^{6M+9}\langle a,x\rangle E^x_{-5}e^{-x} \cr \mbox{}\qquad \equiv \alpha_{-1}^{6M+9}(x(-1))^{18M+3}a(-1)\1 \cr \mbox{}\qquad \equiv \sum_{i=0}^{6M+1}\binom{18M+3}{3i}r^{18M+3-3i}s^{3i}\alpha_{-1}^{12M+10-i}a(-1)\gamma(2)^{3i}\cr \mbox{}\qquad\quad+\sum_{i=0}^{6M}\binom{18M+3}{3i+1}r^{18M+2-3i}s^{3i+1}\alpha_{-1}^{12M+9-i+1}\gamma(2)^{3i+1}\cr \mbox{}\qquad\quad+\sum_{i=0}^{6M}\binom{18M+3}{3i+2}r^{18M+1-3i}s^{3i+2}\alpha_{-1}^{12M+9-i}a(-1)^2\gamma(2)^{3i+2}\cr \multicolumn{1}{l}{\mbox{and}} \cr \alpha_{-1}^{6M+9}E^{x}_{-4}a(-1)^2e^{-x}=\alpha_{-1}^{6M+9}(x(-1))^{18M+3}a(-1)^2\1 +2\langle a,x\rangle\alpha_{-1}^{6M+9}(x(-1))^{18M+4}a \cr \mbox{}\qquad\quad+2\langle a,x\rangle^2\alpha_{-1}^{6M+9}(x(-1))^{18M+5}\1, \cr \mbox{}\qquad\equiv \alpha_{-1}^{6M+9}(x(-1))^{18M+3}a(-1)^2\1 \cr \mbox{}\qquad\equiv \sum_{i=0}^{6M+1}\binom{18M+3}{3i}r^{18M+3-3i}s^{3i}\alpha_{-1}^{12M+10-i}a(-1)^2\gamma(2)^{3i}\cr \mbox{}\qquad\quad+\sum_{i=0}^{6M}\binom{18M+3}{3i+1}r^{18M+2-3i}s^{3i+1}\alpha_{-1}^{12M+9-i+1}a(-1)\gamma(2)^{3i+1}\cr \mbox{}\qquad\quad+\sum_{i=0}^{6M}\binom{18M+3}{3i+2}r^{18M+1-3i}s^{3i+2}\alpha_{-1}^{12M+9-i+1}\gamma(2)^{3i+2},\cr \end{array}$$ we have $$\begin{array}{l} a(-1)\alpha_{-1}^{6M+9}(x(-1))^{18M+3}\1\in C_2(V_L^{\sigma})V_L, \cr a'(-1)\alpha_{-1}^{6M+9}(x(-1))^{18M+3}\1 \in C_2(V_L^{\sigma})V_L \qquad \mbox{ and}\cr a(-1)a(-1)\alpha_{-1}^{6M+9}(x(-1))^{18M+3}\1\in C_2(V_L^{\sigma})V_L. \end{array}$$ Hence $$\alpha_{-1}^{6M+9+k}a(-1)^ea'(-1)^k(x(-1))^{18M+3}\1$$ is a linear sum of elements of the form $$\alpha_{-1}^{6M+9+k}v_{-1}(u\cdot(x(-1))^{18M+3}\1),$$ where $v$ is a $\sigma$-invariant element and $u \in \{ \1_{-1}, a(-1), a(-1)a(-1)\}$ by Lemma \ref{lmm:reduction}. Therefore we obtain $$\alpha_{-1}^{6M+9+k}a(-1)^ea'(-1)^k(x(-1))^{18M+3}\1\in C_2(V_L^{\sigma})V_L$$ for any $e, k\geq 0$. We also get a similar result for $y=\sigma(x)$ as for $x$. Therefore we have: $$\alpha_{-1}^{12M+18}(\lambda x(-1)+\mu y(-1))^{36M+6}\1\in C_2(V_L^{\sigma})V_L$$ for any $\lambda, \mu\in \C$. By choosing suitable $\lambda$ and $\mu$ so that $\lambda x(-1)+\mu y(-1)=a(-1)$, we have $$\alpha_{-1}^{12M+18}a(-1)^{36M+6}\1=\alpha_{-1}^{48M+24}\1 \in C_2(V_L^{\sigma}),$$ which implies that $\alpha_{-1}$ is nilpotent modulo $C_2(V_L^{\sigma})$. Similarly, $\beta_{-1}$ is nilpotent. 
\hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} Since $\alpha, \beta$ are nilpotent and ${\cal O}^{even}={\cal O}^{even}\alpha_{-1}\beta$, we have the following: \vspace{-4mm}\\ \begin{prn}\label{prn:M2C2} $\mbox{}\qquad \displaystyle{\dim \left(M_2(1)^{\sigma}/(M_2(1)^{\sigma}\cap C_2(V_L^{\sigma}))\right) <\infty}$. \end{prn} \subsection{$C_2$-cofiniteness of $V_L^{\sigma}$} By the previous proposition, there is an integer $N_0$ such that $v^1_{-1}\cdots v^k_{-1}\gamma\in C_2(V_L^{\sigma})$ for any $v^i\in {\cal S}_1$ and $\gamma\in V_L^{\sigma}$ if ${\rm wt} (v^1_{-1}\cdots v^k_{-1}\1) \geq N_0$. Set $N=N_0+9M+30$. Our final step is to prove that $$V_L^{\sigma}=C_2(V_L^{\sigma})+\oplus_{n\leq N}(M_2(1))_n^{\sigma} +\oplus_{n\leq N}(M_2(1)E^x)_n^{\sigma} +\oplus_{n\leq N}(M_2(1)E^{-x})_n^{\sigma},$$ which implies the $C_2$-cofiniteness of $V_L^{\sigma}$. For $\mu\not=0$, set \\ ${\cal R}=\left\{d^k_{i_k}\cdots d^1_{i_1}b_{i_0}a(r)a'(0)E^{\mu}\mid \begin{tabular}{l}(i) $i_k\leq \cdots\leq i_1\leq -1$, $i_0\leq 0$, and \\ (ii) $d^i\in {\cal S}_1$, ${\rm wt}(b_{i_0}a(r)a'(0)E^{\mu})-{\rm wt}(E^{\mu})\leq 30$ \end{tabular}\right\}$. \begin{prn}\label{prn:spanningset} $(M_2(1)E^{\mu})^{\sigma}=<{\cal R}>_{\C}+C_2(V_L^{\sigma})$. In particular, if $v\in (M_2(1)E^{\mu})^{\sigma}$ has a weight greater than ${\rm wt}(E^{\mu})+N_0+30$, then $v\in C_2(V_L^{\sigma})$. \end{prn} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad Suppose the assertion is false, and take $u\not\in <{\cal R}>_{\C}+C_2(V_L^{\sigma})$ such that ${\rm wt}(u)$ is minimal. Since $M_2(1)E^{\mu}$ is an irreducible $M_2(1)^{\sigma}$-module, we may assume $$ u=c^k_{i_k}\cdots c^1_{i_1}E^{\mu}$$ with $c^i\in M_2(1)^{\sigma}$. We take the above expression such that $\sum_{i=1}^k {\rm wt}(c^i)$ is minimal and if $\sum_{i=1}^k {\rm wt}(c^i)$ is the same, then $k$ is maximal. Since $(e_{-1}f)_k=\sum_{i=0}^{\infty}( e_{-1-i}f_{k+i}+f_{k-1-i}e_i)$ and ${\rm wt}(e_{-1}f)={\rm wt}(e)+{\rm wt}(f)$, we may assume $c^i\in {\cal S}_1$. Also, since $e_sf_t-f_te_s=\sum_{i=0}^{\infty}\binom{s}{i}(e_if)_{s+t-i}$ and ${\rm wt}(e_if)<{\rm wt}(e)+{\rm wt}(f)$ for $i\geq 0$, we may assume $i_k\leq \cdots \leq i_1$. By the minimality of ${\rm wt}(u)$, we have $0\leq i_k$ and $$\sum_{i=1}^k {\rm wt}(c^i)=({\rm wt}(u)-{\rm wt}(E^{\mu}))+\sum_{j=1}^k (1+i_j).$$ To simplify the notation, we will call $\sum_{j=1}^k(1+i_j)$ the $\sigma$-loss weight. Since ${\rm wt}(E^{\mu})$ and ${\rm wt}(u)$ are fixed, we have chosen $u=c^k_{i_k}\cdots c^1_{i_1}E^{\mu}$ such that the $\sigma$-loss weight is minimal. We note that ${\rm wt}(c^i)\leq 11$ for $c^i\in {\cal S}_1$. On the other hand, by Lemma \ref{lmm:reduction}, $u$ is also a linear sum of elements of the form $$e^r_{-1}\cdots e^1_{-1}F,$$ where $e^i\in M_2(1)^{\sigma}$ and $F$ is one of $${\cal D}=\{a(-m-n)a'(0)E^{\mu}, a(-m)a(-n)a(0)E^{\mu}, a'(-m)a'(-n)a'(0)E^{\mu}\}.$$ By the minimality of ${\rm wt}(u)$, $u$ is a linear sum of elements in ${\cal D}$ and $m+n+{\rm wt}(E^{\mu})={\rm wt}(u)$. We assert that the $\sigma$-loss weight of $u$ is less than or equal to three. For the elements $a(-m-n)a'(0)E^{\mu}$, we get $a(-m-n)a'(0)E^{\mu}=(a'(-m-n-1)a)_1E^{\mu}$, which has $\sigma$-loss weight only two.
Before we start the proof for the remaining case, we note $$\begin{array}{l} (a'(-m-1)a)_1(a'(-n-1)a)_1E^{\mu}=(a'(-m-1)a)_1a(-n)a'(0)E^{\mu} \cr \mbox{}\qquad =\sum\binom{-m-1}{i}(-1)^i(-1)^ma(-m-i)a'(i)a(-n)a'(0)E^{\mu} \cr \mbox{}\qquad =\binom{-m-1}{n}(-1)^{n+m}na(-m-1-n)a'(0)E^{\mu}+(-1)^ma(-m)a'(0)a(-n)a'(0)E^{\mu}. \cr \end{array}$$ Suppose $a(-m)a(-n)a(0)E^{\mu}$ has a $\sigma$-loss weight greater than three. By ignoring elements with $\sigma$-loss weight less than three, we have $$\begin{array}{l} \frac{\langle b,\mu\rangle^2}{\langle a,\mu\rangle}a(-m)a(-n)a(0)E^{\mu} =a(-m)a'(0)a(-n)a'(0)E^{\mu}\cr \mbox{}\quad\equiv (a'(-m-\!1)a)_1(a'(-n\!-\!1)a)_1E^{\mu} \cr \mbox{}\quad =(a'(-m-1)a)_1(a(-n-1)a')_1E^{\mu}+(a'(-m-1)a)_1(\omega_0\gamma(n+1)+\cdots)_1E^{\mu}\cr \mbox{}\quad \equiv (a'(-m-1)a)_1(a(-n-1)a')_1E^{\mu}+(\omega_0\gamma(n+1))_1(a'(-m-1)a)_1E^{\mu}\cr \mbox{}\quad \equiv (a'(-m-1)a)_1a'(-n)a(0)E^{\mu}-\gamma(n+1)_0(a'(-m-1)a)_1E^{\mu}\cr \mbox{}\quad \equiv \sum \binom{-m-1}{i}(-1)^i\{a'(-m-1-i)a(1+i)-(-1)^{m+1}a(-m-i)a'(i)\}a'(-n)a(0)E^{\mu}\cr \mbox{}\quad \equiv \binom{-m-1}{n-1}(-1)^{n-1}na'(-m-n)a(0)E^{\mu}-(-1)^{m+1}a(-m)a'(0)a'(-n)a(0)E^{\mu}\cr \mbox{}\quad \equiv \lambda_1 a(-m-n)a'(0)E^{\mu}+\mu_1 a(-m)a'(-n)E^{\mu} \cr \mbox{}\quad \equiv \lambda_2 a(-m-n)a'(0)E^{\mu}+\mu_2 a'(-m-n)a(0)E^{\mu}\equiv 0 \cr \end{array}$$ for some $\lambda_i$ and $\mu_j$, which is a contradiction. Therefore, the $\sigma$-loss weight of $u$ is less than or equal to three. In particular, $k\leq 3$ and ${\rm wt}(u)-{\rm wt}(E^{\mu})\leq 30$. Therefore, the elements $a(-m-n)a'(0)E^{\mu}$ and $\gamma(n+1)_0(a'(-m-1)a)_1E^{\mu}$ are all in $<{\cal R}>_{\C}+C_2(V_L^{\sigma})$. In order to show $a(-m)a(-n)a(0)E^{\mu}\in <{\cal R}>_{\C}+C_2(V_L^{\sigma})$, we have exactly the same congruence expressions as above modulo $<{\cal R}>_{\C}+C_2(V_L^{\sigma})$. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} Set $K=M_2(1)^{\sigma}+(M_2(1)E^{x})^{\sigma}+(M_2(1)E^{-x})^{\sigma}$. We have already shown that if $v\in K$ and ${\rm wt}(v)>N$, then $v\in C_2(V_L^{\sigma})$. It remains to show $$ V_L^{\sigma}=K+C_2(V_L^{\sigma}).$$ By Proposition \ref{prn:spanningset}, it is enough to show that $$a(-n)a'(0)E^{\mu} \in K+C_2(V_L^{\sigma})$$ for $1\leq n \leq 30$ and $\mu\not\in \{0,\pm x,\pm y, \pm(x+y)\}$. We first treat the following case:\vspace{-4mm}\\ \begin{lmm}\label{lmm:modulo3} For any $n+m\equiv 0 \pmod{3}$, we have $E^{mx+ny}\in C_2(V_L^{\sigma})+K$. \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad We note that if $n+m\equiv 0 \pmod{3}$, then there is $\gamma\in L$ such that $E^{mx+ny}=E^{\pm(\sigma(\gamma)-\gamma)}$. Set $2k=\langle \gamma,\gamma\rangle$. Then since $\langle \gamma-\sigma(\gamma),\gamma-\sigma(\gamma)\rangle=6k$, we have $$\begin{array}{l} E^{\gamma}_{-1-k}E^{-\gamma}\in M_2(1)+E^{\sigma(\gamma)-\gamma}+ E^{-\sigma(\gamma)+\gamma},\cr E^{\gamma}_{-k}a(-1)e^{-\gamma}\in M_2(1)+\langle \sigma(\gamma),a\rangle e^{\sigma(\gamma)-\gamma}+ \langle \sigma^2(\gamma),a\rangle e^{-\sigma(\gamma)+\gamma}, \quad \mbox{ and} \cr E^{\gamma}_{-k}\sum_{i=0}^2 \sigma^i(a(-1)e^{-\gamma}) \in M_2(1)+\langle \sigma(\gamma),a\rangle E^{\sigma(\gamma)-\gamma}+ \langle \sigma^2(\gamma),a\rangle E^{-\sigma(\gamma)+\gamma}. \end{array}$$ Therefore, we obtain $E^{\sigma(\gamma)-\gamma}, E^{-\sigma(\gamma)+\gamma}\in C_2(V_L^{\sigma})+M_2(1)$.
\hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} For $E^{\mu}$ with $\mu=mx+ny$ and $m+n\equiv \pm 1 \pmod{3}$, we need the following lemma. \vspace{-4mm}\\ \begin{lmm}\label{lmm:modulo1} (1) For $m,n$ with $m+n\equiv 1 \pmod{3}$, there exist $\gamma\in L$ satisfying $\gamma-\sigma^i(\gamma-\mu)=mx+ny$ for some $i=1,2$ and $\mu\in \{x,y,-x-y\}$ such that $\langle \gamma,-\sigma^1(\gamma-\mu)\rangle$ and $\langle \gamma,-\sigma^2(\gamma-\mu)\rangle$ are both positive. \\ (2) For $m,n$ with $m+n\equiv 2 \pmod{3}$, there exist $\gamma\in L$, $i=1,2$, $\mu\in \{-x,-y,+x+y\}$ such that $\gamma-\sigma^i(\gamma-\mu)=mx+ny$, $\langle \gamma,-\sigma^1(\gamma-\mu)\rangle>0$ and $\langle \gamma,-\sigma^2(\gamma-\mu)\rangle>0$. \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad We first note that for $\gamma=px+qy$ and $-\gamma-x-y$, we have $$\begin{array}{l} \langle \sigma(\gamma),-\gamma-x-y\rangle=p^2+q^2-pq+2p-q=(q-\frac{p+1}{2})^2+\frac{3}{4}(p+1)^2-1 \cr \langle \sigma^2(\gamma),-\gamma-x-y\rangle=p^2+q^2-pq+2q-p=(p-\frac{q+1}{2})^2+\frac{3}{4}(q+1)^2-1, \end{array}$$ and so both are positive except when $-2\leq p,q\leq 1$. For $\mu=mx+ny$ with $m+n\equiv 1\pmod{3}$, we may assume $m,n\leq 0$ by taking a conjugate by $<\sigma>$. If $\mu=mx+ny\not\in\{x,y,-x-y,-2y\}$, then by setting $\gamma=px+qy$ with $q=\frac{-m-n+1}{3}$ and $p=\frac{n-2m+2}{3}$, we obtain $\sigma(\gamma)-\gamma-x-y=\mu$ and $\langle \sigma(\gamma),-\gamma-x-y\rangle$ and $\langle \sigma^2(\gamma),-\gamma-x-y\rangle$ are positive. In the case $\mu=-2y$, we choose $q=\frac{-m-n+1}{3}$ and $p=\frac{-2n+m+2}{3}$; then we have $\sigma^2(\gamma)-\gamma-x-y=\mu$ and $\langle \sigma^1(\gamma),-\gamma-x-y\rangle$ and $\langle \sigma^2(\gamma),-\gamma-x-y\rangle$ are positive. \\ (2) comes from (1) by replacing $x$ and $y$ by $-x$ and $-y$, respectively. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} By the above lemmas, for any $\mu$, there exist $\gamma, \gamma'$ and $k$ such that $$ E^{\gamma}_{-2-k}e^{-\gamma'}\in e^{\mu}+e^{\mu'}+M_2(1)e^{\pm x} \quad \mbox{ and so } \quad E^{\gamma}_{-2-k}E^{-\gamma'}\in E^{\mu}+E^{\mu'}+M_2(1)E^{\pm x}.$$ We also have $$E^{\gamma}_{-2-k+1}\sum_{i=0}^2\sigma^i(a(-1)e^{-\gamma'}) \in \langle a,\gamma\rangle E^{\mu}+\langle a,\sigma(\gamma)\rangle E^{\mu'}+M_2(1)E^{\pm x}, $$ which implies $E^{\mu}, E^{\mu'}\in M_2(1)E^{\pm x}+C_2(V_L^{\sigma})$ for any $\mu$. It remains to show $a(-n)a'(0)E^{\mu}\in M_2(1)E^{\pm x}+C_2(V_L^{\sigma})$ for $n\leq 30$. Actually, we obtain $$\begin{array}{l} E^{\gamma}_{-2-k+1+n}a(-n)a(-n)e^{-\gamma'}\cr \mbox{}\qquad \in 2n\langle a,\gamma\rangle a(-n)e^{\mu} +2n\langle a,\sigma(\gamma)\rangle a(-n)e^{\mu'} +E^{\gamma}_{-2-k+1}e^{-\gamma'}+M_2(1)e^{\pm x} \qquad \mbox{ and}\cr E^{\gamma}_{-2-k+1+2n}a(-n)a(-n)a(-n)e^{-\gamma'}\cr \mbox{}\qquad \in 6n^2\langle a,\gamma\rangle a(-n)e^{\mu} +6n^2\langle a,\sigma(\gamma)\rangle a(-n)e^{\mu'} +E^{\gamma}_{-2-k+1}e^{-\gamma'}+M_2(1)e^{\pm x}. \end{array}$$ Therefore, we have $$\begin{array}{l} a(-n)e^{\mu}, a(-n)e^{\mu'}\in C_2(V_L^{\sigma})V_L+M_2(1)e^{\pm x} \qquad \mbox{ and so}\cr a(-n)a'(0)E^{\mu}, a(-n)a'(0)E^{\mu'}\in C_2(V_L^{\sigma})+M_2(1)E^{\pm x} \end{array}$$ for $n\leq 30$. This completes the proof of Theorem B. \section{$\Z_3$-orbifold construction} Using the known results in \S 2 and Theorems A and B, we will carry out $\Z_3$-orbifold constructions. Let $\Lambda$ be a positive definite even unimodular lattice of rank $N$ with a triality automorphism $\sigma$.
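In the character computations below we will repeatedly use the standard modular transformation of the Dedekind eta-function, recalled here for convenience: $$\eta(-1/\tau)=\left(\frac{\tau}{\sqrt{-1}}\right)^{1/2}\eta(\tau), \qquad \mbox{ hence } \qquad \frac{\eta(-1/\tau)^t}{\eta(-3/\tau)^t} =\frac{(\frac{\tau}{\sqrt{-1}})^{t/2}\eta(\tau)^t}{(\frac{\tau}{3\sqrt{-1}})^{t/2}\eta(\tau/3)^t} =3^{t/2}\frac{\eta(\tau)^t}{\eta(\tau/3)^t},$$ where the middle expression uses $\eta(-3/\tau)=\eta(-1/(\tau/3))$.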
We note $8|N$. In this section, $\xi$ denotes $e^{2\pi \sqrt{-1}/3}$. Since $\Lambda$ is unimodular, a lattice VOA $V_{\Lambda}$ has exactly one simple module $V_{\Lambda}$ and all modules are completely reducible (\cite{D}). By \cite{DLiMa}, it has one $\sigma$-twisted module $V_{\Lambda}(\sigma)$ and one $\sigma^2$-twisted module $V_{\Lambda}(\sigma^2)$. Decompose them as direct sums of simple $V_{\Lambda}^{\sigma}$-modules: $$V_{\Lambda}=W^0\oplus W^1\oplus W^2, \quad V_{\Lambda}(\sigma)=W^3\oplus W^4\oplus W^5, \quad V_{\Lambda}(\sigma^2)=W^6\oplus W^7\oplus W^8. $$ We first show that the weights of elements in $V_{\Lambda}, V(\sigma)$ and $V(\sigma^2)$ are in $\Z/3$. Set $H=\Lambda^{\sigma}$ and $H'=\{u\in \Q H\mid \langle u,h\rangle\in \Z \mbox{ for }h\in H\}$ the dual of $H$. Set $s={\rm rank}(H)$; then, by assumption, $t=(N-s)/2$ is divisible by three. As is well known, the character $T_{V_H}(\1;\tau)$ of $V_H$ is $\frac{\theta_{H}(\tau)}{\eta(\tau)^s}$, where $\eta(\tau)=q^{1/24}\prod_{n=1}^{\infty}(1-q^n)$ is the Dedekind eta-function. Since $\Lambda$ is unimodular, $3H'\subseteq H$ and the restriction of $\Lambda$ into $\Q H$ covers $H'/H$, and so the weights of elements in $V_H$-modules are in $\Z/3$. Hence the powers of $q$ in the characters of simple $V_H$-modules are all in $-s/24+\Z/3$, and so are those of $q$ in $T_{V_H}(\1;-1/\tau)$ by Zhu's theory (\S 2.3.1). Since $$ T_{V_{\Lambda}}(\sigma,\1;\tau)=q^{-N/24}\frac{\theta_H(\tau)}{\prod_n(1-q^n)^s}\times \frac{1}{\prod_n(1-\xi q^n)^t(1-\xi^{-1}q^n)^t} =T_{V_H}(1,\1;\tau)\frac{\eta(\tau)^t}{\eta(3\tau)^t}$$ and $T_{V(\sigma)}(1,\1;\tau)$ is a scalar multiple of $$\begin{array}{rl} T_{V_{\Lambda}}(\sigma,\1;-1/\tau)=&T_{V_H}(1,\1;-1/\tau)\frac{\eta(-1/\tau)^t}{\eta(-3/\tau)^t} =T_{V_H}(1,\1;-1/\tau)\frac{(\frac{\tau}{\sqrt{-1}})^{t/2}\eta(\tau)^t}{(\frac{\tau}{3\sqrt{-1}})^{t/2}\eta(\tau/3)^t}\cr =&3^{t/2}T_{V_H}(1,\1;-1/\tau)\frac{\eta(\tau)^t}{\eta(\tau/3)^t}\cr =&3^{t/2}q^{-2t/24} T_{V_H}(1,\1;-1/\tau)q^{t/9}\frac{\prod_n(1-q^n)^t}{\prod_n(1-q^{n/3})^t}, \end{array}\eqno{(5)}$$ we have that the powers of $q$ in $T_{V(\sigma)}(1,\1;\tau)$ are in $-N/24+\Z/3$. Therefore, we may assume that the weights of elements in $W^i$ are in $i/3+\Z$ for $i\geq 3$. In particular, all elements in $$\widetilde{V}=W^{0}\oplus W^{3}\oplus W^{6}$$ have integer weights. Our next aim is to show that $\widetilde{V}$ has a structure of a vertex operator algebra. By Theorem B, $V_{\Lambda}^{\sigma}$ is $C_2$-cofinite, all modules are completely reducible, and $V_{\Lambda}^{\sigma}$ has exactly nine simple modules $\{W^{i}\mid 0\leq i\leq 8\}$. \vspace{-4mm}\\ \begin{lmm}\label{lmm:allsimplecurrent} $W^{i}$ are all simple currents, that is, $W^{i}\boxtimes W^j$ are simple modules for any $i,j$. Moreover, $\widetilde{V}$ is closed by the fusion products. \end{lmm} \par \vspace{3mm}\noindent [{\bf Proof}] \qquad Let us determine the entries of the $S$-matrix $(s_{ij})$ of $V_L^{\sigma}$. Decompose $S$ into $S=(A_{ij})_{i,j=1,2,3}$ with $3\times 3$-matrices $A_{ij}$. Since $S$ is symmetric, $A_{ij}={}^tA_{ji}$. To simplify the notation, we denote $T_{W^i}(v;\tau)$ by $W^i(\tau)$.
As we explained in \S 2.3.2, there are $\lambda_i\in \C$ $(i=0,1,2)$ such that the $S$-transformation shifts $$ W^0(\tau) +\xi^i W^1(\tau) +\xi^{2i} W^2(\tau) \to \lambda_i(W^{3i}(\tau) +W^{3i+1}(\tau) +W^{3i+2}(\tau) ).$$ Namely, the first three rows of $S$ are $$(A_{11}A_{12}A_{13})=\frac{1}{3}\left( \begin{array}{ccccccccc} \lambda_0&\lambda_0&\lambda_0&\lambda_1&\lambda_1&\lambda_1&\lambda_2&\lambda_2&\lambda_2 \cr \lambda_0&\lambda_0&\lambda_0&\xi^2 \lambda_1&\xi^2 \lambda_1&\xi^2\lambda_1&\xi \lambda_2&\xi \lambda_2&\xi \lambda_2 \cr \lambda_0&\lambda_0&\lambda_0&\xi \lambda_1&\xi \lambda_1&\xi \lambda_1&\xi^2\lambda_2&\xi^2\lambda_2&\xi^2\lambda_2 \end{array}\right).$$ Since $S^2$ is a permutation matrix which shifts $W$ to its restricted dual $W'$, we get $\lambda_i^2=1$. We next consider the characters $\ch(W)=T_{W}(1,\1;\tau)$. In this case, since $\ch(W')=\ch(W)$, we have $\ch(W^1)=\ch(W^2)$, $\ch(W^{3+i})=\ch(W^{6+i})$ for $i=0,1,2$. Clearly, $\{\ch(W^0),\ch(W^1),\ch(W^3),\ch(W^4),\ch(W^5)\}$ is a linearly independent set. Since (5) has $q^{1/3+\Z}$-parts, $A_{12}+A_{13}\not=0$ and so $\lambda_1=\lambda_2$. Similarly, since $\ch(W^{3+i})=\ch(W^{6+i})$, we have $A_{22}+A_{23}=A_{32}+A_{33}$. Furthermore, since $A_{33}=A_{22}+A_{23}-{}^tA_{23}$ is symmetric, $A_{23}$ is symmetric and $A_{22}=A_{33}$. As we explained in \S 2.3.2, there are $\mu_i\in \C$ $(i=1,2)$ such that the $S$-transformation shifts $$W^3(\tau)+\xi^i W^4(\tau)+\xi^{2i} W^5(\tau) \to \mu_i(W^{3i}(\tau)+\xi^2W^{3i+1}(\tau)+\xi W^{3i+2}(\tau)) \qquad \mbox{ for }i=1,2.$$ From this information and \S 2.3.2, we know the entries of $S$: $$ (S_{ij})= \frac{1}{3}\begin{pmatrix} \lambda_0&\lambda_0&\lambda_0& \lambda_1&\lambda_1&\lambda_1& \lambda_1&\lambda_1&\lambda_1 \cr \lambda_0&\lambda_0&\lambda_0& \xi^2\lambda_1&\xi^2\lambda_1&\xi^2\lambda_1& \xi \lambda_1&\xi \lambda_1&\xi \lambda_1\cr \lambda_0&\lambda_0&\lambda_0& \xi \lambda_1&\xi \lambda_1&\xi \lambda_1& \xi^2\lambda_1&\xi^2\lambda_1&\xi^2\lambda_1 \cr \lambda_1&\xi^2\lambda_1&\xi \lambda_1& \mu_1&\xi \mu_1& \xi^2 \mu_1 &\mu_2&\xi^2 \mu_2&\xi \mu_2\cr \lambda_1&\xi^2\lambda_1&\xi \lambda_1& \xi \mu_1&\xi^2 \mu_1&\mu_1&\xi^2 \mu_2&\xi \mu_2&\mu_2 \cr \lambda_1&\xi^2\lambda_1&\xi \lambda_1& \xi^2 \mu_1&\mu_1&\xi \mu_1&\xi \mu_2&\mu_2&\xi^2 \mu_2 \cr \lambda_1&\xi \lambda_1&\xi^2\lambda_1& \mu_2&\xi^2 \mu_2&\xi \mu_2&\mu_1&\xi \mu_1&\xi^2 \mu_1 \cr \lambda_1&\xi \lambda_1&\xi^2\lambda_1& \xi^2 \mu_2&\xi \mu_2&\mu_2&\xi \mu_1&\xi^2 \mu_1&\mu_1 \cr \lambda_1&\xi \lambda_1&\xi^2\lambda_1& \xi \mu_2&\mu_2&\xi^2 \mu_2&\xi^2 \mu_1&\mu_1&\xi \mu_1 \end{pmatrix}$$ with $\lambda_i^2=\mu_1\mu_2=1$. This implies $\overline{S_{ih}}S_{i'h}=1$ and so $$ N_{i,i'}^k=\sum_{h} \frac{S_{ih}S_{i'h}S_{hk'}}{S_{0h}}=\sum_h \frac{S_{hk'}}{S_{0h}}.$$ Therefore, $N_{i,i'}^k\not=0$ if and only if $k=0$, and $N_{i,i'}^0=1$. Namely, $W^i\boxtimes (W^i)'=V_{\Lambda}^{\sigma}$ for every $i$. If $R\boxtimes W^i$ is not simple, then $(R\boxtimes W^i)\boxtimes (W^i)'\cong R\boxtimes (W^i\boxtimes (W^i)')\cong R$ is not simple. Therefore, $W^i$ are all simple currents.
By considering the characters, we have: $$\begin{array}{rl} T_{V_{\Lambda}}(\sigma,\1;\tau)=&T_{V_H}(1,\1;\tau)\frac{\eta(\tau)^s}{\eta(3\tau)^s} =\ch(W^0)+\xi\ch(W^1)+\xi^2\ch(W^2)\cr T_{V_{\Lambda}}(\sigma,\1,-1/\tau)=&\lambda_1\{\ch(W^3)+\ch(W^4)+\ch(W^5)\} \cr T_{V_{\Lambda}}(\sigma,\1,-1/(\tau+1))=&e^{2\pi \sqrt{-1}N/24}\lambda_1\{\ch(W^3)+\xi\ch(W^4)+\xi^2\ch(W^5)\}\cr T_{V_{\Lambda}}(\sigma,\1,-1/((-1/\tau)+1))=&e^{2\pi \sqrt{-1}N/24}\lambda_1\mu_1\{\ch(W^3)+\xi^2\ch(W^4)+\xi\ch(W^5)\} \end{array}$$ from the above $S$-matrix. On the other hand, since $$\begin{array}{rl} T_{V_{\Lambda}}(\sigma,\1,-1/((-1/\tau)+1))=&T_{V_{\Lambda}}(\sigma,\1,-1-\frac{1}{\tau-1}) \cr =&e^{-2\pi \sqrt{-1}N/24}T_{V_{\Lambda}}(\sigma,\1,-1/(\tau-1)) \cr =&e^{-4\pi \sqrt{-1}N/24}\lambda_1\{\ch(W^3)+\xi^2\ch(W^4)+\xi \ch(W^5)\} \end{array}$$ we have $\mu_1=e^{-6\pi \sqrt{-1}N/24}=1$ and $\mu_1=1$ since $8|N$. Then the $S$-matrix implies $\lambda_1=\lambda_0$ and $W^3\boxtimes W^3=W^6$ and $W^3\boxtimes W^6=W^0$. As we showed, $$ \widetilde{V}=W^{0}\oplus W^{3}\oplus W^{6}$$ is closed by fusion product intertwining operators. Let $Y^i$ and $I^{i}$ denote a vertex operator of $V_{\Lambda}^{\sigma}$ on $W^{3i}$ and a tensor product intertwining operator to define $W^{3}\boxtimes W^{3i}$, respectively. We may assume that $I^1(w,z)v=e^{L(-1)z}Y(v,-z)w$ for $v\in W^{0}$. Set $$\tilde{Y}(v,z)=\begin{pmatrix} Y^{0}(v,z)&0&0 \cr 0&Y^{1}(v,z)&0\cr 0&0& Y^{2}(v,z) \end{pmatrix}, \quad \tilde{Y}(w,z)=\begin{pmatrix}0& I^{0}(w,z)& 0 \cr 0&0& I^1(w,z) \cr I^2(w,z)&0&0 \end{pmatrix} $$ for $v\in W^{0}=V_L^{\sigma}$ and $w\in W^{3}$. By the definition of intertwining operators, $\tilde{Y}(w,z)$ satisfy Commutativity with $\tilde{Y}(v,z)$ for $v\in W^{0}$. The remaining thing is to prove that $\tilde{Y}(w,z)$ satisfies Commutativity with itself. Since $\dim {\cal I}_{W^{3},W^{3}}^{W^{6}}=1$, there is $\lambda\in \C$ such that $$E(\langle w^2, I^2(w,z_1)I^1(u,z_2)v\rangle)=E(\langle w^2, \lambda I^2(u,z_2)I^1(w,z_1)v\rangle)$$ for $w,u\in W^{3}$, $w^2\in (W^{6})'=W^{3}$. Clearly, $\lambda=\pm 1$. If $\lambda=-1$, then $$0=E(\langle w^2, I^2(w,z_1)I^1(w,z_2)v\rangle)$$ for any $w\in W^{3}, w^2\in W^{6}, v\in W^{0}$. Since $W^{3}$ is simple and $I^1(w,z)v=e^{L(-1)z}Y(v,-z)w$, $$\{ \mbox{the coefficients in }I^1(w,z_2)v \mid v\in W^{0}\}$$ spans $W^{1}$ by \cite{DMa2} and so $I^2(w,z_1)=0$, which is a contradiction. Therefore, we have $$E(\langle w^2, I^2(w,z_1)I^1(u,z_2)v\rangle)=E(\langle w^2, I^2(u,z_2)I^1(w,z_1)v\rangle),$$ which implies the Commutativity of $\tilde{Y}(v,z)$ with itself. Since we already know the action of $W^0=V_{\Lambda}^{\sigma}$, we can define vertex operators of all elements in $\widetilde{V}$ by the normal products, which makes $\widetilde{V}$ a vertex operator algebra. \hfill \quad\hbox{\rule[-2pt]{3pt}{6pt}}\par\vspace{3mm} \vspace{3mm} \subsection{The moonshine VOA} Let's apply the above construction to the Leech lattice $\Lambda$ and a fixed point free automorphism $\sigma$ of $\Lambda$ of order three. Then a trace function $T_{V_{\Lambda}}(\sigma,\1;\tau)$ of $\sigma$ on $V_{\Lambda}$ is $$ q^{-1}(\frac{1}{\prod_{n=1}^{\infty}(1-\xi q^n)})^{12} (\frac{1}{\prod_{n=1}^{\infty}(1-\xi^2 q^n)})^{12}=q^{-1}(\frac{1}{\prod_{n=1}^{\infty}(1+q^n+q^{2n})})^{12}=\frac{\eta(\tau)^{12}}{\eta(3\tau)^{12}}. 
$$ Hence, the character of the twisted module $V_{\Lambda}(\sigma)$ is $$ \ch(V_{\Lambda}(\sigma))=\ch(W^{3})+\ch(W^{4})+\ch(W^{5})=T_{V_{\Lambda}}(\sigma,\1,-1/\tau)= 3^6q^{-1}q^{4/3}\frac{\prod_{n=1}^{\infty}(1-q^n)^{12}}{\prod_{n=1}^{\infty}(1-q^{n/3})^{12}},$$ which implies that $W^{3}$ (also $W^{6}$) has no elements of weight $1$ and $\ch(\widetilde{V}_{\Lambda})=J(\tau)$. By an easy calculation, $\dim W^3_2=3^6(12+12+\binom{12}{2})=65610$ and so the triality automorphism of $\widetilde{V}$ defined by $e^{2\pi \sqrt{-1}i/3}$ on $W^{3i}$ for $i=0,1,2$ corresponds to ${\rm 3B}\in {\mathbb M}$ if $\widetilde{V}\cong V^{\natural}$. \subsection{A new VOA No.32 in Schellekens' list} We next start from a Niemeier lattice $N$ of type $E_6^4$ and a triality automorphism $\sigma$ which acts fixed-point-freely on the first $E_6$-component and permutes the last three copies of $E_6$, where we choose $<(0,1,1,1),(1,1,2,0)>$ as a set of glue vectors of $N$ for $E_6^4$. We note that since $E_6$ contains a full sublattice $A_2^3$, $E_6$ has a fixed-point-free automorphism of order three. Since $t=9$, in order to determine the dimension of $(\tilde{V}_N)_1$, it is enough to examine the constant term of $q^{6/24}\frac{\Theta_H(-1/\tau)}{\eta(-1/\tau)^6}$. Since the fixed-point sublattice $H$ is isomorphic to $\sqrt{3}E_6^{\ast}$, we have $$\Theta_H(\tau) =\frac{1}{3}[\phi_0(\tau)^3+\frac{1}{4}\{3\phi_0(3\tau)-\phi_0(\tau)\}^3], $$ where $\phi_0(\tau) =\theta_2(2\tau)\theta_2(6\tau)+\theta_3(2\tau)\theta_3(6\tau)$ and $\theta_2(\tau)=\sum_{m\in \Z}q^{(m+1/2)^2}$ and $\theta_3(\tau)=\sum_{m\in \Z}q^{m^2}$, see \cite{CS}. Applying $\phi_0(-1/\tau) =\frac{\tau}{i\sqrt{3}}\phi_0(\tau/3)$, we have $$\frac{\Theta_H(-1/\tau)}{\eta(-1/\tau)^6}=\frac{1}{9\sqrt{3}}q^{-1/4}+\cdots $$ and so $$\dim (\tilde{V}_N)_1=(6\times 12)/3+6+6\times 12+2\times\{3^{9/2}3^{-5/2}\}=120.$$ Clearly, from the construction we know that $(\tilde{V}_N)_1$ contains $A_2^3E_{6,3}$ as a subring. Therefore, $\tilde{V}_N$ is a new vertex operator algebra, No.~32 in the list of Schellekens \cite{S}. \\ \noindent {\bf Acknowledgement} The author would like to thank K.~Tanabe for asking the right questions.
\title{An $O(\log \log n)$-Competitive Binary Search Tree with Optimal Worst-Case Access Times} \author{Prosenjit Bose \thanks{School of Computer Science, Carleton University. The authors are partially supported by NSERC and MRI. Email: \{jit,karim,vida\}@cg.scs.carleton.ca.} \and Karim Dou\"ieb \footnotemark[1] \and Vida Dujmovi\'c \footnotemark[1] \and Rolf Fagerberg \thanks{Department of Mathematics and Computer Science, University of Southern Denmark. Email: rolf@imada.sdu.dk.}} \date{} \begin{document} \sloppy \maketitle \abstract{We present the \emph{zipper tree}, an $O(\log \log n)$-competitive online binary search tree that performs each access in $O(\log n)$ worst-case time. This shows that for binary search trees, optimal worst-case access time and near-optimal amortized access time can be guaranteed simultaneously.} \section{Introduction} A \emph{dictionary} is a basic data structure for storing and retrieving information. The \emph{binary search tree} (BST) is a well-known and widely used dictionary implementation which combines efficiency with flexibility and adaptability to a large number of purposes. It constitutes one of the fundamental data structures of computer science. In the past decades, many BST schemes have been developed which perform element accesses (and indeed many other operations) in $O(\log n)$ time, where $n$ is the number of elements in the tree. This is the optimal single-operation worst-case access time in a comparison-based model. Turning to \emph{sequences} of accesses, it is easy to realize that for specific access sequences, there may be BST algorithms which serve $m$ accesses in less than $\Theta(m \log n)$ time. A common way to evaluate how well the performance of a given BST algorithm adapts to individual sequences is \emph{competitive analysis}: For an access sequence $X$, define ${\rm OPT}(X)$ to be the minimum time needed by any BST algorithm to serve it. To make this precise, a more formal definition of a BST model and of the sequences considered is needed---standard in the area is to use the binary search tree model (BST model) defined by Wilber~\cite{wilber}, in which the only existing non-trivial lower bounds on ${\rm OPT}(X)$ have been proven~\cite{tango,wilber}. A given BST algorithm $A$ is then said to be $f(n)$-\emph{competitive} if it performs $X$ in $O(f(n)\, {\rm OPT}(X))$ time for all $X$. In 1985, Sleator and Tarjan~\cite{splay} developed a BST called \emph{splay trees}, which they conjectured to be $O(1)$-competitive. Much of the research on the efficiency of BSTs on individual input sequences has grown out of this conjecture. However, despite decades of research, the conjecture is still open. More generally, it is unknown if there exist asymptotically optimal BST data structures. In fact, for many years the best known competitive ratio for any BST structure was $O(\log n)$, which is achieved by plain balanced static trees. This situation was recently improved by Demaine~\emph{et al.}, who in a seminal paper~\cite{tango} developed an $O(\log \log n)$-competitive BST structure, called the \emph{tango tree}. This was the first improvement in competitive ratio for BSTs over the trivial $O(\log n)$ upper bound.
Being $O(\log \log n)$-competitive, tango trees are always at most a factor $O(\log \log n)$ worse than ${\rm OPT}(X)$. On the other hand, they may actually pay this multiplicative overhead at each access, implying that they have $\Theta(\log n \log \log n)$ worst-case access time, and use $\Theta(m \log n \log \log n)$ time on some access sequences of length $m$. In comparison, any balanced BST (even static) has $O(\log n)$ worst-case access time and spends $O(m \log n)$ time on every access sequence. The problem we consider in this paper is whether it is possible to combine the best of these bounds---that is, whether an $O(\log \log n)$-competitive BST algorithm that performs each access in optimal $O(\log n)$ worst-case time exists. We answer this question affirmatively by presenting a data structure achieving these complexities. It is based on the overall framework of tango trees---however, where tango trees use red-black trees~\cite{redblack} for storing what are called preferred paths, we develop a specialized BST representation of the preferred paths, tuned to the purpose. This representation is the main technical contribution, and its description takes up the bulk of the paper. In the journal version of their seminal paper on tango trees, Demaine~\emph{et al.}\ suggested that such a structure exists. Specifically, in the further work section, the authors gave a short sketch of a possible solution. Their suggested approach, however, relies on the existence of a BST supporting dynamic finger, split and merge in $O(\log r)$ worst-case time, where $r$ is $1$ plus the rank difference between the accessed element and the previously accessed element. Such a BST could indeed be used for the auxiliary tree representation of preferred paths. However, the existence of such a structure (in the BST model) is an open problem. Consequently, since the publication of their work, the authors have revised their stance and consider the problem solved in this paper to be an open problem \cite{john}. Recently, Woo~\cite{woo} made some progress concerning the existence of a BST having the dynamic finger property in the worst case. He developed a BST algorithm that, based on empirical evidence, appears to satisfy the dynamic finger property in the worst case. Unfortunately, this BST algorithm does not allow insertion/deletion or split/merge operations, thus it cannot be used to maintain the preferred paths in a tango tree. After the publication of the tango tree paper, two other $O(\log \log n)$-competitive BSTs have been introduced by Derryberry~\emph{et al.}~\cite{multisplay,MultisplayThesis} and Georgakopoulos~\cite{loglognsplay}. The multi-splay trees~\cite{multisplay} are based on tango trees, but instead of using red-black trees as auxiliary trees, they use splay trees~\cite{splay}. As a consequence, multi-splay trees can be shown~\cite{multisplay,MultisplayThesis} to satisfy additional properties, including the scanning and working-set bounds of splay trees, while maintaining $O(\log \log n)$-competitiveness. Georgakopoulos uses the interleave lower bound of Demaine {\em et al.} to develop a variation of splay trees called {\em chain-splay} trees that achieves $O(\log \log n)$-competitiveness while not maintaining any balance condition explicitly. However, neither of these two structures achieves a worst-case single access time of $O(\log n)$.
A data structure achieving the same running time as tango trees alongside $O(\log n)$ worst-case single access time was developed by Kujala and Elomaa~\cite{poketree}, but this data structure does not adhere to the BST model (in which the lower bounds on ${\rm OPT}(X)$ are proved). The rest of this paper is organized as follows: In Section~\ref{preliminaries}, we formally define the model of BSTs and the access sequences considered. We state the lower bound on ${\rm OPT}(X)$ developed in~\cite{tango,wilber} for analyzing the competitive ratio of BSTs. We also describe the central ideas of tango trees. In Section~\ref{hybrid}, we introduce a preliminary data structure called \emph{hybrid trees}, which does not fit the BST model proper, but which is helpful in giving the main ideas of our new BST structure. Finally in Section~\ref{zippertree}, we develop this structure further to fit the BST model. This final structure, called \emph{zipper trees}, is a BST achieving the optimal worst-case access time while maintaining the $O(\log \log n)$-competitiveness property. \section{Preliminaries} \label{preliminaries} \subsection{BST Model} \label{model} In this paper we use the binary search tree model (BST model) defined by Wilber~\cite{wilber}, which is standard in the area. Each node stores a key from a totally ordered universe, and the keys obey in-order: at any node, all of the keys in its left subtree are less than the key stored in the node, and all of the keys in its right subtree are greater (we assume no duplicate keys appear). Each node has three pointers, pointing to its left child, right child, and parent. Each node may keep a constant\footnote{According to standard conventions, $O(\log_2 n)$ bits are considered as constant.} amount of additional information, but no further pointers may be used. To perform an access, we are given a pointer initialized to the root. An access consists of moving this pointer from a node to one of its adjacent nodes (through the parent pointer or one of the children pointers) until it reaches the desired element. Along the way, we are allowed to update the fields and pointers in any nodes that the pointer touches. The access cost is the number of nodes touched by the pointer. As is standard in the area, we only consider sequences consisting of element accesses on a fixed set $S$ of $n$~elements. In particular, neither unsuccessful searches, nor updates appear. \subsection{Interleave Lower Bound} The interleave bound is a lower bound on the time taken by any binary search tree in the BST model to perform an access sequence $X=\{x_1,x_2,\ldots,x_m\}$. The interleave bound was developed by Demaine~\emph{et al.}~\cite{tango} and was derived from a previous bound of Wilber~\cite{wilber}. Let $P$ be a static binary search tree of minimum height, built on the set of keys $S$. We call $P$ the \emph{reference} tree. For each node $y$ in $P$, we consider the accesses $X$ to keys in the nodes in the subtree of $P$ rooted at $y$ (including $y$). Each access of this subsequence is then labelled ``left'' or ``right'', depending on whether the accessed node is in the left subtree of $y$ (including $y$), or in its right subtree, respectively. The \emph{amount of interleaving through $y$} is the number of alternations between left and right labels in this subsequence. The interleave bound ${\rm IB}(X)$ is the sum of these interleaving amounts over all nodes $y$ in $P$. 
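To make the definition concrete, the following Python sketch computes ${\rm IB}(X)$ directly from the description above; the encoding of $P$ by recursive halving of the sorted key set and all names are our own illustration, and the quadratic running time is for clarity only. \begin{verbatim} # Sketch: interleave bound IB(X), computed from its definition. # S: set of keys; X: access sequence over S. The reference tree P # is the minimum-height BST encoded by recursive halving of sorted(S). def interleave_bound(S, X): S = sorted(S) def count(lo, hi): # Node y = S[mid]; its subtree spans the keys S[lo:hi]. if hi - lo <= 1: return 0 mid = (lo + hi) // 2 last, alternations = None, 0 for x in X: if S[lo] <= x <= S[hi - 1]: # access inside y's subtree side = 'L' if x <= S[mid] else 'R' # left side includes y if last is not None and side != last: alternations += 1 last = side return alternations + count(lo, mid) + count(mid + 1, hi) return count(0, len(S)) \end{verbatim} For example, with $S=\{0,\ldots,7\}$ and $X=(0,7,0,7)$ the sketch returns $3$: all of the interleaving occurs at the root of $P$.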
The exact statement of the lower bound from~\cite{tango} is as follows: \begin{thm} \label{ib} For any access sequence $X$, ${\rm IB}(X)/2- n$ is a lower bound on ${\rm OPT}(X)$. \end{thm} \subsection{Tango Trees} We outline the main ideas of tango trees~\cite{tango}. As in the previous section, denote by $P$ the reference tree, a static binary search tree of height $O(\log n)$ built on a set of keys $S$. The \emph{preferred child} of an internal node $y$ in $P$ is defined as its left or right child depending on whether the last access to a node in the subtree rooted at $y$ (including $y$) was in the left subtree of $y$ (including $y$) or in its right subtree, respectively. We call a maximal chain of preferred children a \emph{preferred path}. The set of preferred paths naturally partitions the elements of $S$ into disjoint subsets of size $O(\log n)$ (see the left part of Figure~\ref{fig-tango}). Remember that $P$ is a static tree; only the preferred paths may evolve over time (after each access). The ingenious idea of tango trees is to represent the nodes on a preferred path as a balanced \emph{auxiliary} tree of height $O(\log \log n)$. The tango tree can be seen as a collection of auxiliary trees linked together. The leaves of an auxiliary tree representing a preferred path $p$ link to the roots of the auxiliary trees representing the paths immediately below $p$ in $P$ (see Fig.~\ref{fig-tango}), with the links uniquely determined by the inorder ordering. The auxiliary tree containing the root of $P$ constitutes the top part of the tango tree. In order to distinguish auxiliary trees within the tango tree, the root of each auxiliary tree is marked (using one bit). \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{Tango.eps} \end{center} \caption{\label{fig-tango} On the left, the reference tree P with its preferred paths. On the right, the tango tree representation of P.} \end{figure} Note that the reference tree $P$ is not an explicit part of the structure; it just helps to explain and understand the concept of tango trees. When an access is performed, the preferred paths of $P$ may change. This change is actually a combination of several cut and concatenation operations involving subpaths. Auxiliary trees in tango trees are implemented as red-black trees~\cite{redblack}, and~\cite{tango} shows how to implement these cut and concatenation operations using standard split and join operations on red-black trees. Here are the two main operations used to maintain tango trees: \begin{itemize} \item C{\scriptsize UT}-T{\scriptsize ANGO}($A$, $d$) -- cut the red-black tree $A$ into two red-black trees, one storing the path of all nodes of depth at most $d$, and the other storing the path of all nodes of depth greater than $d$. \item C{\scriptsize ONCATENATE}-T{\scriptsize ANGO}($A$, $B$) -- join two red-black trees that store two disjoint paths where the bottom of one path (stored in $A$) is the parent of the top of the other path (stored in $B$). So the root of $B$ is attached to a leaf of $A$. \end{itemize} These operations take $O(\log k)$ time for trees of size $k$ using extra information stored in nodes. As the trees store paths in $P$, we have $k = O(\log n)$. In addition to storing the key value and the depth in $P$, each node stores the minimum and maximum depth over the nodes in its subtree within its auxiliary tree.
This additional data can be trivially maintained in red-black trees with a constant-factor overhead. Hence, if an access passes $i$ different preferred paths in $P$, the necessary change in the tango tree will be $O(i)$ cut and concatenation operations, which is performed in $O(i\log \log n)$ time. Over an access sequence $X$ the total number of cut and concatenation operations performed in $P$ corresponds to the interleave bound $O({\rm IB}(X))$, thus the tango tree performs this access sequence in $O(\log \log n \, {\rm IB}(X))$ time. \section{Hybrid Trees} \label{hybrid} In this section, we introduce a data structure called \emph{hybrid trees}, which has the right running time, but which does not fit the BST model proper. However, it is a helpful intermediate step that contains the main ideas of our final BST structure. \subsection{Path Representation} For all preferred paths in $P$, we keep the top $\Theta(\log \log n)$ nodes exactly as they appear on the path. We call this the \emph{top path{}}. The remaining nodes (if any) of the path we store as a red-black tree, called the \emph{bottom tree}, which we attach below the top path{}. Since a preferred path has size $O(\log n)$, this bottom tree{} has height $O(\log \log n)$. More precisely, we will maintain the invariant that a top path{} has length in $[\log\log n, 3\log\log n]$, unless no bottom tree{} appears, in which case the constraint is $[0,3\log\log n]$. (This latter case, where no bottom tree{} appears, will induce simple and obvious variants of the algorithms in the remainder of the paper, variants which we for clarity of exposition will not mention further.) A \emph{hybrid tree} consists of all the preferred paths of $P$, represented as above, linked together to form one large tree, analogous to tango trees. The required worst-case search complexity of hybrid trees is captured by the following lemma. \begin{lemma} \label{height} A hybrid tree $T$ satisfies the following property: $$ d_T(x)= O(d_P(x)) \quad \forall x \in S, $$ where $d_T(x)$ and $d_P(x)$ are defined as the depth of the node $x$ in the tree $T$ and in the reference tree $P$, respectively. In particular, $T$ has $O(\log n)$ height. \end{lemma} \begin{proof} Consider a preferred path $p$ in $P$ and its representation tree $h$. The distance, in terms of number of links to follow, from the root of $h$ to one of its nodes or leaves $x$ is no more than a constant times the distance between $x$ and the root of $p$. Indeed, if $x$ is part of the top path{}, then the distance to the root of the path by construction is the same in $h$ and $p$. Otherwise, this distance increases by at most a constant factor, since $h$ has a height of $O(\log \log n)$ and the distance in $p$ is already $\Omega(\log \log n)$. Since the number of links followed between preferred paths is the same in $P$ and $T$, the lemma follows. \end{proof} \subsection{Maintaining Hybrid Trees under Accesses} \label{maintaining-hybrid-trees} As in tango trees, the path $p$ traversed in $P$ to reach a desired node may pass through several preferred paths. During this access the preferred paths in $P$ must change such that $p$ becomes the new preferred path containing the root. This is performed by cut and concatenate operations on the preferred paths passed by $p$.
When $p$ leaves a preferred path, that path is cut at a depth corresponding to the depth in $P$ at which $p$ leaves it, and the top part cut out is concatenated with the next preferred path to be traversed. We note that the algorithm may as well restrict itself to cutting when traversing $p$, producing a sequence of cut-out parts hanging below each other, which can then be concatenated in one go at the end, producing the new preferred path starting at the root. We will use this version below. In this subsection, we will show how to maintain the hybrid tree representation of the preferred paths after an access. Our goal is to describe how to perform the operations \emph{cut} and \emph{concatenate} with the following complexities: When the search path passes only the top path{} of a preferred path, the cut procedure takes $O(k)$ time, where $k$ is the number of nodes traversed in the top path{}. When the search path passes the entire top path{} and ends up in the bottom tree{}, the cut procedure takes $O(\log \log n )$ time. The concatenation operation, which glues together all the cut-out path representation parts at the end of the access, is bounded by the time used by the search and the cut operations performed during the access. Assuming these running times, it follows, by the invariant that all top path{}s (with bottom tree{}s below them) have length $\Theta(\log\log n)$, that the time of an access involving $i$ cut operations in $P$ is bounded both by the number of nodes on the search path~$p$, and by $i \log \log n$. By Lemma~\ref{height}, this is $O(\min\{\log n, i \log \log n\})$ time. Hence, we will have achieved optimal worst-case access time while maintaining $O(\log \log n)$-competitiveness. \paragraph{CUT:} Case~1: We only traverse the top path{} of a path representation. Let $k$ be the number of nodes traversed in this top path{} and let $x$ be the last traversed node in this top path{}. The cut operation marks the node succeeding $x$ on the top path{} as the new root of the path representation, and unmarks the other child of $x$. The cut operation now has removed $k$ nodes from the top path{} of the path representation. This implies that we possibly have to update the representation, since the $\Theta(\log \log n)$ bound on the size of its top path{} has to be maintained. Specifically, if the size of the top path drops below $2 \log\log n$, we will move some nodes from the bottom tree{} to the top path{}. The nodes should be those from the bottom tree{} having smallest depth (in $P$), i.e., the next nodes on the preferred path in $P$. After a cut of $k$ nodes, it is not clear, for small $k$ (smaller than $\log\log n$), how to extract the next $k$ nodes from the bottom tree{} in $O(k)$ time. Instead, we use an \emph{extraction process}, described below, which extracts the next $\log \log n$ nodes from the bottom tree{} in $O(\log \log n)$ steps, and we run this process incrementally: Whenever further nodes are cut from the top path{}, the extraction process is advanced by $\Theta(k)$ steps, where $k$ is the number of nodes cut, and then the process is stopped until the next cut at this path occurs. Thus, the work of the extraction process is spread over several Case~1 cuts (if not stopped before by a Case~2 cut, see below). The speed of the process is chosen such that the extraction of $\log \log n$ nodes is completed before that number of nodes have been cut away from the top path, hence it will raise the size of the top path{} to at least $2 \log\log n$ again.
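The bookkeeping of such an incrementally advanced extraction can be modelled by a coroutine that is advanced a constant number of steps per node cut. The following Python sketch is our own illustration of this scheduling policy only; the actual rotations are abstracted into a generator, and the constant \texttt{C} and all names are chosen for the example. \begin{verbatim} # Sketch of the Case 1 bookkeeping: an extraction of loglog nodes is # run incrementally, advanced Theta(k) steps whenever k nodes are cut # from the top path. Each yield stands for O(1) rotations of EXTRACT. def extraction_steps(total): for _ in range(total): # ... perform the next O(1) rotations of EXTRACT here ... yield True class PathRep: C = 2 # steps advanced per node cut def __init__(self, top_len, loglog): self.top_len, self.loglog = top_len, loglog self.pending = None # ongoing extraction, if any def cut_top(self, k): # k nodes removed from the top path self.top_len -= k if self.pending is None and self.top_len < 2 * self.loglog: self.pending = extraction_steps(self.loglog) if self.pending is not None: for _ in range(self.C * k): # advance Theta(k) steps if next(self.pending, None) is None: # finished self.top_len += self.loglog # nodes join the top path self.pending = None break \end{verbatim} With \texttt{C = 2}, an extraction of $\log\log n$ steps completes after roughly $(\log\log n)/2$ further nodes have been cut, which is consistent with the speed requirement stated above.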
In general, we maintain the additional invariant that the top path{} has size at least $2 \log\log n$, unless an extraction process is ongoing. For larger values of $k$ (around $\log\log n$), up to two extraction processes (the first of which could be partly done by a previous access) will be used to ensure this. Case~2: We traverse the entire top path{} of path representation $A$, and enter the bottom tree{}. Let $x$ be the last traversed node in $A$ and let $y$ be the marked child of $x$ that is the root of the next path representation on the search path. First, we finish any pending extraction process in $A$, so that its bottom tree{} becomes a valid red-black tree. Then we rebuild the top path{} into a red-black tree in linear time (details appear under the description of concatenate below), and we join it with the bottom tree{} using C{\scriptsize ONCATENATE}-T{\scriptsize ANGO}. Then we perform C{\scriptsize UT}-T{\scriptsize ANGO}($A'$, $d$) where $A'$ is the combined red-black tree, and $d=d_P(y)-1$. After this operation, all nodes of depth greater than $d$ are removed from the path representation $A$ to form a new red-black tree $B$ attached to $A$ (the root of $B$ is marked in the process). To make the tree~$B$ a valid path representation, we perform an extraction process twice, which extracts $2\log \log n$ nodes from it to form a top path. Finally we unmark $y$. This takes $O(\log \log n)$ time in total. \paragraph{CONCATENATE:} What is cut out during an access is a sequence of top path{}s (Case~1 cuts) and red-black trees (Case~2 cuts) hanging below each other. We have to concatenate this sequence into one path representation. We first rebuild all sequences of consecutive subpaths (maximal sequences of nodes which have one marked child) into valid red-black trees, in time linear in the number of nodes of each sequence (details below). This leaves a sequence of valid red-black trees hanging below each other. Then we iteratively perform C{\scriptsize ONCATENATE}-T{\scriptsize ANGO}($A$,$B$), where $A$ is the current highest red-black tree and $B$ is the tree hanging below $A$, until there is one remaining red-black tree. Finally we extract $2\log \log n$ nodes from the obtained red-black tree to construct the top path{} of the path representation. The time used for concatenate is bounded by the time used already during the search and cut part of the access. One way to convert a path of length $k$ into a red-black tree in $O(k)$ time is as follows: consider each node on the path as a red-black tree of size one. We iteratively perform a series of C{\scriptsize ONCATENATE}-T{\scriptsize ANGO}($A$,$B$) operations for each pair of red-black trees $A$ followed by $B$. After each iteration the number of trees is divided by 2 and their size is doubled, giving a total time for rebuilding a path into a valid red-black tree of $O(\sum_{i=1}^{\log k} ik/2^i)=O(k)$.
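The last bound follows from a convergent series: after iteration $i$ there are $k/2^i$ trees of size $2^i$, and each join of two such trees costs $O(i)$, so the total work is $$\sum_{i=1}^{\log k}\frac{k}{2^i}\,O(i)=O\Bigl(k\sum_{i\geq 1}\frac{i}{2^i}\Bigr)=O(k), \qquad \mbox{ since } \sum_{i\geq 1}\frac{i}{2^i}=2.$$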
\paragraph{EXTRACT:} We now show how to perform the central process of our structure, namely extracting the next part of a top path{} from a bottom tree{}. Specifically, we will extract a subpath of $\log \log n$ nodes of minimum depth (in $P$) from the bottom tree{}~$A'$ of a given path representation $A$, using $O(\log \log n)$ time. Let $x$ be the deepest node on the top path{} of $A$, so that the unmarked child of $x$ corresponds to the root of the bottom tree{} $A'$. The extraction process will separate the nodes of depth (in $P$) smaller than $d=d_P(x)+\log \log n$ from the bottom tree{}~$A'$. Let a \emph{zig} segment of a preferred path $p$ be a maximal sequence of nodes such that each node in the sequence is linked to its right child in $p$. A \emph{zag} segment is defined similarly such that each node on the segment is linked to its left child (see Fig.~\ref{fig-zig-zag}). The key observation we exploit is the following: the sequence of all zig segments, ordered by their depth in the path, followed by the sequence of all reversed zag segments, ordered reversely by their depth in the path, is equal to the ordering of the nodes in key space (see Fig.~\ref{fig-zig-zag}). This implies that to extract the nodes of depth smaller than $d$ (in $P$) from a bottom tree{}, we can cut the extreme ends (in key space) of the tree, linearize them to two lists, and then combine them by a binary merge procedure using depth in $P$ as the ordering. This forms the core of the extract operation. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{zig-zag.eps} \end{center} \caption{\label{fig-zig-zag} A path, its decomposition into zig (solid regions) and zag (dashed regions) segments, and its layout in key order.} \end{figure} We have to do this using rotations, while maintaining a tree at all times. We now give the details of how to do this, with Fig.~\ref{fig-extract} illustrating the process. Using extra fields of each node storing the minimum and maximum depth value (in $P$) of nodes inside its subtree, we can find the node $\ell'$ of minimum key value that has a depth greater than $d$ in $O(\log \log n)$ time, by starting at the root of $A'$ and repeatedly walking to the leftmost child whose subtree has a node of depth greater than $d$. Then define $\ell$ as the predecessor of $\ell'$. Symmetrically, we can find the node $r'$ of maximum key value that has depth greater than $d$ and define $r$ as the successor of $r'$. First we split $A'$ at $\ell$ to obtain two subtrees $B$ and $C$ linked to the new root $\ell$, where $B$ contains a first sequence of nodes at depth smaller than $d$. Then we split $C$ at $r$ to obtain the subtrees $D$ and $E$, where $E$ contains a second sequence of nodes at depth smaller than $d$. In $O(\log \log n)$ time we convert the subtrees $B$ and $E$ into paths corresponding to ordered sequences of zig segments for $B$ and zag segments for $E$. To do so, we perform left rotations at the root of $B$ until its right subtree is a leaf (i.e., when its right child is a marked node). Then we repeat the following: if the left child of the root has no right child, then we perform a right rotation at the root of $B$ (adding one more node to the right spine, which will constitute the final path). Otherwise we perform a left rotation at the left child of the root of $B$, moving its right subtree into the left spine.
This process takes a time linear in the size of $B$, since each node is involved in a rotation at most 3 times (once a node enters the left spine, it can only leave it by being added to the right spine). A symmetric process is performed to convert the subtree $E$ into a path. The last operation, called a \emph{zip}, merges (in terms of depth in $P$) the two paths $B$ and $E$, in order to form the next part of the top path. We repeatedly select the root of $B$ or $E$ that has the smallest depth in the tree $P$. The selected root is brought to the bottom of the top path{} using $O(1)$ rotations. The zip operation stops when the subtrees $B$ and $E$ are both empty. Finally, we perform a left rotation at the node $\ell$ if needed, i.e., if $r$ has a smaller depth in $P$ than $\ell$. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{extract.eps} \end{center} \caption{\label{fig-extract} (a) Tree $A'$. (b) Split $A'$ at $\ell$. (c) Split $C$ at $r$. (d) Convert the subtrees $B$ and $E$ into paths. (e) Zip the paths $B$ and $E$.} \end{figure} The time taken is linear in the number of extracted nodes, i.e., $\log \log n$. The process consists of a series of rotations, hence it can be stopped and resumed without problems. Therefore, the discussion presented in this section allows us to conclude with the following theorem. \begin{thm} Our hybrid tree data structure is $O(\log \log n)$-competitive and performs each access in $O(\log n)$ worst-case time. \end{thm} \subsection{Hybrid Trees and the BST Model} We specified in the description of the cut operation (more precisely, in Case~1) that the extraction process is executed incrementally, i.e., the work is spread over several cut operations. In order to efficiently revive an extraction process which has been stopped at some point in the past, we have to return to the position where its next rotation should take place. This location is unique for each path representation, and is always in its bottom tree{}. Thus, traversing its top path{} to reach the bottom tree{} would be too costly for the analysis of Case~1. Instead, we store in the marked node (the first node of the top path{}) appropriate information on the state of the process. Additionally, we store an extra pointer pointing to the node where the next rotation in the process should take place. This allows us to revive an extraction process in constant time. Unfortunately, the structure so obtained is not in the BST model (see Section~\ref{model}), due to the extra pointer. In the next section we show how to further develop the idea from this section into a data structure fitting the BST model. Still, we note that the structure of this section can be implemented in the comparison-based model on a pointer machine, with access sequences $X$ being served in $O(\log \log n \, {\rm OPT}(X))$ time, and each access taking $O(\log n)$ time worst-case. \section{Zipper Trees} \label{zippertree} The data structure described in the previous section is a BST, except that each marked node has an extra pointer facilitating constant-time access to the point in the path representation where an extraction process should be revived. In this section, we show how to get rid of this extra pointer and obtain a data structure with the same complexity bounds, but now fitting the BST model described in Section~\ref{model}. To do so, we develop a more involved version of the representation of preferred paths and the operations on them.
The goal of this new path representation is to ensure that all rotations of an extraction process are located within distance $O(1)$ of the root of the tree of the representation. The two main ideas involved are: 1) storing the top path{} as lists, hanging to the sides of the root, from which the top path{} can be generated incrementally by merging as it is traversed during access, and 2) using a version of the split operation that only does rotations near the root. The time complexity analysis follows that of hybrid trees, and will not be repeated. \subsection{Path Representation} For each preferred path in $P$ we decompose its highest part into two sequences, containing its zig and its zag segments, respectively. These are stored as two paths of nodes, of increasing and decreasing key values, respectively. As seen in Section~\ref{maintaining-hybrid-trees} (cf.\ Fig.~\ref{fig-zig-zag}), both will be ordered by their depth in $P$. Let $\ell$ and $r$ be the highest nodes of the zig and zag sequences, respectively. The node $\ell$ will be the root of the auxiliary tree (the marked node). The remainder of the zig sequence is the left subtree of $\ell$, $r$ is its right child, and the remainder of the zag sequence is the right subtree of $r$. We call this upper part of the tree a \emph{zipper}. We repeat this decomposition once again for the next part of the path to obtain another zipper which is the left subtree of $r$. Finally, the remaining nodes on the path are stored as a red-black tree of height $O(\log \log n)$, hanging below the lowest zipper. Fig.~\ref{fig-zipper} illustrates the construction. The two zippers constitute the \emph{top path}, and the red-black tree the \emph{bottom tree}. Note that the root of the bottom tree{} is reachable in $O(1)$ time from the root of the path representation. We will maintain the invariant that individually, the two zippers contain at most $\log\log n$ nodes each, while (if the bottom tree is non-empty) they combined contain at least $(\log\log n)/2$ nodes. A \emph{zipper tree} consists of all the preferred paths of $P$, represented as above, linked together to form one large tree. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{fig-zipper.eps} \end{center} \caption{\label{fig-zipper} The path representation in zipper trees.} \end{figure} \subsection{Maintaining Zipper Trees under Accesses} We now give the differences, relative to Section~\ref{maintaining-hybrid-trees}, of the operations during an access. \paragraph{CUT:} When searching a path representation, we incrementally perform a zip operation (i.e., a merge based on depth order) on the top zipper, until it outputs either the node searched for, or a node that leads to the next path representation. If the top zipper is exhausted, the lower zipper becomes the upper zipper, and an incremental creation of a new lower zipper by an extraction operation on the bottom tree{} is initiated (during which the lower zipper is defined to have size zero). Each time one more node from the top zipper is output (during the current access, or during a later access passing through this path representation), the extraction advances $\Theta(1)$ steps. The speed of the extraction process is chosen such that it finishes with $\log\log n$ nodes extracted before $(\log\log n)/2$ nodes have been output from the top zipper. The new nodes will make up a fresh lower zipper, thereby maintaining the invariant.
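The merging discipline of the zip operation is an ordinary two-way merge on depth in $P$; the following minimal Python sketch is our own illustration, with plain lists standing in for the two zipper spines (in the real tree, each emitted node corresponds to $O(1)$ rotations near the root). \begin{verbatim} # Sketch of the zip discipline: the zig and zag spines are each ordered # by depth in P, and the next nodes of the preferred path are produced # by a two-way merge on depth. def zip_merge(zig, zag): # zig, zag: lists of (depth_in_P, key), each sorted by depth. merged, i, j = [], 0, 0 while i < len(zig) and j < len(zag): if zig[i][0] < zag[j][0]: merged.append(zig[i]); i += 1 else: merged.append(zag[j]); j += 1 merged.extend(zig[i:]) merged.extend(zag[j:]) return merged # nodes of the preferred path, in depth order \end{verbatim} In the data structure this merge is performed incrementally, one output node at a time, exactly as described above.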
If the access through a path representation overlaps (in time) at most one extraction process (either initiated by itself or by a previous access), it is defined as a case~1 cut. No further action takes place, besides the proper remarkings of roots of path representations, as in Section~\ref{maintaining-hybrid-trees}. If a second extraction process is about to be initiated during an access, we know that $\Theta(\log\log n)$ nodes have been passed in this path representation, and we define it as a case~2 cut. As in Section~\ref{maintaining-hybrid-trees}, this now ends by converting the path representation to a red-black tree, cutting it like in tango trees, and then converting the remaining red-black tree into a valid path representation (as defined in the current section), all in $\Theta(\log\log n)$ time. \paragraph{CONCATENATE:} There is no change from Section~\ref{maintaining-hybrid-trees}, except that the final path representation produced is as defined in the current section. \paragraph{EXTRACT:} The change from Section~\ref{maintaining-hybrid-trees} is that the final zip operation is not performed (the process stops at step~(d) in Fig.~\ref{fig-extract}), and that we must use a search and a split operation on red-black trees where all structural changes consist of rotations a distance $O(1)$ from the root\footnote{As no actual details of the split operation used are given in \cite{tango}, we do not know whether their split operation fulfills this requirement. It is crucial for our construction that such a split operation is possible, so we describe one solution here.} (of the bottom tree, which is itself at a distance $O(1)$ from the root of the zipper tree). Such a split operation is described in the appendix (Part I). Note that searching takes place incrementally as part of the split procedure.
\section{Conclusion} The main goal in this area of research is to improve the competitive ratio of $O(\log \log n)$. Here we have been able to tighten other bounds, namely the worst-case search time. We think this result helps provide a better understanding of competitive BSTs. It could be that competitiveness is in conflict with balance maintenance, i.e., an $O(1)$-competitive binary search tree might not be able to guarantee an $O(\log n)$ worst-case search time. For instance, the splay tree~\cite{splay} and the GreedyFuture tree \cite{munro2000competitiveness, geoBST}, the two BSTs that are conjectured to be dynamically optimal, do not guarantee optimal worst-case search time. Thus even if dynamically optimal trees exist, our result could still be a good alternative with optimal worst-case performance. We also think that the ideas developed to achieve our result have their own interest. They can be used to improve the worst-case performance of a data structure while maintaining the same amortized performance. For example, we show in the appendix (Part II) how to adapt them in order to improve the worst-case running time of the multipop operation on a stack. \bibliographystyle{abbrv}
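The appendix is not included in this excerpt; as an illustration of the de-amortization idea just mentioned, the following is a minimal sketch (ours, not the paper's construction) of an array-backed stack in which a multipop only moves the top index, while clearing the abandoned slots is spread over subsequent operations at $O(1)$ extra work each.
\begin{verbatim}
class MultipopStack:
    def __init__(self):
        self._buf = []   # backing array
        self._top = 0    # number of live elements
        self._dirty = 0  # abandoned slots above _top, not yet cleared

    def _clean(self):
        # Clear up to two abandoned slots per operation; apart from
        # multipop, each operation adds at most one dirty slot, so
        # the backlog shrinks between multipops.
        for _ in range(2):
            if self._dirty > 0:
                self._buf[self._top + self._dirty - 1] = None
                self._dirty -= 1

    def push(self, x):
        self._clean()
        if self._top < len(self._buf):
            self._buf[self._top] = x             # reuse an old slot
            self._dirty = max(0, self._dirty - 1)
        else:
            self._buf.append(x)
        self._top += 1

    def pop(self):
        self._clean()
        if self._top == 0:
            raise IndexError("pop from empty stack")
        x = self._buf[self._top - 1]
        self._top -= 1
        self._dirty += 1                         # cleared lazily
        return x

    def multipop(self, k):
        self._clean()
        k = min(k, self._top)
        self._top -= k                           # O(1), independent of k
        self._dirty += k
        return k
\end{verbatim}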
\section{INTRODUCTION} The Galactic X-ray binary source SS\,433, consisting of a stellar-mass black hole in close orbit about an early-type star \citep{BBS2008,HG2008,B2010}, is a miniature analogue of an AGN \citep{MR99}, and is often classified as a microquasar. Two mildly relativistic jets emerge from opposite sides of the compact object at speeds of $0.26~c$. Modeling of the optical spectrum shows that the jet system precesses with a period of 162 days about a cone of half-angle $20^{\circ}$ \citep{AM79, Fabian, Milgrom, M84}. Imaging by the Very Large Array (VLA) confirmed this picture, as the radio images showed helical jets on both sides of the source \citep{Spencer79,HJ81a,HJ81b}. Higher resolution images made by VLBI reveal the structure down to a scale of a few AU \citep{Ver87,Ver93}. Analysis of the 15~GHz VLA-scale structure of the jets in SS\,433, with an angular resolution of about $0.1\arcsec$, was presented in \citet{PaperI}. Multi-epoch dual-frequency analysis of SS\,433 during the summer of 2003 will be presented in \citet{PaperIII}, hereafter Paper~III, and for the summer of 2007 in \citet{PaperIV}, hereafter Paper~IV. In this paper we use high dynamic range VLA images of SS\,433 to study the radiative intensity of the two jets as a function of the material's {\em birth epoch} $t$ and {\em age at emission} $\tau$ (hereafter simply ``age''); see Appendix~1 (\S\ref{s:app1}) for our definitions of these quantities. Our goals are (i) to determine if the two jets are intrinsically the same, and (ii) to learn if the jets behave as individual non-interacting components or as a continuous stream. SS\,433 offers a unique opportunity to answer these questions because it presents two jets with ever-changing mildly-relativistic velocities known as functions of time and position on the sky from their optical properties. In \S\ref{s:obs} we describe the observations and data reduction. In \S\ref{s:jets} we determine the properties of the jets and we discuss the physical implications of these results in \S\ref{s:Discussion}. Our conclusions are summarized in \S\ref{s:conclude}. \section{OBSERVATIONS AND DATA REDUCTION} \label{s:obs} Interferometer data for 2003 July 11 (JD 2452832) were obtained from the VLA data archive. The array was in the A configuration with 27 working antennas. The frequency used was 4.86~GHz, with a bandwidth of 50~MHz for each of the two IF systems. The data were edited and phase calibrated in AIPS \citep{aips} using the nearby source 1922+155, and amplitude calibrated against 3C\,48; the data were then imaged using the tasks IMAGR and CALIB, utilizing many cycles of phase and amplitude self-calibration. These data were previously used by \cite{BB04} to study the symmetries of the jets, and by \cite{MJ08} to study the magnetic field configuration in the jets. Figs.~\ref{fig:CUN}(a) \& (b) show the distribution of total intensity for SS\,433, made with uniform weighting (ROBUST = $-5$); this image has a resolution of $0.32\arcsec$. The kinematic model without nutation or velocity variations is shown in (a), and the model with nutation and velocity variations is shown in (b). The kinematic model used to define the jet locus is described in Appendix~1, \S\ref{s:app1}. A naturally weighted image with a resolution of $0.47\arcsec$ (ROBUST = +5), shown in Figure~\ref{fig:CNA}, reveals very low level emission out to distances of at least $6\arcsec$, corresponding to ages of about 800~days for both jets.
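For scale, this age estimate follows directly from the jets' average proper motion of $\mu \simeq 8\mbox{ mas d}^{-1}$ quoted in \S\ref{s:NormalizationFactors}:
\[
\tau \approx \frac{6\arcsec}{\mu} = \frac{6000\mbox{ mas}}{8\mbox{ mas d}^{-1}} = 750\mbox{ days} \sim 800\mbox{ days}.
\]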
For all of the analysis that follows we used the image shown in Fig.~\ref{fig:CUN}, and included the velocity variations and nutation. This accounts for the ``spiking'' apparent in many of the figures. Uncertainty in the position of the model jet locus is by far the largest source of fluctuations in the total intensity curves; the root-mean-square difference between profiles generated with and without these terms is about 1.5~mJy/beam. Thermal noise is negligible until $I_\nu$ falls below about 0.1~mJy/beam. Comparison of the images with the kinematic model shows them to be compatible, with the kinematic locus being the leading edge of the jets; the source also contains significant off-jet material. We see no unambiguous evidence of either nutation or significant jet velocity variations, but this is not surprising given the limited resolution of the images. \section{PROPERTIES OF THE SS\,433 JETS} \label{s:jets} \subsection{Normalization Factors} \label{s:NormalizationFactors} In order to study the intrinsic properties of the jets, we used the kinematic model to determine the effects on the observed jets of projection and Doppler beaming. In what follows, we will present jet properties as functions of age $\tau$ instead of the birth epoch $t$ because we expect the aging of the jet material to be the most important factor determining its properties. Figure~\ref{fig:LTT} shows $\tau$ for both east and west jets as functions of $t$, and it can be used to estimate $\tau$ at any location on the images, using the average proper motion of $\mu \simeq 8 \mbox{ mas d}^{-1}$. Projection effects were determined using model jets consisting of discrete components equally-spaced in birth epoch $t$; the projected density on the sky was determined by performing a beam-averaged count of the number of components per beam area as functions of position down the kinematic locus of the two jets. This was normalized to unity at the core. The Doppler factor $D$ of each component was found from the kinematic model, and Doppler boosts calculated as $D^{\alpha+n}$, where a continuous jet has $n=2$ and an isolated component has $n=3$ (see Appendix~2 (\S\ref{s:app2})), and $\alpha \simeq 0.7$ is the spectral index of the jet ($F_\nu \propto \nu^{-\alpha}$; Paper~III, Paper~IV). The total normalization factors were obtained by performing a beam-weighted sum of the boosts as a function of position down the model jets; all quantities were determined as functions of $\tau$. Figure~\ref{fig:Norm2} shows combined normalizations for the jets in SS\,433, expressed as multiplicative factors to be applied to observed total intensities, as functions of $\tau$, for $n = 2$. Curves for $n=3$ are similar and not shown. Data from Figure~\ref{fig:Norm2} will be used below to create model jet intensity curves. The ratio of the normalizations for the two values of $n$ ranges between 0.92 and 1.12, sufficiently different from unity to distinguish the two in the regions $\tau \leq 150~\mbox{ days}$ where the jet flux is greatest. \subsection{Determination of the Intrinsic Structure} \label{s:IntrinsicStructure} Figure~\ref{fig:FUN(age)} shows the profiles down the model locus of the observed total intensities of the jets derived from the image in Figure~\ref{fig:CUN} and the naturally-weighted image in Figure~\ref{fig:CNA}, as functions of $\tau$. For the region $\tau \ge 60 \mbox{ days}$, contamination by the core is less than 1\% of the jet intensity in Figure~\ref{fig:CUN}; the beam occupies the first 75 days in Figure~\ref{fig:CNA}.
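As an aside, the per-component boost described in \S\ref{s:NormalizationFactors} can be sketched as follows. This is a minimal sketch (ours); it uses only the constants quoted in this paper and in the ephemeris of Appendix~1, and omits the beam-weighted summation over model components.
\begin{verbatim}
import numpy as np

BETA = 0.2647                         # jet speed v/c (Table 1)
ALPHA = 0.7                           # spectral index, F_nu ~ nu**-alpha
GAMMA = 1.0 / np.sqrt(1.0 - BETA**2)  # bulk Lorentz factor

def doppler_factor(cos_theta):
    """Doppler factor D of a jet element moving at angle theta to the
    line of sight (cos_theta > 0 for approaching material)."""
    return 1.0 / (GAMMA * (1.0 - BETA * cos_theta))

def boost(cos_theta, n=2):
    """Doppler boost D**(alpha + n): n=2 for a continuous jet,
    n=3 for isolated components."""
    return doppler_factor(cos_theta) ** (ALPHA + n)
\end{verbatim}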
The loops in the curves for both jets are the result of the fact that material of multiple different birth epochs can have the same age as a result of differing light travel times. It is useful to know that the places along each jet where the radial velocity switches sign, and the jet motions are therefore in the plane of the sky, are located at $t = \tau = 14,\,125,\,176,\,288,\,338,\,450, \mbox{ and } 500$~days. At these ages the projection factors and Doppler boosts of the two jets are equal, so if the two jets are intrinsically the same, the raw total intensities should be the same. Examination of the intensity curves shows that, allowing for the beam full-widths at half-maximum of about $\Delta \tau \simeq 40 \mbox{ days}$ for Figure~\ref{fig:FUN(age)}(a) and $\simeq 60 \mbox{ days}$ for Figure~\ref{fig:FUN(age)}(b), this is satisfied. As can be seen from Figure~\ref{fig:F1F2(age)}, neither normalization for projection effects alone nor for Doppler boosting alone results in east and west jets appearing the same. This is true for either choice of $n$. Figure~\ref{fig:NormCUN} shows the intrinsic brightness\footnote{We use this term to refer to the total intensities that we would observe were we in a frame co-moving with a piece of the jet.} profiles of the jets in SS\,433 created by normalizing for both projection effects and Doppler boosting for $n=2$ and $n=3$, as functions of $\tau$, derived from the image in Figure~\ref{fig:CUN}. Especially in the case $n=2$, the intrinsic brightnesses of the two jets are very similar when compared for equal ages. The small discrepancies between the derived intrinsic brightnesses of the east and west jets for $n=2$ in the range $80 \leq \tau \leq 220\mbox{ days}$ can be understood as artifacts due to the tight bend in the east jet at RA~$\simeq 1\arcsec$ and the first loop in the west jet, RA~$\simeq -1.2\arcsec$; these correspond to ages $80 \leq \tau \leq 130\mbox{ days}$ and $140 \leq \tau \leq 220\mbox{ days}$, respectively. In the first range the measured total intensity for the east jet is artificially high, as seen in Fig.~\ref{fig:NormCUN}(a), due to two parts of the jet locus being ``in the beam'' at the same time. In the second range, the total intensity of the west jet is falsely measured to be high, again corresponding to Fig.~\ref{fig:NormCUN}(a). The normalization process cannot account for the contamination of one side of the loop by the part of the other that is in the same beam (the procedure lacks {\em a priori} information about the ratio of the two contributions). This is minimal at the cusp of the loop, $\tau \simeq 180\mbox{ days}$. However, when we reconstruct the west jet from the intrinsic brightness of the east jet and the normalization model this is accounted for, and the artifacts largely disappear (see below). While the same arguments apply to $n=3$, they cannot account for the discrepancy over $60 \leq \tau \leq 90 \mbox{ days}$; the agreement for $n=3$ is simply not as good where the jet is brightest, suggesting that $n=2$. We can see the results of the normalizations in another way if we examine the total intensity ratio of the two jets. Figure~\ref{fig:MasterRatiosPlot} shows the ratio of observed total intensities in the form east jet divided by west jet and the predicted ratios for complete correction for $n=2$ and $n=3$, all as functions of $\tau$. The fit of the combined normalization to the data is quite good for $n=2$; the prediction for $n=3$ is not as good a fit.
In fact, the discrepancies between the data ratio and the $n=2$ model ratio are explained in the same way as described above. In the range $80 \leq \tau \leq 130\mbox{ days}$, the east jet is measured to be artificially high, raising the measured ratio above the model ratio, in agreement with the figure; in the range $140 \leq \tau \leq 220 \mbox{ days}$, the west jet is measured artificially high, producing the opposite effect, again in accord with the figure. We can test the similarity of the two jets in a third way, by using the derived intrinsic brightness of the east jet in Figure~\ref{fig:NormCUN} and the model normalization factors in Figure~\ref{fig:Norm2} to predict the observed total intensity of the west jet. Comparisons for $n=2$ and $n=3$ are shown in Figure~\ref{fig:Recon1} for ages out to 350~days. In the region $\tau \leq 350$~days the east jet is a single-valued function of age, so it can be modeled (normalized) uniquely, and the loops in the west jet reconstructed free of artifacts. The small uncertainties in the intrinsic brightness of the east jet described above produce small disagreements over $60 \leq \tau \leq 200\mbox{ days}$, as expected. The overall fit is better for $n=2$ than for $n=3$, also in agreement with the results above. Because $t$ becomes a multiply-valued function of $\tau$ beyond ages of about $\tau = 350$~days, it is not possible to reconstruct the west jet uniquely beyond this point. Finally, if we regard $n$ simply as a phenomenological parameter, examination of Figs.~\ref{fig:NormCUN}--\ref{fig:Recon1} shows that $n=2$ is favored over $n=3$ because the agreement of the intrinsic brightnesses of the two jets where they are brightest is superior. In summary, we conclude that {\em in SS\,433 jets the observed total intensities are compatible with the twin jets being intrinsically identical, and behaving as continuous jets.} We address the question of the expected behavior of the observed total intensity on the Doppler boosts in Appendix~2 (\S\ref{s:app2}). \section{DISCUSSION} \label{s:Discussion} Three interesting questions about the jets are: (i) do they contain features, such as breaks or peaks in the intensity profiles in either or both jets, that are not the effect of projection and Doppler boosting on an otherwise smooth intensity distribution, (ii) if so, are they the same in the two jets, and (iii) if so, how do they behave in time. Such features might arise at the ejection of the jets (the core after all is quite variable; Paper~III), or as functions of their aging and/or propagation. Figure~\ref{fig:F12(t)} shows the intrinsic brightness for $n=2$ and $n=3$ as a function of birth epoch. Because this compares east and west jet material of different ages there is no {\em a priori} reason to expect matching features. However, the regions of roughly constant brightness located over the range $150 \leq t \leq 250 \mbox{ days}$ in both jets may be such features because we see the same behavior over the entire summers of 2003 (Paper III) and 2007 (Paper~IV), but at different age ranges. Thus the flattening of the decay curve may be the result of variations in the central engine. Comparison of the two jets is hindered by our inability to extract unique properties of the west jet through the first tight loop. Nonetheless, what does seem clear is that after the initial rapid falloff of each jet, there is a leveling-off, and then a somewhat slower decline with large observational uncertainties.
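The decay fits reported below can be reproduced with standard least squares; the following is a minimal sketch (ours), assuming the intrinsic-brightness profile is available as hypothetical arrays tau (days) and B (mJy/beam) restricted to the relevant age range.
\begin{verbatim}
import numpy as np

def exp_half_life(tau, B):
    """Fit B ~ exp(-tau/T) by least squares in log space;
    return the half-life T_1/2 = T ln 2 (days)."""
    slope, _ = np.polyfit(tau, np.log(B), 1)  # log B = -tau/T + const
    return -np.log(2) / slope

def power_law_index(tau, B):
    """Fit B ~ tau**a by least squares in log-log space;
    return the exponent a."""
    a, _ = np.polyfit(np.log(tau), np.log(B), 1)
    return a
\end{verbatim}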
Assuming that the jets are dominated by aging rather than by variability in ejection properties, we can parameterize the physical processes that govern these different time ranges by fitting the normalized intensity curves with linear, exponential, and power-law models. Figure~\ref{fig:Norm12LinearFits} shows fits to the intrinsic brightness $B$ of the jets for $n=2$ over the range $60 \leq \tau \leq 150$ days. Exponential fits of the form $ B \propto e^{-\tau /T} $ yield half-lives of $T_{1/2} = T \ln{2} = 41$~days and 39~days for the east and west jets, respectively. Linear fits, while unrealistic for an entire jet, give half-lives $-\bar{B}/(2\dot{B})$ of 46~days and 40~days, and power-law fits yield exponents of $-1.7$ and $-1.8$, for the east and west jets, respectively. Reassuringly, these timescales are very similar to the results of multi-epoch imaging that follows the aging of specific pieces of the jet (Paper~III, Paper~IV). The root-mean-square deviations of the exponential fits of $B(\tau)$ for the east jet and west jet are 1.0~mJy/beam and 0.8~mJy/beam, which suggests that a reasonable lower limit to the uncertainties in the total intensity curves is about 1~mJy/beam, comparable to the difference between using model curves with and without nutation and velocity variations (\S\ref{s:obs}). Given the limited span of these data, we are unable to distinguish definitively among these models; the exponentials appear superior, but this is not statistically significant. For constant speed, as in SS\,433, models that predict intensity behavior with distance should be compared to data fit in age. The model of a freely-expanding spherical cloud of magnetized relativistic plasma \citep{VanDerLaan}, which might be appropriate if the pieces of the jet behave as discrete non-interacting components, predicts that in the optically-thin regime the resulting synchrotron total intensity falls off as $r^{-2p}$, where $p$ is the electron energy distribution exponent. Since here $p \simeq 2.4$, this predicts a decay with a power law in component radius of index $\sim -4.8$. Were the components freely expanding we would have $r \propto \tau$, which would then be ruled out by the data. In the conical jet model of \citet{HJ88}, the intensity of the jet falls off as $dI \propto z^{-m} \, dz$, where $z$ is distance down the jet, $m = (7p-1)/(6 + 6\delta)$, and $\delta = 0$ and $\delta = 1$ correspond to the freely-expanding and slowed expansion cases, respectively. For $p = 2.4$, $\delta = 0$ corresponds to $m = 2.6$, and $\delta = 1$ to $m = 1.3$. Power-law fits to our data in this age range lie between the two cases. We also fit exponentials and power laws to the data for $250 \leq \tau \leq 800\mbox{ days}$, and find for both jets $T_{1/2} \simeq 80\mbox{ days}$ or $a \leq 4$ where $B \propto \tau^{-a}$, marginally incompatible with the freely expanding sphere model. \section{CONCLUSIONS} \label{s:conclude} The principal results in this paper are: \begin{enumerate} \item We have used a deep VLA A-array image of SS\,433 at 4.86~GHz to study the intrinsic brightness profiles of the twin jets. \item Radiation from both jets is detected out to at least $6\arcsec$ from the core, corresponding to jet ages of about 800~days. \item The observed brightnesses of the jets are strongly affected by projection effects and Doppler boosting. \item Intrinsically the two jets are remarkably similar, and they are best described by Doppler boosting of the form $D^{2+\alpha}$, as expected for a continuous jet.
\item The intrinsic brightness of the jets behaves in a complex way that is not well described by single linear, exponential, or power law decay. \item During their first $\sim$150~days, the jet decays are well represented by linear or exponential functions of age, with linear half-lives or exponential half-lives of about 40~days, the same for the two jets. Power law fits to the data in this age range give exponents of about $-1.8$. \item There is a transition region, corresponding to jet ages between about 150 and 250~days, during which the jets maintain roughly constant intrinsic brightnesses. This represents nearly one complete precession period. This also corresponds to about $ 150 < t < 250 \mbox{ days}$ in either jet. \item At later times the jet decay can be roughly fit as exponential functions of age, with exponential half-lives of about 80~days, or as power laws with indices of $a \leq 4$. \end{enumerate} \section{Acknowledgments} Part of this work is based on the undergraduate thesis of M.R.M. This material is based upon work supported by the National Science Foundation under Grants Nos.~0307531 and 0607453, and prior grants. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. D.H.R.\ gratefully acknowledges the support of the William R.\ Kenan, Jr.\ Charitable Trust. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. D.H.R.\ thanks Dale Frail and NRAO for their hospitality, and Vivek Dhawan, Amy Mioduszewski, and Michael Rupen for interesting conversations. Herman Marshall kindly provided us with the ephemeris for velocity variations and nutation, and very helpful comments. Rebecca Andridge contributed invaluable statistical expertise. We have made use of the CFITSIO and MFITSIO packages made available by the NASA Goddard SFC and Damian Eads of LANL, respectively. Finally, we thank the referee for a number of comments and questions that led to significant improvements and clarifications. Facilities: \facility{VLA (A array, data archive (experiment code AF403))} \section{APPENDIX 1} \label{s:app1} We used the geometric model of \cite{HJ81b}, the precession parameters of \citet{Eik}, and the distance of \citet{LBG07}, augmented with jet velocity variations following \citet{BB05} and nutation following \citet{Katz82}, to determine the locus and kinematic properties of the jets and their appearance on the sky. The ephemeris used is listed in Table~\ref{tab:ephem}. Our convention for labeling a piece of jet that we see at a certain spot on the sky is as follows: We define its {\em birth epoch} $t$ as the time elapsed in the frame of the core since the material was born (ejected). Thus $t$ serves to label specific pieces of the jets. We also define its {\em age at emission} $\tau$ as the age of the same piece of material at the moment it emitted the photons that we detect at the same time as photons from the rest of the source. Due to the finite speed of light, material that is ``behind'' the core and moving away from us had to emit its photons at a younger age in order that they arrive at the observer at the same time as photons from the core, and the reverse for matter ``in front.'' This means that oncoming material has $\tau > t$, receding material the opposite. 
In this convention, the core has $t = \tau = 0$, material with larger $t$ was born earlier than material with smaller $t$, material with larger $\tau$ emitted its photons at an older age than did material with smaller $\tau$, and material moving in the plane of the sky has $t = \tau$. Quantitatively, birth epoch $t$ and age at emission $\tau$ are related by $$ \tau = \frac{t}{1-v_x/c}, $$ where $v_x$ is the component of the material's velocity that is directed toward the observer. In SS\,433, each jet has pieces with both positive and negative $v_x$, as illustrated in Figures~\ref{fig:CUN} and \ref{fig:CNA}. Figure~\ref{fig:LTT} illustrates the light travel time effects in SS\,433. It shows the age at time of emission of photons $\tau$ for each part of the east and west jets, as a function of the birth epoch $t$ of each part of the jets. In the parts of the jets that are sufficiently bright that we can see them, the time differences $(t-\tau)$ range up to $\pm 200$~days, grow with distance from the core, produce the well-known distortion of the appearance of the jets, and affect our analysis. Note that at most places along the jets, the differences between $t$ and $\tau$ for the two jets are not equal for a given value of $t$. \section{APPENDIX 2} \label{s:app2} Here we address the question of why the jets behave such that their observed brightnesses vary as $D^{2+\alpha}$ rather than $D^{3+\alpha}$, as might be expected for isolated individual components such as those seen in VLBI imaging.\footnote{The appearance of isolated components in VLBI images could simply be due to limited dynamic range.} There are four issues that should not be confused. (1) Are the jets a series of isolated components or a continuous fluid flow? (2) For these two cases, how does the observed total intensity depend on Doppler boosting in a jet whose locus is a helix but whose motion is radial? (3) Under what conditions can observations distinguish the cases $n=2$ and $n=3$? (4) What do the data say about the SS\,433 jets? A moving optically-thin synchrotron source is Doppler boosted and K-corrected by a factor of $D^{3+\alpha}$ because $I_\nu / \nu^3$ is a Lorentz invariant and $I_\nu \propto \nu^{-\alpha}$ \citep{RL}. In the case of a continuous jet we want to know the flux density of a segment of the jet defined by the observing beam. As shown by \cite{LindBlandford, Sikora,DeYoung}, and others, not all the material in that segment of jet at a given instant contributes to a particular image. This is purely a light travel time effect. The fraction of the jet segment that contributes is $(1 - \beta\cos\theta) = 1/\gamma D$. The effect of this is that the boost factor is modified from $D^{3+\alpha}$ to $D^{2+\alpha}/\gamma$; in other words, $I_\nu^{jet} = \gamma I_\nu^{obs}/ D^{2+\alpha}$. This calculation assumes a straight jet with the velocity along the jet direction. In SS\,433 the locus of the jet is a helix even though the motion of the jet material is radial. This means that the time delay between the arrival of photons from the near and far ends of the jet is no longer $\Delta t = R \cos(\theta)/c$, where $R$ is the length of the part of the jet defined by the beam and $\theta$ is the angle between the jet velocity and the line of sight. Instead, it is $\Delta t \simeq R \cos(\eta)/c$, where $\eta$ is the angle between the tangent to the jet locus and the line of sight.
In addition, the rate at which material is added to the observed part of the jet is no longer proportional to $v_{jet}$, but is instead determined by the component of the jet velocity in the direction of the locus of the helix. We have made such calculations using the known kinematics of each part of the jet, and the results are very similar to the continuous straight jet case, that is, with effective boosts of approximately $D^{2+\alpha}$, as observed. Critical to this analysis is the fact that the Lorentz factors of each part of the jets are the same. If the jets are each a series of components, then whether the total intensity is boosted by $D^{2+\alpha}$ or $D^{3+\alpha}$ depends on the spacing of the components relative to the spatial scale of the resolution of the imaging. If the components are sufficiently close, then there are always components that do not appear in the beam but belong there, and components that appear there but do not belong, just as for a continuous fluid flow, and $D^{2+\alpha}$ is appropriate. If the spacing is so great that during the light travel time from the back of the jet to the front no component moves into or out of the beam, then $D^{3+\alpha}$ is appropriate. However, depending on the properties of the components and the angular resolution of the observation, any value between $n=2$ and $n=3$ can be appropriate. For the current observation, the data strongly suggest that $n=2$ for SS\,433, indicating that the jets are either continuous, or if composed of discrete components, then there are many in the beam at any time. \begin{deluxetable}{lll} \tablecaption{Ephemeris for SS\,433.\label{tab:ephem}} \tablewidth{0pt} \tablecolumns{3} \tablehead{\colhead{Property} & \colhead{Symbol\tablenotemark{a}} & \colhead{Value} } \startdata Speed & $\beta = v/c$ & $0.2647$ \\ Precession Period & $P$ & 162.375~d \\ Reference Epoch (JD) & $t_{ref}$ & 2443563.23 \\ Precession Cone Opening Angle & $\psi$ & $20.92^{\circ}$ \\ Precession Axis Inclination & $i$ & $78.05^{\circ}$ \\ Precession Axis Position Angle & $\chi + \pi/2$ & $98.2^{\circ}$ \\ Sense of Precession & $s_{rot}$ & $-1$ \\ Distance & $d$ & $5.5~\mbox{kpc}$ \\ Orbital Period Reference Epoch (JD) & \ldots & 2450023.62 \\ Orbital Period & \ldots & 13.08211~d \\ Nutation Reference Epoch (JD) & \ldots & 2450000.94 \\ Nutation Period & \ldots & 6.2877~d \\ Nutation Amplitude & \ldots & 0.009 \\ $\beta_{orb}$ & \ldots & 0.0066 \\ $\beta_{orb \, phase}$ & \ldots & 4.7 \\ \enddata \tablenotetext{a}{As defined in \cite{HJ81b}.} \end{deluxetable}
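For concreteness, the relation between birth epoch and age in Appendix~1 can be evaluated from the ephemeris above. The following minimal sketch (ours) ignores nutation, orbital motion, the position angle $\chi$, the sense of precession $s_{rot}$, and the phase zero point of the full kinematic model, so it is illustrative only; which physical jet corresponds to the positive sign depends on those omitted conventions.
\begin{verbatim}
import numpy as np

BETA = 0.2647               # jet speed v/c
PERIOD = 162.375            # precession period (days)
PSI = np.radians(20.92)     # precession cone half-angle
INC = np.radians(78.05)     # inclination of the precession axis

def cos_theta(t):
    """Cosine of the angle between one jet's velocity and the line of
    sight at birth epoch t (days), by the spherical cosine rule;
    phi is the precession phase (arbitrary zero point here)."""
    phi = 2.0 * np.pi * t / PERIOD
    return (np.cos(PSI) * np.cos(INC)
            + np.sin(PSI) * np.sin(INC) * np.cos(phi))

def age_at_emission(t, east=True):
    """Age at emission tau for birth epoch t, tau = t/(1 - v_x/c).

    The two jets are antiparallel, so the other jet's line-of-sight
    velocity component has the opposite sign."""
    sign = 1.0 if east else -1.0
    return t / (1.0 - sign * BETA * cos_theta(t))
\end{verbatim}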
\section{Historic background - Stallings' theorem, Wall's conjecture and structure trees} The Seifert-van-Kampen Theorem (see \cite{VanKampen1933}) says that if a topological space can be decomposed into two open path connected spaces $C$ and $D$ then its fundamental group is a free product of the fundamental groups of the two subspaces with amalgamation over the fundamental group of their intersection $C\cap D$. Formally, if $x\in C\cap D$ then $\pi_1(C\cup D,x)=\pi_1(C,x)*_{\pi_1(C\cap D,x)}\pi_1(D,x)$. In the 1960s and 1970s mathematicians began to consider groups as geometric objects themselves, and not as something that is defined by another geometric or topological object. Lie groups are considered as differentiable manifolds, and finitely generated groups are considered as Cayley graphs. Some geometric properties of finitely generated Cayley graphs can be regarded as properties of the group itself, because they do not depend on the choice of the finite set of generators. These are properties which are quasi-isometry invariants like the number of ends, growth, hyperbolicity, accessibility etc. The task of geometric group theory is to relate such geometric properties with algebraic properties of the group. In the 1980s and 1990s geometric group theory became a branch of mathematics in its own right. The counterpart of the Seifert-van-Kampen Theorem in geometric group theory is Stallings' structure theorem. A group $G$ is said to \emph{split} over a subgroup $H$ if $G$ is a non-trivial free product with amalgamation over $H$ or $G$ is an HNN-extension over $H$. A finitely generated group is said to have more than one end if its finitely generated Cayley graphs have more than one end. Equivalently, if there is a finite subgraph of the Cayley graph whose complement has at least two infinite components. \begin{theo}[Stallings' structure theorem] A finitely generated group has more than one end if and only if it splits over some finite subgroup. \end{theo} Stallings proved this theorem for the torsion-free case in 1968 \cite{Stallings1968} and for the general case in 1972 \cite{Stallings1971}. For his work he was awarded the Cole prize in 1970. It may happen that the factors of this group decomposition split again over some finite subgroup, and the factors of this second splitting may split again, and so on. A group is said to be \emph{accessible} if this process of splitting over finite subgroups stops after finitely many steps. In 1971 C.T.C.~Wall conjectured in \cite{Wall1971} that all finitely generated groups are accessible. First progress in solving the problem was made in 1976 by Bamford and Dunwoody \cite{Bamford1976} by finding a criterion for accessibility. A tree which corresponds to an automorphism invariant tree-decomposition of a graph $X=(VX,EX)$ is called a \emph{structure tree}. In the present context, a \emph{structure cut} is a component $C$ in the complement of a finite set of edges such that $C$ and $VX\setminus C$ both contain a ray (one-way infinite path), and $C$ is nested with $g(C)$, for any automorphism $g$. Being nested means that $C\subset g(C)$ or $g(C)\subset C$ or $C\cap g(C)=\emptyset$ or $C\cup g(C)=VX$. Such structure cuts yield structure trees, see Sections~\ref{sec:blocks} and \ref{sec:trees}. Structure trees were introduced in 1979 \cite{Dunwoody1979}. Three years later in the paper ``Cutting up graphs'' \cite{Dunwoody1982} the existence of structure cuts was finally proved for all graphs with more than one edge-end.
These are graphs with a finite set of edges whose complement has two components each of which contains a ray. When we consider the action of finitely generated groups with more than one end on structure trees of their Cayley graphs, Bass-Serre Theory implies Stallings' Theorem, see Section~\ref{sec:trees}. There are also applications of structure trees in graph theory, see \cite{Hamann2009,Kroen2009,Kroen2001,Moeller1992ends1,Moeller1996,Moeller1995,Seifter2008,Thomassen1993}. Structure cuts have been further developed by Dicks and Dunwoody in 1989 in the book \cite{Dicks1989}. In 1985 \cite{Dunwoody1985} Wall's conjecture was proved for finitely presented groups. In 1993 Thomassen and Woess introduced a graph theoretic notion of accessibility in \cite{Thomassen1993}. They called a graph \emph{accessible} if there is an integer $n$ such that any two ends can be separated by removing $n$ (or fewer) edges. They showed that a finitely generated group is accessible if and only if its finitely generated Cayley graphs are accessible. Dicks and Dunwoody have shown in \cite{Dicks1989} that for all $n$ there are systems of structure cuts which separate any pair of ends that can be separated by $n$ or fewer edges. Hence for Cayley graphs of accessible groups, there are structure trees which describe all possible splittings of the group. In the same year, Wall's conjecture was finally disproved, see~\cite{Dunwoody1993}.\smallskip In \cite{Dunwoody2009} Dunwoody and the author have proved the existence of structure cuts which are based on the principle of removing finite sets of vertices instead of edges. These results imply a generalization of Stallings' theorem from finitely generated to arbitrarily generated groups. Namely, a group splits over a finite subgroup if and only if it has a Cayley graph with more than one vertex end. That is, a Cayley graph with two rays which can be separated from each other by removing finitely many vertices. Another application of \cite{Dunwoody2009} is the generalization of Tutte's tree decomposition of 2-connected graphs to $k$-connected graphs for any integer $k$. The arguments in \cite{Dunwoody2009} yield a proof of the classical result on the existence of structure cuts which is contained in Lemmas~\ref{lemm:finitely}, \ref{lemm:intersection}, \ref{lemm:not_nested_corner}, \ref{lemm:corners_equality} and Theorem~\ref{theo:optimally}. Together with a certain tree-construction in Section~\ref{sec:blocks} and some Bass-Serre Theory in Section~\ref{sec:trees} we obtain a complete proof of Stallings' Theorem in Section~\ref{sect:stallings}. This way of proving Stallings' theorem is in principle not new and was also mentioned as an application in \cite{Dunwoody1982}. What is new are the arguments that follow from the results in \cite{Dunwoody2009} and give a short proof of the existence of structure cuts, for instance see Lemmas~\ref{lemm:not_nested_corner} and \ref{lemm:corners_equality}. The short proof of Lemma~\ref{lemm:finitely} by Thomassen and Woess also simplified the original proof. An improvement of structure tree theory is that the vertices of the tree are not defined as equivalence classes of cuts, but as certain inseparable blocks which are subsets of the underlying graph. This approach does not shorten the construction of structure trees significantly, but we think that it is more accessible to inexperienced readers. And last but not least, one goal of the paper is to present a complete and detailed combinatorial proof of Stallings' theorem.
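For orientation, we recall some standard examples (folklore, not taken from the works cited above). The group $\mathbb{Z}=\left<t\right>$ has two ends and splits as an HNN-extension of the trivial group over the trivial subgroup. The infinite dihedral group $\mathbb{Z}_2 * \mathbb{Z}_2$ has two ends and splits as a free product, i.e., with amalgamation over the trivial subgroup. The free group $\mathbb{Z} * \mathbb{Z}$ has infinitely many ends. In contrast, $\mathbb{Z}^2$ has one end and therefore does not split over any finite subgroup.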
\section{Minimal edge cuts} Let $X=(VX,EX)$ be an undirected simple graph. That is, edges are two-element sets of vertices. For $C,D\subset VX$ let $\delta(C,D)$ denote the set of edges with one vertex in $C$ and one vertex in $D$. We write $C^\mathrm{c}=VX\setminus C$ and call $\delta C=\delta(C,C^\mathrm{c})$ the \emph{edge boundary} of $C$. A set of vertices $C$ is \emph{connected} if the subgraph spanned by $C$ is connected. A \emph{$k$-separator} is a $k$-element edge boundary of a set of vertices $C$, where $C$ and $C^\mathrm{c}$ are connected. For a set of edges $F$ define $X-F$ as the graph $(VX,EX\setminus F)$. \begin{lemm}[Proposition 4.1 in \cite{Thomassen1993}]\label{lemm:finitely} Let $e$ be an edge of a connected graph $X$ and let $k$ be an integer. There are only finitely many $k$-separators which contain $e$. \end{lemm} \begin{proof} We prove the statement by induction on $k$. The case $k=1$ is obvious. Suppose the statement holds for all connected graphs for some integer $k\ge 1$. We show the statement in $X$ for $(k+1)$-separators containing $e=\{x,y\}$. The graph $X-\{e\}$ is connected, since otherwise $e$ would be a bridge, and a separator with at least two edges cannot contain a bridge. Hence there is a path $\pi$ from $x$ to $y$ in $X-\{e\}$. Every $(k+1)$-separator in $X$ which contains $e$ also contains an edge $e'$ of $\pi$. By the induction hypothesis there are only finitely many $k$-separators in $X-\{e\}$ which contain $e'$. Now the statement follows, because $\pi$ is finite and different $(k+1)$-separators in $X$ which contain $e$ and $e'$ correspond to different $k$-separators in $X-\{e\}$ which contain $e'$. \end{proof} The \emph{boundary} $NC$ of a set of vertices $C$ is the set of vertices in $C^\mathrm{c}$ which are adjacent to some vertex in $C$. We write $\beta C$ to denote the set $NC\cup NC^\mathrm{c}=\bigcup\delta C$. A \emph{component} of a set of vertices $A$ is a maximal connected subset of $A$. Vertices $x,y$ are separated by $S\subset VX$ if $x,y$ lie in different components of $VX\setminus S$. Sets of vertices $A,B$ are \emph{separated} by a set of vertices $S$ if any $x\in A$ and $y\in B$ lie in distinct components of $VX\setminus S$. Sets of vertices $A,B$ are \emph{separated} by a set of edges $F$ if any $x\in A$ and $y\in B$ lie in distinct components of the graph $X-F$. A vertex $x$ is said to be separated from a set of vertices $A$ (or a vertex) if $\{x\}$ is separated from $A$. A ray is a one-way infinite path (of distinct vertices). A \emph{tail} of a ray is an infinite subpath of a ray. Two rays are said to be separated by a set (of vertices or edges) if this set separates some tails of the rays. We call two rays \emph{edge equivalent} if they cannot be separated by a finite set of edges. The corresponding ends are called the \emph{edge ends}. In non-locally finite graphs there are different notions of ends (usually defined by separation by removing finite sets of vertices), but for locally finite graphs, all definitions coincide and they correspond to Freudenthal's end compactification of a locally compact Hausdorff space \cite{Freudenthal1931,Freudenthal1942,Freudenthal1944}. A \emph{cut} (or \emph{edge-cut}) is a set of vertices $C$ with finite edge boundary such that $C$ and $C^\mathrm{c}$ are both connected and contain a ray. If a cut contains a ray $R$ then it contains all rays which are edge equivalent to $R$. Hence if $R$ lies in $C$ then we say that $C$ contains the corresponding end. If there is an edge cut then let $\kappa$ be the minimal cardinality of all boundaries of edge cuts.
Edge cuts $C$ with $|\delta C|=\kappa$ are called \emph{minimal}. If $X$ is connected and has more than one edge end then there is a minimal edge cut. \begin{lemm}\label{lemm:intersection} Let $C$ and $D$ be minimal cuts. If $C\cap D$ and $C^\mathrm{c}\cap D^\mathrm{c}$ are cuts then they are minimal cuts. \end{lemm} In \cite[Theorem 2]{Jung1977} and \cite[Proposition 2.1]{Jung1993} Jung and Watkins prove a similar result. \begin{proof} According to Figure~\ref{fig:1block} we set\smallskip \begin{tabular}{lll} $a=|\delta(C\cap D,C^\mathrm{c}\cap D)|$,\quad& $b=|\delta(C\cap D,C\cap D^\mathrm{c})|$,& $c=|\delta(C\cap D^\mathrm{c},C^\mathrm{c}\cap D^\mathrm{c})|$,\\ $d=|\delta(C^\mathrm{c}\cap D,C^\mathrm{c}\cap D^\mathrm{c})|$,& $e=|\delta(C\cap D,C^\mathrm{c}\cap D^\mathrm{c})|$,&$f=|\delta(C\cap D^\mathrm{c},C^\mathrm{c}\cap D)|$. \end{tabular}\smallskip \begin{figure}[htbp] \centering \begin{tikzpicture} \path (0,0) coordinate (p1); \path (3.5,0) coordinate (p2); \path (0,3.5) coordinate (p4); \path (3.5,3.5) coordinate (p3); \draw (0.8,0) -- (2.7,0); \draw (0.8,3.5) -- (2.7,3.5); \draw (0,0.8) -- (0,2.7); \draw (3.5,0.8) -- (3.5,2.7); \draw (0.565685425,0.565685425) -- (2.934314575,2.934314575); \draw (2.934314575,0.565685425) -- (0.565685425,2.934314575); \draw (p1) circle (0.8cm) node {$C\cap D^\mathrm{c}$}; \draw (p2) circle (0.8cm) node {$C^\mathrm{c}\cap D^\mathrm{c}$}; \draw (p3) circle (0.8cm) node {$C^\mathrm{c}\cap D$}; \draw (p4) circle (0.8cm) node {$C\cap D$}; \draw (1.75,3.72) node {$a$}; \draw (1.75,0.22) node {$c$}; \draw (-0.25,1.8) node {$b$}; \draw (3.25,1.8) node {$d$}; \draw (1.1,2.75) node {$e$}; \draw (1.1,1.45) node {$f$}; \end{tikzpicture} \caption{The corners of the cuts $C$ and $D$ and the sizes of the edge boundaries between them}\label{fig:1block} \end{figure} Then \[\kappa = |\delta C| = a+e+f+c = |\delta D| = b+e+f+d\] and hence \begin{equation}\label{equa:4n} 2\kappa = a+b+c+d+2e+2f. \end{equation} The sets $C\cap D$ and $C^\mathrm{c}\cap D^\mathrm{c}$ contain an end and so \[|\delta (C\cap D)|=a+e+b\ge\kappa\quad\mbox{and}\quad|\delta (C^\mathrm{c}\cap D^\mathrm{c})|=c+e+d\ge\kappa.\] Hence $a+b+c+d+2e\ge 2\kappa$ and, by (\ref{equa:4n}), $a+b+c+d+2e= 2\kappa$ and $f=0$. Finally, $a+e+b=c+e+d=\kappa$ and $|\delta (C\cap D)|=|\delta (C^\mathrm{c}\cap D^\mathrm{c})|=\kappa.$ \end{proof} \section{Main Theorem}\label{sect:maintheo} Sets of vertices $C$ and $D$ are \emph{nested} if $C\subset D$, $C^\mathrm{c}\subset D$, $C\subset D^\mathrm{c}$ or $C^\mathrm{c}\subset D^\mathrm{c}$. Equivalently, if $C\subset D$, $D\subset C$, $C\cap D=\emptyset$ or $C\cup D=VX$. Equivalently, if one of the following intersections is empty: $C\cap D$, $C\cap D^\mathrm{c}$, $C^\mathrm{c}\cap D$, $C^\mathrm{c}\cap D^\mathrm{c}$. These intersections are called \emph{corners} of $C$ and $D$. According to Figure~\ref{fig:1block}, we say that $C\cap D$ is \emph{opposite} to $C^\mathrm{c}\cap D^\mathrm{c}$, and $C^\mathrm{c}\cap D$ is \emph{opposite} to $C\cap D^\mathrm{c}$. \begin{lemm}\label{lemm:not_nested_corner} Let $C,D,E$ be sets of vertices and let $C$ and $D$ be not nested. If $E$ is not nested with two opposite corners of $C$ and $D$ then $E$ is not nested with both $C$ and $D$. If $E$ is not nested with some corner of $C$ and $D$ then $E$ is either not nested with $C$ or not nested with $D$. \end{lemm} \begin{proof} Suppose $E$ is not nested with two opposite corners. By relabeling $C^\mathrm{c}$ as $C$, if necessary, we can assume that $E$ is not nested with $C\cap D$ and $C^\mathrm{c}\cap D^\mathrm{c}$.
Suppose $E$ and $C$ are nested. If $E\subset C$ or $E^\mathrm{c}\subset C$ then this would contradict the assumption that $E$ is not nested with $C^\mathrm{c}\cap D^\mathrm{c}$. If $E\subset C^\mathrm{c}$ or $E^\mathrm{c}\subset C^\mathrm{c}$ then this would contradict the assumption that $E$ is not nested with $C\cap D$. The assumption that $E$ and $D$ are nested leads to a contradiction in the same way. Hence the first claim is established. Now suppose that $E$ is not nested with some corner, say $C\cap D$, and suppose $E$ is nested with both $C$ and $D$. There are four possible inclusions for $E$ and $C$ being nested, and four for $E$ and $D$ being nested. We show that the corresponding 16 possibilities all lead to a contradiction. If $C\subset E$ or $D\subset E$ then $C\cap D\subset E$, contradicting $E$ not being nested with $C\cap D$. Also if $C\subset E^\mathrm{c}$ or $D\subset E^\mathrm{c}$ then $C\cap D\subset E^\mathrm{c}$. Now 4 cases remain. If one of the sets $C^\mathrm{c}$ and $D^\mathrm{c}$ were in $E$ and the other in $E^\mathrm{c}$, then $C^\mathrm{c}\cap D^\mathrm{c}=\emptyset$, and $C$ and $D$ would be nested. If $C^\mathrm{c}$ and $D^\mathrm{c}$ are both in $E$ or both in $E^\mathrm{c}$ then $C^\mathrm{c}\cup D^\mathrm{c}\subset E$ or $C^\mathrm{c}\cup D^\mathrm{c}\subset E^\mathrm{c}$, implying $E^\mathrm{c}\subset C\cap D$ or $E\subset C\cap D$, respectively, and $C\cap D$ would be nested with $E$. \end{proof} Let $C$ be a cut and let $M(C)$ be the set of minimal cuts which are not nested with $C$. Set $m(C) = |M(C)|$. It follows from Lemma~\ref{lemm:finitely} that $m(C)$ is finite. \begin{lemm}\label{lemm:corners_equality} Let $C$ and $D$ be edge-cuts which are not nested, and suppose $C\cap D$ and $C^\mathrm{c}\cap D^\mathrm{c}$ are cuts. Then \[m(C\cap D) + m(C^\mathrm{c}\cap D^\mathrm{c}) < m(C) + m(D).\] \end{lemm} \begin{proof} It follows from Lemma~\ref{lemm:intersection} that $C\cap D$ and $C^\mathrm{c}\cap D^\mathrm{c}$ are minimal cuts. Let $E$ be a minimal cut. If $E$ is in $M(C^\mathrm{c}\cap D^\mathrm{c})\cap M(C\cap D)$ then, by Lemma~\ref{lemm:not_nested_corner}, $E$ is in $M(C)$ and in $M(D)$. Hence if $E$ is counted twice on the left of the above inequality then it is also counted twice on the right. If $E$ is in $M(C\cap D)\setminus M(C^\mathrm{c}\cap D^\mathrm{c})$ or in $M(C^\mathrm{c}\cap D^\mathrm{c})\setminus M(C\cap D)$, that is, if $E$ is counted once on the left, then, again by Lemma~\ref{lemm:not_nested_corner}, $E$ is in $M(C)$ or in $M(D)$. Hence $E$ is counted at least once on the right side of the inequality. We have now proved that $m(C\cap D) + m(C^\mathrm{c}\cap D^\mathrm{c}) \le m(C) + m(D)$. Since $C\in M(D)$ and $D\in M(C)$, the cuts $C$ and $D$ are counted on the right side, but not on the left side (as $C$ and $D$ are nested with both $C\cap D$ and $C^\mathrm{c}\cap D^\mathrm{c}$), so the inequality is strict. \end{proof} Let $\mathcal{C}$ be the set of all minimal cuts. Set $m=\min\{m(C)\mid C\in \mathcal{C}\}$. This minimum exists, because the values $m(C)$ are non-negative integers. A minimal cut $C$ with $m(C)=m$ is called \emph{optimally nested}. The following is the main theorem in classical structure tree theory. \begin{theo}\label{theo:optimally} Optimally nested cuts are nested with all other optimally nested cuts. \end{theo} \begin{proof} Suppose, to the contrary, that there are optimally nested minimal cuts $C$ and $D$ which are not nested. Then $m\ge 1$.
There cannot be two adjacent corners which do not contain an end, because $C$, $C^\mathrm{c}$, $D$ and $D^\mathrm{c}$ all contain an end. Hence there is a pair of opposite corners which contain an end. By relabeling we can assume that these corners are $C\cap D$ and $C^\mathrm{c}\cap D^\mathrm{c}$ and by Lemma~\ref{lemm:intersection}, each of $C\cap D$ and $C^\mathrm{c}\cap D^\mathrm{c}$ are minimal edge-cuts. Now Lemma \ref{lemm:corners_equality} says that \[m(C\cap D) + m(C^\mathrm{c}\cap D^\mathrm{c})< m(C) + m(D) = 2m. \] Thus one of the summands on the left side is less than $m$, contradicting the minimality of $m$. \end{proof} \section{Blocks and trees from nested systems}\label{sec:blocks} Let $\mathcal{C}$ be a set of sets of vertices. A nonempty set of vertices $B$ is called \emph{$\mathcal{C}$-inseparable} if no pair of vertices in $B$ can be separated by $\beta C$, for any $C\in \mathcal{C}$. In other words, for all $C\in\mathcal{C}$ either $B\subset C\cup NC$ or $B\subset C^\mathrm{c} \cup NC^\mathrm{c}$. Maximal $\mathcal{C}$-inseparable sets are called the \emph{$\mathcal{C}$-blocks}. Note that edges are $\mathcal{C}$-inseparable and distinct blocks are not necessarily disjoint. For a block $B$, let $\mathcal{C}(B)$ denote the set of all $C$ in $\mathcal{C}$ which are minimal with respect to the inclusion $B\subset C\cup NC$. That is, $C$ is in $\mathcal{C}(B)$ if $B\subset D\cup ND\subset C\cup NC$, for $C,D\in\mathcal{C}$, implies $C=D$. We call $\mathcal{C}$ nested if any two sets in $\mathcal{C}$ are nested. \begin{lemm}\label{lemm:blocks} Let $\mathcal{C}$ be a nested set of sets of vertices and $C\in\mathcal{C}$. No pair of vertices in $\beta C$ is separated by $\beta D$, for any $D\in\mathcal{C}$. Suppose, in addition, that $\mathcal{C}$ consists of minimal cuts. There is precisely one $\mathcal{C}$-block $B_C$ such that $C\in\mathcal{C}(B_C)$. If $D\in\mathcal{C}(B_C)$ then $\beta D\varsubsetneq B_C$. Moreover, \begin{equation}\label{equa:block} \bigcup_{D\in\mathcal{C}(B_C)}\!\beta D\ \subset\ B_C\ =\ \bigcap_{D\in\mathcal{C}(B_C)}\!D\cup ND. \end{equation} \end{lemm} \begin{proof} Suppose $x,y\in\beta C$ are separated by $\beta D$. After possibly replacing $C$ with $C^\mathrm{c}$ and $D$ with $D^\mathrm{c}$, we have $C\cap D=\emptyset$ and $x\in C^\mathrm{c} \cap D$. Since $x\in\beta C$, $x$ is adjacent to some vertex in $C\cap D^\mathrm{c}$. Hence $x\in ND^\mathrm{c}$, contradicting the assumption that $\beta D$ separates $x$ from another vertex. Suppose there are different blocks $B,B'$ such that $C\in\mathcal{C}(B)\cap\mathcal{C}(B')$. Suppose there are vertices $x\in B'\setminus B$ and $y\in B\cap B'$. They are separated by $\beta D$, for some $D\in\mathcal{C}$. Any path $\pi\subset C$ from $x$ to $y$ intersects $D$ and $D^\mathrm{c}$. Hence $C\cap D\ne\emptyset$ and $C\cap D^\mathrm{c}\ne\emptyset$. So either $C^\mathrm{c}\cap D=\emptyset$ or $C^\mathrm{c}\cap D^\mathrm{c}=\emptyset$, equivalently $D\subset C$ or $D^\mathrm{c}\subset C$. One of the sets $D\cup\beta D,D^\mathrm{c}\cup\beta D$ contains $B$, the other $B'$. If $D\subset C$ and $B\subset D\cup\beta D$ then $B\subset D\cup ND\varsubsetneq C\cup NC$ in contradiction to $C\in\mathcal{C}(B)$. Any of the other cases leads to a contradiction in the same way. The intersection in (\ref{equa:block}) is a maximal $\mathcal{C}$-inseparable set; it contains $NC$ and it is contained in $C\cup NC$. Hence it is the unique block $B_C$ such that $C\in\mathcal{C}(B_C)$.
If $D,E\in\mathcal{C}(B_C)$ are distinct then $E^\mathrm{c}\subset D$, which implies $E^\mathrm{c}\cup\beta E\subset D\cup\beta D$ and \[\beta E\subset \bigcap_{D\in\mathcal{C}(B_C)}\!D\cup ND=B_C,\] which establishes the inclusion in (\ref{equa:block}). If $\mathcal{C}(B_C)=\{C\}$ then $B_C=C\cup NC$, and hence $\beta C$ is a proper subset of $B_C$. If $\mathcal{C}(B_C)$ contains a cut $D$, $D\ne C$, then $\beta C\cup\beta D\subset B_C$ and again $\beta D$ is a proper subset of $B_C$. \end{proof} Given a nested set $\mathcal{C}$ of minimal cuts we define a graph $T=T(\mathcal{C})$. Let $VT$ be the set of $\mathcal{C}$-blocks. Two vertices (blocks) $v,w$ of $T$ are defined to be adjacent if they intersect. \begin{theo} Let $\mathcal{C}$ be a nested minimal system of edge cuts. Then $T(\mathcal{C})$ is a tree. \end{theo} \begin{proof} Given an edge $\{v,w\}\in ET$ there is a cut $C\in\mathcal{C}$ such that $\beta C\cap v\cap w\ne\emptyset$. Lemma~\ref{lemm:blocks} implies $\beta C\subset v\cap w$. The graph $T-\{v,w\}$ is disconnected and hence $T$ does not contain any circuits. Let $B_1,B_2$ be two $\mathcal{C}$-blocks. Lemma~\ref{lemm:finitely} says that any pair of vertices in these blocks can be separated in $X$ only by finitely many sets $\beta C$, for $C\in\mathcal{C}$. This implies that there is a finite path from $B_1$ to $B_2$. Let $\pi$ be a path in $X$ between vertices $x,y\in VX$. Lemma~\ref{lemm:finitely} implies that there are only finitely many sets $\beta C$, $C\in\mathcal{C}$, which contain some edge in $\pi$. Hence there is a finite path in $T$ connecting the blocks which contain $x$ and $y$, respectively. This means that $T$ is connected and thus $T$ is a tree. \end{proof} \section{Group splitting and Bass-Serre Theory}\label{sec:trees} Let $H,J$ be groups and $A<H$, $B<J$ be isomorphic subgroups. The amalgamated product with isomorphism $ \varphi:A\to B$ is \[H*_{A}J=\left<H,J\mid a= \varphi(a), a\in A\right>.\] Let $T_H$ be a system of representatives of the left cosets of $A$ in $H$ and $T_J$ of left cosets of $B$ in $J$, where $A$ and $B$ are represented by the neutral element $1$. A \emph{normal form for $H*_{A}J$} is a sequence $(x_0,x_1,\ldots,x_n,a)$ such that $a\in A$ and $x_i\in T_H\setminus\{1\}\cup T_J\setminus\{1\}$, and no consecutive elements $x_i$ and $x_{i+1}$ lie in the same system of representatives. Let $A,B$ be isomorphic subgroups of $H$. The HNN-extension with isomorphism $ \varphi:A\to B$ is \[H*^{A}=\left<H,t\mid tat^{-1}= \varphi(a), a\in A\right>,\] where $t$ is an additional generator, called the \emph{stable letter}, which is not contained in $H$. A \emph{normal form for $H*^{A}$} is a sequence $(x_0,t^{\varepsilon_0},x_1,t^{\varepsilon_1},\ldots ,x_n,t^{\varepsilon_n},h)$ where $h$ is an arbitrary element of $H$, $\varepsilon_i\in\{-1,1\}$, there is no consecutive subsequence $t^\varepsilon,1,t^{-\varepsilon}$ and if $\varepsilon_i=1$ then $x_i\in B$, if $\varepsilon_i=-1$ then $x_i\in A$. Note that the notations $H*_{A}J$ and $H*^{A}$ are ambiguous, because the amalgamated product and the HNN-extension are not determined by $H,J,A,B$; they depend on the choice of $ \varphi$. The following can for instance be found as Theorems~11.3 and 14.3 in Bogopolski's book \cite{Bogopolski2008}, in terms of right cosets instead of left cosets. \begin{lemm}\label{lem:normalform} For every element $g$ in a free product with amalgamation or in an HNN-extension there is a unique normal form $(a_1,a_2,\ldots,a_n)$ such that $g=a_1a_2\ldots a_n$.
\end{lemm} \begin{proof} Any $h\in H$ can be uniquely written as $[[h]]^H\cdot [h]^H$ where $[[h]]^H\in T_H$ and $[h]^H\in A$. Let $W$ be the set of normal forms for $G=H*_{A}J$. We define an action of $H$ on $W$ on the right by $(x_0,x_1,\ldots,x_n,a)\cdot h$ \[=\begin{cases} (x_0,x_1,\ldots,x_n,ah)&\text{if } h\in A,\\ (x_0,x_1,\ldots,x_n,[[ah]]^H,[ah]^H)&\text{if }h\notin A,\ x_n\in T_J,\\ (x_0,x_1,\ldots, x_n ah)&\text{if } h\notin A,\ x_n\in T_H,\ x_n ah\in A,\\ (x_0,x_1,\ldots, [[x_n ah]]^H,[x_n ah]^H)&\text{if } h\notin A,\ x_n\in T_H,\ x_n ah\notin A\\ \end{cases}\] and we define \[(a)\cdot h=\begin{cases} (ah)&\text{if } h\in A,\\ ([[ah]]^H,[ah]^H)&\text{if }h\notin A. \end{cases}\] We can do the same for $J$. The actions of $H$ and $J$ on $W$ can be extended to an action of the free product $H*J$ on $W$. In this free product, elements $a\in A$ and $\varphi(a)\in B$ are not identified, but elements of the form $a\varphi(a)^{-1}$ are in the kernel of this action. The same holds for the normal closure $N$ of these elements. Hence we obtain a well defined action of $G=H*_{A}J=(H*J)/N$ on $W$. If an element $g\in G$ had two different normal forms $(x_0,x_1,\ldots,x_n,a)$ and $(y_0,y_1,\ldots,y_m,a')$ then \[(1)\cdot g=(1)\cdot x_0x_1\ldots x_na=(x_0,x_1,\ldots,x_n,a)\text{\quad and}\] \[(1)\cdot g=(1)\cdot y_0 y_1\ldots y_ma'=(y_0,y_1,\ldots,y_m,a'),\] which is impossible because $(1)\cdot g$ is well defined. In the case of an HNN-extension we first define actions of $H$ and of $\left<t\right>$ on the set of all normal forms $W$, similar to the case of amalgamated products. We obtain an action of $H*\left<t\right>$ on $W$. The actions of $tat^{-1}$ and $\varphi(a)$ on $W$ coincide for $a\in A$. Hence $tat^{-1}\varphi(a)^{-1}$ is in the kernel of this action, and so this also holds for the normal closure $N$ of all such elements. Since $G=H*^A=(H*\left<t\right>)/N$, we get an action of $G$ on $W$ and proceed as before. \end{proof} With $\Aut(X)$ we denote the automorphism group of $X$. A group $G$ is said to \emph{act} on a graph $X$ if there is a homomorphism $\psi:G\to\Aut(X)$. We usually write $g$ instead of $\psi(g)$ if this does not cause any confusion. An action on $X$ is said to be \emph{transitive} if it is transitive on $VX$. That is, for all $x,y\in VX$ there is a $g\in G$ such that $g(x)=y$. If $G$ acts on $X$ and if $\mathcal{C}$ is a $G$-invariant nested set of cuts then the action of $G$ on $X$ induces an action of $G$ on the set of blocks and hence on the tree $T(\mathcal{C})$. If the action is transitive on $X$ then it is also transitive on $T(\mathcal{C})$. If $G$ acts on $X$ then the vertices of the quotient graph $X/G$ are the orbits of the action of $G$ on $VX$. Two vertices $v,w$ of $X/G$ are adjacent if there are vertices $x\in v$, $y\in w$ such that $x,y$ are adjacent in $X$. We consider this quotient graph $Y$ as a multigraph. That is, $VY$ and $EY$ are arbitrary sets and there are functions $\alpha:EY\to VY$ and $\omega:EY\to VY$ which determine the origin and the terminal vertex of an edge. A \emph{loop} is a multigraph with one vertex $x$ and one edge $e$. That is, $\alpha(e)=\omega(e)=x$. A \emph{segment} is a connected multigraph with two vertices $x,y$ and one edge $e$. That is, $\alpha(e)=x$ and $\omega (e)=y$. An \emph{edge inversion} is an element $g\in G$ together with an edge $\{x,y\}\in EX$ such that $g(x)=y$ and $g(y)=x$. The following theorem from Bass-Serre Theory can also be found in the books \cite{Bogopolski2008,Serre1980,Serre2003}.
\begin{theo}\label{theo:serre} Let $G$ act without edge inversion on an infinite tree $T$ such that the quotient $T/G$ is a segment or a loop. Then $G$ splits over the stabilizer of an edge of the tree. If $T/G$ is a segment then $G$ splits as a non-trivial free product with amalgamation over the stabilizer of an edge of $T$. If $T/G$ is a loop then $G$ splits as an HNN-extension over the stabilizer of an edge of $T$. The stable letter maps the origin vertex of that edge to the terminal vertex. \end{theo} Let $G_P,G_Q$ be the stabilizers of the vertices $P,Q$ and let $G_e=G_P\cap G_Q$ denote the pointwise stabilizer of the edge $e=\{P,Q\}$. \begin{proof} Suppose $T/G$ is a segment. Let $e=\{P,Q\}$ be an edge of $T$ and set $G'=G_P*_{G_e} G_Q$. Because $G_P\cup G_Q$ generates both $G$ and $G'$, because $G_P\cap G_Q=G_e$ in both groups $G,G'$, and because any relation in $G'$ is a relation in $G$, there is a unique homomorphism $\psi: G'\to G$ which is the identity on $G_P\cup G_Q$. This homomorphism yields an action of $G'$ on $T$. We have to show that $\psi$ is bijective. Surjectivity follows from the fact that $G_P\cup G_Q$ generates both groups and that any relation in $G'$ is a relation in $G$. To see that $\psi$ is injective, choose an element $g\in G'\setminus \{1\}$. If $g\in G_P\cup G_Q$ then $g$ is not in the kernel of $\psi$ because $\psi$ is the identity on $G_P\cup G_Q$. Otherwise, if $g\in G'\setminus (G_P\cup G_Q)$ then let $(x_0,x_1,\ldots,x_n,a)$ be the normal form of $g$ with respect to $T_1$ as a set of representatives of left cosets of $G_e$ in $G_P$ and $T_2$ as a set of representatives of left cosets of $G_e$ in $G_Q$. Suppose $x_n\in T_1\setminus\{1\}$. Then $x_n,x_{n-2},x_{n-4},\ldots$ act as rotations of $T$ around $P$ which do not fix $Q$ and $x_{n-1},x_{n-3},\ldots$ act as rotations around $Q$ which do not fix $P$, because $a$ is the only element of the normal form which lies in $G_e$. Let $d_T$ denote the graph distance in $T$. Then $d_T(Q,a(Q))=0$, $d_T(Q,x_na(Q))=2$, $d_T(Q,x_{n-1}x_na(Q))=2$, $d_T(Q,x_{n-2}x_{n-1}x_na(Q))=4$, $d_T(Q,x_{n-3}x_{n-2}x_{n-1}x_n a(Q))=4$ etc. The case $x_n\in T_2$ is similar. Hence $g$ is not in the kernel of $\psi$ and so $\psi$ is injective. Now we assume that $T/G$ is a loop. Since $G$ acts without inversion, there is a $G$-invariant orientation of the edges. Let $e=(P,Q)$ be such an oriented edge and let $t$ be any element of $G$ such that $t(P)=Q$. Then $G_{t(e)}t=tG_{e}=\{g\in G\mid g(e)=t(e)\}$. The map $\varphi: G_e\to G_{t(e)}$ given by $\varphi (g)=tgt^{-1}$ is a group isomorphism. Let $G'$ denote the HNN-extension $\left<G_Q,t\mid tgt^{-1}=\varphi(g), g\in G_e\right>$. By identifying $G_Q$, $G_e$ and $t$ in $G'$ and $G$ we obtain a unique homomorphism $\psi:G'\to G$. This follows from the fact that any of the relations in $G'$ also hold in $G$. This homomorphism yields an action of $G'$ on $T$. The action is transitive, because $G_Q\cup \{t\}$ generates $G$. It follows that $\psi$ is surjective. To show that $\psi$ is injective we choose a $g$ in $G'\setminus \{1\}$. If $g$ is in $G_Q$ then $g$ is not in the kernel of $\psi$, because $\psi$ is the identity on $G_Q$. If $g$ is in $G'\setminus G_Q$, then $g$ has a normal form $(x_0,t^{\varepsilon_0},x_1,t^{\varepsilon_1},\ldots ,x_n,t^{\varepsilon_n},h)$ whose length is at least 2, where $h$ is in $G_Q$, $x_i$ in $G_e\cup G_{t(e)}$ and $\varepsilon_i\in\{-1,1\}$. 
Given a vertex $v$ of $T$, the action of $x_i$ or $h$ does not change the distance to $Q$, because $G_e\cup G_{t(e)}\subset G_Q$. The left multiplication with the elements $t^{\varepsilon_i}$ increases the distance to $Q$, except in the situation where we have a subsequence of the form $t^{-1},1,t$ or $t,1,t^{-1}$ in the normal form, which is not possible. This shows that the action of $g$ does not fix $Q$ and so $g$ is not in the kernel of $\psi$. Hence $\psi$ is injective. \end{proof} \begin{coro}\label{coro:serre} A group which acts transitively on an infinite tree splits over the pointwise stabilizer of an edge. \end{coro} The barycentric subdivision $T'$ of a graph $T$ is obtained by replacing each edge by a path of length two. In other words, we add an additional vertex on each edge. \begin{proof}[Proof of Corollary~\ref{coro:serre}] If there is no edge inversion then $T/G$ is a loop and $G$ splits as an HNN-extension according to Theorem~\ref{theo:serre}. If there is an edge inversion then the action of $G$ on the barycentric subdivision $T'$ of $T$ has no edge inversion, the quotient $T'/G$ is a segment and $G$ splits as a free product with amalgamation according to Theorem~\ref{theo:serre}. \end{proof} \section{Ends of groups and Stallings' Theorem}\label{sect:stallings} Let a group $G$ be generated by $S$. Then $X=\Cay(G,S)$ is defined by $VX=G$, and vertices (group elements) $x,y$ are adjacent if $x^{-1}y\in S$, equivalently, if there is an $s\in S$ such that $xs=y$. A group acts freely on its Cayley graphs by left multiplication; that is, if a group element fixes some vertex then it is the neutral element, which fixes all vertices. This implies that the stabilizers of finite sets of vertices are finite subgroups. Right multiplication is an action on a Cayley graph only if the group is Abelian. Given two finite generating sets $S$ and $S'$, the Cayley graphs $\Cay(G,S)$ and $\Cay(G,S')$ are quasi-isometric. That is, if $d_X,d_Y$ denote the graph metrics of $X=\Cay(G,S)$ and $Y=\Cay(G,S')$, respectively, then the identity map $\alpha:G\to G$ is a quasi-isometry between the metric spaces $(G,d_X)$ and $(G,d_Y)$. Formally, there is an integer $a$ such that \[d_X(x,y)/a-a\le d_Y(\alpha(x),\alpha(y))=d_Y(x,y)\le a d_X(x,y)+a,\] for all $x,y\in G$. The number of ends of a locally finite graph is a quasi-isometry invariant and hence it does not depend on the choice of the finite generating set. We can therefore speak of the \emph{number of ends of a group}. Finite groups have no ends; infinite finitely generated groups have either one, two or infinitely many ends. The following criterion is purely algebraic and does not use graphs: an infinite finitely generated group $G$ has more than one end if and only if there is a subset $C$ of $G$ such that $C$ and $G\setminus C$ are infinite and $Cg\setminus C$ is finite, for all $g\in G$ (equivalently, for all $g$ in some set of generators). \begin{theo}[Stallings' Theorem] A finitely generated group has more than one end if and only if it splits over some finite subgroup. \end{theo} \begin{proof} Consider a finitely generated group $G$ which splits over some finite subgroup $A$. In the case of an amalgamated product $G=H*_{A}J$ let $S$ be a finite generating set which is contained in $H\cup J$. In the case of an HNN-extension $G=H*^{A}$, let $S$ consist of a finite generating set of $H$ together with $t$. 
If $G=H*_{A}J$ then every path from the set $C$ of vertices whose normal form starts with an element $x_0\in T_H\setminus \{1\}$ to a vertex whose normal form starts with an element $x_0\in T_J\setminus \{1\}$ passes through the finite set $A$; the latter set of vertices is $G\setminus (C\cup A)$. Using the algebraic definition above, $C$ and $G\setminus C$ are infinite and $Cs\setminus C$ is finite, for all $s\in S$. Hence $\Cay(G,S)$ has more than one end. If $G=H*^A$ then every path from the set $C$ of vertices whose normal form starts with $t$ to a vertex in $G\setminus (C\cup A)$ passes through $A$. Again $\Cay(G,S)$ has more than one end. Hence an infinite finitely generated group which splits over some finite subgroup has more than one end. Let $X=\Cay(G,S)$ be the Cayley graph with respect to some finite generating set $S$ of a group with more than one end. By Theorem~\ref{theo:optimally} the set $\mathcal{C}$ of optimally nested cuts is nested and $G$-invariant, because it is invariant under any automorphism. We could as well choose any set $\mathcal{O}$ of optimally nested cuts and set $\mathcal{C}=\{g(C),g(C^\mathrm{c})\mid C\in\mathcal{O}, g\in G\}$. The transitive action of $G$ on $X$ by left multiplication induces a transitive action on $T(\mathcal{C})$. By Corollary~\ref{coro:serre}, $G$ splits over a stabilizer of an edge of $T$. Stabilizers of edges in $T$ are stabilizers of edge boundaries $\delta C$ in $X$, for $C\in\mathcal{C}$. These stabilizers are finite, because the action on $X$ is free. \end{proof} \section{Comparison to other papers concerning structure trees}\label{sect:comparison} In previous papers (for instance \cite{Dunwoody1979,Kroen2001,Moeller1992ends1,Moeller1992ends2,Thomassen1993}), vertices of the structure tree were not defined as inseparable blocks but as equivalence classes of cuts. Cuts $C,D\in\mathcal{C}$ are called \emph{equivalent} if either (a) $C=D$ or if (b) $C^\mathrm{c}\subset D$ and $C^\mathrm{c}\subset E\subset D$ implies $C^\mathrm{c}=E$ or $E= D$. Proving the transitivity of this relation is a bit technical. It is common to consider so-called structure maps $\varphi:VX\to VT$ and, for locally finite graphs, $\Phi:\Omega X\to VT\cup \Omega T$. A vertex $x\in VX$ is mapped to the vertex (equivalence class) $v\in VT$ if $x$ is contained in all cuts of this equivalence class. Here it may happen that $\varphi^{-1}(v)=\emptyset$. Hence there may be vertices of the tree which do not correspond to sets of vertices in the graph. Blocks which correspond to vertices $v$ of the tree were called \emph{regions of $v$} in \cite{Kroen2001}.
\section{Introduction} Long-term NASA plans \citep{nasa-plan} for placing scientific equipment on the moon face uncertainty regarding the environmental impact on such devices, since hard information about the effect of the lunar environment on scientific instruments has not been available. From a quantitative analysis of the performance of the laser reflectors, we find clear evidence for degradation of the retroreflectors, and note that degradation began within one decade of placement on the lunar surface. From 1969--1985, the McDonald Observatory 2.7~m Smith Telescope \citep[MST:][]{bender} dominated lunar laser ranging (LLR), using a 634~nm ruby laser. Starting around 1985, the McDonald operation moved away from the competitively-scheduled MST to a dedicated 0.76~m telescope designed to perform both satellite and lunar laser ranging, becoming the McDonald Laser Ranging System \citep[MLRS:][]{mlrs}. In 1984, other LLR operations began at the Observatoire de la C\^ote d'Azur \citep[OCA:][]{oca} in France and at the Haleakala site in Hawaii, which used 1.5~m and 1.74~m telescopes, respectively. These systems all operate Nd:YAG lasers at 532~nm. In 2006, the Apache Point Observatory Lunar Laser-ranging Operation \citep[APOLLO:][]{apollo} began science operations using the 3.5~m telescope and a 532~nm laser at the Apache Point Observatory in New Mexico. Primarily geared toward improving tests of gravity, APOLLO is designed to reach a range precision of one millimeter via a substantial increase in the rate of return photons. The large telescope aperture and good image quality at the site, when coupled with a $4\times4$ single-photon detector array, produce return photon rates from all three Apollo reflectors that are about 70 times higher than the best rates experienced by the previous LLR record-holder (OCA). Consequently, APOLLO is able to obtain ranges through the full moon phase for the first time since MST LLR measurements ceased around 1985. We find that the performance of the reflectors themselves degrades during the period surrounding full moon. In this paper we describe the full-moon deficit, report its statistical significance, and eliminate the possibility that it results from reduced system sensitivity at full moon. We show that this deficit began in the 1970s, and examine the significance of successful total-eclipse observations by OCA and MLRS. We see an additional factor-of-ten signal deficit that applies at all lunar phases, but this observation requires a detailed technical evaluation of the link, and is deferred to a later publication. We briefly discuss possible mechanisms that might account for the observed deficits. \section{Degradation at full moon} APOLLO observing sessions typically last less than one hour, with a cadence of one observing session every 2--3 nights. For a variety of practical reasons, APOLLO observations are confined to 75\% of the lunar phase distribution, from $D=45^{\circ}$ to $D=315^{\circ}$, where $D$ is the synodic phase relative to new moon at $D=0$. Within an observing session, multiple short ``runs'' are carried out, where a run is defined as a contiguous sequence of laser shots to a specific reflector. Typical runs last 250 or 500 seconds, consisting of 5000 or 10000 shots at a 20~Hz repetition rate. Each shot sends about $10^{17}$ photons toward the moon, and in good conditions we detect about one return photon per shot. 
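As a one-line orientation check (added here; it is not part of the link analysis deferred to the later publication), detecting about one photon per shot out of the $\sim10^{17}$ launched corresponds to a round-trip photon throughput of order \[\epsilon\sim\frac{1}{10^{17}}=10^{-17},\] i.e., a loss factor of $\sim10^{17}$, the figure revisited below in the discussion of other evidence for degradation. 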
If the signal level acquired on the larger Apollo~15 reflector is adequate, we cycle to the other two Apollo reflectors in turn, sometimes completing multiple cycles among the reflectors in the allotted time. When the Lunokhod~2 reflector is in the dark, we range to it as well. Its design leads to substantial signal degradation from thermal gradients, rendering it effectively unobservable during the lunar day. Figure~\ref{fig:APOLLO-rate} displays APOLLO's return rates, in return photons per shot, for the Apollo~15 reflector as a function of lunar phase, with 338 data points spanning 2006-10-03 to 2009-06-15. Signal rate is highly dependent on atmospheric seeing (turbulence-related image quality). When the seeing is greater than 2~arcsec, the signal rate scales like the inverse fourth power of the seeing scale \citep{apollo}. Variability in seeing and transparency dominate the observed spread of signal strength, resulting in at least two orders-of-magnitude of variation. \begin{figure}[tb] \begin{center}\includegraphics[width=89mm]{fig1.png} \end{center} \caption{APOLLO return rate (in photons per shot) from the Apollo~15 reflector as a function of lunar phase. The reduced signal strength around full moon ($D=180^{\circ}$) is apparent. The vertical scatter is predominantly due to variable atmospheric seeing and transparency. The dotted line across the top is a simple ad-hoc model of the signal deficit used to constrain background suppression in Fig.~\ref{fig:background}. The dotted line across the bottom indicates the background rate in a 1~ns temporal window, against which signal identification must compete.\label{fig:APOLLO-rate}} \end{figure} Below about 0.001 photons per shot (pps), we have difficulty identifying the signal against photon background and detector dark rate. A typical peak rate across phases is $\sim1$~pps, with the best runs reaching $\sim3$~pps. \emph{The key observation is the order-of-magnitude dip in signal rate in the vicinity of full moon}, at $D=180^{\circ}$. The best return rates at full moon were associated with pristine observing conditions that would have been expected to deliver $\sim1$ photon per shot at other phases, but only delivered 0.063~pps at full moon. Thus the deficit is approximately a factor of 15. The deficit appears to be confined to a relatively narrow range of $\pm30^{\circ}$ around the full moon, and is not due to uncharacteristically poor observing conditions during this period. A Kolmogorov-Smirnov (K-S) test confirms the improbability that random chance could produce a full-moon dip as large as that seen in Fig.~\ref{fig:APOLLO-rate}. There is $<0.03$\% chance that the measurements within $\pm 30^\circ$ of full moon were drawn from the same distribution as the out-of-window points. Similar tests using $60^\circ$-wide windows centered away from full moon do not produce comparably low probabilities. Additional evidence for the full-moon deficit is provided by the instances of failure to acquire a signal. Failure can occur for a variety of reasons \emph{not} related to the health of the lunar arrays: poor seeing; poor atmospheric transparency; inaccurate telescope pointing; optical misalignment between transmit and receive beams; time-of-flight prediction error; instrumental component failure. But none of these causes depend on the phase of the moon. We therefore plot a histogram of Apollo~15 acquisition failures as a function of lunar phase in Fig.~\ref{fig:failure}. 
Failures due to known instrumental problems were removed from this analysis, as were failures due to causes such as pointing errors that were ultimately remedied within the session. The bars are shaded to reflect observing conditions: light gray indicates poor conditions (seeing or transparency); medium gray indicates medium conditions; and black indicates excellent observing conditions, for which the lack of signal is especially puzzling. Note the cluster of failures centered around full moon. The phase distribution of run attempts is roughly uniform. \begin{figure}[tb] \begin{center}\includegraphics[width=89mm]{fig2.png} \end{center} \caption{Phase distribution in 5$^\circ$ bins of failed run attempts (left-hand scale) on Apollo~15 during the period from 2006-10-03 to 2009-06-15, excluding those due to known technical difficulties. Black indicates good observing conditions, medium gray corresponds to medium conditions, and light gray reflects bad conditions. The line histogram shows the phase distribution of all run attempts in 15$^\circ$ bins (right-hand scale).\label{fig:failure}} \end{figure} The other Apollo reflectors are similarly impacted at full moon. On the few occasions when the full-moon Apollo~15 signal was strong enough to encourage attempts on the other reflectors, we found that the expected 1:1:3 ratio between the Apollo 11, 14, and 15 rates is approximately preserved. In no case have we been able to raise a signal on other reflectors after repeated failures to acquire signal from Apollo~15. Could the full-moon deficit be explained by paralysis of our single-photon avalanche photodiode (APD) detectors in response to the increased background at full moon? Figure~\ref{fig:background} indicates that APOLLO sees a maximum background rate at full moon of $\sim0.6$ avalanche events per 100~ns detection gate across the $4\times4$ APD array---in agreement with throughput calculations. Therefore, a typical gate-opening has a $\sim$30\% chance that \emph{one} of the 11 consistently-functioning avalanche photodiode elements (out of 16) will be rendered blind \emph{prior} to the arrival of a lunar photon halfway into the 100~ns gate. The sensitivity for the entire array to signal return photons therefore remains above 97\% even at full moon. The background rates presented here are extrapolated from a 20~ns window in the early part of the gate, before any lunar return signal. \begin{figure}[tb] \begin{center}\includegraphics[width=89mm]{fig3.png} \end{center} \caption{APOLLO's background rate from the Apollo~15 site. The dotted line shows the dark-rate baseline and the solid line the contribution of the lunar illumination (not including the extra 40\% enhancement at full moon). The dashed curve shows the expected background rate if the detector's full-moon sensitivity were suppressed in the same way as the lunar signal seen in Fig.~\ref{fig:APOLLO-rate}. APOLLO's detector clearly has high sensitivity at full moon.\label{fig:background}} \end{figure} The Apollo~15 site is near the lunar prime meridian, so that its illumination curve is roughly symmetric about full moon. Small-aperture photometry measurements by \citet{peacock}---and more recently by \citet{kieffer}---show that the surface brightness increases roughly linearly on approach to full moon, with an additional $\sim40$\% enhancement very near full moon \citep{opposition}. A linear illumination curve is provided in Fig.~\ref{fig:background} for reference. APOLLO clearly sees the expected background enhancement at full moon. 
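The array-sensitivity figure quoted above follows from simple Poisson statistics; we spell out the estimate here for convenience (the input numbers are those given in the text, and uniform illumination of the detector elements is assumed). With $\sim0.6$ background avalanche events expected per 100~ns gate across the array, the mean number of events in the $\sim50$~ns before the lunar return arrives is $\mu\approx0.3$, so the probability that at least one element fires early is \[1-e^{-\mu}\approx1-e^{-0.3}\approx26\%,\] consistent with the quoted $\sim30\%$. Each early avalanche blinds only one of the 11 functioning elements, so the expected fractional loss of array sensitivity is roughly \[\frac{\mu}{11}\approx\frac{0.3}{11}\approx3\%,\] leaving the array sensitivity above 97\%, as stated. 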
If \emph{any} phenomenon suppressed APD sensitivity to laser returns from the reflector array, it would \emph{likewise} suppress sensitivity to the background photons, as suggested by the dashed curve in Fig.~\ref{fig:background}. There is no hint of detector suppression in the background counts, so we conclude that the diminished return rate observed near full moon constitutes a genuine reduction in signal returning from the reflector. It is natural to ask if we can determine the timescale over which the full-moon deficit developed. MST LLR data \citep{cddis,ilrs} reveal that from 1973 to 1976 there was no indication of a full-moon deficit. Figure~\ref{fig:old_mcd} shows the photon count per run for two periods of the MST operation, where a run typically consisted of 150--200 shots at a rate of 20 shots per minute. A full-moon deficit began to develop in the period 1977--1978 (not shown), and is markedly evident in the period 1979--1984---but it appears somewhat narrower than the deficit now observed by APOLLO. K-S tests indicate that the probability that the distribution of photon counts for $160^{\circ}<D<200^{\circ}$ is the same as that outside the window is 18\%, 0.9\%, and 0.03\% for the three periods indicated above, in time order. In the last period, roughly one decade after placement of the Apollo~15 array in 1971, the deficit was approximately a factor of three. The MST apparatus did not change in a substantial way between 1973 and 1984. \begin{figure}[tb] \begin{center}\includegraphics[width=165mm]{fig4.png} \end{center} \caption{Apollo~15 photon counts detected at MST in the four years spanning 1973--1976 and six years spanning 1979--1984 \citep{cddis}. The first period lacks any convincing drop in signal rate around full moon, while the later period reveals the emergence of a full-moon deficit. The probability that the full-moon distribution is compatible with the rest of the points is 18\% and 0.03\% for the two periods, respectively. The vertical spread of these points is smaller than the APOLLO spread in Fig.~\ref{fig:APOLLO-rate} because the MST's larger beam divergence and receiver aperture made it less sensitive to atmospheric seeing. \label{fig:old_mcd}} \end{figure} By 1985, LLR was performed only on smaller telescopes, and attempts at full moon ranging subsided---except during four total lunar eclipses. The 35 eclipse range measurements by OCA and MLRS add an interesting twist: the return strength during eclipse is statistically indistinguishable from that at other phases and is not compatible with an order-of-magnitude signal deficit. Existing data do not probe the time evolution of reflector efficiency into and out of the total eclipse, but it appears that the arrays perform normally as soon as 15 minutes into totality. APOLLO could not observe the eclipses of 2007 August 28 and 2008 February 21 because of bad weather, but will have a chance to follow a complete total eclipse on 2010 December 21. \section{Other evidence for degradation} In addition to the full-moon deficit, analysis of APOLLO's return rate reveals an overall factor-of-ten signal deficit at all lunar phases. Supporting evidence requires more analysis than can be covered here. In brief, the dominant contributors to the factor-of-$\sim10^{17}$ photon throughput loss arise from beam divergence on both the uplink and downlink. We can measure the former by deliberately scanning the beam across the reflector on the moon, confirming a seeing-limited beam profile. 
We additionally measure the atmospheric seeing via the spatial distribution of the return point source on the $4\times4$ detector array. The downlink divergence is set by diffraction from the corner cubes, verified by measurements of the actual flight cubes. Receiver throughput losses, which constitute a small fraction of the total loss, were measured by imaging stars or the bright lunar surface on the APD, and agree well with a model of the optical and detector system. Careful analysis does not account for APOLLO's missing factor of ten in signal return, while early ranging data from MST do agree with the anticipated return rate \citep{llr-1970}. Further evidence for the damaging effects of the lunar environment comes from the Lunokhod~2 reflector. In the first six months of Lunokhod~2 observations in 1973, its signal was 25\% stronger than that from the Apollo~15 array. Today, we find that it is 10 times weaker. The Lunokhod corner cubes are more exposed than the recessed Apollo cubes, and unlike the Apollo cubes, have a silver coating on the rear surfaces. Both factors may contribute to the accelerated degradation of the Lunokhod array relative to the Apollo arrays. \section{Discussion} The full-moon deficit, the overall signal shortfall experienced by APOLLO, and the relative decline in performance of the Lunokhod array all show that the lunar reflectors have degraded with time. It may be possible to explain these observations with a single mechanism that causes both an optical impairment at all phases, and a thermal influence near full moon that abates during eclipse. One possibility is alteration of the corner-cube prisms' front surfaces either via dust deposition or surface abrasion from high-velocity impact ejecta or micrometeorites. Alternatively, any material coating on the back of the corner cubes---perhaps originating from the teflon mounting rings---could impact performance of the Apollo reflectors via frustration of total internal reflection (TIR) and absorption of solar energy. Bulk absorption in the glass could also produce the observed effects. The impact on reflection efficiency at all phases from each of these possibilities is obvious. The full-moon effect would arise from an enhancement of solar energy absorption by the corner-cube prisms and their housings---defeating the careful thermal design intended to keep the prisms nearly isothermal. Because the uncoated Apollo corner cubes work via TIR, their rejection of solar flux should be complete when sunlight arrives within 17$^\circ$ of normal incidence. But the temperature uniformity within the corner cubes is upset either by absorption of energy at the cube surfaces, or by defeat of TIR via scattering---which results in energy deposition in the pocket behind the cubes, heating the cubes from the rear. Temperature gradients in a corner-cube prism produce refractive index gradients, generating wavefront distortion within the prism. A 4~K gradient between the vertex and front face of the Apollo corner cubes reduces the peak intensity in their far-field diffraction pattern by a factor of ten \citep[][Fig. 10]{adl}. Apollo corner cubes are recessed by half their diameter in a tray oriented toward the earth. Near full moon, the weathered corner cubes are most fully exposed to solar illumination, maximizing the degradation. During eclipses, the reflector response may be expected to recover on a short timescale, governed by the $\sim15$~minute thermal diffusion timescale for 38~mm diameter fused silica corner-cube prisms. 
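The $\sim15$~minute recovery estimate can be checked against the standard diffusion scaling $\tau\sim\ell^{2}/\alpha$ (a back-of-envelope calculation added here; the thermal diffusivity of fused silica, $\alpha\approx8\times10^{-7}~\mathrm{m^{2}\,s^{-1}}$, is an assumed room-temperature textbook value). Taking $\ell\approx19$~mm, the prism radius, gives \[\tau\sim\frac{(0.019~\mathrm{m})^{2}}{8\times10^{-7}~\mathrm{m^{2}\,s^{-1}}}\approx450~\mathrm{s}\approx8~\mathrm{min},\] the same order as the quoted timescale; the precise value depends on the prism geometry and boundary conditions. 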
While any of the proposed mechanisms could account for the observations, objections can be raised to each of them. Bulk absorption is not expected in the Suprasil fused silica used for the Apollo cubes after 40 years of exposure. Micrometeorite rates on the lunar surface gleaned from study of return samples, and summarized in \citet{surveyor}, suggest the fill-factor of craters on an exposed surface to be $\sim 10^{-4}$ after 40 years, dominated by craters in the 10--100~$\mu$m range. Opportunities for a contaminant coating on the rear surfaces of the corner cubes are limited given that the only substance within the closed aluminum pocket besides the glass corner cube is the teflon support ring. Moreover, the Lunokhod array would not be subject to the same rear-surface phenomena as the Apollo cubes, yet shows an even more marked degradation. Dust is perhaps the most likely candidate for the observed degradation. Astronaut accounts from the surface and from lunar orbit, as well as a horizon glow seen by Surveyor~7, suggest the presence of levitated dust---possibly to altitudes in excess of 100~km, for which a lofting mechanism has been suggested by \citet{stubbs-dust}. The dust monitor placed on the lunar surface by the Apollo~17 mission measured large fluxes of dust in the east-west direction around the time of lunar sunrise and sunset---consistent with the electrostatic charging mechanisms described by \citet{farrell-terminator}. The main difficulty with the dust explanation is that electrostatic charging alone is not strong enough to liberate dust grains from surface adhesion. But mechanical disturbance seeded by micrometeorite and impact ejecta activity may be enough to free the already-charged grains. Whether or not dust is responsible, the supposed health of the reflector arrays has been used to argue that dust dynamics on the surface of the moon are of minimal importance. Our observations of the reduced reflector performance invalidate the invocation of reflector health in this argument. The only other relevant data for the environmental impact on optical devices on the lunar surface comes from the Surveyor~3 camera lens, retrieved by the Apollo~12 mission. After 945 days on the surface, the glass cover of the camera lens had dust obscuring an estimated 25\% of its surface---though it is suspected that much of this was due to Surveyor and Apollo~12 landing and surface activities \citep{surveyor}. Clearly, the ascent of the lunar modules could result in dust deposition on the nearby reflectors. But the effect reported here became established after several years on the lunar surface (e.g., Fig.~\ref{fig:old_mcd}), and is therefore not related to liftoff of the lunar modules. The evidence for substantially worsened performance of the lunar reflectors over time makes it important to consider the long-term usefulness of next-generation devices proposed for the lunar surface. Finding the mechanism responsible for the observed deficits is a high priority. Thermal simulations or testing deliberately altered corner-cube prisms in a simulated lunar environment would likely expose the nature of the problem with the Apollo arrays. Especially important would be to differentiate between permanent abrasion versus removable dust. The results could impact the designs of a wide variety of space hardware---especially next-generation laser ranging reflectors, telescopes, optical communication devices, or equipment dependent on passive thermal control. 
\acknowledgments We thank Doug Currie, Eric Silverberg, and Kim Griest for comments. APOLLO is indebted to the staff at the Apache Point Observatory, and to Suzanne Hawley and the University of Washington Astronomy Department for APOLLO's telescope time. We also acknowledge the technological prowess of Apollo-era scientists and engineers, who designed, tested, and delivered the first functional reflector array for launch on Apollo~11 within six months of receipt of the contract. APOLLO is jointly funded by NSF and NASA, and some of this analysis was supported by the NASA Lunar Science Institute as part of the LUNAR consortium (NNA09DB30A).
\section{Introduction} Understanding the internal structure of scalar mesons has been a prominent topic over the last 30--40 years. Although the scalar mesons have been investigated for several decades, many of their properties are still unclear, and identifying the scalar mesons experimentally is difficult. Hence, theoretical work can play a crucial role in this respect. In particle physics, quarkonia are flavorless mesons containing a heavy b (c) quark and its own antiquark, i.e., $b\bar b$ (bottomonium) and $c\bar c$ (charmonium). These approximately non-relativistic systems are the best candidates for investigating hadronic dynamics and studying the perturbative and non-perturbative aspects of QCD. It was believed that quarkonia can help us extract the nature of the quark-antiquark interaction at the hadronic scale and play the same role in probing QCD as the hydrogen atom plays in atomic physics \cite{Novikov}. A large number of beauty and charmed systems have been experimentally observed in the last few decades (see for instance \cite{Aubert,Augustin,Choi}), and theoretical calculations of the properties of these systems have been made mainly using the potential model, where the quarkonium is described by a static potential, $V=-\frac{4}{3}\frac{\alpha_s}{r}+kr$, and its extensions such as the Coulomb gauge model \cite{Ebert,Crater,Wang1,Dudek,Guo}. The first term in the potential is related to one-gluon exchange and the second term is called the confinement potential. The recent CLEO measurements of the two-photon decay rates of the even-parity, P-wave scalar $0^{++}, \chi_{b(c)0}$ and tensor $2^{++}, \chi_{b(c)2}$ states (\cite{Ecklund,CAmsler} and references therein) provided motivation to investigate the properties of the quarkonia and their radiative decays from the quark-antiquark interaction point of view (see for example \cite{Lansberg1,Lansberg2}). In \cite{Luchab}, a recent study on the extraction of the ground-state decay constant from both sum rules and potential models, it is stated that the results obtained at each step of the extraction procedure follow the same pattern in QCD and in potential models; hence, findings concerning the extraction of bound-state parameters from correlation functions in potential models also apply to QCD. It is also shown that, in the QCD sum rules approach, tuning the continuum threshold, which is related to the energy of the first excited state, especially with a Borel-parameter-dependent threshold, yields a more reliable and accurate determination of bound-state characteristics compared to potential models. The QCD sum rules approach is one of the most powerful and widely applicable non-perturbative tools in hadron spectroscopy and can play a crucial role in the investigation of hadron properties \cite{MAS,colangelo,braun,balitsky}. It has been used to calculate the masses and decay constants of mesons \cite{AIVainshtein,LJReinders,SNarison, MJamin,AAPenin,Du,Kazem1,Kazem2}. This approach was extended to the properties of hadrons at finite temperature, called thermal QCD sum rules \cite{Bochkarev,C.Adami,T.Hatsuda}, supposing that the operator product expansion (OPE) and the quark-hadron duality assumption remain valid, while the quark-quark, quark-gluon and gluon-gluon condensates are replaced by their thermal versions. The main aspiration of this extension was to explain the results obtained from heavy-ion collision experiments. 
It is presently believed that the hot and dense medium in which hadrons are formed modifies the masses and decay widths of hadrons. It has been shown that heavy mesons like $J/\psi$ as well as radial and orbital $c\overline{c}$ excitations behave differently as the temperature of the medium changes (see \cite{HTDing} and references therein). In \cite{Colangelo}, scalar mesons and scalar glueballs are investigated in holographic QCD at finite temperature. Numerous papers have also been dedicated to the determination of the condensates, the masses and decay constants of mesons, and some properties of nucleons at finite temperature \cite{Miller1,Furnstahl,Koike,Huang,Fetea,S.Mallik,Mukherjee,Mallik,Zschocke,Dominguez0,Aliev1,Meyer,Veliev,Panero}. In the present work, we calculate the mass and decay constant of the heavy scalar $\chi_{Q0}$ mesons with quantum numbers $I^G(J^{PC})=0^{+}(0^{++})$ using the thermal QCD sum rules approach. Here, we assume that, after replacing the vacuum condensates as well as the continuum threshold by their thermal versions, the sum rules for the observables (masses and decay constants) remain valid. In the calculations, we take into account the additional operators that appear in the Wilson expansion at finite temperature \cite{Shuryak} and modify the spectral density on the QCD side. These operators are due to the breakdown of Lorentz invariance at finite temperature by the selection of the thermal rest frame, where matter is at rest at a definite temperature \cite{Mukherjee,Weldon}. Under these conditions, the residual O(3) invariance introduces extra operators with the same mass dimension as the vacuum condensates. We also consider the interaction of the currents with the particles existing in the medium at finite temperature. Such interactions require a modification of the hadronic spectral density. The outline of the paper is as follows: in the next section, sum rules for the mass and the decay constant of the heavy scalar $\chi_{Q0}$ mesons are obtained in the framework of QCD sum rules at finite temperature. Section III encompasses our numerical predictions for the masses and decay constants as well as a comparison of the results with the existing predictions of other non-perturbative approaches and with experimental values. \section{QCD Sum Rules for the Mass and Decay Constant} In this section, we obtain sum rules for the mass as well as the decay constant of the scalar quarkonia containing a b or c quark in the framework of the thermal QCD sum rules. To this end, we evaluate the two-point thermal correlation function, \begin{eqnarray}\label{correl.func.1} \Pi(q,T) =i\int d^{4}xe^{iq\cdot x}{\langle} {\cal T}\left ( J^S (x) \bar J^S(0)\right){\rangle}, \end{eqnarray} in two different ways: through its physical and theoretical representations. In the correlation function, $T$ denotes the temperature, ${\cal T}$ is the time-ordering product and $J^S(x)=\overline{Q}(x)Q(x)$ is the interpolating current of the heavy scalar meson, $S=\chi_{Q0}$ $(Q=b,c)$. The thermal average of any operator $A$ can be expressed as: \begin{eqnarray}\label{A} {\langle}A {\rangle}=\frac{Tr(e^{-\beta H} A)}{Tr( e^{-\beta H})}, \end{eqnarray} where $H$ is the QCD Hamiltonian, $\beta = 1/T$ is the inverse of the temperature $T$, and the traces are carried out over any complete set of states. 
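As a simple illustration of Eq.~(\ref{A}), added here for orientation (a textbook computation, not part of the derivation that follows), consider a single fermionic mode with Hamiltonian $H=\omega\,a^{\dag}a$, whose only eigenstates are $|0\rangle$ and $|1\rangle$. Then \begin{eqnarray} \langle a^{\dag}a\rangle=\frac{Tr(e^{-\beta H}a^{\dag}a)}{Tr(e^{-\beta H})}=\frac{e^{-\beta\omega}}{1+e^{-\beta\omega}}=\frac{1}{e^{\beta\omega}+1},\nonumber \end{eqnarray} which is precisely the Fermi distribution $n(\omega)$ that appears below in the thermal quark propagator. 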
The physical or phenomenological representation of the aforementioned two-point correlation function is obtained in terms of hadronic parameters by saturating it with a tower of scalar mesons carrying the same quantum numbers as the interpolating current. The theoretical or QCD representation is obtained via the operator product expansion (OPE) in terms of QCD parameters such as the quark masses and the vacuum condensates, taking into account the internal structure of these mesons, i.e., quarks, gluons and their interactions with each other as well as with the QCD vacuum. Sum rules for the physical observables such as the decay constant and mass are obtained by equating these two representations through a dispersion relation. To suppress the contributions of the higher states and the continuum, a Borel transformation with respect to $Q^2=-q^2$ is applied to both sides of the sum rules for the physical quantities. To calculate the phenomenological part, we insert a complete set of intermediate states carrying the same quantum numbers as the current $J^S$ between the currents in Eq.~(\ref{correl.func.1}) and perform the integral over $x$. As a result, at $T=0$, we obtain \begin{eqnarray}\label{phen1} \Pi(q,0)=\frac{{\langle}0\mid J(0) \mid S\rangle \langle S \mid J(0)\mid 0\rangle}{m_{S}^2-q^2} &+& \cdots, \end{eqnarray} where $\cdots$ represents the contributions of the higher states and the continuum, and $m_{S}$ is the mass of the heavy scalar meson. The matrix element creating the scalar meson from the vacuum can be written in terms of the decay constant $f_{S}$ in the following manner: \begin{eqnarray}\label{lep} \langle 0 \mid J(0)\mid S\rangle=f_{S} m_{S}. \end{eqnarray} Note that Eqs.~(\ref{phen1}) and (\ref{lep}) are also valid at finite temperature; hence, the final representation for the physical side can be written in terms of the temperature-dependent mass and decay constant as: \begin{eqnarray}\label{phen2} \Pi(q,T)=\frac{f_{S}^2(T) m_{S}^2(T)}{m_{S}^2(T)-q^2} &+& \cdots \end{eqnarray} On the QCD side, the correlation function is calculated in the deep Euclidean region, $q^2\ll-\Lambda_{QCD}^2$, via the OPE, where the short-distance (perturbative) and long-distance (non-perturbative) effects are separated, i.e., \begin{eqnarray}\label{correl.func.QCD1} \Pi^{QCD}(q,T) =\Pi^{pert}(q,T)+\Pi^{nonpert}(q,T). \end{eqnarray} The short-distance contribution (the bare loop diagram in part (a) of figure (\ref{fig1})) is calculated using perturbation theory, whereas the long-distance contributions (the diagrams shown in part (b) of figure (\ref{fig1})) are represented in terms of the thermal expectation values of some operators. To proceed, we write the perturbative part in terms of a dispersion integral, \begin{eqnarray}\label{correl.func.QCD2} \Pi^{QCD}(q,T) =\int_{4m_Q^2}^{\infty} ds \frac{\rho(s,T)}{s-q^2}+\Pi^{nonpert}, \end{eqnarray} where $\rho(s,T)$ is called the spectral density at finite temperature. The thermal spectral density at fixed $\mid \textbf{q}\mid$ can be expressed as: \begin{eqnarray}\label{rhoq} \rho(q,T)=\frac{1}{\pi}~Im\Pi^{pert}(q,T)~\tanh\left(\frac{\beta q_{0}}{2}\right). \end{eqnarray} To proceed, we need to know the fermion propagator at finite temperature. The thermal fermion propagator at real time is given as: \begin{eqnarray}\label{prop} S(k)=(\gamma_{\mu}~k^{\mu}+m_{Q})\left(\frac{i}{k^2-m_{Q}^2+i\varepsilon}-2~\pi ~n~(|k_{0}|)~\delta(k^2-m_{Q}^2)\right), \end{eqnarray} where $n(x)$ is the Fermi distribution function, \begin{eqnarray}\label{nf} n(x)=\left[ exp (\beta x)+1 \right ]^{-1}. 
\end{eqnarray} Using the above propagator, we find the following expression for the imaginary part of the correlation function at $\mid \textbf{q}\mid=0$ limit: \begin{eqnarray}\label{ImPhi} Im\Pi(q_0,T)=N_{c}\int\frac{d\textbf{k}}{8\pi^2} \frac{1}{\omega^2} (q_{0}~ \omega -2 m_{Q}^2)\Big(1-2~n(\omega)+2~n^2(\omega)\Big)\delta(q_{0}-2~\omega), \end{eqnarray} where, $\omega=\sqrt{m_Q^2+\textbf{k}^2}$. After some straightforward calculations, the thermal spectral density is obtained as: \begin{eqnarray}\label{rhoper} \rho(s)=\frac{3s}{8 \pi^2}\left(1-\frac{4m_Q^2}{s}\right)^{\frac{3}{2}} \left(1-2~n\left(\frac{\sqrt{s}}{2}\right)\right). \end{eqnarray} \begin{figure}[h!] \begin{center} \includegraphics[width=12cm]{Fig1.eps} \end{center} \caption{ (a): Bare loop diagram (b): Diagrams corresponding to gluon condensates.} \label{fig1} \end{figure} In the non-perturbative part, the main contribution comes from the two gluon condensates since the heavy quark condensates are suppressed by inverse powers of the heavy quark mass and can be safely removed. The gluon condensate diagrams are represented in part (b) of figure (\ref{fig1}). In order to calculate nonperturbative contributions, we use Fock-Schwinger gauge, $x^{\mu}A^{a}_{\mu}(x)=0$. In momentum space, the vacuum gluon field is expressed as: \begin{eqnarray}\label{Amu} A^{a}_{\mu}(k')=-\frac{i}{2}(2 \pi)^4 G^{a}_{\rho \mu}(0)\frac{\partial} {\partial k'_{\rho}}\delta^{(4)}(k'), \end{eqnarray} and in calculations, we use the quark-gluon-quark vertex as: \begin{eqnarray}\label{qgqver} \Gamma_{ij\mu}^a=ig\gamma_\mu \left(\frac{\lambda^{a}}{2}\right)_{ij}, \end{eqnarray} where $k'$ is the gluon momentum. After straightforward calculations, the non-perturbative part in momentum space is obtained as: \begin{eqnarray}\label{npPi} &&\Pi^{nonpert}\nonumber\\ &&=\int^{1}_{0} dx\frac{~x^2}{288\pi(m_Q^2+q^2 ~x~(-1+x))^4}\Big\{3\langle \alpha_s G^2\rangle\Big[40m_Q^6x^2-9q^6x^2(-1+x)^4(-1-2x+2x^2)-12m_Q^2q^4x(-1+x)^2 \nonumber\\ &&\times(1+4x-12x^2+6x^3)+m_Q^4q^2(-15+156x-441x^2+434x^3-134x^4) \Big]-\alpha_s \langle u^{\alpha}\Theta^{g}_{\alpha\beta}u^{\beta}\rangle\Big[ 4q^2(-1+x)\Big(q^4(-1+x)^2 \nonumber\\ &&\times x^2(9+11x-14x^2+12x^3)+m_Q^4(-15+135x-246x^2+176x^3) +4m_Q^2q^2x(-3-8x+28x^2-34x^3+17x^4)\Big) \nonumber\\ &&-16(-1+x)(q.u)^2\Big(q^4x^2(-1+x)^2(9+11x-14x^2+12x^3)+m_Q^4 (-15+135x-246x^2+176x^3) \nonumber\\ && +4m_Q^2q^2x(-3-8x+28x^2-34x^3+17x^4)\Big) \Big]\Big\}, \end{eqnarray} where, four-vector $u^{\mu}$ is the velocity of the heat bath and it is introduced to restore Lorentz invariance formally in the thermal field theory. In the rest frame of the heat bath, $ u^{\mu} = (1, 0, 0, 0)$ and $u^2 = 1$. In deriving the above expression, we have used the following relation considering the Lorentz covariance \cite{S.Mallik}: \begin{eqnarray}\label{} \langle Tr^c G_{\alpha \beta} G_{\lambda \sigma}\rangle = (g_{\alpha \lambda} g_{\beta \sigma} -g_{\alpha \sigma} g_{\beta \lambda})A -(u_{\alpha} u_{\lambda} g_{\beta \sigma} -u_{\alpha} u_{\sigma} g_{\beta \lambda} -u_{\beta} u_{\lambda} g_{\alpha \sigma} +u_{\beta} u_{\sigma} g_{\alpha \lambda})B . 
\end{eqnarray} Contracting indices on both sides, we obtain \begin{eqnarray} A &=& {1\over 24} \langle G^a_{\alpha \beta} G^{a \alpha \beta}\rangle +{1\over 6}\langle u^{\alpha} {\Theta}^g_{\alpha \beta} u^{\beta}\rangle, \\ B &=& {1\over 3}\langle u^{\alpha} {\Theta}^g _{\alpha \beta} u^{\beta}\rangle, \end{eqnarray} where $\Theta^{g}_{\alpha\beta}$ is the traceless gluonic part of the stress tensor of QCD, defined as: \begin{eqnarray} \Theta^{g}_{\alpha\beta}=-G_{\alpha\lambda}^{a}G_{\beta}^{\lambda a}+\frac{1}{4}g_{\alpha\beta}G_{\lambda\sigma}^{a}G^{\lambda\sigma a}. \end{eqnarray} Matching the phenomenological and QCD sides of the correlation function, the sum rules for the mass and decay constant of the scalar meson are obtained. To suppress the contributions of the higher states and the continuum, a Borel transformation with respect to $q^2$ as well as continuum subtraction are performed. As a result of the above procedure, we obtain the following sum rule for the decay constant: \begin{eqnarray}\label{lepsum} m_S^2(T)f_S^2(T)e^{\frac{-m_S^2(T)}{M^2}}=\left\{ \int_{4m_Q^2}^{s_{0}(T)} ds~\rho(s)~e^{-\frac{s}{M^{2}}}+\hat{B}\Pi^{nonpert}\right\}, \end{eqnarray} where $M^2$ is the Borel mass parameter and $s_{0}(T)$ is the temperature-dependent continuum threshold. The sum rule for the mass is obtained by applying the derivative with respect to $-\frac{1}{M^2}$ to both sides of the sum rule for the decay constant in Eq.~(\ref{lepsum}) and dividing the result by Eq.~(\ref{lepsum}) itself: \begin{eqnarray}\label{mass2} m_S^2(T)=\frac{\int_{4m_Q^2}^{s_{0}(T)} ds~s~\rho(s)~exp(-\frac{s}{M^{2}})+\Pi^{nonpert}_{1}(M^2,T)}{\int_{4m_Q^2}^{s_{0}(T)} ds~\rho(s)~exp(-\frac{s}{M^{2}})+\hat{B}\Pi^{nonpert}(M^2,T)}, \end{eqnarray} where \begin{eqnarray}\label{Pinp1} \Pi^{nonpert}_{1}(M^2,T)=- \frac{d}{d(1/M^2)}\hat{B}\Pi^{nonpert}(M^2,T), \end{eqnarray} and $\hat{B}\Pi^{nonpert}(M^2,T)$ denotes the contribution of the gluon condensates in the Borel-transformed scheme; it is given by: \begin{eqnarray}\label{rhononper} \hat{B}\Pi^{nonpert} &=&\int^{1}_{0} dx e^{\frac{m_{Q}^2 }{M^2 x(x-1)}}\frac{1}{96 M^6 \pi (x-1)^4 x^3}\left\{\vphantom{\int_0^{x_2}}\left[\vphantom{\int_0^{x_2}} \langle \alpha_s G^2\rangle \left(\vphantom{\int_0^{x_2}}m_Q^6(1-2 x)^2 (-3+5 x) \right.\right.\right.\nonumber\\ &+&9 M^6 (-1+x)^4 x^3(-1-2 x+2 x^2)-3 m_Q^2 M^4 (-1+x)^2 x^2 (-5+7 x-12 x^2+6 x^3) \nonumber\\ &+&2m_Q^4 M^2 x (3-21x+48x^2-41x^3+11x^4)\left. \vphantom{\int_0^{x_2}}\right) -4\alpha_s \langle \Theta^g\rangle \left(\vphantom{\int_0^{x_2}}m_Q^6(1-2x)^2(-3+5x)\right. \nonumber\\ &+& M^6(-1+x)^3x^3(9+11x-14x^2+12x^3)+2m_Q^4 M^2 x(3-23x+58x^2-57x^3+19x^4) \nonumber\\ &-&m_Q^2~ M^4~ (-1+x)^2 x^2 (-15+11x-26x^2+32x^3) \left. \vphantom{\int_0^{x_2}} \right) \left. \vphantom{\int_0^{x_2}}\right] \left.\vphantom{\int_0^{x_2}}\right\}. \end{eqnarray} We use the gluonic part of the energy density obtained both from lattice QCD \cite{MCheng} and from chiral perturbation theory \cite{P.Gerber}. In the rest frame of the heat bath, the lattice QCD results obtained in \cite{MCheng} are well reproduced by the following fit parametrization for the thermal average of the total energy density, $\langle \Theta \rangle$, \begin{eqnarray}\label{tetag} \langle \Theta \rangle= 2 \langle \Theta^{g}\rangle= 6\times10^{-6}\exp[80(T-0.1)]~(GeV^4), \end{eqnarray} where the temperature $T$ is measured in units of $GeV$ and this parametrization is valid in the interval $0.1~GeV\leq T \leq 0.17~GeV$. 
Note that the total energy density has been known for $T\geq 0$ in chiral perturbation theory, while this quantity has only been calculated for $T\geq 100~MeV$ in lattice QCD (see \cite{Miller1} and \cite{MCheng}). In the low-temperature chiral perturbation limit, the results presented in \cite{P.Gerber} are well described by the expression \begin{eqnarray}\label{tetagchiral} \langle \Theta\rangle= \langle \Theta^{\mu}_{\mu}\rangle +3~p, \end{eqnarray} where $p$ is the pressure and $\langle \Theta^{\mu}_{\mu}\rangle$ is the trace of the total energy-momentum tensor. They are given as: \begin{eqnarray}\label{tetamumu} \langle \Theta^{\mu}_{\mu}\rangle=\frac{\pi^2}{270}\frac{T^{8}}{F_{\pi}^{4}} \ln \Big[\frac{\Lambda_{p}}{T}\Big], ~~~~~~~~~~~~~~p= 3~T~\Big(\frac{m_{\pi}~T}{2~\pi}\Big)^{\frac{3}{2}}\Big(1+\frac{15~T}{8~m_{\pi}}+\frac{105~T^{2}}{128~ m_{\pi}^{2}}\Big)\exp\Big[-\frac{m_{\pi}}{T}\Big], \end{eqnarray} where $\Lambda_{p}=0.275GeV$, $F_{\pi}=0.093GeV$ and $m_{\pi}=0.14GeV$. Our final task in this section is to introduce the temperature-dependent continuum threshold $s_0(T)$, gluon condensate $\langle G^2\rangle$ and strong coupling constant. The temperature-dependent continuum threshold \cite{CADominguez} and gluon condensate \cite{Miller1,MCheng} are well described by the following fit parameterizations: \begin{eqnarray}\label{sT} s_0(T)= s_{0}\left[\vphantom{\int_0^{x_2}} 1-\Big(\frac{T}{T^{*}_{c}}\Big)^8\vphantom{\int_0^{x_2}}\right]+4~m_Q^2~ \left(\vphantom{\int_0^{x_2}}\frac{T}{T^{*}_{c}} \vphantom{\int_0^{x_2}} \right)^8, \end{eqnarray} \begin{eqnarray}\label{G2TLattice} \langle G^2\rangle=\frac{\langle 0|G^2|0\rangle}{\exp\left[\vphantom{\int_0^{x_2}}12\Big(\frac{T}{T_{c}}-1.05\Big) \vphantom{\int_0^{x_2}}\right]+1}, \end{eqnarray} where $T^{*}_{c}=1.1\times T_c=0.176GeV$, and $s_0$ and $\langle0|G^2|0\rangle$ are the continuum threshold and the gluon condensate in vacuum, respectively. These parameterizations are valid only in the interval $0 \leq T \leq 170~MeV$. Here, we should stress that the continuum threshold presented above coincides with the vacuum continuum threshold at $T=0$, but it diminishes with increasing temperature, such that at $T=T^{*}_{c}$ it reaches the perturbative QCD threshold, $4m_Q^2$. This parametrization applies to heavy-heavy systems and differs considerably from the case of light-light and heavy-light quark systems, where the continuum threshold is related to the thermal light quark condensate (for details see \cite{CADominguez}). We also use the temperature-dependent strong coupling constant \cite{Kaczmarek,SuHoungLee}: \begin{eqnarray}\label{geks2T} g^{-2}(T)=\frac{11}{8\pi^2}\ln\Big(\frac{2\pi T}{\Lambda_{\overline{MS}}}\Big)+\frac{51}{88\pi^2}\ln\Big[2\ln\Big(\frac{2\pi T}{\Lambda_{\overline{MS}}}\Big)\Big], \end{eqnarray} where $\Lambda_{\overline{MS}}\simeq T_c/1.14$. In the numerical calculations, instead of $\alpha_s$ in front of $\langle \Theta^{g}\rangle$ in Eq.~(\ref{rhononper}), $\tilde{\alpha}(T)=2.096~\alpha^{pert}(T)$ has been used, where $\alpha^{pert}(T)=\frac{g^2(T)}{4 \pi}$ (for details see \cite{Kaczmarek}). \section{Numerical analysis} The present section is devoted to the numerical analysis of the sum rules for the masses and decay constants of the heavy scalar mesons. In the following analysis, we use $m_c=(1.3\pm0.05)GeV$, $m_b=(4.7\pm0.1)GeV$ and ${\langle}0\mid \frac{1}{\pi}\alpha_s G^2 \mid 0 {\rangle}=(0.012\pm0.004)GeV^4$. 
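Before discussing the auxiliary parameters, a minimal numerical sketch may make the structure of Eq.~(\ref{mass2}) concrete. The following Python fragment (added here purely for illustration) keeps only the $T=0$ perturbative spectral density of Eq.~(\ref{rhoper}), i.e., it sets $n\to0$ and drops the gluon-condensate terms, so it can only reproduce the rough mass scale and not the full results of the Tables; the threshold and Borel-parameter values are the central working values quoted below. \begin{verbatim}
# Rough T = 0 evaluation of the mass sum rule (Eq. mass2), keeping only
# the perturbative spectral density
#     rho(s) = (3 s / 8 pi^2) * (1 - 4 m_Q^2 / s)^(3/2)
# and neglecting the gluon-condensate contributions.
import numpy as np
from scipy.integrate import quad

def rho(s, mQ):
    return 3.0 * s / (8.0 * np.pi**2) * (1.0 - 4.0 * mQ**2 / s)**1.5

def mass(mQ, s0, M2):
    # m^2 = (int s rho e^{-s/M^2}) / (int rho e^{-s/M^2}), s in [4 m_Q^2, s0]
    num = quad(lambda s: s * rho(s, mQ) * np.exp(-s / M2), 4.0 * mQ**2, s0)[0]
    den = quad(lambda s: rho(s, mQ) * np.exp(-s / M2), 4.0 * mQ**2, s0)[0]
    return np.sqrt(num / den)

print(mass(1.3, 18.0, 9.0))    # chi_c0 channel: a few GeV
print(mass(4.7, 110.0, 17.0))  # chi_b0 channel: about 10 GeV
\end{verbatim} With the central input values this crude estimate already lands in the correct mass range for both channels; the condensate terms and the error analysis described below are required for the actual numbers quoted in the Tables. 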
The sum rules for the mass and decay constant also contain two auxiliary parameters: the continuum threshold $s_0$ and the Borel mass parameter $M^2$. The standard criterion in QCD sum rules is that the physical quantities should be independent of the auxiliary parameters. However, the continuum threshold $s_{0}$ is not completely arbitrary; it is related to the energy of the first excited state with the same quantum numbers and can depend on the Borel mass parameter \cite{Lucha}. Therefore, the standard criterion does not yield realistic errors, and the actual errors should be larger. Hence, we also add systematic errors to the numerical results. We choose the values $s_0=(110\pm4)~GeV^2$ and $s_0=(18\pm2)~GeV^2$ for the continuum threshold in the $\chi_{b0}$ and $\chi_{c0}$ channels, respectively. The working region for the Borel mass parameter is determined by requiring that not only the higher-state and continuum contributions be suppressed but also that the contribution of the highest-order operator be small, i.e., that the sum rules for the mass and decay constant converge. As a result of the above procedure, the working region for the Borel parameter is found to be $ 8~ GeV^2 \leq M^2 \leq 20~ GeV^2 $ for $\chi_{c0}$ and $ 15~ GeV^2 \leq M^2 \leq 30~ GeV^2 $ for $\chi_{b0}$ mesons. The dependences of the masses and decay constants at $T=0$ on the Borel mass parameter are shown in Figs.~\ref{fig2}--\ref{fig5}. These figures show that the observables depend very weakly on the Borel mass parameter $M^2$ within the working regions. \begin{table}[h] \renewcommand{\arraystretch}{1.5} \addtolength{\arraycolsep}{3pt} $$ \begin{array}{|c|c|c|} \hline \hline &f_{\chi_{b0}}(MeV) & f_{\chi_{c0}}(MeV) \\ \hline \mbox{Present Work} & 175\pm55 & 343\pm112 \\ \hline \mbox{QCD sum rules \cite{Novikov}} & - & 359 \\ \hline \mbox{Cornell potential model \cite{Eichten}} & - & 338\\ \hline \mbox{QCD sum rules \cite{P. Colangelo}} & - & 510\pm40 \\ \hline \hline \end{array} $$ \caption{Values of the leptonic decay constants of the heavy scalar $\chi_{b0}$ and $\chi_{c0}$ mesons in vacuum. These results have been obtained using the values $s_{0}=110~GeV^{2}$ and $M^{2}=17~GeV^{2}$ for $\chi_{b0}$, and $s_{0}=18~GeV^{2}$ and $M^{2}=9~GeV^{2}$ for $\chi_{c0}$ mesons. } \label{tab:lepdecconst} \renewcommand{\arraystretch}{1} \addtolength{\arraycolsep}{-1.0pt} \end{table} \begin{table}[h] \renewcommand{\arraystretch}{1.5} \addtolength{\arraycolsep}{3pt} $$ \begin{array}{|c|c|c|} \hline \hline & m_{\chi_{b0}}~(GeV)& m_{\chi_{c0}}~(GeV) \\ \hline \mbox{Present Work } & 10.10\pm1.75 & 3.71\pm0.62 \\ \hline \mbox{Experiment \cite{CAmsler}} & 9.85944\pm 0.00042 & 3.41475\pm 0.00031 \\ \hline \hline \end{array} $$ \caption{Values of the masses of the heavy scalar $\chi_{b0}$ and $\chi_{c0}$ mesons in vacuum. } \label{tab:mass} \renewcommand{\arraystretch}{1} \addtolength{\arraycolsep}{-1.0pt} \end{table} The temperature dependence of the masses and decay constants of the $\chi_{b0}$ and $\chi_{c0}$ mesons is presented in Figs.~\ref{mXb0Temp11}--\ref{mXb0Temp}. In these figures, we show the results obtained using both the lattice QCD and the chiral perturbation limit values for the gluonic part of the energy density. These figures show that the results depend very weakly on the value of the gluonic part of the energy density, i.e., the lattice and chiral-limit inputs lead to approximately the same predictions in the interval $0.1~GeV\leq T \leq 0.17~GeV$ in which the lattice results are valid. 
These figures also show that the masses and decay constants do not change up to $T\simeq100~MeV$, but they start to diminish with increasing temperature beyond this point. Near the critical (deconfinement) temperature, the decay constants fall to approximately 25\% of their vacuum values, while the masses decrease by about 6\% and 23\% for the bottom and charm cases, respectively. From these figures, we deduce the vacuum values of the decay constants and masses presented in Tables I and II. The errors quoted in these Tables are due to the variation of the continuum threshold and the Borel mass parameter, the errors in the other input parameters, and the systematic uncertainties. Table I also includes a comparison of the decay constant in the charm case with the existing predictions of the same framework and of other nonperturbative approaches. From this Table, we see that our prediction for the decay constant of the $\chi_{c0}$ at zero temperature is consistent with the QCD sum rules \cite{Novikov} and Cornell potential model \cite{Eichten} predictions, but differs considerably from the result obtained in \cite{P. Colangelo} when the central values are considered. In Table II, we also compare our predictions for the masses of the heavy scalar mesons with the existing experimental data, with which they are in good agreement. Our results for the leptonic decay constants, as well as their behavior with respect to temperature, can be checked in future experiments. \section{Acknowledgment} This work has been supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under research project No. 110T284.